ICML 2025 Review Controversies Spark Academic Debate

Category: Artificial intelligence & Machine Learning
Tags: icml, icml2025, icml 2025 conference, review, academic debate, controversies, reject, accept, acceptance rate
8 posts · 5 posters · 123 views
#1 · Joanne

The ICML 2025 acceptance results have recently been announced, marking a historic high of 12,107 valid submissions, of which 3,260 papers were accepted, an acceptance rate of 26.9%. Despite the impressive volume, numerous serious issues in the review process have emerged, sparking extensive discussion within the academic community.
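For a quick sanity check on the reported figures, here is a minimal Python sketch that reproduces the acceptance rate (the variable names are illustrative, not from any official ICML tooling):

```python
# Reproduce the reported ICML 2025 acceptance rate.
valid_submissions = 12_107
accepted_papers = 3_260

acceptance_rate = accepted_papers / valid_submissions
print(f"Acceptance rate: {acceptance_rate:.1%}")  # -> Acceptance rate: 26.9%
```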

🔥 Highlighted Issues

1. Inconsistency between review scores and acceptance outcomes
   Haifeng Xu, Professor at the University of Chicago, observed that review scores at ICML 2025 were oddly disconnected from acceptance outcomes. Of his four submissions, the paper with the lowest average score (2.75) was accepted as a poster, while the three papers with higher scores (3.0) were rejected.
2. Positive reviews yet inexplicable rejection
   A researcher from KAUST reported that his submission received uniformly positive reviews, clearly affirming its theoretical and empirical contributions, yet it was rejected without any negative feedback or explanation.
3. Errors in review-score documentation
   Zhiqiang Shen, Assistant Professor at MBZUAI, highlighted significant recording errors. One paper, clearly rated with two "4" scores, was mistakenly documented in the meta-review as having "three 3's and one 4". Another paper was rejected based on outdated reviewer comments, ignoring the scores that reviewers had updated during the rebuttal period.
4. Unjustified rejection by Area Chair
   Mengmi Zhang, Assistant Professor at NTU, experienced a perplexing case where her paper was rejected by the Area Chair despite unanimous approval from all reviewers, with no rationale provided.
5. Incomplete review submissions
   A doctoral student from York University reported that incomplete reviews were submitted for his paper, yet the Area Chair cited these incomplete reviews as justification for rejection.
6. Zero-sum game and unfair review criteria
   A reviewer from UT publicly criticized the reviewing criteria, lamenting that past reviews had been overly lenient. He highlighted a troubling trend: submissions that do not employ at least 30 trillion tokens to train 671B MoE models risk rejection regardless of their theoretical strength.

Additionally, several researchers noted suspiciously AI-generated or carelessly copy-pasted reviews, resulting in contradictory feedback.
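On the copy-paste point: near-verbatim review text is easy to flag automatically. Below is a minimal, illustrative Python sketch of one such check using simple pairwise string similarity; the review texts and the 0.9 threshold are made-up stand-ins, not part of any actual conference pipeline:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy examples standing in for real review texts.
reviews = {
    "R1": "The paper is well written and the experiments are convincing.",
    "R2": "The paper is well written and the experiments are convincing.",
    "R3": "The theoretical analysis is novel but the evaluation is limited.",
}

# Flag any pair of reviews whose textual overlap exceeds the threshold.
THRESHOLD = 0.9
for (id_a, text_a), (id_b, text_b) in combinations(reviews.items(), 2):
    similarity = SequenceMatcher(None, text_a, text_b).ratio()
    if similarity >= THRESHOLD:
        print(f"{id_a} vs {id_b}: {similarity:.2f} -- possible copy-paste")
```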

🎉 Notable Achievements

Despite these controversies, several research groups reported remarkable outcomes, among them:

• Duke University (Prof. Yiran Chen’s team): 5 papers accepted, including 1 spotlight poster.
• Peking University (Prof. Ming Zhang’s team): 4 papers accepted for the second consecutive year.
• UC Berkeley (Dr. Xuandong Zhao): 3 papers accepted.

💡 Open Discussion

Given these significant reviewing issues—including reviewer negligence, procedural chaos, and immature AI-assisted review systems—how should top-tier academic conferences reform their processes to ensure fairness and enhance review quality?

We invite everyone to share their thoughts, experiences, and constructive suggestions!

#2 · Joserffrey

It seems that reviewers do not have permission to view the ACs' meta-reviews and the PCs' final decisions this year. As a reviewer, I cannot see the results of the submissions I reviewed.

#3 · cqsyf

My colleague is serving as a Program Committee (PC) member for this year’s ICML. According to her, some individuals were selected as reviewers solely based on having co-authored a previous ICML paper. Upon investigating the backgrounds of certain reviewers who appeared to submit problematic reviews, she discovered that many of them lacked even a bachelor’s degree; for instance, some were first-year undergraduate students 😨 😕 😯

#4 · Joserffrey

@cqsyf Perhaps we should prepare ourselves mentally for this to become the norm. AFAIK, NeurIPS'25 already has PhD students serving as ACs, and undergraduate reviewers are even more common. This is really terrible.

#5 · cocktailfreedom

With submissions increasing at this pace year over year, I cannot see how this manual reviewing effort can continue to work well!

#6 · root

This thread vividly highlights an ironic paradox in the academic community: the more papers we submit, the less time we have left to properly review them!

Think about it: researchers are now spending countless hours crafting submissions, pushing conferences like ICML 2025 to record-breaking numbers. Yet this surge in submissions may be directly correlated with declining review quality. It's like baking thousands of cakes and then complaining that no one has time to taste them properly. 🍰 🤠

Perhaps we’re witnessing a "submission-reviewer paradox": the energy invested in authoring more papers inevitably leaves us with fewer resources for thorough and careful reviewing.

Could the solution be smarter automation, stricter reviewer qualifications, or maybe even rethinking how conferences handle volume altogether ❓
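To put rough numbers on that paradox, here is a back-of-the-envelope Python sketch. Only the submission count comes from the thread above; the reviews-per-paper, reviewer-pool, and hours-per-review figures are assumptions for illustration, not official ICML statistics:

```python
# Back-of-the-envelope reviewer load. All inputs except the submission
# count are illustrative assumptions, not official ICML statistics.
submissions = 12_107      # reported valid submissions, ICML 2025
reviews_per_paper = 4     # assumed
reviewer_pool = 10_000    # assumed
hours_per_review = 4      # assumed

total_reviews = submissions * reviews_per_paper
load = total_reviews / reviewer_pool
print(f"{total_reviews:,} reviews -> {load:.1f} reviews "
      f"({load * hours_per_review:.0f} hours) per reviewer")
```

Even under these generous assumptions, every reviewer carries several papers' worth of careful reading; any growth in submissions without growing the qualified pool pushes that load higher.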

#7 · Joanne

Seriously, "first-year undergraduate students" as reviewers?!

#8 · Joanne

EMNLP submissions could skyrocket past 10,000 this year. The speed of this growth is astonishing and reflects just how rapidly the field is expanding. These top-tier conferences attract the best authors and are privileged to draw on the most capable reviewers. Hopefully, this won’t discourage authors.
