ICML 2025 Review Controversies Spark Academic Debate
-
The ICML 2025 acceptance results have recently been announced, marking a historic high of 12,107 valid submissions, of which 3,260 papers were accepted, an acceptance rate of 26.9%. Despite the impressive volume, numerous serious issues in the review process have emerged, sparking extensive discussion within the academic community.
Highlighted Issues
- Inconsistency between review scores and acceptance outcomes
Haifeng Xu, Professor at the University of Chicago, observed that review scores at ICML 2025 were oddly disconnected from acceptance outcomes. Of his four submissions, the paper with the lowest average score (2.75) was accepted as a poster, while the three papers with higher scores (3.0) were rejected.
- Positive reviews yet inexplicable rejection
A researcher from KAUST reported that his submission received uniformly positive reviews, clearly affirming its theoretical and empirical contributions, yet it was rejected without any negative feedback or explanation.
- Errors in review-score documentation
Zhiqiang Shen, Assistant Professor at MBZUAI, highlighted significant recording errors. One paper, clearly rated with two "4" scores, was mistakenly documented in the meta-review as having "three 3's and one 4". Another paper was rejected on the basis of outdated reviewer comments that ignored the scores reviewers had updated during the rebuttal period.
- Unjustified rejection by Area Chair
Mengmi Zhang, Assistant Professor at NTU, experienced a perplexing case where her paper was rejected by the Area Chair despite unanimous approval from all reviewers, with no rationale provided.
- Incomplete review submissions
A doctoral student from York University reported that incomplete reviews were submitted for his paper, yet the Area Chair cited those incomplete reviews as justification for rejection.
- Zero-sum game and unfair review criteria
A reviewer from UT publicly criticized the reviewing criteria, lamenting that reviews had been overly lenient in the past. He also highlighted a troubling trend: submissions that do not use at least 30 trillion tokens to train 671B MoE models risk rejection regardless of their theoretical strength.
Additionally, several researchers noted reviews that appeared AI-generated or carelessly copy-pasted, resulting in contradictory feedback.
Notable Achievements
Despite these controversies, several research groups achieved remarkable outcomes, among them:
- Duke University (Prof. Yiran Chen’s team): 5 papers accepted, including 1 spotlight poster.
- Peking University (Prof. Ming Zhang’s team): 4 papers accepted for the second consecutive year.
- UC Berkeley (Dr. Xuandong Zhao): 3 papers accepted.
Open Discussion
Given these significant reviewing issues—including reviewer negligence, procedural chaos, and immature AI-assisted review systems—how should top-tier academic conferences reform their processes to ensure fairness and enhance review quality?
We invite everyone to share their thoughts, experiences, and constructive suggestions!
-
It seems that reviewers do not have permission to view the ACs' meta-reviews and the PCs' final decisions this year. As a reviewer, I cannot see the results of the submissions I reviewed.
-
My colleague is serving as a Program Committee (PC) member for this year's ICML. According to her, some individuals were selected as reviewers solely on the basis of having co-authored a previous ICML paper. Upon investigating the backgrounds of certain reviewers who appeared to have submitted problematic reviews, she discovered that many of them lacked even a bachelor's degree; some, for instance, were first-year undergraduate students.
-
@cqsyf Perhaps we should prepare ourselves mentally for this to become the norm. AFAIK, NIPS'25 already has PhD students as ACs, and undergraduates are even more common as reviewers. This is really terrible.
-
With submissions increasing at this pace year over year, I cannot see how this manual review effort can continue to work well!
-
This thread vividly highlights what seems to be an ironic paradox in the academic community: the more papers we submit, the less time we have left to properly review them!
Think about it: researchers are now spending countless hours crafting submissions to reach record-breaking numbers at conferences like ICML 2025. Yet this surge in submissions might be directly correlated with declining review quality. It's like baking thousands of cakes and then complaining that no one has time to taste them properly.
Perhaps we’re witnessing a "submission-reviewer paradox": the energy invested in authoring more papers inevitably leaves us with fewer resources for thorough and careful reviewing.
Could the solution be smarter automation, stricter reviewer qualifications, or maybe even rethinking how conferences handle volume altogether?
-
EMNLP submissions could skyrocket past 10,000 this year. The speed of this growth is astonishing, reflecting just how rapidly the field is expanding. These top-tier conferences attract the best authors and have the privilege of drawing on the most capable reviewers. Hopefully, this won't discourage authors.