The ICML'25 Review Disaster: "What Does 'k' in k-NN Mean?" 😱

    root wrote (#1):

    The recent ICML 2025 review cycle has sparked outrage and dark humor across the ML community. Here’s a compilation of jaw-dropping anecdotes from Zhihu (China’s Quora) that expose the chaos, from clueless reviewers to systemic failures.

    Buckle up! 🤡


    1. The "k-NN" Incident

    User "写条知乎混日子" dropped this gem:

    "The reviewer asked me: ‘What does the ‘k’ in k-NN stand for?’"

    Yes, a reviewer at ICML, a top-tier ML conference, needed clarification on one of the most basic ML concepts.
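    For the record, the “k” is simply the number of nearest neighbors that vote on each prediction. A minimal sketch with scikit-learn on the toy Iris dataset, purely illustrative:

    ```python
    # k-NN in a nutshell: "k" = how many nearest neighbors vote per prediction.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    k = 3  # the infamous "k"
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k={k} neighbors, test accuracy: {knn.score(X_test, y_test):.3f}")
    ```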


    2. The "Pro vs. RPO" Mix-Up

    User "CpGD7" shared:

    "The reviewer misread ‘rpo’ as ‘pro’ and questioned why our ‘advanced version’ lost to baselines. Next time, should I rename my main experiments ‘Promax’ to get accepted?"


    3. The "I Didn’t Have Time to Check Proofs" Confession

    User "虚无", a reviewer, admitted:

    "I got assigned 5 theoretical papers. Checking proofs properly takes 7–10 days per paper. I only had time to verify the first two; the rest got high scores based on ‘intuition’ because I couldn’t validate the math."

    This raises a serious ethical concern: Papers are being accepted/rejected based on guesses, not rigor.


    4. The "Citation Mafia" Reviewer

    User "Jane" reported:

    "A 1-star reviewer demanded we cite 7 unrelated papers — 6 of which were by the same author. We withdrew the submission."


    5. The "I Review Papers in a Field I Don’t Understand" Dilemma

    User "Highlight" (a biochemist) was roped into reviewing ML theory:

    "I’m from a biochemistry background. They assigned me 5 ML papers. I’m scrambling to understand the math over the weekend. They must be desperate."


    6. The "R is Not the Real Numbers" Debacle

    User "better" vented:

    "A reviewer complained: ‘What is 𝐑? You never defined it. It can’t possibly mean the real numbers!’ …What else would it be?!"


    7. The "Dataset Police" Strike Again

    User "Reicholguk" faced this absurdity:

    "A reviewer demanded we test on ‘popular’ datasets like Cora/Citeseer, ignoring that we already used Amazon Computer and Roman Empire graphs (which are standard in our subfield). Is this reviewer an AI? Even AI wouldn’t be this clueless."


    8. The "I’ll Just Give Random Scores" Strategy

    Many users reported suspiciously patterned scores:

    • "877129391241": "One of my papers got no reviews at all (blank). Another got all 1s and 2s."
    • "陈靖邦": "Got 4443 after an ICLR desk reject. Is this luck or a sign reviewers just clicked randomly?"

    Why This Matters

    These aren’t just funny fails; they reveal deep flaws in peer review:

    • Overworked reviewers (5+ papers, no opt-out).
    • Mismatched expertise (biochemists judging theory).
    • Lazy/bad-faith reviews (no comments, citation demands).
    • Systemic randomness (scores with no justification).

    As User "虚无" warned:

    "If ICML keeps this up, no serious researcher will want to submit or review."


    The Big Question

    Should top conferences like ICML:
    ❓ Cap reviewer workloads?
    ❓ Allow expertise-based opt-outs?
    ❓ Penalize low-effort reviews?

    What’s your take? Share your worst review horror stories below!

    (Sources: Zhihu users "877129391241", "虚无", "CpGD7", "陈靖邦", "Jane", "Highlight", "better", "Reicholguk", and "写条知乎混日子"; original posts on Zhihu.)
