🎪 ACL 2025 Reviews Are Out: To Rebut or Flee? The Harsh Reality of NLP’s "Publish or Perish" Circus

cocktailfreedom (Super Users) · #2

My initial scores were OA 2.5/2.5/2.
They got raised to OA 2.5/2.5/2.5 ... well, a bit better LOL!
Any chance for Findings?

magicparrots · #3

      📈 Some review scores I have seen

      Stronger or Mid-range Submissions

• OA: 4/4/4, C: 2/2/2
        Concern: Low confidence may hurt chances.

• OA: 4, 4, 2.5, C: 4, 4, 4
        Community says likely for Findings.

• OA: 3, 3, 3, C: 5, 4, 4
        Possibly borderline for Findings.

      • OA Average: 3.38, Excitement: 3.625
        Decent shot, though one reviewer gave 2.5.

      • OA average: 3.33
Reported as the highest OA one commenter had seen, which suggests the bar is low this cycle.


      Weaker Submissions

• OA: 2.5, 2.5, 1.5, C: 4, 3, 3
        Unlikely to be accepted.

• OA: 2, 1.5, 2.5, C: 4, 4, 4
        Most agree no chance for Findings.

• OA: 3, 3, 2.5, C: 4, 3, 4
        Marginal; some optimism for Findings.

      • Only two reviews, one with meaningless 1s and vague reasoning
        ACs often unresponsive in such cases.


Some guesses from the community

      Findings Track:

      • Informal threshold: OA ≥ 3.0
      • Strong confidence and soundness can help borderline cases

      Main Conference:

      • Informal threshold: OA ≥ 3.5 to 4.0
      • Very few reports of OA > 3.5 this cycle

      Score Changes During Rebuttal:

      • Rare but possible (e.g., 2 → 3)
      • No transparency or reasoning shared

      Info on review & rebuttal process

      • Reviews were released gradually, not all at once
      • Emergency reviews still being requested even after deadline
      • Author response period extended by 2 days
        • Confirmed via ACL 2025 website and ACL Rolling Review
      • Meta-reviews and decisions expected April 15

      To summarize

• This cycle’s review scores seem low overall
• OA 3.0 is a realistic bar for the Findings track
• OA 3.5+ is likely needed for the Main conference (a rough triage sketch follows this list)
• First-time submitters are often confused by the lack of clear guidelines and inconsistent reviewer behavior
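
For anyone triaging against these rumored bars, here is a minimal Python sketch that applies them to a set of OA scores. The helper name is made up for illustration; the 3.0 and 3.5 cutoffs are the community guesses above, not official ARR policy, and a plain mean ignores the confidence and soundness scores that ACs also weigh.

```python
def acl2025_triage(oa_scores, findings_bar=3.0, main_bar=3.5):
    """Toy triage using the crowd-sourced ACL 2025 thresholds above.

    oa_scores: Overall Assessment scores on the 1-5 ARR scale (halves allowed).
    The bars are community rumors, not official cutoffs.
    """
    avg = sum(oa_scores) / len(oa_scores)
    if avg >= main_bar:
        return avg, "plausible for Main"
    if avg >= findings_bar:
        return avg, "plausible for Findings"
    return avg, "long shot: rebut hard, or plan a resubmission"

# Example: the OA 2.5/2.5/2.5 paper from post #2
print(acl2025_triage([2.5, 2.5, 2.5]))  # (2.5, 'long shot: ...')
```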
magicparrots · #4

Continued sample scores from Zhihu


        🟩 Borderline to Promising Scores

        • 4 / 3 / 3 (Confidence: 3 / 4 / 4)

          Hoping for Main Conference acceptance.

        • 4 / 3 / 2.5 (Confidence: 3 / 4 / 4)

          Reviewer hinted score could increase after rebuttal.

        • 3.5 / 3.5 / 3 (Meta: 3)

From the December cycle. The question: is that enough for Findings?

        • 3.5 / 3.5 / 2.5 (Confidence: 3 / 3 / 4)

          Author in the middle of a tough rebuttal. Main may be ambitious.

        • 3.5 / 3 / 2.5

          Open question: what's the chance for Findings?

        • 3 / 3 / 2.5 (Confidence: 3 / 3 / 3)

          Undergraduate author. Aims for Findings. Rebuttal will clarify reproducibility.

        • 3.5 / 2.5 / 2.5 (Confidence: 3 / 2 / 4)

          Community sees this as borderline.


        🟨 Mediocre / Mixed Outcomes

        • 2 / 3 / 4

          One reviewer bumped the score after 6 minutes (!), but still borderline overall.

        • 2 / 2.5 / 4

Rebuttal effort was made, but one reviewer had already lowered their score. Probably will be withdrawn.

        • 2 / 3.5 / 4

Surprisingly high for a paper the author didn’t expect to succeed.


        🟥 Weak or Rejected Outcomes

        • 4 / 1.5 / 2 (Confidence: 5 / 3 / 4)

          Likely no chance. Community reaction: “Is it time to give up?”

        • 3 / 2.5 / 2.5 (Confidence: 3 / 3 / 5)

          Rebuttal might help, but outlook is dim.

        • 1 / 2.5, confidence 5

          Probably a confused or low-effort review.

        • OA 1 / 1 / 1

          A review like this existed (likely invalid). Community flagged it.


🔍 Additional comments from the community

        • Some reviewers are still clearly junior-level, or appear to use AI tools for review generation.
        • Findings threshold widely believed to be OA ≥ 2.5–3.0, assuming some confidence in reviews.
        • Review score inflation is low this round: average OA above 3.0 is rare, even among decent papers.
        • Several December and February round submissions are said to be evaluated independently due to evolving meta-review policies.

        ✍️ Summary

• Score distributions reported in the Chinese community largely align with Reddit’s (see my previous post): 3.0 is the magic number for Findings, and 3.5+ is needed for Main.
        • Rebuttal might swing things, but expectations are tempered.
        • Many junior researchers are actively sharing scores to gauge chances and strategize next steps (rebut, withdraw, or resubmit elsewhere).
cqsyf (Super Users) · #5

          See here for a crowd-sourced score distribution (biased ofc): https://papercopilot.com/statistics/acl-statistics/acl-2025-statistics/

[Screenshot: crowd-sourced ACL 2025 score distribution from Paper Copilot]
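
To eyeball your own average against a distribution like that one, a short matplotlib sketch suffices. The numbers below are placeholders for illustration only, not Paper Copilot's actual data; substitute whatever crowd-sourced averages you collect.

```python
import matplotlib.pyplot as plt

# Placeholder OA averages for illustration; replace with real data.
oa_averages = [2.0, 2.5, 2.5, 2.83, 3.0, 3.0, 3.17, 3.33, 3.5, 4.0]

plt.hist(oa_averages, bins=[1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0],
         edgecolor="black")
plt.axvline(3.0, linestyle="--", label="rumored Findings bar")
plt.axvline(3.5, linestyle=":", label="rumored Main bar")
plt.xlabel("Average OA")
plt.ylabel("Count")
plt.legend()
plt.show()
```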

root · #6

            Got the ARR email — if your reviews came in late, are vague, low-effort, or seem AI-generated, you can now officially report them via OpenReview before April 7 (AOE). I think it’s worth flagging anything seriously wrong, since these reports influence both meta-review decisions and future review quality control. Details here: https://aclrollingreview.org/authors#step2.2

lelecao (Super Users) · #7

              I honestly feel like one of my reviewers must’ve had their brain replaced by a potato. They gave my paper a 2 and listed some lukewarm, vague “weaknesses” that barely made sense. Meanwhile, I reviewed a paper that burned through over 200 H100s for a tiny performance gain.

              I thought it lacked both cost-efficiency and novelty, and somehow they got a 4 from another reviewer? Seriously? That thing gets a 4, and mine, which is already deployed in real-world LLM production systems, resource-efficient and effective, gets a 2? I’m speechless.

              Anyone know if it’s still viable to withdraw from Findings and submit to a journal instead? The work is pure NLP: Is TKDE still a good fit these days, or are there faster journals people would recommend?

              Also… is this a trend now? Reviewers saying “Good rebuttal, I liked it,” but still not adjusting the score? What’s the point then? I spent so much time running extra experiments and carefully writing a detailed rebuttal, and it’s treated the same as if I’d done nothing. If this continues, maybe it’s time to just scrap the rebuttal phase altogether. 😖

Hu8kKo34 (Super Users) · #8 (in reply to root)

                @root Totally agree!

                Just to add: for top ML conferences like NeurIPS, ICML, and ICLR, it's also good practice to use the “confidential comments to AC” field when something seems off. That includes suspected plagiarism, conflicts of interest, or if you think a review is clearly low-effort or AI-generated but don’t want to make that accusation publicly. It helps ACs and PCs take appropriate action, and those comments are taken seriously during meta-review and future reviewer assignments.

lelecao (Super Users) · #9

Here are the historical acceptance rates of the ACL conference (a quick recomputation sketch follows the table):

| Venue | Long papers | Short papers |
| --- | --- | --- |
| ACL'14 | 26.2% (146/572) | 26.1% (139/551) |
| ACL'15 | 25.0% (173/692) | 22.4% (145/648) |
| ACL'16 | 28.0% (231/825) | 21.0% (97/463) |
| ACL'17 | 25.0% (195/751) | 18.9% (107/567) |
| ACL'18 | 25.3% (258/1018) | 24.0% (126/526) |
| ACL'19 | 25.7% (447/1737) | 18.2% (213/1168) |
| ACL'20 | 25.4% (571/2244) | 17.6% (208/1185) |
| ACL'21 | 24.5% (571/2327) | 13.6% (139/1023) |
| ACL'21 Findings | 14.6% (339/2327) | 11.5% (118/1023) |
| ACL'22 | ? (604/?) | ? (97/?) |
| ACL'22 Findings | ? (361/?) | ? (361/?) |
| ACL'23 | 23.5% (910/3872) | 16.5% (164/992) |
| ACL'23 Findings | 18.4% (712/3872) | 19.1% (189/992) |
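
As a sanity check, the percentages can be recomputed from the (accepted/submitted) counts; a short Python sketch is below, with an illustrative dict covering a few rows. Small mismatches with the quoted figures on some rows (e.g., ACL'14) may reflect a different denominator in the original source, such as excluding withdrawn submissions.

```python
# Recompute long-paper acceptance rates from the counts in the table above.
acl_long = {
    "ACL'19": (447, 1737),
    "ACL'21": (571, 2327),
    "ACL'23": (910, 3872),
}

for venue, (accepted, submitted) in acl_long.items():
    print(f"{venue}: {accepted}/{submitted} = {accepted / submitted:.1%}")
```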
Hu8kKo34 (Super Users) · #10

The meta-review has come out. One data point:

Review scores: 2.5 / 2.5 / 2.5
Meta-review: 2.5

                    Well ....

Joserffrey (Super Users) · #11

We also have one submission in the ARR February cycle. Hope it can be accepted to the main conference at ACL 2025.

Review scores: 3, 3.5, 3.5
Meta-review: 3.5
