CVPR Reviewer Said: "This Work Isn't Fit for NeurIPS, Try CVPR Instead"

Category: Computer Vision, Graphics & Robotics
Tags: cvpr2025, peer review, iccv, eccv
2 Posts · 2 Posters · 112 Views
  • Sylvia (Super Users) wrote, last edited by Sylvia · #1

    Author: Tianfan Xue (CUHK MMLab Assistant Professor)
    Date: April 2, 2025
    Reposted from: Xiaohongshu User Tianfan Xue’s Profile

    [Image: The struggles of peer review]

    🤯 The Struggles of Peer Review: sharing some pain to ease yours

    Let’s talk about some unbelievable peer review experiences I (and people I know) have encountered when submitting to CV/ML conferences. Consider this a bit of academic therapy.

    You might find some humor here, but also some solace in knowing you're not alone.

    Before diving into these stories, let me reiterate my stance: even at the scale of massive conferences like CVPR, ICCV, and ECCV, the peer review process in computer vision is still one of the best across all fields. Most reviewers are serious and responsible. But in a field this large, the occasional bad review is unavoidable.

    So take these tales lightly; after all, science is full of uncertainty, and peer review is just one part of the journey. A good piece of work will shine eventually. For example, Prof. Xue's most-cited paper, Vimeo90K, was rejected three times, by NeurIPS, CVPR, and ECCV, before finally landing in a journal.


    😩 Example 1: "Image Super-Resolution Is More Important Than Denoising"

    We once submitted a paper on image denoising network design. One reviewer commented:

    “Why do this experiment on image denoising? Why not test network efficiency on other tasks?”

    Okay, fair enough. That’s somewhat constructive.

    But then they added:

    “Image denoising is not an important task; image super-resolution is.”

    This is where it got ridiculous. That single sentence dismissed the entire field of image denoising. Seriously?


    🤔 Example 2: "What Is PSNR?"

    A reviewer once asked:

    “What is PSNR, and why didn’t you define it?”

    From that point onward, I always made sure to write out:
    PSNR (peak signal-to-noise ratio).

    It felt like being asked: “What is CNN, and why didn’t you define it?” in a deep learning paper...
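
    For the record, PSNR is nothing exotic: it is just mean squared error between two images on a logarithmic scale, relative to the peak pixel value. A minimal sketch, assuming 8-bit images so the peak value is 255:

    ```python
    import numpy as np

    def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio between two images, in dB."""
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(max_val ** 2 / mse)
    ```

    Calling psnr(clean, denoised) then gives the familiar dB number that reviewers (usually) recognize on sight.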


    😐 Example 3: “Not a Significant Improvement”

    In one paper, we did a user study comparing our method to a baseline. 87% of participants preferred ours.

    The reviewer said:

    “Improvement not significant.”

    Come on! That’s a 6:1 ratio!
    Would a football game need to end 4:0 for the win to be considered “clear”?

    We dug deeper. In another setting, with 90% user preference, the reviewer still said the improvement was “not significant.”
    Apparently neither 87% nor 90% was enough. 🙄
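
    For what it’s worth, the math backs up the indignation. The post doesn’t say how many participants took part, so take 100 purely as an illustrative assumption: an 87-out-of-100 split is astronomically unlikely if users really had no preference. A quick sanity check with a two-sided binomial test (the participant count is hypothetical, not from the study):

    ```python
    from scipy.stats import binomtest

    # Hypothetical numbers: the post reports an 87% preference rate but not the
    # study size, so 100 participants is an assumption made purely for illustration.
    n_participants = 100
    n_prefer_ours = 87

    # Two-sided test against the null hypothesis that users pick either method
    # with equal probability (0.5), i.e. "no real preference".
    result = binomtest(n_prefer_ours, n_participants, p=0.5)
    print(f"p-value: {result.pvalue:.1e}")  # astronomically small, far below 0.05
    ```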


    😵‍💫 Example 4: "Not NeurIPS Material, Try CVPR"

    A friend submitted a paper to CVPR, and the reviewer wrote:

    “This work is not suitable for NeurIPS. I suggest submitting to CVPR.”

    Wait... it was submitted to CVPR.

    To be clear: this was a CVPR reviewer saying this, suggesting... the paper be submitted to CVPR.
    Make it make sense!


    🫥 Example 5: "Your New Method Is Too Obvious"

    We proposed a new image capture method that could improve image quality with proper post-processing.

    The reviewer said:

    “This paper makes no contribution. The results show that this processing improves image quality, but any method A would do the same.”

    In short: You’re not wrong, you’re just too obvious. 🤷


    🧠 Final Thoughts

    The peer review process can be frustrating, but remember: you're not alone. Even good work sometimes gets caught in bad reviews. What matters most is persistence.

    "Good research always finds its light."

    So next time you get an absurd review, maybe just laugh it off... and keep going.


    👋 Please register (verified or anonymous) to join the discussion below!

    • ntk01-pku (Super Users) wrote · #2

      Been there, felt that. Sometimes peer review feels more like roulette than rigor; but hey, good science endures beyond a stray reviewer’s “hot take.”

      © 2025 CSPaper.org Sidekick of Peer Reviews
      Debating the highs and lows of peer review in computer science.