
Peer Review in Computer Science: good, bad & broken

Discuss everything about peer review in computer science research: its successes, failures, and the challenges in between.

This category can be followed from the open social web via the handle cs-peer-review-general@cspaper.org

57 Topics 196 Posts

Subcategories


  • Discuss peer review challenges in AI/ML research — submission, review quality, bias, and decision appeals at ICLR, ICML, NeurIPS, AAAI, IJCAI, AISTATS and COLT.

    26 Topics
    116 Posts
    Joanne
Pulling a NeurIPS all-nighter? I've already seen friends lose papers to instant rejections this week, so run through the checklist below before you lock in your submission.

1. "Placeholder" Title & Abstract
NeurIPS explicitly warns that titles or abstracts with little real content will be binned on sight. A single-sentence teaser like "We introduce a new semi-supervised algorithm" isn't enough.
Quick rescue: open your draft in OpenReview and expand the abstract into a concise but information-rich paragraph: what problem, what method, what result? Then save. Don't wait until the final deadline; desk rejections for this reason are already rolling out.

2. Incomplete Author Profiles
Every author must have a complete OpenReview profile before the deadline. Required fields and what to do:
- Affiliations: list your current affiliation plus the last 3 years.
- DBLP link & publications: import via your DBLP URL.
- Advisor / Relations: add supervisors, frequent co-authors, etc.
- Email: prefer institutional addresses.
DBLP 30-second guide: search your name at https://dblp.org, copy your author page URL, then in OpenReview → Edit Profile → paste it into "DBLP" and click Import. Tick your papers and save. No publications yet? That's fine: the profile can still count as complete as long as you have made a best effort to fill in the fields above and your experience.

3. Missing Checklist in the PDF
The NeurIPS paper checklist must live inside the main PDF. Append it after the references (or after the appendix, if you have one): copy the checklist block from neurips2025.tex, comment out the instruction lines between %%% BEGIN INSTRUCTIONS %%% and %%% END INSTRUCTIONS %%%, and answer every item. Skip this, and your paper may never even reach a reviewer. (A small sanity-check sketch follows this post.)

Spot Anything Else?
If you know another desk-reject booby trap, drop a note below; your tip might save someone's semester. Good luck!
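Since a missing checklist block, or instruction lines left uncommented, is easy to overlook at 3 a.m., here is a minimal sanity-check sketch for point 3 that you could run over your LaTeX source before uploading. The filename main.tex is a hypothetical placeholder, and the script assumes you pasted the checklist block together with the %%% BEGIN INSTRUCTIONS %%% / %%% END INSTRUCTIONS %%% markers from neurips2025.tex; adjust it to your project and treat it as a rough helper, not an official check.

import pathlib

# Hypothetical filename: point this at your own main LaTeX file.
tex_path = pathlib.Path("main.tex")
lines = tex_path.read_text(encoding="utf-8").splitlines()

problems = []
inside_instructions = False
for lineno, line in enumerate(lines, start=1):
    if "BEGIN INSTRUCTIONS" in line:
        inside_instructions = True
        continue
    if "END INSTRUCTIONS" in line:
        inside_instructions = False
        continue
    # Everything between the two markers is meant to be commented out with '%'.
    if inside_instructions and line.strip() and not line.lstrip().startswith("%"):
        problems.append(f"line {lineno}: instruction text is not commented out")

# Very rough check that a checklist section exists at all.
if not any("checklist" in line.lower() for line in lines):
    problems.append("no checklist section found; did you paste the block from neurips2025.tex?")

print("\n".join(problems) if problems else "Checklist block looks OK; still give the compiled PDF a final visual pass.")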
  • Discuss peer review challenges, submission experiences, decision fairness, reviewer quality, and biases at CVPR, ICCV, ECCV, VR, SIGGRAPH, EUROGRAPHICS, ICRA, IROS, RSS etc.

    5 Topics
    11 Posts
    Joanne
    I also heard some negative ones (screenshots attached). That said, the attached policy excerpts suggest ICCV doesn't allow the use of ChatGPT etc. in the review process. There's also a screenshot giving the reviewer side of the story, though it doesn't say why.
  • Discuss peer review, submission experiences, and decision challenges for NLP research at ACL, EMNLP, NAACL, and COLING.

    6 Topics
    19 Posts
    Joanne
The original post from EMNLP 2025:

Criteria for Determining Irresponsible Reviewers

This post accompanies the ARR post "Changes to reviewer volunteering requirement and incentives" and defines the specific criteria for what we will deem "Highly Irresponsible" reviewers. While the focus here is reviewers, we will use similar criteria for determining irresponsible Area chairs (ACs).

Non-submitted reviews
If a reviewer fails to submit their reviews by the official deadline and has not submitted a personal emergency declaration (note: declaring a personal emergency after the review deadline will not be considered), they will automatically be flagged as "Highly Irresponsible".

Extremely terse or unprofessional reviews
Where the submissions are good-faith work that merits a serious review (otherwise a short review can suffice, assuming it clearly explains the fundamental problems with that work), reviews that only contain a single argument (1-2 sentences) and no constructive feedback should be flagged. We may also penalize reviews that are extremely unprofessional in tone (e.g., rude, racist, sexist, or ableist content; I4 in the list of 12 commonly reported issues), even if they are otherwise detailed.

Here are some guidelines for determining whether to consider a submission to be in good faith. At minimum, a good-faith article states the contribution up front and provides an evaluation that supports it. If the writing is so poor that the intended contribution can't be identified, or the article is missing an evaluation positioned as supporting it, then the article does not warrant a serious review. If the issue is just that the stated contribution is not clear, or the evaluation is not sufficient or rigorous enough, that does warrant a serious review. Furthermore, if the paper shows naivete about the state of the art, it still warrants a serious review; but if it shows a complete lack of awareness of work in the field (for example, if virtually all of the citations are from another field), then it is not a good-faith submission. Even interdisciplinary papers should show an awareness of the audience they are submitting their work to.

LLM-generated reviews
As per the ACL Policy on Publication Ethics, it is acceptable to use LLMs for paraphrasing, grammatical checks and proof-reading, but not for the content of the (meta-)reviews. Furthermore, the content of both the submission and the (meta-)review is confidential. Therefore, even for acceptable purposes such as proofreading, it must not be passed on to non-privacy-preserving third parties, such as commercial LLM services, which may store it. Authors will be able to flag such cases and present any evidence they have to support their allegation. While there is no definitive way of determining whether a review was (entirely) generated by an LLM, the Program Chairs will review the evidence and only proceed in cases where there is no reasonable doubt.

Flagging review process
The process is specified in the ARR post. Ultimately, all decisions will be made by the Program Chairs after a careful review of all evidence. Reviewers/ACs will be able to appeal to the publication ethics committee (https://www.aclweb.org/adminwiki/index.php/Process_for_ACL_Publication_Ethics_Review) if they want to dispute the Program Chairs' decisions.

Updated: May 06, 2025
  • SIGKDD, SIGMOD, ICDE, CIKM, WSDM, VLDB, ICDM and PODS

    3 Topics
    22 Posts
    Hsi Ping Li
    @river Many thanks for your details!
  • ICSE, OSDI, SOSP, POPL, PLDI, FSE/ESEC, ISSTA, OOPSLA and ASE

    1 Topic
    2 Posts
    root
    It seems CCF is revising the list again: https://www.ccf.org.cn/Academic_Evaluation/By_category/2025-05-09/841985.shtml
  • HCI, CSCW, UbiComp, UIST, EuroVis and IEEE VIS

    0 Topics
    0 Posts
    No new posts.
  • Anything around peer review for conferences such as SIGIR, WWW, ICMR, ICME, ECIR, ICASSP and ACM MM.

    1 Topic
    1 Post
    river
Recently, someone surfaced (again) a method to query the decision status of a paper submission before the official release for ICME 2025. By sending requests to a specific API endpoint in the CMT system (https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)), one can see the submission status via a StatusId field, where 1 means pending, 2 indicates acceptance, and 3 indicates rejection.

This trick is not limited to ICME 2025. It appears that the same method can be applied to several other conferences, including IJCAI, ICME, ICASSP, IJCNN and ICMR.

However, it is important to emphasize that using this technique violates the fairness and integrity of the peer-review process. Exploiting such a loophole undermines the confidentiality and impartiality that are essential to academic evaluations. This is a potential breach of academic ethics, and an official fix is needed to prevent abuse.

Below is a simplified Python script that demonstrates how this status monitoring might work. Warning: This code is provided solely for educational purposes to illustrate the vulnerability. It should not be used to bypass proper review procedures.

import requests
import time
import smtplib
from email.mime.text import MIMEText
from email.header import Header
import logging

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("submission_monitor.log"),
        logging.StreamHandler()
    ]
)

# List of submission URLs to monitor (replace 'Your_paper_id' accordingly)
SUBMISSION_URLS = [
    "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)",
    "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)"
]

# Email configuration (replace with your actual details)
EMAIL_CONFIG = {
    "smtp_server": "smtp.qq.com",
    "smtp_port": 587,
    "sender": "your_email@example.com",
    "password": "your_email_password",
    "receiver": "recipient@example.com"
}

def get_status(url):
    """
    Check the submission status from the provided URL.
    Returns the status ID and a success flag.
    """
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0',
            'Accept': 'application/json',
            'Referer': 'https://cmt3.research.microsoft.com/ICME2025/',
            # Insert your cookie here after logging in to CMT
            'Cookie': 'your_full_cookie'
        }
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 200:
            data = response.json()
            status_id = data.get("StatusId")
            logging.info(f"URL: {url}, StatusId: {status_id}")
            return status_id, True
        else:
            logging.error(f"Failed request. Status code: {response.status_code} for URL: {url}")
            return None, False
    except Exception as e:
        logging.error(f"Error while checking status for URL: {url} - {e}")
        return None, False

def send_notification(subject, message):
    """
    Send an email notification with the provided subject and message.
    """
    try:
        msg = MIMEText(message, 'plain', 'utf-8')
        msg['Subject'] = Header(subject, 'utf-8')
        msg['From'] = EMAIL_CONFIG["sender"]
        msg['To'] = EMAIL_CONFIG["receiver"]
        server = smtplib.SMTP(EMAIL_CONFIG["smtp_server"], EMAIL_CONFIG["smtp_port"])
        server.starttls()
        server.login(EMAIL_CONFIG["sender"], EMAIL_CONFIG["password"])
        server.sendmail(EMAIL_CONFIG["sender"], [EMAIL_CONFIG["receiver"]], msg.as_string())
        server.quit()
        logging.info(f"Email sent successfully: {subject}")
        return True
    except Exception as e:
        logging.error(f"Failed to send email: {e}")
        return False

def monitor_submissions():
    """
    Monitor the status of submissions continuously.
    """
    notified = set()
    logging.info("Starting submission monitoring...")
    while True:
        for url in SUBMISSION_URLS:
            if url in notified:
                continue
            status, success = get_status(url)
            if success and status is not None and status != 1:
                email_subject = f"Submission Update: {url}"
                email_message = f"New StatusId: {status}"
                if send_notification(email_subject, email_message):
                    notified.add(url)
                    logging.info(f"Notification sent for URL: {url} with StatusId: {status}")
        if all(url in notified for url in SUBMISSION_URLS):
            logging.info("All submission statuses updated. Ending monitoring.")
            break
        time.sleep(60)  # Wait for 60 seconds before checking again

if __name__ == "__main__":
    monitor_submissions()

Parting thoughts
While the discovery of this loophole may seem like an ingenious workaround, it is fundamentally unethical and a clear violation of the fairness expected in academic peer review. Exploiting such vulnerabilities not only compromises the integrity of the review process but also undermines trust in scholarly communication. We recommend that the CMT system administrators implement an official fix to close this gap. The academic community should prioritize fairness and the preservation of rigorous, unbiased review standards over any short-term gains that might come from exploiting such flaws.
  • Anything around peer review for conferences such as ISCA, FAST, ASPLOS, EuroSys, HPCA, SIGMETRICS, FPGA and MICRO.

    1 Topic
    2 Posts
    root
    R.I.P. USENIX ATC ...