👋 Welcome! Feel free to register (verified or anonymous) and share your thoughts or story — your voice matters here! 🗣️💬
🚀 Now Live: Our AI-powered paper review tool is available in beta! Perfect for CS conference submissions: get fast, targeted feedback to improve your chances of acceptance.
👉 Try it now at review.cspaper.org
  • Official announcement from CSPaper.org

    2 Topics · 3 Posts
Tired of submitting your CS paper and waiting months for conference reviews? Wish you had early feedback tailored to ICML, NeurIPS, ICLR, or KDD before submission? We're excited to launch CSPaper Reviews, a new AI-powered reviewer tool built to help researchers quickly understand how their paper might fare at top-tier computer science conferences. [image: 1751218629044-b9f8bf55-6f55-4f27-a7e5-459e97da10a7.png]

What is CSPaper Reviews?
CSPaper Reviews simulates the peer review process using AI agents tailored to mimic the style and expectations of academic conferences. Whether you're submitting to ICML, NeurIPS, ICLR, KDD, or others, CSPaper offers:
- Structured, conference-style feedback
- Strengths and weaknesses of your draft
- Actionable revision suggestions
- A saved history of your reviews
- All in under one minute
Just specify an arXiv link or upload your PDF, select your target conference, and hit Review (a rough scripting sketch follows this post).
Try it now: https://review.cspaper.org [image: 1751487499761-cspaper-attention-all-u-need.png]

🧪 Why Early Access?
This is a preview release, and we're actively seeking feedback to improve. We welcome input from:
- Authors who want to test how useful the feedback is before official submission
- Reviewers who can assess whether this could be a useful aid (or threat!) to the peer review process
We're especially interested in how it helps, where it falls short, and how it could evolve responsibly.

🧠 Sample Use Cases
- Early feedback on clarity, novelty, and relevance
- Dry run for positioning papers for different venues
- Mentorship tool for first-time authors and student researchers
- Reviewer assistant for triaging or mentoring

We Need Your Voice
If you're part of the CS research community, your insights are invaluable. Please try it out and share your thoughts:
- Feedback: support@cspaper.org
- Try it here: review.cspaper.org
- Community forum: cspaper.org

A Note on Responsible Use
This tool does not replace human reviewers, nor is it meant to. Instead, it aims to support iteration and raise the baseline before submission. Like any new technology in peer review, transparency and critique are key. We hope to make the submission process a bit less opaque and a lot more accessible.

The CSPaper Team
Built with and for the research community.
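For readers who would rather script the workflow above (point the tool at an arXiv link or a PDF, pick a target conference, request a review), here is a minimal sketch of what such a call could look like. To be clear, this is illustrative only: the announcement describes a web UI at review.cspaper.org, and the endpoint URL, payload fields, and response shape below are assumptions, not a documented API.

```python
# Hypothetical sketch only. CSPaper Reviews is used via the web UI at
# https://review.cspaper.org; the endpoint, parameters, and response
# fields below are assumptions for illustration, not a documented API.
import requests

REVIEW_ENDPOINT = "https://review.cspaper.org/api/review"  # hypothetical

def request_review(arxiv_url: str, venue: str) -> dict:
    """Ask for a simulated conference-style review of an arXiv paper (hypothetical API)."""
    payload = {
        "arxiv_url": arxiv_url,  # a PDF upload would be the alternative input
        "conference": venue,     # e.g. "ICML", "NeurIPS", "ICLR", "KDD"
    }
    resp = requests.post(REVIEW_ENDPOINT, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()  # expected fields: strengths, weaknesses, revision suggestions

if __name__ == "__main__":
    review = request_review("https://arxiv.org/abs/1706.03762", "NeurIPS")
    print(review)
```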
  • AI-powered paper reviews for top CS conferences — fast, targeted insights to help boost your acceptance odds. Discuss anything related to the CSPaper Review Tool at review.cspaper.org: ask questions, report issues, or suggest improvements.

    1 Topic · 1 Post
  • 74 Topics · 244 Posts
By CSPaper.org, based on the position paper "The Artificial Intelligence and Machine Learning Community Should Adopt a More Transparent and Regulated Peer Review Process" (Jing Yang, ICML 2025)

As submission numbers to top AI and ML conferences exceed 10,000 annually, the peer review system is under unprecedented strain. In response, a growing movement advocates a more transparent, participatory, and regulated approach to peer review, anchored by tools like Paper Copilot, a community-driven analytics platform that aggregates and visualizes review process data from conferences such as ICLR, NeurIPS, CVPR, and ICML. This article unpacks the findings of the ICML 2025 position paper by Jing Yang, which draws on two years of insights from Paper Copilot, and outlines a case for open and structured review systems in AI/ML.

🧭 What is Paper Copilot?
Paper Copilot is an independently developed platform designed to democratize access to peer review metrics. Built by a PhD student without institutional backing, it has reached 200,000+ active users from 177 countries, especially early-career researchers aged 18–34.

Core features:
- Community-submitted and API-collected review scores, confidence levels, and discussion logs (a toy aggregation sketch follows this article).
- Visualizations of review timelines, score distributions, and author statistics.
- Interactive analysis of conference-level engagement, user demographics, and score evolution over time.
[image: 1751555329365-screenshot-2025-07-03-at-17.08.37.png] Global user distribution map
[image: 1751555426367-screenshot-2025-07-03-at-17.10.12.png] Engagement by age/gender

Review Models: Fully Open, Partially Open, and Closed
The paper categorizes conferences into three disclosure modes:
- Fully Open (e.g., ICLR): all reviews and discussions visible from the start.
- Partially Open (e.g., NeurIPS): reviews released post-decision.
- Fully Closed (e.g., ICML, CVPR): no public review content at any stage.
[image: 1751555523329-screenshot-2025-07-03-at-17.11.51.png] Review disclosure preferences across conferences and years
Despite the rise of platforms like OpenReview, many conferences still opt for closed or partially open settings, often citing concerns about reviewer anonymity, misuse of ideas, or company IP protection.

Community Engagement and Evidence of Demand
The paper uses traffic analytics to demonstrate the appetite for transparency:
- Organic search dominance: 59.9% of traffic comes from search engines; researchers are actively looking for peer review statistics.
- User behavior: conferences with open review modes (like ICLR) see 4–6x more engagement (views, active users, session duration) than closed ones.
[image: 1751555633799-screenshot-2025-07-03-at-17.13.40.png] Views, engagement time, and CTR by review model

🧠 Benefits of Fully Open Reviews
The paper documents several benefits tied to open reviewing:
- Deeper discussion: ICLR features broader and more active discussion threads than NeurIPS or ICML, with some threads exceeding 70 replies.
- Less reviewer overconfidence: public exposure encourages more careful, measured reviews; confidence scores are more balanced in open settings.
- Transparent dialogue: real-time visibility facilitates constructive debate and reproducibility.
[image: 1751555697806-screenshot-2025-07-03-at-17.14.45.png]

Challenges with Closed Review Systems
The paper identifies systemic flaws in closed reviews:
- Inexperienced reviewers: younger researchers (aged 18–24) are often overburdened and untrained, leading to uneven review quality.
- AI-generated reviews: the opaque nature of closed systems makes it difficult to detect LLM-generated or plagiarized content.
- Authorship inconsistencies: name changes after acceptance have gone untracked, highlighting accountability gaps.

Community Speaks: Survey Results
A user survey on Paper Copilot found that 57% of respondents would willingly share their review scores even for closed-review venues like CVPR, indicating clear grassroots demand for transparency across conference formats and subfields.

🧭 Addressing Concerns: Balancing Openness and Protection
While supporting transparency, the paper acknowledges valid counterarguments:
- Plagiarism risks: open submissions might expose novel ideas prematurely.
- IP concerns for industry: open preprints can jeopardize patents in "first-to-file" jurisdictions such as the U.S.
- Reviewer reluctance: public visibility may discourage bold, critical feedback.
The authors suggest that default transparency with opt-out protections, especially for industrial or high-risk research, offers a feasible compromise.

🧾 Conclusion
The position paper does not propose transparency for its own sake. It offers a data-backed argument that transparent peer review:
- encourages richer academic discourse,
- reduces opacity and potential misconduct,
- empowers early-career researchers, and
- aligns with the community-driven values of open science.
As AI/ML continues to scale, the research community must evolve its review mechanisms accordingly, embracing openness not just as a feature but as a foundational norm.
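To make the score-aggregation idea above concrete, here is the toy sketch referenced in the Core Features list: a per-disclosure-mode summary of community-submitted review scores, roughly the kind of statistic a platform like Paper Copilot visualizes. This is not Paper Copilot's code, and the record format and numbers are placeholders, not real conference data.

```python
# Toy sketch of per-disclosure-mode review-score aggregation.
# Not Paper Copilot's actual code; the record format and the numbers
# below are placeholders, not real conference data.
from collections import defaultdict
from statistics import mean, pstdev

# (conference, disclosure mode, community-submitted review score)
records = [
    ("ICLR", "fully_open", 6), ("ICLR", "fully_open", 8),
    ("NeurIPS", "partially_open", 5), ("NeurIPS", "partially_open", 7),
    ("ICML", "closed", 4), ("ICML", "closed", 6),
]

def summarize_by_mode(rows):
    """Group submitted scores by disclosure mode and return count/mean/spread per mode."""
    by_mode = defaultdict(list)
    for _conference, mode, score in rows:
        by_mode[mode].append(score)
    return {
        mode: {"n": len(scores), "mean": mean(scores), "std": pstdev(scores)}
        for mode, scores in by_mode.items()
    }

print(summarize_by_mode(records))
# e.g. {'fully_open': {'n': 2, 'mean': 7, 'std': 1.0}, ...}
```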
  • Anonymously share data, results, or materials. Useful for rebuttals, blind submissions and more. Only unverified users can post (and edit or delete anytime afterwards).

    4 Topics · 4 Posts
Impl. based on nr0034je9.zip.

Table A: Model Performance on NLP Benchmarks

| Model | SST-2 (Acc) | MNLI (Acc) | QNLI (Acc) | CoLA (Matthews) | Avg Score |
|---|---|---|---|---|---|
| BERT-Base | 91.2 | 84.6 | 90.1 | 58.2 | 81.0 |
| RoBERTa-Base | 92.3 | 87.4 | 91.8 | 63.1 | 83.7 |
| GPT-3 (175B) | 94.1 | 88.9 | 93.0 | 66.4 | 85.6 |
| Our Method | 94.8 | 89.7 | 93.5 | 68.9 | 86.7 |

Table B: Ablation Study on Model Components (Evaluated on MNLI)

| Configuration | Attention Mechanism | Pretraining Corpus | MNLI (Acc) |
|---|---|---|---|
| Full Model | Multi-head Self-Attn | Custom + Public | 89.7 |
| w/o Custom Corpus | Multi-head Self-Attn | Public Only | 87.1 |
| w/o Attention Refinement Block | Basic Self-Attn | Custom + Public | 86.5 |
| w/o Positional Embeddings | Multi-head Self-Attn | Custom + Public | 85.2 |
| Random Initialization | — | — | 72.4 |
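Reading Table A, the "Avg Score" column appears to be the unweighted mean of the four task scores rounded to one decimal place; that reading is an inference from the numbers, not something stated in the post. The short check below verifies that the reported averages are consistent with it.

```python
# Check that "Avg Score" in Table A is consistent with the unweighted mean
# of the four task scores (SST-2, MNLI, QNLI, CoLA), up to one-decimal rounding.
rows = {
    "BERT-Base":    ([91.2, 84.6, 90.1, 58.2], 81.0),
    "RoBERTa-Base": ([92.3, 87.4, 91.8, 63.1], 83.7),
    "GPT-3 (175B)": ([94.1, 88.9, 93.0, 66.4], 85.6),
    "Our Method":   ([94.8, 89.7, 93.5, 68.9], 86.7),
}
for model, (scores, reported) in rows.items():
    avg = sum(scores) / len(scores)
    print(f"{model}: mean = {avg:.3f}, reported Avg Score = {reported}")
    # allow for rounding to one decimal place (plus a hair of float error)
    assert abs(avg - reported) <= 0.05 + 1e-9
```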