  • Official announcement from CSPaper.org

    1 Topic · 2 Posts
    riverR: Love it! Seems pretty simple to use! Thanks!
  • 57 Topics · 196 Posts
    JoanneJ
    Pulling a NeurIPS all-nighter? I've already seen friends lose papers to instant rejections this week, so run through the checklist below before you lock in your submission.

    **1. "Placeholder" Title & Abstract**

    NeurIPS explicitly warns that titles or abstracts with little real content will be binned on sight. A single-sentence teaser like "We introduce a new semi-supervised algorithm" isn't enough.

    Quick rescue: open your draft in OpenReview and expand the abstract into a concise but information-rich paragraph (what problem, what method, what result?), then save. Don't wait until the final deadline; desk rejections for this reason are already rolling out.

    **2. Incomplete Author Profiles**

    Every author must have a complete OpenReview profile before the deadline. Required:

    | Field | What to do |
    | --- | --- |
    | Affiliations | List current + last 3 years |
    | DBLP link & publications | Import via the DBLP URL |
    | Advisor / Relations | Add supervisors, frequent co-authors, etc. |
    | Email | Prefer institutional addresses |

    DBLP 30-second guide:
    1. Search your name at https://dblp.org.
    2. Copy your author page URL.
    3. In OpenReview → Edit Profile → paste it into "DBLP" and click Import.
    4. Tick your papers, save.

    No publications yet? That's fine: the profile can still count as "complete" as long as you have made a best effort to fill in the fields above and your experience.

    **3. Missing Checklist in the PDF**

    The NeurIPS "paper checklist" must live inside the main PDF. Append it after the references (or after the appendix, if you have one). Copy the checklist block from neurips2025.tex, comment out the instruction lines between %%% BEGIN INSTRUCTIONS %%% and %%% END INSTRUCTIONS %%%, and answer every item. Skip this, and your paper may never even reach a reviewer.

    **Spot Anything Else?**

    If you know another desk-reject booby trap, drop a note below; your tip might save someone's semester. Good luck!
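    For step 3, here is a minimal sketch of where the checklist ends up in the main file. The style-file name, bibliography setup, and section heading below are illustrative assumptions; copy the actual checklist block (with its real items) from the official neurips2025.tex template rather than retyping it.

    ```latex
    \documentclass{article}
    \usepackage{neurips_2025}   % assumed style-file name; use the official one

    \begin{document}

    % ... title, abstract, paper body ...

    \bibliographystyle{plainnat}
    \bibliography{references}

    \appendix
    % ... optional appendix material ...

    % The paper checklist goes AFTER the references (and after the appendix,
    % if you have one). Paste the checklist block from the official template
    % here, comment out everything between %%% BEGIN INSTRUCTIONS %%% and
    % %%% END INSTRUCTIONS %%%, and give an answer plus justification for
    % every item.
    \section*{NeurIPS Paper Checklist}
    % ... checklist items from neurips2025.tex ...

    \end{document}
    ```

    Compile the PDF and confirm the checklist pages actually appear at the end before you upload.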
  • Anonymously share data, results, or materials. Useful for rebuttals, blind submissions and more. Only unverified users can post (and edit or delete anytime afterwards).

    4 Topics · 4 Posts
    H
    Impl. based on nr0034je9.zip.

    Table A: Model Performance on NLP Benchmarks

    | Model | SST-2 (Acc) | MNLI (Acc) | QNLI (Acc) | CoLA (Matthews) | Avg Score |
    | --- | --- | --- | --- | --- | --- |
    | BERT-Base | 91.2 | 84.6 | 90.1 | 58.2 | 81.0 |
    | RoBERTa-Base | 92.3 | 87.4 | 91.8 | 63.1 | 83.7 |
    | GPT-3 (175B) | 94.1 | 88.9 | 93.0 | 66.4 | 85.6 |
    | Our Method | 94.8 | 89.7 | 93.5 | 68.9 | 86.7 |

    Table B: Ablation Study on Model Components (Evaluated on MNLI)

    | Configuration | Attention Mechanism | Pretraining Corpus | MNLI (Acc) |
    | --- | --- | --- | --- |
    | Full Model | Multi-head Self-Attn | Custom + Public | 89.7 |
    | – w/o Custom Corpus | Multi-head Self-Attn | Public Only | 87.1 |
    | – w/o Attention Refinement Block | Basic Self-Attn | Custom + Public | 86.5 |
    | – w/o Positional Embeddings | Multi-head Self-Attn | Custom + Public | 85.2 |
    | – Random Initialization | — | — | 72.4 |