👋 Welcome! Feel free to register (verified or anonymous) and share your thoughts or story — your voice matters here! 🗣️💬
🚀 Now Live: Our AI-powered paper review tool is available in beta! Perfect for CS conference submissions — get fast, targeted feedback to improve your chances of acceptance.
👉 Try it now at review.cspaper.org
  • Official announcement from CSPaper.org

    4 Topics
    9 Posts
    JoanneJ
    Thank you for the information and the update. It's back online so quickly, amazing.
  • AI-powered paper reviews for top CS conferences — fast, targeted insights to help boost your acceptance odds. Discuss anything related to the CSPaper Review Tool at review.cspaper.org: ask questions, report issues, or suggest improvements.

    16 Topics
    20 Posts
    rootR
    We've noticed an issue in the AAAI 2026 Main Technical review system: the score distribution deviates from a realistic distribution, most likely because we do not yet have a large-scale benchmarking dataset in place. Our team has been actively monitoring the distribution and qualitatively identified this inconsistency, and we are now working to locate the root cause and implement a fix. Please stay tuned for updates. Importantly, the review content itself is reliable; for now, however, we recommend taking the numeric "Rating" with a grain of salt until the issue is resolved.
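(Illustrative aside, not CSPaper's actual pipeline: one standard way to detect the kind of drift described above is a chi-square goodness-of-fit test against a reference rating distribution. The reference probabilities, the 1-10 scale, and the `ratings_look_calibrated` helper below are all invented for illustration.)

```python
# Hypothetical sketch: flag when generated ratings drift from a reference
# distribution. The reference probabilities are invented placeholders, not
# CSPaper's actual calibration data.
from collections import Counter

from scipy import stats

# Ratings on an assumed 1-10 scale; expected frequencies are assumptions.
REFERENCE_PROBS = {1: 0.02, 2: 0.05, 3: 0.13, 4: 0.20, 5: 0.25,
                   6: 0.18, 7: 0.10, 8: 0.04, 9: 0.02, 10: 0.01}

def ratings_look_calibrated(ratings, alpha=0.01):
    """Chi-square goodness-of-fit: do observed ratings match the reference?"""
    counts = Counter(ratings)
    observed = [counts.get(score, 0) for score in REFERENCE_PROBS]
    expected = [p * len(ratings) for p in REFERENCE_PROBS.values()]
    result = stats.chisquare(f_obs=observed, f_exp=expected)
    return result.pvalue > alpha  # False => distribution has likely drifted

# Example: a batch of ratings clustered too tightly around 4-5 would fail.
print(ratings_look_calibrated([4, 5, 4, 5, 4, 5, 4, 5, 4, 5] * 20))
```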
  • 94 Topics
    287 Posts
    C
    lol a few days ago, AAAI just casually dropped the first-round "reviewer gift pack" and it was wild… people opened their dashboard and saw 8 papers at once. hit refresh? suddenly 11. felt like a surprise loot box nobody wanted.

    then on the same day, the system started pulling them back, like "oops, jk." now it seems to be 3 reviewers per paper… but yeah, still mostly reviewing direct competitors, so not exactly a win. and if you only got 1 paper in round one, chances are you'll be drafted again in round two. mental prep advised.

    the backdrop: 22k+ submissions this year. absolutely bonkers. the random assignment thing made a lot of us worried about another "Who is Adam" moment. the new reciprocal review rules might help a bit, but it still feels messy.

    funny part tho… some folks i know haven't been assigned a single paper yet, while others are buried alive. truly RNG (Random Number Generator) reviewer roulette at its finest.

    so yeah, any other reviewers got their "big pack" yanked overnight? would you rather deal with the madness of 8–11 papers, or this slow-burn redistribution? curious how y'all are coping.
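(Side note on the "RNG roulette": a toy simulation using the numbers from the post, 22k submissions and 3 reviews per paper, shows why uniform random assignment produces exactly this spread; the 10,000-reviewer pool size is a guess, not an official figure.)

```python
# Toy simulation of random reviewer assignment, using figures from the post
# (22k submissions, 3 reviews per paper). The reviewer-pool size of 10,000
# is a made-up assumption, not an official AAAI number.
import random
from collections import Counter

random.seed(0)
NUM_PAPERS = 22_000
REVIEWS_PER_PAPER = 3
NUM_REVIEWERS = 10_000  # assumption

load = Counter()
for _ in range(NUM_PAPERS):
    # Pick 3 distinct reviewers uniformly at random for each paper.
    for reviewer in random.sample(range(NUM_REVIEWERS), REVIEWS_PER_PAPER):
        load[reviewer] += 1

counts = [load.get(r, 0) for r in range(NUM_REVIEWERS)]
print(f"mean load: {sum(counts) / NUM_REVIEWERS:.1f}")  # ~6.6 papers each
print(f"max load:  {max(counts)}")                      # some buried alive
print(f"zero load: {sum(c == 0 for c in counts)}")      # some get nothing
```

Under uniform sampling the per-reviewer load is roughly Poisson with mean 6.6, so a long tail of overloaded reviewers sitting next to a handful with nothing at all is the expected outcome, not a glitch.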
  • Anonymously share data, results, or materials. Useful for rebuttals, blind submissions, and more. Only unverified users can post (and they can edit or delete their posts at any time afterwards).

    4 Topics
    4 Posts
    H
    Implementation based on nr0034je9.zip.

    Table A: Model Performance on NLP Benchmarks

    Model          | SST-2 (Acc) | MNLI (Acc) | QNLI (Acc) | CoLA (Matthews) | Avg Score
    BERT-Base      | 91.2        | 84.6       | 90.1       | 58.2            | 81.0
    RoBERTa-Base   | 92.3        | 87.4       | 91.8       | 63.1            | 83.7
    GPT-3 (175B)   | 94.1        | 88.9       | 93.0       | 66.4            | 85.6
    Our Method     | 94.8        | 89.7       | 93.5       | 68.9            | 86.7

    Table B: Ablation Study on Model Components (Evaluated on MNLI)

    Configuration                    | Attention Mechanism  | Pretraining Corpus | MNLI (Acc)
    Full Model                       | Multi-head Self-Attn | Custom + Public    | 89.7
    – w/o Custom Corpus              | Multi-head Self-Attn | Public Only        | 87.1
    – w/o Attention Refinement Block | Basic Self-Attn      | Custom + Public    | 86.5
    – w/o Positional Embeddings      | Multi-head Self-Attn | Custom + Public    | 85.2
    Random Initialization            | N/A                  | N/A                | 72.4
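(Quick sanity check, mine rather than the uploader's: the Avg Score column in Table A is the plain mean of the four benchmark scores.)

```python
# Verify that Table A's "Avg Score" column is the mean of the four benchmarks.
table_a = {
    "BERT-Base":    ((91.2, 84.6, 90.1, 58.2), 81.0),
    "RoBERTa-Base": ((92.3, 87.4, 91.8, 63.1), 83.7),
    "GPT-3 (175B)": ((94.1, 88.9, 93.0, 66.4), 85.6),
    "Our Method":   ((94.8, 89.7, 93.5, 68.9), 86.7),
}
for model, (scores, reported) in table_a.items():
    avg = sum(scores) / len(scores)
    # Agrees with the reported value to within one-decimal rounding.
    assert abs(avg - reported) < 0.051, model
    print(f"{model}: computed {avg:.3f} -> reported {reported}")
```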