Where do we go from here, viewed through the lens of top-tier CS conference rules?
Many flagship venues have now staked out clear positions.
ICML and AAAI, for instance, continue to prohibit any significant LLM-generated text in submissions unless it's explicitly part of the paper's experiments (in other words, no undisclosed LLM-written paragraphs).
NeurIPS and the ACL family of conferences permit the use of generative AI tools but insist on transparency: authors must openly describe how such tools were used, especially if they influenced the research methodology or content.
Meanwhile, ICLR adopts a more permissive stance, allowing AI-assisted writing with only gentle encouragement toward responsible use (there is no formal disclosure requirement beyond not listing an AI as an author).
With those positions staked out, what might the next phase look like? Perhaps something like the following:
One disclosure form to rule them all: expect a standard section (akin to ACL's Responsible NLP Checklist, but applied across venues) where authors tick boxes: what tool was used, what prompt was given, at which stage, and what human edits were applied (a minimal sketch of such a record follows this list).
Built-in AI-trace scanners at submission: Springer Nature's "Geppetto" tool has shown it's feasible to detect AI-generated text; conference submission platforms (CMT/OpenReview) might adopt similar detectors to nudge authors toward honesty before reviewers ever see the paper.
Fine-grained permission tiers: "grammar-only" AI assistance stays exempt from reporting, but any AI involvement in drafting ideas, claims, or code would trigger a mandatory appendix detailing the prompts used and the post-editing steps taken.
Authorship statements 2.0: we'll likely keep forbidding LLMs as listed authors, yet author contribution checklists could expand to include items like "AI-verified output," "dataset curated via AI," or "AI-assisted experiment design," acknowledging more nuanced roles of AI in the research.
Cross-venue integrity task-forces: program chairs from NeurIPS, ICML, and ACL could share a blacklist of repeat violators (much as journals share plagiarism data) and harmonize sanctions across conferences to present a united front on misconduct.
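To make the disclosure-form and permission-tier ideas concrete, here is a minimal sketch in Python of what a machine-readable disclosure record might look like. Every field name, tier, and rule below is a hypothetical illustration of the ideas above, not any venue's actual schema or policy.

```python
"""A hypothetical sketch of a cross-venue AI-use disclosure record (not a real schema)."""

from dataclasses import dataclass, field
from enum import Enum


class UsageTier(Enum):
    GRAMMAR_ONLY = "grammar-only"    # spelling/grammar polishing; exempt from reporting
    DRAFTING = "drafting"            # AI helped draft ideas, claims, or code
    EXPERIMENT = "experiment"        # the LLM itself is part of the paper's experiments


@dataclass
class AIDisclosure:
    tool: str                                         # which tool was used
    stage: str                                        # at which stage it was used
    tier: UsageTier
    prompts: list[str] = field(default_factory=list)  # prompts given, if reportable
    human_edits: str = ""                             # post-editing applied to the output

    def requires_appendix(self) -> bool:
        # Hypothetical rule from the "fine-grained permission tiers" idea:
        # anything beyond grammar-only triggers a mandatory prompts-and-edits appendix.
        return self.tier is not UsageTier.GRAMMAR_ONLY


# Example: a submission where an LLM drafted part of the related-work section.
disclosure = AIDisclosure(
    tool="generic LLM assistant",
    stage="related-work drafting",
    tier=UsageTier.DRAFTING,
    prompts=["Summarize prior work on X in two paragraphs."],
    human_edits="Rewrote claims and verified all citations manually.",
)
print(disclosure.requires_appendix())  # True -> appendix with prompts and edits required
```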
Or… will we settle for a loose system, with policies diverging year by year and enforcement struggling to keep pace?
Your call: Is the field marching toward transparent, template-driven co-writing with AI, or are we gearing up for the next round of cat-and-mouse?