When Bias Slips the Peer Review Net: reflections on the NeurIPS 2024 keynote incident

Posted in: Peer Review in Computer Science: good, bad & broken
Tagged: neurips2024keynoteincident
    lelecao (Super Users) wrote, last edited by lelecao · #1

    The peer review process has long been upheld as the gatekeeper of quality, rigor, and integrity in academic discourse. Yet, as the recent NeurIPS 2024 keynote incident has reminded us, peer review does not extend its reach to one of the most visible stages in our field: the invited keynote.

    [Embedded YouTube video]

    In her keynote talk, titled "How to optimize what matters most", MIT professor Rosalind Picard presented a slide quoting a Chinese student who had been expelled from a top university for AI misuse: "Nobody at my school taught us morals or values". The slide added Picard’s own commentary: "Most Chinese who I know are honest and morally upright".

    This statement triggered widespread backlash, with AI researchers such as Jiao Sun of Google DeepMind and Yuandong Tian of Meta calling out the racial undertones and the singling out of a nationality as discriminatory.

    Picard has since issued a public apology, acknowledging that referencing the student’s nationality was "unnecessary" and "caused unintended negative associations". NeurIPS organizers also released a statement clarifying that the comment violated their code of conduct and would be addressed internally.

    But this raises a deeper issue: How did such a racially loaded narrative make it into one of the most prestigious speaking slots in AI without any checks?

    In peer-reviewed papers, reviewers scrutinize phrasing, bias, and even the implications of an algorithm’s performance on diverse populations. Yet keynotes, arguably more visible and influential, operate outside the peer review system. A keynote speaker’s reputation is often deemed sufficient vetting. This incident, however, illustrates how even well-meaning researchers can unintentionally perpetuate harmful stereotypes when unchecked.

    What’s more concerning is the narrative framing: invoking a student’s nationality to make a moral point, and generalizing from a single anecdote to an entire culture. It underscores a blind spot in AI research: ethical responsibility is not just about the fairness of our models, but also the fairness of our words and stories.

    If NeurIPS had subjected keynote slides or abstracts to a light peer review (or even a DEI advisory group), this incident might have been avoided. The community deserves keynote talks that not only inspire technically, but also model the inclusive values we preach in our papers.

    This is a wake-up call. If we expect AI models to mitigate bias, we must also ask whether our academic institutions and conferences are holding themselves to the same standard. Bias is not just a data problem; it’s a human problem. And no, peer review won’t solve everything, but perhaps it should go a bit further than we thought.


    🛎 💡 Your thoughts? How can major conferences better safeguard against such missteps? Should keynotes be more rigorously reviewed, both ethically and technically?

    © 2025 CSPaper.org Sidekick of Peer Reviews
    Debating the highs and lows of peer review in computer science.