List of p(doom) values

p(doom) is the probability of very bad outcomes (e.g. human extinction) as a result of AI. It most often refers to the likelihood of AI taking over from humanity, but other scenarios can also constitute "doom": for example, a large portion of the population dying from a novel biological weapon created by AI, social collapse caused by a large-scale cyberattack, or AI triggering a nuclear war. Note that not everyone uses the same definition when stating their p(doom); most notably, the time horizon is often unspecified, which makes comparisons difficult.

Click the p(doom) percentage to open the source.
  • Yann LeCun
    One of the three godfathers of AI; Chief AI Scientist at Meta

    (less likely than an asteroid impact)
  • Forecasting Research Institute Superforecasters

    (From the same study: domain experts estimated AI x-risk at 3% and AI catastrophe at 12%)
  • Vitalik Buterin
    Ethereum founder

  • Machine learning researchers

    (Mean from a 2023 survey; median values were 5-10%, depending on question design)
  • Lina Khan
    Chair of the FTC

  • Elon Musk
    CEO of Tesla, SpaceX, X

  • Dario Amodei
    CEO of Anthropic

  • Yoshua Bengio
    One of the three godfathers of AI

  • Emmett Shear
    Co-founder of Twitch, former interim CEO of OpenAI

  • AI safety researchers

    (Mean from 44 AI safety researchers in 2021)
  • Geoff Hinton
    One of the three godfathers of AI

    (Recently said it is "kinda 50-50" whether things go well for humanity; earlier he gave 10%.)
  • Scott Alexander
    Popular Internet blogger at Astral Codex Ten

  • Eli Lifland
    Top competitive forecaster

  • AI engineers

    (Estimated mean value; the survey methodology may be flawed)
  • Joep Meindertsma
    Founder of PauseAI

    (The remaining 60% consists largely of "we can pause".)
  • Paul Christiano
    Head of AI safety at the US AI Safety Institute; formerly at OpenAI; founder of ARC

  • Holden Karnofsky
    Co-founder of Open Philanthropy

  • Jan Leike
    Former alignment lead at OpenAI

  • Zvi Mowshowitz
    Independent AI safety journalist

  • Daniel Kokotajlo
    Forecaster & former OpenAI researcher

  • Dan Hendrycks
    Head of Center for AI Safety

  • Eliezer Yudkowsky
    Founder of MIRI

  • Roman Yampolskiy
    AI safety scientist

Do something about it

However high your p(doom) is, you probably agree that we should not allow AI companies to gamble with our future. Join PauseAI to prevent them from doing so.