
What Is AI P(Doom)? A Clear Explanation

P(doom) is shorthand for "probability of doom," a term widely used in artificial intelligence safety, existential risk, and longtermist communities to describe the estimated likelihood that advanced AI systems could lead to catastrophic outcomes for humanity. It is not a formal scientific theory, mathematical model, or empirically validated forecast. Instead, it is a conversational and strategic shorthand: a way to compress deep uncertainty about AI's long-term trajectory into a single number for discussion, prioritization, and decision-making. The phrase gained traction in online forums like LessWrong, within the Effective Altruism movement, and among AI alignment researchers. When someone cites their p(doom), say 10% or 50%, they are expressing a subjective belief about how likely it is that the development of highly capable, potentially autonomous AI systems could result in human extinction, permanent loss of human control over critical systems, or irreversible societal collapse.
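Because a p(doom) figure is simply a subjective probability, it can be handled like any other probability estimate. A minimal sketch in Python (the helper names here are illustrative, not a standard API) shows one common transformation in forecasting discussions: converting between probability and odds, which makes small estimates easier to compare:

```python
def prob_to_odds(p: float) -> float:
    """Convert a probability (0 < p < 1) to odds in favor."""
    return p / (1 - p)

def odds_to_prob(odds: float) -> float:
    """Convert odds in favor back to a probability."""
    return odds / (1 + odds)

# Illustrative estimates from the text: 10% and 50%.
for p in (0.10, 0.50):
    print(f"p(doom) = {p:.0%} -> odds of {prob_to_odds(p):.2f} : 1")
```

A 50% p(doom) corresponds to even (1:1) odds, while 10% corresponds to roughly 0.11:1, i.e. about 1:9 against. The round-trip conversion is exact, which is why odds are a convenient intermediate form when averaging or updating such estimates.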