P(doom) is shorthand for "probability of doom," a term widely used in artificial intelligence safety, existential risk, and longtermist communities to describe the estimated likelihood that advanced AI systems could lead to catastrophic outcomes for humanity. It is not a formal scientific theory, mathematical model, or empirically validated forecast. Instead, it is a conversational and strategic shorthand—a way to compress deep uncertainty about AI's long-term trajectory into a single number for discussion, prioritization, and decision-making.
The phrase gained traction in online forums such as LessWrong, within the Effective Altruism movement, and among AI alignment researchers. When someone cites their p(doom), say 10% or 50%, they are expressing a subjective belief about how likely it is that developing highly capable, potentially autonomous AI systems will result in human extinction, permanent loss of human control over critical systems, irreversible societal collapse, or other outcomes from which recovery is impossible.
What counts as "doom" varies by context. For some, it means literal human extinction. For others, it includes scenarios where AI systems permanently disempower humanity, lock in authoritarian governance, or cause irreversible value drift—where the future no longer reflects human intentions or flourishing. Because the definition is flexible, p(doom) estimates are highly sensitive to how one frames the risk.
Estimates of p(doom) span an enormous range. Some researchers and technologists assign it a probability below 1%, arguing that AI progress will remain incremental, that safety research is keeping pace, and that human institutions can adapt. Others, particularly in technical AI safety, assign probabilities above 50%, citing concerns about deceptive alignment, rapid capability jumps, optimization pressure, and the difficulty of specifying human values in a way that scales to superintelligent systems.
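One way to see why the range is so wide is to decompose p(doom) into conditional stages and multiply them, in the spirit of staged analyses such as Joseph Carlsmith's report on power-seeking AI. The sketch below is a hypothetical illustration, not anyone's published estimate: the stage labels and numbers are assumptions chosen only to show how modest per-stage disagreements compound into the sub-1% versus above-50% gap.

```python
# Toy decomposition of p(doom) into conditional stages.
# All stage labels and probabilities are illustrative assumptions.

def p_doom(stage_probs):
    """Multiply conditional stage probabilities into one headline number."""
    result = 1.0
    for p in stage_probs:
        result *= p
    return result

# Hypothetical per-stage beliefs for two observers:
#   [transformative AI arrives, it is misaligned given arrival,
#    misalignment escalates to catastrophe]
optimist  = [0.5, 0.1, 0.1]
pessimist = [0.9, 0.7, 0.8]

print(f"optimist:  {p_doom(optimist):.3f}")   # 0.005, i.e. 0.5%
print(f"pessimist: {p_doom(pessimist):.3f}")  # 0.504, i.e. ~50%
```

Disagreements of only a factor of a few at each stage multiply into a hundredfold gap in the headline figure, which is one reason published estimates diverge as sharply as they do.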
Several key factors shape where someone lands on this spectrum. Takeoff speed matters: a slow, predictable progression of AI capabilities allows time for course correction, while a fast or discontinuous takeoff could outpace governance and safety efforts. Alignment progress is another major variable—if we cannot reliably ensure that advanced AI systems pursue goals that are robustly beneficial and interpretable, risk rises. Deployment models also influence estimates: tightly controlled, tool-like AI poses different risks than autonomous agents that can self-improve, replicate, or operate in open environments. Finally, the state of global coordination—whether nations and companies can cooperate on safety standards, compute tracking, and emergency protocols—plays a decisive role in many risk assessments.
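Because each of these factors is deeply uncertain, a single point estimate can hide how much the answer swings with the inputs. One common way to expose that sensitivity is to treat each stage as a distribution and sample, as in the minimal Monte Carlo sketch below; the Beta parameters here are invented for illustration, not calibrated to any survey or forecast.

```python
import random

random.seed(0)
N = 100_000

def sample_p_doom():
    # Each conditional stage is drawn from a Beta distribution; every
    # parameter is an illustrative assumption, not a real forecast.
    arrival  = random.betavariate(4, 3)  # capable systems arrive in time to matter
    misalign = random.betavariate(2, 4)  # alignment efforts fail, given arrival
    escalate = random.betavariate(2, 3)  # failure escalates to catastrophe
    return arrival * misalign * escalate

samples = sorted(sample_p_doom() for _ in range(N))
mean = sum(samples) / N
print(f"mean: {mean:.3f}")
print(f"5th-95th percentile: "
      f"{samples[int(0.05 * N)]:.3f} to {samples[int(0.95 * N)]:.3f}")
```

Even with the priors held fixed, the resulting interval is wide, which echoes the point above: where someone lands depends less on the arithmetic than on which assumptions about takeoff, alignment, deployment, and coordination they feed in.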
Critics argue that p(doom) is too vague to be useful. Because it lacks empirical grounding and depends heavily on philosophical priors, it can become a rhetorical device rather than a tool for action. Some worry that focusing on extreme tail risks distracts from pressing, near-term AI challenges like bias, misinformation, labor disruption, cybersecurity threats, and concentration of power. Others caution that high p(doom) estimates can foster fatalism or justify overly restrictive policies that stifle beneficial innovation.
Despite these limitations, p(doom) serves several practical functions. It helps researchers and funders prioritize work on the most consequential problems. It enables policymakers to gauge the urgency of regulatory intervention. And for executives and investors, it acts as a proxy for an organization's risk posture—revealing whether a team is thinking seriously about second-order consequences and long-term accountability.
By 2026, the conversation around p(doom) has evolved. Rather than debating precise probabilities, many in the field now emphasize empirical safety research, scalable oversight techniques, interpretability tools, and governance frameworks that can adapt to unexpected capability gains. Scenario planning, red-teaming, and stress-testing have become more valuable than assigning a single number to an inherently uncertain future.
In essence, AI p(doom) is less about predicting the end of the world and more about asking the right questions early: What could go wrong at scale? How do we build systems that remain controllable, transparent, and aligned as they grow more capable? And what safeguards must be in place before deployment? The number itself may never be settled, but the discipline of confronting catastrophic risk—however unlikely—is what makes the concept enduringly relevant. For anyone shaping the future of AI, engaging with p(doom) is not about embracing doomism. It is about practicing strategic humility, investing in resilience, and ensuring that the most powerful technologies we create remain firmly in the service of human flourishing.
