In boardrooms, venture capital firms, and regulatory hearings alike, a single shorthand phrase has taken root: p(doom). It sounds like a cinematic exaggeration, but in the world of artificial intelligence strategy, it is a serious metric. Short for probability of doom, it represents the estimated chance that advanced AI systems could trigger catastrophic outcomes, ranging from irreversible loss of human agency to systemic civilizational disruption.

Despite its dramatic name, p(doom) is not a formal scientific theory. It is a decision-making heuristic, a risk posture indicator, and increasingly, a strategic conversation starter for executives, investors, and policymakers navigating an unprecedented technological inflection point.

The concept emerged from AI safety and longtermist research communities, where analysts needed a way to compress complex uncertainty into a single number for discussion, resource allocation, and policy prioritization. Unlike climate models or epidemiological for...