Newcomb’s Paradox is a decision-theoretic puzzle in which an agent must choose between two boxes whose contents have already been fixed by a highly reliable predictor’s forecast of that very choice, creating a conflict between causal and evidential conceptions of rational choice.
At a Glance
- Type: paradox
- Attributed To: William Newcomb (popularized by Robert Nozick)
- Period: 1960s
- Validity: controversial
Setup and Basic Structure
Newcomb’s Paradox is a thought experiment in decision theory involving prediction, rational choice, and free will. It was originally formulated by the physicist William Newcomb and became widely known through a 1969 paper by philosopher Robert Nozick. The paradox arises from a tension between two plausible principles of rational choice.
The scenario runs as follows. You are presented with two boxes:
- Box A (transparent): visibly contains $1,000.
- Box B (opaque): contains either $1,000,000 or nothing.
A nearly infallible Predictor (sometimes described as a superintelligent being or advanced computer) has already predicted what you will do:
- If the Predictor predicted you would take only box B (one-boxing), it placed $1,000,000 in box B.
- If the Predictor predicted you would take both boxes A and B (two-boxing), it left box B empty.
The key stipulation is that by the time you choose, the Predictor’s action is complete and the contents of box B are fixed. You know the rules and the Predictor’s extraordinary reliability.
You now face a choice:
- One-box: take only box B.
- Two-box: take both boxes A and B.
The tension is that two apparently sound lines of reasoning recommend different choices:
- One line says you should one-box to “match” the Predictor and get $1,000,000.
- Another says you should two-box because whatever is in B is already determined, so you might as well also take the guaranteed $1,000.
Evidential vs. Causal Reasoning
The paradox is commonly framed as a conflict between evidential decision theory (EDT) and causal decision theory (CDT).
Evidential decision theory evaluates actions by the expected utility conditional on performing that action, treating the action as evidence about the state of the world.
- If you one-box, this is strong evidence that the Predictor predicted one-boxing and thus put $1,000,000 in B. So your expected payoff is close to $1,000,000.
- If you two-box, this is strong evidence that the Predictor predicted two-boxing and thus put nothing in B. So your expected payoff is about $1,000.
Because the Predictor is assumed highly reliable, EDT recommends one-boxing: the action that tends to “go together” with the million-dollar outcome.
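To make the evidential calculation concrete, here is a minimal worked example, assuming an illustrative Predictor reliability of 99% (the exact figure is not stipulated by the scenario):

```latex
% Evidential expected utilities under an assumed reliability p = 0.99
\begin{align*}
EU(\text{one-box}) &= 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000\\
EU(\text{two-box}) &= 0.99 \times \$1{,}000 + 0.01 \times \$1{,}001{,}000 = \$11{,}000
\end{align*}
```

On these numbers one-boxing comes out far ahead; indeed, it remains ahead for any assumed reliability above roughly 50.05%.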
Causal decision theory, by contrast, evaluates actions by their causal consequences. It asks: holding fixed all facts about the past (including the Predictor’s completed action), which choice causally produces the best outcome?
From the CDT perspective:
- The contents of B are already fixed when you choose.
- Your present choice cannot causally influence what is in B.
- If B contains $1,000,000, then taking both boxes gives $1,001,000 and taking only B gives $1,000,000.
- If B is empty, then taking both boxes gives $1,000 and taking only B gives $0.
In either case, two-boxing dominates one-boxing: whatever box B contains, taking both boxes yields exactly $1,000 more than taking only B. Thus CDT recommends two-boxing.
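The dominance reasoning can be summarized in a payoff matrix (a compact restatement of the cases just listed, using the stakes stipulated above):

```latex
% Payoffs by state of box B
\begin{array}{l|cc}
               & \text{B contains } \$1{,}000{,}000 & \text{B is empty} \\ \hline
\text{One-box} & \$1{,}000{,}000                    & \$0               \\
\text{Two-box} & \$1{,}001{,}000                    & \$1{,}000
\end{array}
```

Reading down either column, two-boxing pays $1,000 more; the one-boxer's reply is that the Predictor's reliability makes the top-left and bottom-right cells the ones agents actually tend to occupy.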
This yields the paradoxical clash:
- EDT: “Take only box B.”
- CDT: “Take both boxes.”
Each recommendation rests on a seemingly sound decision principle, pushing theorists to reconsider what “rationality” requires.
Philosophical Significance and Responses
Newcomb’s Paradox has become a central case study in rational choice theory, philosophy of science, and debates about free will, prediction, and time asymmetry.
1. Free will and prediction
The paradox invites questions about how a Predictor could be so accurate and what that implies about freedom:
- Some interpretations assume near-determinism or highly accurate psychological/physical laws.
- Others treat the Predictor’s reliability as a stipulation and focus on the logical tension rather than metaphysical feasibility.
Critics sometimes argue that the setup is inconsistent or unrealistic; defenders respond that decision theory routinely uses idealized scenarios to probe principles.
2. Rationality and dominance
Supporters of two-boxing emphasize dominance reasoning: if one action is at least as good in every possible outcome and better in some, it seems irrational to refuse it. On this view, the one-boxer is letting “magical thinking” about influencing the past override causal structure.
Supporters of one-boxing counter that rationality concerns global patterns of choice: an agent disposed to one-box would, in such environments, systematically end up richer than a two-boxer. They argue that a good decision theory should recommend the type of choice that, when coupled with accurate predictors, actually leads to better outcomes across similar situations.
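The claim that one-boxers systematically end up richer can be illustrated with a small simulation sketch in Python; the 99% accuracy figure and the repeated-trials framing are illustrative assumptions rather than part of the original scenario:

```python
# Illustrative simulation: average earnings of a committed one-boxer versus a
# committed two-boxer facing a Predictor of assumed accuracy 0.99.
import random

ACCURACY = 0.99      # assumed Predictor reliability (not fixed by the scenario)
TRIALS = 100_000     # number of simulated encounters per policy

def play(choice: str) -> int:
    """Return the payoff of a single Newcomb encounter for the given choice."""
    # The Predictor guesses the agent's choice correctly with probability ACCURACY.
    correct = random.random() < ACCURACY
    predicted = choice if correct else ("two-box" if choice == "one-box" else "one-box")
    box_b = 1_000_000 if predicted == "one-box" else 0  # B is filled only if one-boxing was predicted
    box_a = 1_000                                        # A always contains $1,000
    return box_b if choice == "one-box" else box_a + box_b

for policy in ("one-box", "two-box"):
    average = sum(play(policy) for _ in range(TRIALS)) / TRIALS
    print(f"{policy}: average payoff ≈ ${average:,.0f}")
```

On these assumptions the one-boxing policy averages close to $990,000 and the two-boxing policy close to $11,000, even though on any single already-filled pair of boxes two-boxing pays $1,000 more.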
3. Alternative frameworks
In response to the paradox, philosophers and decision theorists have developed or refined several approaches:
- Refined EDT and CDT: Some attempt to clarify the conditions under which each theory applies, or modify them to avoid counterintuitive implications.
- Functional decision theory (FDT) and timeless decision theory (TDT), both developed in more recent literature, treat your choice as the output of a mathematical function that the Predictor has modeled. On such views, you choose the policy whose output, given that it is predictable, yields the best overall outcome, often thereby endorsing one-boxing while still respecting a kind of causal reasoning.
- Game-theoretic treatments see the agent and Predictor as players in a (possibly asymmetric) game, exploring equilibrium concepts and the role of commitments and correlations.
4. Status of the paradox
There is no consensus resolution. The paradox is generally regarded as:
- A genuine tension within intuitive principles of rational choice.
- A useful tool to distinguish and test decision theories.
- A probe of how to treat correlations without causation in rational deliberation.
Some philosophers argue that Newcomb’s Paradox shows EDT is mistaken; others think it reveals a deficiency in CDT; others still hold that both capture different aspects of rationality or that the scenario smuggles in incoherent assumptions.
In contemporary discussions, Newcomb’s Paradox remains a standard reference point for exploring how agents should reason when their choices are predictable and correlated with past events that they cannot causally influence, but which nonetheless bear on the outcomes they care about. It continues to shape debates about what, if anything, a uniquely correct theory of rational decision-making can be.