Sleeping Beauty and Logical Decision Theory

In "Self-Locating Belief and the Sleeping Beauty Problem," Adam Elga presents a thought experiment in which an ideally rational agent is woken up once (if the toss of a fair coin lands heads) or twice (if it lands tails). If the agent is woken a second time, they retain no memory of the first waking. The first time the agent is woken, they are asked what credence they assign to the coin toss having yielded heads. They are then told that it is Monday and asked the same question. If they are woken on Tuesday, they are again asked the same question. The problem is to determine what credence Sleeping Beauty should assign to the belief that the coin toss yielded heads at each stage of the thought experiment.
Adam Elga argued for one distribution; David Lewis and Nick Bostrom, in "Sleeping Beauty: reply to Elga" and Anthropic Bias respectively, have argued for another. I will argue that in a simulation of the problem, agents that take bets according to Elga's distribution have a higher expected payoff than agents that take bets according to the distribution suggested by Lewis and Bostrom. However, the distribution that Elga argues for violates a basic tenet of Bayesian epistemology: one should only change one's credence in a proposition when one learns something new that is more probable given the proposition in question. Elga has Beauty assigning "the coin came up heads" a 1/2 credence before they are sedated, and a 1/3 credence upon waking. Because no information is gained (or lost) in this process, this credence change is not a valid Bayesian update.
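The simulation claim above can be sketched in a few lines. The setup below is an assumption for illustration, not the paper's actual protocol: at every waking the agent is offered a ticket that pays $1 if the coin landed heads, at a fixed price, and buys it exactly when the price is below their credence in heads. A "thirder" (Elga's 1/3) and a "halfer" (Lewis/Bostrom's 1/2) then differ on tickets priced between 1/3 and 1/2:

```python
import random

def mean_payoff(credence, price, trials=100_000, seed=0):
    """Average payoff per run of the experiment for an agent who, at every
    waking, buys a $1-if-heads ticket at `price` iff price < credence.
    Heads -> one waking; tails -> two wakings (and thus two purchases)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        heads = rng.random() < 0.5
        wakings = 1 if heads else 2
        if price < credence:
            per_waking = (1.0 - price) if heads else -price
            total += wakings * per_waking
    return total / trials

# A ticket priced at 0.40 sits between 1/3 and 1/2:
# the thirder declines it, the halfer buys it at every waking.
thirder = mean_payoff(credence=1/3, price=0.40)  # declines: payoff 0
halfer  = mean_payoff(credence=1/2, price=0.40)  # buys: expected ~ -0.10
```

The halfer loses on average because tails doubles the number of purchases: the expected value of buying is 0.5(1 - p) + 0.5 · 2 · (-p), which is negative for any price p above 1/3.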
By treating the problem as a decision-theoretic one and using the formal and conceptual resources of logical decision theory, I will argue that we could construct an agent that accepts all and only the bets that an agent using Elga's distribution would accept, but does not violate any tenets of Bayesian epistemology; that is, we could construct an agent that assigns 1/2 at each stage of the thought experiment but retains rational betting behavior. By "logical decision theory" I mean formalisms like timeless decision theory, developed by Eliezer Yudkowsky, and updateless decision theory, developed by Wei Dai. In such theories, all versions of an agent are considered when making a choice, regardless of location in space or time, and as such Sleeping Beauty would make choices that maximize the total expected utility of all of their instances across time. This is not an ad hoc feature of these decision theories, but rather a central feature of their advantage over traditional decision theories.
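The key move above can be illustrated with a minimal sketch (my own simplification, using the hypothetical ticket setup from the previous example, not a full formalization of updateless decision theory): the agent keeps credence 1/2 in heads, but evaluates the policy "buy at every waking" by summing payoffs over all of its instances, one waking on heads and two on tails:

```python
def udt_style_accepts(price, p_heads=0.5):
    """Policy evaluation across instances: credence in heads stays at 1/2,
    but the payoff of accepting is counted once on heads and twice on tails
    (one per waking). Accept iff the policy's expected total payoff is > 0."""
    ev_accept = p_heads * 1 * (1.0 - price) + (1 - p_heads) * 2 * (-price)
    return ev_accept > 0

# Despite assigning 1/2 to heads, this agent accepts exactly the tickets
# priced below 1/3 -- all and only the bets a thirder would take.
cheap = udt_style_accepts(0.30)      # accepts
expensive = udt_style_accepts(0.40)  # declines
```

The factor of 2 on the tails branch does the work that the 1/3 credence does for Elga's agent, so the betting behavior coincides without any Bayesian update on waking.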
University of Maryland Baltimore County