17 June 2008

Decisions

I first ran across Newcomb's Problem when I was a teenager. Eliezer Yudkowsky at Overcoming Bias describes it thus:
A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.
  • Box A is transparent and contains a thousand dollars.
  • Box B is opaque, and contains either a million dollars, or nothing.
You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far — everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

Me, I take only box B, no doubt. But this is a pretty vexing problem in decision theory. Mr Yudkowsky describes the classic arguments thus:
One-boxer: “I take only box B, of course. I'd rather have a million than a thousand.”

Two-boxer: “Omega has already left. Either box B is already full or already empty. If box B is already empty, then taking both boxes nets me $1000, taking only box B nets me $0. If box B is already full, then taking both boxes nets $1,001,000, taking only box B nets $1,000,000. In either case I do better by taking both boxes, and worse by leaving a thousand dollars on the table — so I will be rational, and take both boxes.”

One-boxer: “If you're so rational, why ain'cha rich?”

Two-boxer: “It's not my fault Omega chooses to reward only people with irrational dispositions, but it's already too late for me to do anything about that.”
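
To put the "why ain'cha rich?" retort in numbers, here is a minimal sketch of the expected-payoff arithmetic (Python; the predictor accuracy p is an assumption added here, since all we're told is that Omega has gone 100 for 100). One-boxing comes out ahead whenever p is better than about 50.05%.

    # Expected dollars for each choice, assuming Omega predicts your actual
    # choice with accuracy p. The dollar amounts come from the problem
    # statement; p itself is an assumed parameter.
    def expected_payoffs(p):
        one_box = p * 1_000_000 + (1 - p) * 0                # B is full only if Omega foresaw one-boxing
        two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)  # B is full only if Omega wrongly expected one-boxing
        return one_box, two_box

    for p in (1.0, 0.99, 0.9, 0.5):
        print(p, expected_payoffs(p))

The two-boxer's dominance argument is untouched by this arithmetic, of course; that tension is exactly what makes the problem vexing.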

He goes on to observe something I didn't know.
“Verbal arguments for one-boxing are easy to come by, what's hard is developing a good decision theory that one-boxes” — coherent math which one-boxes on Newcomb's Problem without producing absurd results elsewhere.
I can see how that might be. He then goes on to dig into what the heck we mean by rationality in the first place, and to quote Miyamoto Musashi. Very cool, if you like that sort of thing.

2 comments:

  1. The version of this dilemma first told to me made the alien's prediction powers absolutely perfect. Even in your version, the two-boxer's argument is negating the premise, no?

    Of course we've also got to look at the utility functions. A thousand dollars would be awfully nice, but won't change my life in any significant way. One million dollars would be life-changing.

  2. Anonymous, 17 June 2008

    I would choose box B regardless. I'd rather take a big risk than a small gain.

    But then, look at my life.

