Here’s a puzzle from a recent issue of Quanta, an online science magazine:
Puzzle 1: I write down two different numbers that are completely unknown to you, and hold one in my left hand and one in my right. You have absolutely no idea how I generated these two numbers. Which is larger?
You can point to one of my hands, and I will show you the number in it. Then you can decide to either select the number you have seen or switch to the number you have not seen, held in the other hand. Is there a strategy that will give you a greater than 50% chance of choosing the larger number, no matter which two numbers I write down?
At first it seems the answer is no. Whatever number you see, the other number could be larger or smaller. There’s no way to tell. So obviously you can’t get a better than 50% chance of picking the hand with the larger number—even if you’ve seen one of those numbers!
But “obviously” is not a proof. Sometimes “obvious” things are wrong!
It turns out that, amazingly, the answer to the puzzle is yes! You can find a strategy to do better than 50%. But the strategy uses randomness. So, this puzzle is a great illustration of the power of randomness.
If you want to solve it yourself, stop now or read Quanta magazine for some clues—they offered a small prize for the best answer:
• Pradeep Mutalik, Can information rise from randomness?, Quanta, 7 July 2015.
Greg Egan gave a nice solution in the comments to this magazine article, and I’ll reprint it below along with two followup puzzles. So don’t look down there unless you want a spoiler.
I should add: the most common mistake among educated readers seems to be assuming that the first player, the one who chooses the two numbers, chooses them according to some probability distribution. Don’t assume that. They are simply arbitrary numbers.
The history of this puzzle
I’d seen this puzzle before—do you know who invented it? On G+, Hans Havermann wrote:
I believe the origin of this puzzle goes back to (at least) John Fox and Gerald Marnie’s 1958 betting game ‘Googol’. Martin Gardner mentioned it in his February 1960 column in Scientific American. Wikipedia mentions it under the heading ‘Secretary problem’. Gardner suggested that a variant of the game was proposed by Arthur Cayley in 1875.
Actually the game of Googol is a generalization of the puzzle that we’ve been discussing. Martin Gardner explained it thus:
Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred 0s) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.
So, the puzzle I just showed you is the special case when there are just 2 slips of paper. I seem to recall that Gardner incorrectly dismissed this case as trivial!
There’s been a lot of work on Googol. Julien Berestycki writes:
I heard about this puzzle a few years ago from Sasha Gnedin. He has a very nice paper about this
• Alexander V. Gnedin, A solution to the game of Googol, Annals of Probability (1994), 1588–1595.
One of the many beautiful ideas in this paper is that it asks what is the best strategy for the guy who writes the numbers! It also cites a paper by Gnedin and Berezowskyi (of oligarchic fame).
Egan’s solution
Okay, here is Greg Egan’s solution, paraphrased a bit:
Pick some function $f \colon \mathbb{R} \to (0,1)$ such that:

• $\lim_{x \to -\infty} f(x) = 0$

• $\lim_{x \to +\infty} f(x) = 1$

• $f$ is strictly increasing: if $x < y$ then $f(x) < f(y)$

There are lots of functions like this, for example

$$f(x) = \frac{e^x}{e^x + 1}$$
Next, pick one of the first player’s hands at random. If the number you are shown is $x$, compute $f(x)$. Then generate a uniformly distributed random number $z$ between 0 and 1. If $z$ is less than or equal to $f(x)$, guess that $x$ is the larger number, but if $z$ is greater than $f(x)$, guess that the larger number is in the other hand.
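Here is a minimal sketch of this strategy in Python. The helper names and the choice of the logistic function for $f$ are my own illustrative assumptions, not part of Egan’s comment; any strictly increasing function satisfying the three conditions above would do.

```python
import math
import random

def f(x):
    """A strictly increasing map from the real line to (0, 1): the logistic function."""
    if x >= 0:
        return 1 / (1 + math.exp(-x))
    return math.exp(x) / (1 + math.exp(x))  # algebraically equal form that avoids overflow for very negative x

def egan_guess(shown, rng=random):
    """Decide whether to stick with the number we were shown.

    Returns True (stick with the shown number) with probability f(shown),
    and False (switch to the other hand) otherwise.
    """
    z = rng.random()  # uniformly distributed random number between 0 and 1
    return z <= f(shown)
```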
The probability of guessing correctly can be calculated as the probability of seeing the larger number initially and then, correctly, sticking with it, plus the probability of seeing the smaller number initially and then, correctly, choosing the other hand.
Say the larger number is $x$ and the smaller one is $y$. Then the probability of guessing correctly is

$$\frac{1}{2} f(x) + \frac{1}{2} \bigl(1 - f(y)\bigr) = \frac{1}{2} + \frac{1}{2} \bigl(f(x) - f(y)\bigr)$$

This is strictly greater than $\frac{1}{2}$, since $x > y$ implies $f(x) > f(y)$, so $f(x) - f(y) > 0$.
So, you have a more than 50% chance of winning! But as you play the game, there’s no way to tell how much more than 50%. If the numbers in the other player’s hands are very large, or very small, your chance will be just slightly more than 50%.
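Continuing the sketch above, a quick Monte Carlo experiment (my own check, not part of the original solution) makes the formula concrete: for a fixed pair of numbers the empirical win rate should settle near $\frac{1}{2} + \frac{1}{2}(f(x) - f(y))$.

```python
def simulate(a, b, trials=100_000, rng=random):
    """Estimate the chance of ending up with the larger of a and b using Egan's strategy."""
    larger = max(a, b)
    wins = 0
    for _ in range(trials):
        # The first player's two hands; we look at one of them at random.
        shown, hidden = (a, b) if rng.random() < 0.5 else (b, a)
        chosen = shown if egan_guess(shown, rng) else hidden
        wins += (chosen == larger)
    return wins / trials

# With the logistic f, the predicted win rate for the pair (2, 1) is
# 0.5 + 0.5 * (f(2) - f(1)) ≈ 0.575.
print(simulate(2.0, 1.0))
```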
Followup puzzles
Here are two more puzzles:
Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.
Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?
But watch out—here come Egan’s solutions to those!
Solutions
Egan writes:
Here are my answers to your two puzzles on G+.
Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.
Answer: If we adopt a deterministic strategy, that means there is a function $f \colon \mathbb{R} \to \{0, 1\}$ that tells us whether or not we stick with the number $x$ when we see it. If $f(x) = 1$ we stick with it, if $f(x) = 0$ we swap it for the other number.

If the two numbers are $x$ and $y$, with $x > y$, then the probability of success will be:

$$\frac{1}{2} f(x) + \frac{1}{2} \bigl(1 - f(y)\bigr) = \frac{1}{2} + \frac{1}{2} \bigl(f(x) - f(y)\bigr)$$

This is exactly the same as the formula we obtained when we stuck with $x$ with probability $f(x)$, but we have specialised to functions $f$ valued in $\{0, 1\}$.

We can only guarantee a more than 50% chance of choosing the larger number if $f$ is monotonically increasing everywhere, i.e. $f(x) > f(y)$ whenever $x > y$. But this is impossible for a function valued in $\{0, 1\}$.
To prove this, define $x_0$ to be any number in $\mathbb{R}$ such that $f(x_0) = 1$; such an $x_0$ must exist, otherwise $f$ would be constant on $\mathbb{R}$ and hence not monotonically increasing. Similarly define $x_1$ to be any number in $\mathbb{R}$ such that $x_1 > x_0$. We then have $x_1 > x_0$, but $f(x_1) \le 1 = f(x_0)$, so $f$ cannot be monotonically increasing after all.
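To see this failure concretely, here is a small self-contained example of my own (not from Egan’s comment): take a hypothetical deterministic rule such as ‘stick if and only if the number you see is positive’. The first player defeats it by writing two numbers on the same side of that threshold, so the decision never depends on which hand was opened, and the success probability is exactly 50%.

```python
def deterministic_rule(x):
    """A hypothetical deterministic strategy: stick exactly when x > 0 (f(x) = 1 for x > 0, else 0)."""
    return x > 0

def success_probability(rule, a, b):
    """Exact success probability of a deterministic rule against the fixed pair (a, b).

    Each hand is shown first with probability 1/2.
    """
    larger = max(a, b)
    p = 0.0
    for shown, hidden in [(a, b), (b, a)]:
        chosen = shown if rule(shown) else hidden
        p += 0.5 * (chosen == larger)
    return p

# Both numbers positive: the rule always sticks, winning only when shown the larger number.
print(success_probability(deterministic_rule, 3.0, 7.0))    # 0.5
# Both numbers negative: the rule always swaps, again winning exactly half the time.
print(success_probability(deterministic_rule, -3.0, -7.0))  # 0.5
```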
Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?
Answer: As Philip Gibbs noted, a deterministic pseudo-random number generator is still deterministic. Using a specific sequence of algorithmically random bits

$$b_1, b_2, b_3, \dots$$

to construct a number $z$ between $0$ and $1$ means $z$ takes on the specific value:

$$z_0 = \sum_{i=1}^{\infty} \frac{b_i}{2^i}$$

So rather than sticking with $x$ with probability $f(x)$ for our monotonically increasing function $f \colon \mathbb{R} \to (0,1)$, we end up always sticking with $x$ if $z_0 \le f(x)$, and always swapping if $z_0 > f(x)$. This is just using a function $g \colon \mathbb{R} \to \{0, 1\}$ as in Puzzle 2, with:

$$g(x) = 1 \quad \text{if } f(x) \ge z_0$$

$$g(x) = 0 \quad \text{if } f(x) < z_0$$
So all the same consequences as in Puzzle 2 apply, and we cannot guarantee a more than 50% chance of choosing the larger number.
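The collapse is easy to see numerically as well. In this sketch (my own illustration, reusing the logistic $f$ from before), freezing the ‘random’ number at a specific value $z_0$ turns Egan’s strategy into the deterministic rule ‘stick iff $f(x) \ge z_0$’, and the first player can hold us to exactly 50% by putting both numbers below the threshold $f^{-1}(z_0)$ (or both above it).

```python
import math

def f(x):
    """The same strictly increasing logistic function as before, mapping the reals to (0, 1)."""
    if x >= 0:
        return 1 / (1 + math.exp(-x))
    return math.exp(x) / (1 + math.exp(x))

def frozen_strategy(shown, z0):
    """Egan's strategy with the randomness frozen at the specific value z0: stick iff z0 <= f(shown)."""
    return z0 <= f(shown)

z0 = 0.75
threshold = math.log(z0 / (1 - z0))  # f^{-1}(z0) for the logistic, about 1.0986

# The first player writes two numbers strictly below the threshold (here 0.5 and 1.0),
# so the frozen strategy always swaps and wins in exactly one of the two equally likely cases.
outcomes = []
for shown, hidden in [(0.5, 1.0), (1.0, 0.5)]:
    chosen = shown if frozen_strategy(shown, z0) else hidden
    outcomes.append(chosen == max(shown, hidden))
print(outcomes)  # [True, False] -> success probability exactly 1/2
```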
Puzzle 3 emphasizes the huge gulf between ‘true randomness’, where we only have a probability distribution of numbers $z$, and the situation where we have a specific number $z_0$ generated by any means whatsoever.

We could generate $z_0$ using a pseudorandom number generator, radioactive decay of atoms, an oracle whose randomness is certified by all the Greek gods, or whatever. No matter how randomly $z_0$ is generated, once we have it, we know there exist choices for the first player that will guarantee our defeat!
This may seem weird at first, but if you think about simple games of luck you’ll see it’s completely ordinary. We can have a more than 50% chance of winning such a game even if, for any particular play we make, the other player has a move that ensures our defeat. That’s just how randomness works.