Okay, we’re almost done! We’ve been studying Nash equilibria for zero-sum 2-player normal form games. We proved a lot of things about them, but now we’ll wrap up the story by proving this:
Grand Theorem. For every zero-sum 2-player normal-form game, a Nash equilibrium exists. Moreover, a pair of mixed strategies for the two players is a Nash equilibrium if and only if each strategy is a maximin strategy.
Review
Let’s remember what we’ve proved in Part 16 and Part 18:
Theorem 1. For any zero-sum 2-player normal form game,
$$ \max_p \min_q \; p \cdot A q \;\le\; \min_q \max_p \; p \cdot A q $$
where $A$ is player A’s payoff matrix, $p$ ranges over mixed strategies for player A, $q$ ranges over mixed strategies for player B, and $p \cdot A q$ is A’s expected payoff.
Theorem 2. Given a zero-sum 2-player normal form game for which a Nash equilibrium exists, we have
$$ \max_p \min_q \; p \cdot A q \;=\; \min_q \max_p \; p \cdot A q $$
This equation is the condition we’ve been calling ★.
Theorem 3. If $(p, q)$ is a Nash equilibrium for a zero-sum 2-player normal-form game, then $p$ is a maximin strategy for player A and $q$ is a maximin strategy for player B.
Theorem 4. Suppose we have a zero-sum 2-player normal form game for which ★ holds. If $p$ is a maximin strategy for player A and $q$ is a maximin strategy for player B, then $(p, q)$ is a Nash equilibrium.
The plan
Today we’ll prove two more results. The first one is easy if you know some topology. The second one is the real heart of the whole subject:
Theorem 5. For every zero-sum 2-player normal-form game, a maximin strategy exists for each player.
Theorem 6. For every zero-sum 2-player normal-form game, ★ holds.
Putting all these results together, it’s easy to get our final result:
Grand Theorem. For every zero-sum 2-player normal-form game, a Nash equilibrium exists. Moreover, a pair of mixed strategies for the two players is a Nash equilibrium if and only if each strategy is a maximin strategy.
Proof. By Theorem 6 we know that ★ holds. By Theorem 5 we know that there exist maximin strategies for each player, say $p$ for player A and $q$ for player B. Theorem 4 says that if $p$ and $q$ are maximin strategies and ★ holds, then $(p, q)$ is a Nash equilibrium. So, a Nash equilibrium exists.
Moreover, if $(p, q)$ is any Nash equilibrium, Theorem 3 says $p$ and $q$ are maximin strategies. Conversely, since ★ holds, Theorem 4 says that if $p$ and $q$ are maximin strategies, then $(p, q)$ is a Nash equilibrium. █
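To see the Grand Theorem in action, here is a tiny numerical check on a made-up example of my own — matching pennies, with maximin strategies $(1/2, 1/2)$ for both players taken as given rather than computed. It verifies that this pair of maximin strategies really is a Nash equilibrium; checking pure-strategy deviations is enough, since a mixed strategy’s payoff is an average of pure-strategy payoffs.

```python
# Sanity check of the Grand Theorem on a made-up example (matching pennies).
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])       # player A's payoffs; B's payoffs are the negatives
p = q = np.array([0.5, 0.5])      # maximin strategies (assumed known for this example)

value = p @ A @ q                 # expected payoff to A at (p, q); here 0.0

# p is a best reply to q: no pure strategy for A does better than the value
assert (A @ q).max() <= value + 1e-12
# q is a best reply for B: no pure strategy for B pushes A's payoff below the value
assert (p @ A).min() >= value - 1e-12
print("the pair of maximin strategies is a Nash equilibrium; value =", value)
```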
Maximin strategies exist
Okay, let’s dive in and get to work:
Theorem 5. For every zero-sum 2-player normal-form game, a maximin strategy exists for each player.
Proof. We’ll prove this only for player A, since the proof for player B is similar. Remember that a maximin strategy for player A is a mixed strategy that maximizes A’s security level, which is a function of A’s mixed strategy $p$:
$$ p \;\mapsto\; \min_q \; p \cdot A q $$
So, we just need to show that this function really has a maximum. To do this, we note that
$$ p \;\mapsto\; \min_q \; p \cdot A q $$
is a continuous function defined on a compact set: the set of all mixed strategies for player A. As mentioned at the start of Part 17, this guarantees that it has a maximum. █
I apologize if this proof is hard to understand. All this stuff is standard if you know some topology, and a huge digression if you don’t, so I won’t go through the details. This is a nice example of how topology can be useful in other subjects!
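By the way, Theorem 5 only says a maximin strategy exists. For a concrete game you can actually compute one by maximizing the security level with linear programming. Here is a minimal sketch — my own illustration, not part of the course — using SciPy and a made-up 2×3 payoff matrix.

```python
# Computing a maximin strategy for player A by linear programming.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, -1.0, 0.0],
              [-2.0, 3.0, 1.0]])   # a made-up payoff matrix for player A
m, n = A.shape

# Variables: p_1, ..., p_m (A's mixed strategy) and v (A's security level).
# Maximize v subject to  sum_i p_i A_ij >= v  for every column j,
# with p_i >= 0 and sum_i p_i = 1.
c = np.zeros(m + 1); c[-1] = -1.0          # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])  # -(p^T A)_j + v <= 0 for each j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[-1]
print("maximin strategy for A:", p)        # should be about (0.6, 0.4) for this matrix
print("A's security level (max min):", v)  # should be about 0.4
```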
The key theorem
Now we finally reach the heart of the whole subject: von Neumann’s minimax theorem. Our proof will be a condensed version of the one in Andrew Colman’s 1982 book Game Theory and its Applications in the Social and Biological Sciences.
Theorem 6. For every zero-sum 2-player normal-form game,
$$ \max_p \min_q \; p \cdot A q \;=\; \min_q \max_p \; p \cdot A q $$
holds.
Proof. Let’s write
$$ \max \min \;=\; \max_p \min_q \; p \cdot A q $$
and
$$ \min \max \;=\; \min_q \max_p \; p \cdot A q $$
Our goal is to prove ★, which says
$$ \max \min \;=\; \min \max $$
By Theorem 1 we know
$$ \max \min \;\le\; \min \max $$
So, we just need to prove
$$ \max \min \;\ge\; \min \max $$
Here’s how we will do this. We will prove
$$ \min \max > 0 \;\implies\; \max \min \ge 0 $$
Since we’ll prove this for any game of the sort we’re studying, it’ll be true even if we add some real number $c$ to each entry of the payoff matrix $A$. Doing this adds $c$ to the expected payoff $p \cdot A q$ (because the probabilities $p_i$ sum to 1, and so do the $q_j$), so it adds $c$ to $\max \min$ and to $\min \max$. So, it will follow that
$$ \min \max > -c \;\implies\; \max \min \ge -c $$
for any real number $c$, and this implies
$$ \max \min \;\ge\; \min \max $$
(If instead we had $\max \min < \min \max$, we could choose $-c$ strictly between them and get a contradiction.)
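Here is a quick numerical sanity check of that shifting step, using a random made-up matrix (my own illustration): adding a constant $c$ to every payoff entry adds exactly $c$ to the expected payoff, because the entries of $p$ and of $q$ each sum to 1.

```python
# Adding c to every entry of A adds exactly c to the expected payoff p.A q.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))     # a random payoff matrix
p = rng.dirichlet(np.ones(2))   # a random mixed strategy for A
q = rng.dirichlet(np.ones(3))   # a random mixed strategy for B
c = 1.7

assert np.isclose(p @ (A + c) @ q, p @ A @ q + c)
```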
So, let’s get going.
Assume $\min \max > 0$. To prove that $\max \min \ge 0$, remember that
$$ \max \min \;=\; \max_p \min_q \; p \cdot A q $$
To show this is greater than or equal to zero, we just need to find some mixed strategy $p$ for player A such that
$$ \min_q \; p \cdot A q \;\ge\; 0 $$
In other words, we need to find $p$ such that
$$ p \cdot A q \;\ge\; 0 $$
for all mixed strategies $q$ for player B; call this condition ★★.
How can we find $p$ for which ★★ is true? The key is to consider the set
$$ C \;=\; \{ A q : \; q \text{ is a mixed strategy for player B} \} \;\subseteq\; \mathbb{R}^m $$
where $m$ is the number of pure strategies for player A, so $A q$ is the vector whose $i$th component $(A q)_i$ is A’s expected payoff when A plays the pure strategy $i$ against the mixed strategy $q$.
For example, for one particular choice of payoff matrix $A$, the set $C$ looks like this:
[Figure: the set $C$ for the example payoff matrix.]
Since $\min \max > 0$, for any mixed strategy $q$ for player B we have
$$ \max_p \; p \cdot A q \;\ge\; \min_q \max_p \; p \cdot A q \;=\; \min \max \;>\; 0 $$
so there must exist a mixed strategy $p$ with
$$ p \cdot A q \;>\; 0 $$
Since $p$ is a mixed strategy, we have $p_i \ge 0$ for all $i$. But since we’ve just seen
$$ p \cdot A q \;=\; \sum_i p_i \, (A q)_i \;>\; 0 $$
at least one of the numbers $(A q)_i$ must be positive. In other words, if we define a set $N$ by
$$ N \;=\; \{ x \in \mathbb{R}^m : \; x_i \le 0 \text{ for all } i \} $$
then $A q$ can’t be in this set:
$$ A q \;\notin\; N $$
So, we’ve seen that no point in $C$ can be in $N$:
$$ C \cap N \;=\; \emptyset $$
Here’s what it looks like in our example:
[Figure: the sets $C$ and $N$, which don’t intersect.]
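If you want to check the claim $C \cap N = \emptyset$ numerically, here is a small experiment of my own, reusing the made-up 2×3 matrix from the earlier sketch (which has $\min \max > 0$): sample lots of mixed strategies $q$ and confirm that $A q$ always has at least one positive coordinate.

```python
# Empirical check that no point Aq lies in the nonpositive orthant N.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, -1.0, 0.0],
              [-2.0, 3.0, 1.0]])               # same made-up matrix as before
for _ in range(10_000):
    q = rng.dirichlet(np.ones(A.shape[1]))     # a random mixed strategy for B
    assert (A @ q).max() > 0                   # some coordinate of Aq is positive, so Aq is not in N
print("no sampled point of C landed in N")
```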
Now the trick is to:
• let $c$ be a point in $C$ that’s as close as possible to $N$,
and
• let $n$ be a point in $N$ that’s as close as possible to $c$.
We need to use a bit of topology to be sure these points exist, since it means finding the minima of certain functions (namely, distances). But let’s not worry about that now! We’ll complete the proof with two lemmas:
Lemma 1. $c_i \ge n_i$ for all $i$, and $c_i > n_i$ for at least one $i$.
Lemma 2. If $x$ is any point in $C$, then $(c - n) \cdot x \ge 0$.
Here’s what the points $c$ and $n$ and the vector $c - n$ look like in our example:
[Figure: the points $c \in C$ and $n \in N$ and the vector $c - n$.]
Check to see that Lemmas 1 and 2 are true in this example! We’ll prove the lemmas later; right now let’s see how they get the job done.
First, by Lemma 1, the numbers $c_i - n_i$ are nonnegative and at least one is positive. So, we can define a mixed strategy $p$ for player A by defining
$$ p_i \;=\; \frac{c_i - n_i}{\lambda} $$
where $\lambda$ is a number chosen to make sure $\sum_i p_i = 1$. (Remember, the probabilities $p_i$ must be $\ge 0$ and must sum to 1.) In other words,
$$ \lambda \;=\; \sum_i (c_i - n_i) \;>\; 0 $$
Now, for any mixed strategy $q$ for player B, we have $A q \in C$, and thus by Lemma 2
$$ (c - n) \cdot A q \;\ge\; 0 $$
Dividing by $\lambda > 0$, we get
$$ p \cdot A q \;\ge\; 0 $$
for all mixed strategies $q$ for player B. But this is ★★, which is what we wanted to prove! So we are done! █
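For the curious, here is a numerical sketch of the whole construction — again my own illustration, with the same made-up matrix as before: find the point $c \in C$ closest to $N$, project it onto $N$ to get $n$, and normalize $c - n$ to get $p$. It uses the fact that the nearest point of the nonpositive orthant to a point $x$ is obtained by clipping each coordinate at zero. Checking ★★ against pure strategies suffices, since $p \cdot A q$ is an average of the numbers $(p^{\top} A)_j$.

```python
# Numerical sketch of the closest-point construction from the proof.
import numpy as np
from scipy.optimize import minimize

A = np.array([[2.0, -1.0, 0.0],
              [-2.0, 3.0, 1.0]])   # same made-up matrix; its min max is positive
m, n = A.shape

def dist2_to_N(q):
    # squared distance from the point Aq (a point of C) to the orthant N:
    # the nearest point of N to x is min(x, 0), so dist(x, N) = ||max(x, 0)||
    return np.sum(np.maximum(A @ q, 0.0) ** 2)

# find the point c = Aq of C closest to N, searching over B's mixed strategies
res = minimize(dist2_to_N, np.full(n, 1.0 / n),
               bounds=[(0, 1)] * n,
               constraints=[{"type": "eq", "fun": lambda q: q.sum() - 1.0}])
c = A @ res.x
nn = np.minimum(c, 0.0)              # the point of N closest to c
p = (c - nn) / np.sum(c - nn)        # the mixed strategy built in the proof
print("p =", p)                      # roughly (0.6, 0.4) for this matrix
print("min_j (p^T A)_j =", (p @ A).min())   # nonnegative, so condition ★★ holds
```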
I will give the proofs of Lemmas 1 and 2 in the next part.