Last time we tackled von Neumann’s minimax theorem:
Theorem. For every zero-sum 2-player normal form game,

$$ \max_p \min_q \; p \cdot A q \; = \; \min_q \max_p \; p \cdot A q $$

where $p$ ranges over player A's mixed strategies and $q$ ranges over player B's mixed strategies.
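Before proving it, it's fun to see the theorem in action numerically. Here's a minimal sketch (the $2 \times 2$ matrix is my own arbitrary example, not one from the course): it scans a grid of mixed strategies and evaluates both sides of the equation, using the fact that for a fixed $p$ the inner minimum over $q$ is attained at one of B's pure strategies, since the payoff is linear in $q$.

```python
import numpy as np

# An arbitrary 2x2 payoff matrix for player A (my example, not from the course).
A = np.array([[ 2.0, -1.0],
              [-2.0,  3.0]])

ts = np.linspace(0.0, 1.0, 10001)   # mixture weights for a 2-choice player

# For fixed p, the min over mixed q of p.(Aq) is attained at a pure strategy
# (a column of A), since the payoff is linear in q; likewise for max over p.
maximin = max(min(np.array([t, 1 - t]) @ A) for t in ts)
minimax = min(max(A @ np.array([t, 1 - t])) for t in ts)

print(maximin, minimax)   # both come out near 0.5, the value of this game
```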
We reduced the proof to two geometrical lemmas. Now let’s prove those… and finish up the course!
But first, let me chat a bit about this theorem. Von Neumann first proved it in 1928. He later wrote:
As far as I can see, there could be no theory of games … without that theorem … I thought there was nothing worth publishing until the Minimax Theorem was proved.
Von Neumann gave several proofs of this result:
• Tinne Hoff Kjeldsen, John von Neumann's conception of the minimax theorem: a journey through different mathematical contexts, Arch. Hist. Exact Sci. 56 (2001), 39–68.
In 1937 he gave a proof which became quite famous, based on an important result in topology: Brouwer's fixed point theorem. This says that if you have a ball

$$B = \{ x \in \mathbb{R}^n : \; \|x\| \le 1 \}$$

and a continuous function

$$f : B \to B$$

then this function has a fixed point, meaning a point $x \in B$ with

$$f(x) = x$$
You'll often see Brouwer's fixed point theorem in a first course on algebraic topology, though John Milnor came up with a proof using just multivariable calculus and a bit more.
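Brouwer's theorem is nonconstructive: it asserts that a fixed point exists without telling you how to find one. But for maps that also happen to shrink distances, simple iteration finds it. Here's a small Python sketch along those lines; the particular disk-to-disk map is my own arbitrary choice, and the convergence comes from it being a contraction, not from Brouwer's theorem itself.

```python
import numpy as np

def f(x):
    """A continuous map from the closed unit disk to itself: rotate by 0.5
    radians, shrink by half, then shift toward (0.3, 0).  Since
    ||f(x)|| <= 0.5*||x|| + 0.3 < 1, the disk maps into itself."""
    c, s = np.cos(0.5), np.sin(0.5)
    y = np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])
    return 0.5 * y + np.array([0.3, 0.0])

x = np.array([0.9, 0.0])      # start anywhere in the disk
for _ in range(100):          # this particular map shrinks distances,
    x = f(x)                  # so iteration homes in on the fixed point

print(x, np.linalg.norm(f(x) - x))   # the fixed point, with residual ~ 0
```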
After von Neumann proved his minimax theorem using Brouwer's fixed point theorem, the mathematician Shizuo Kakutani proved another fixed-point theorem in 1941, which let him get the minimax theorem in a different way. This is now called the Kakutani fixed-point theorem.
In 1949, John Nash generalized von Neumann’s result to nonzero-sum games with any number of players: they all have Nash equilibria if we let ourselves use mixed strategies! His proof is just one page long, and it won him the Nobel prize!
Nash’s proof used the Kakutani fixed-point theorem. There is also a proof of Nash’s theorem using Brouwer’s fixed-point theorem; see here for the 2-player case and here for the n-player case.
Apparently when Nash explained his result to von Neumann, the latter said:
That’s trivial, you know. That’s just a fixed point theorem.
Maybe von Neumann was a bit jealous?
I don’t know a proof of Nash’s theorem that doesn’t use a fixed-point theorem. But von Neumann’s original minimax theorem seems to be easier. The proof I showed you last time comes from Andrew Colman’s book Game Theory and its Applications in the Social and Biological Sciences. In it, he writes:
In common with many people, I first encountered game theory in non-mathematical books, and I soon became intrigued by the minimax theorem but frustrated by the way the books tiptoed around it without proving it. It seems reasonable to suppose that I am not the only person who has encountered this problem, but I have not found any source to which mathematically unsophisticated readers can turn for a proper understanding of the theorem, so I have attempted in the pages that follow to provide a simple, self-contained proof with each step spelt out as clearly as possible both in symbols and words.
There are other proofs that avoid fixed-point theorems: for example, there’s one in Ken Binmore’s book Playing for Real. But this one uses transfinite induction, which seems a bit scary and distracting! So far, Colman’s proof seems simplest, but I’ll keep trying to do better.
The lemmas
Now let’s prove the two lemmas from last time. A lemma is an unglamorous result which we use to prove a theorem we’re interested in. The mathematician Paul Taylor has written:
Lemmas do the work in mathematics: theorems, like management, just take the credit.
Let's remember what we were doing. We had a zero-sum 2-player normal-form game with an $m \times n$ payoff matrix $A$. The entry $A_{ij}$ of this matrix says A's payoff when player A makes choice $i$ and player B makes choice $j$. We defined this set:

$$C = \{ A q : \; q \textrm{ is a mixed strategy for player B} \} \subseteq \mathbb{R}^m$$
For example, if $A$ is a $2 \times 2$ matrix, then $C$ is a line segment in the plane: the set of all convex combinations of the two columns of $A$. Here's what $C$ looks like in our example:

[Figure: the set $C$ for the example payoff matrix, a line segment in $\mathbb{R}^2$.]
We assumed that

$$\min_q \max_p \; p \cdot A q > 0$$

This means that for each mixed strategy $q$ of player B, there exists a mixed strategy $p$ of player A with

$$p \cdot A q > 0$$

and this implies that at least one of the numbers $(A q)_i$ must be positive. So, if we define a set $N$ by

$$N = \{ x \in \mathbb{R}^m : \; x_i \le 0 \textrm{ for all } i \}$$

then $A q$ can't be in this set:

$$A q \notin N$$

In other words, the set $C \cap N$ is empty.
Here's what $C$ and $N$ look like in our example:

[Figure: the segment $C$ and the quadrant $N$ of points with all coordinates $\le 0$; they don't intersect.]
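Here's a quick numerical sketch of this disjointness, using an assumed example payoff matrix of my own (chosen so that $\min_q \max_p \; p \cdot A q > 0$): we sample lots of mixed strategies $q$, form the points $A q \in C$, and check that none of them lies in $N$.

```python
import numpy as np

# An assumed example payoff matrix whose min-max value is positive.
A = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])

ts = np.linspace(0.0, 1.0, 1001)   # mixed strategies q = (t, 1 - t)
points = np.array([A @ np.array([t, 1 - t]) for t in ts])   # the segment C

# A point lies in N exactly when all its coordinates are <= 0.
in_N = np.all(points <= 0, axis=1)
print(in_N.any())   # False: no sampled point of C lands in N
```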
Next, we choose a point in $C$ and a point in $N$:
• let $c$ be a point in $C$ that's as close as possible to $N$, and
• let $n$ be a point in $N$ that's as close as possible to $c$.
Such points exist because $C$ is compact and $N$ is closed.
These points $c$ and $n$ need to be different, since $C \cap N$ is empty. Here's what these points and the vector $c - n$ look like in our example:

[Figure: the points $c \in C$ and $n \in N$, and the vector $c - n$ between them.]
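Finding these closest points is a small optimization problem we can do numerically. In the sketch below (same assumed matrix as before), we use the fact that projecting a point onto $N$ just clamps each coordinate at $0$, so the distance from $x$ to $N$ is the norm of the componentwise positive part of $x$; then we scan over B's mixed strategies to find $c$.

```python
import numpy as np

A = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])   # the assumed example matrix from above

def dist_to_N(x):
    """Distance from x to the nonpositive quadrant N: projecting onto N
    clamps each coordinate at 0, leaving the positive part of x."""
    return np.linalg.norm(np.maximum(x, 0.0))

ts = np.linspace(0.0, 1.0, 10001)
best_t = min(ts, key=lambda t: dist_to_N(A @ np.array([t, 1 - t])))

c = A @ np.array([best_t, 1 - best_t])   # closest point of C to N
n = np.minimum(c, 0.0)                   # closest point of N to c
print(c, n, c - n)                       # the vector p = c - n
```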
To finish the job, we need to prove two lemmas:
Lemma 1. Let $p = c - n$. Then $p_i \ge 0$ and $p_i n_i = 0$ for all $i$, and $p_i > 0$ for at least one $i$.
Proof. Suppose $x$ is any point in $N$ whose coordinates are all the same as those of $n$, except perhaps one, namely the $i$th coordinate, for one particular choice of $i$. By the way we've defined $c$ and $n$, this point $x$ can't be closer to $c$ than $n$ is:

$$\|x - c\| \ge \|n - c\|$$

This means that

$$\sum_j (x_j - c_j)^2 \ge \sum_j (n_j - c_j)^2$$

But since $x_j = n_j$ except when $j = i$, this implies

$$(x_i - c_i)^2 \ge (n_i - c_i)^2$$

Now, if $c_i \le 0$, we can take $x_i = c_i$. In this case we get

$$0 \ge (n_i - c_i)^2$$

so $n_i = c_i$. On the other hand, if $c_i > 0$, we can take $x_i = 0$ and get

$$c_i^2 \ge (n_i - c_i)^2$$

which simplifies to

$$0 \ge n_i (n_i - 2 c_i)$$

But $n_i \le 0$ and $n_i - 2 c_i < 0$, so this can only be true if $n_i = 0$.

In short, we know that either

• $c_i \le 0$ and $n_i = c_i$, so that $p_i = c_i - n_i = 0$,

or

• $c_i > 0$ and $n_i = 0$, so that $p_i = c_i - n_i = c_i > 0$.

So, either way we get

$$p_i \ge 0$$

Since $i$ was arbitrary, this implies $p_i \ge 0$ for all $i$, which is the first thing we wanted to show. Also, either way we get

$$p_i n_i = 0$$

which is the second thing we wanted. Finally, $c \ne n$, so $p = c - n \ne 0$; but we know $p_i \ge 0$ for all $i$, so

$$p_i > 0$$

for at least one choice of $i$. And this is the third thing we wanted! █
Lemma 2. If $x$ is any point in $C$, then

$$p \cdot x > 0$$

Proof. Let's write $p = c - n$ for short. For any number $t$ between $0$ and $1$, the point

$$(1 - t) c + t x$$

is on the line segment connecting the points $c$ and $x$. Since both these points are in $C$, so is this point, because the set $C$ is convex. So, by the way we've defined $c$ and $n$, this point can't be closer to $n$ than $c$ is:

$$\|(1 - t) c + t x - n\| \ge \|c - n\|$$

This means that

$$\|p + t (x - c)\|^2 \ge \|p\|^2$$

With some algebra, this gives

$$2 t \, p \cdot (x - c) + t^2 \|x - c\|^2 \ge 0$$

Since we can make $t > 0$ as small as we want, this implies that

$$p \cdot (x - c) \ge 0$$

or

$$p \cdot x \ge p \cdot c$$

or, since $c = n + p$,

$$p \cdot x \ge p \cdot n + p \cdot p$$

By Lemma 1 we have $p \cdot n = 0$ and $p_i > 0$ for at least one $i$, and the dot product of a nonzero vector with itself is positive, so it follows that

$$p \cdot x \ge p \cdot p > 0$$

And this is what we wanted to show! █
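Before moving on, here's a numerical check of both lemmas in the running example (again, an assumed matrix of mine, not one from the original course): we recompute $c$, $n$, and $p = c - n$, then verify the conclusions $p_i \ge 0$, $p_i n_i = 0$, and $p \cdot x > 0$ for sampled points $x \in C$.

```python
import numpy as np

A = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])   # the assumed example matrix again

# Recompute the closest points c in C and n in N (see the earlier sketch).
ts = np.linspace(0.0, 1.0, 10001)
best_t = min(ts, key=lambda t: np.linalg.norm(np.maximum(A @ np.array([t, 1 - t]), 0)))
c = A @ np.array([best_t, 1 - best_t])
n = np.minimum(c, 0.0)
p = c - n

# Lemma 1: p_i >= 0 for all i, p_i n_i = 0 for all i, and some p_i > 0.
print(np.all(p >= 0), np.allclose(p * n, 0), np.any(p > 0))   # True True True

# Lemma 2: p.x > 0 for every sampled point x = Aq of C.
xs = np.array([[t, 1 - t] for t in ts]) @ A.T   # rows are points Aq
print(np.all(xs @ p > 0))                       # True
```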
Conclusion
Proving lemmas is hard work, and unglamorous. But if you remember the big picture, you’ll see how great this stuff is.
We started with a very general concept of two-person game. Then we introduced probability theory and the concept of ‘mixed strategy’. Then we realized that the expected payoff of each player could be computed using a dot product! This brings geometry into the subject. Using geometry, we’ve seen that every zero-sum game has at least one ‘Nash equilibrium’, where neither player is motivated to change what they do—at least if they’re rational agents.
And this is how math works: by taking a simple concept and thinking about it very hard, over a long time, we can figure out things that are not at all obvious.
For game theory, the story goes much further than we went in this course. For starters, we should look at nonzero-sum games, and games with more than two players. John Nash showed these more general games still have Nash equilibria!
Then we should think about how to actually find these equilibria. Merely knowing that they exist is not good enough! For zero-sum games, finding the equilibria uses a subject called linear programming. This is a way to maximize a linear function given a bunch of linear constraints. It’s used all over the place—in planning, routing, scheduling, and so on.
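To make that concrete, here's a sketch of finding player A's optimal mixed strategy with `scipy.optimize.linprog` (the payoff matrix is still my assumed example). The standard reformulation: maximize $v$ subject to $(p^{\mathsf{T}} A)_j \ge v$ for each column $j$, with $p$ a probability vector.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 3.0, -1.0],
              [-1.0,  2.0]])   # assumed example payoff matrix for player A
m, n = A.shape

# Variables: (p_1, ..., p_m, v).  Maximize v, i.e. minimize -v.
c = np.concatenate([np.zeros(m), [-1.0]])

# Constraints (p^T A)_j >= v become -(A^T p) + v <= 0 for each column j.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# p must be a probability distribution: sum p_i = 1, p_i >= 0; v is free.
A_eq = np.array([np.concatenate([np.ones(m), [0.0]])])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[m]
print(p, v)   # A's optimal mixed strategy and the value of the game
```

The dual of this linear program finds player B's optimal strategy, which is one way to see why the two sides of the minimax theorem agree.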
Game theory is used a lot by economists, for example in studying competition between firms, and in setting up antitrust regulations. For that, try this book:
• Lynne Pepall, Dan Richards and George Norman, Industrial Organization: Contemporary Theory and Empirical Applications, Blackwell, 2008.
For these applications, we need to think about how people actually play games and make economic decisions. We aren’t always rational agents! So, psychologists, sociologists and economists do experiments to study what people actually do. The book above has a lot of case studies, and you can learn more here:
• Andrew Colman, Game Theory and its Applications in the Social and Biological Sciences, Routledge, London, 1982.
As this book title hints, we should also think about how game theory enters into biology. Evolution can be seen as a game where the winning genes reproduce and the losers don’t. But it’s not all about competition: there’s a lot of cooperation involved. Life is not a zero-sum game! Here’s a good introduction to some of the math:
• William H. Sandholm, Evolutionary game theory, 12 November 2007.
For more on the biology, get ahold of this classic text:
• John Maynard Smith, Evolution and the Theory of Games, Cambridge University Press, 1982.
And so on. We’ve just scratched the surface!