15295 Spring 2025 #9 Review of the Last Three Themes -- Problem Discussion
March 19, 2025
A. A Constrained Matrix
B. Conway's Coins
This game is actually called Dawson's Kayles, and is analyzed in Winning Ways for your Mathematical Plays, a book by John Conway, Elwyn Berlekamp, and Richard Guy.
It's also analyzed in this lecture I prepared for 15-251 a few years ago. Look at page 5. You have to know a little bit about the beautiful theory of nimbers, which is explained in that lecture. This analysis allows you to compute the nimber of a row of n pins in a Dawson's Kayles position in O(n^2) time. And the nimber is non-zero if and only if the next player to move has a winning strategy. Since n is at most 5000, this is fast enough.
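A minimal sketch of that O(n^2) computation (the function name is mine, not from the lecture). A move knocks down two adjacent pins, splitting a row of n pins into independent rows of sizes k and n-2-k, so the nimber G(n) is the mex of G(k) xor G(n-2-k) over all splits:

```python
def dawsons_kayles_nimbers(n_max):
    """Nimbers G[0..n_max] for rows of pins in Dawson's Kayles."""
    G = [0] * (n_max + 1)
    for n in range(2, n_max + 1):
        # A move removes two adjacent pins, leaving rows of k and n-2-k.
        reachable = {G[k] ^ G[n - 2 - k] for k in range(n - 1)}
        g = 0  # mex: smallest non-negative integer not reachable
        while g in reachable:
            g += 1
        G[n] = g
    return G
```

The next player wins from a row of n pins exactly when G[n] != 0; for instance, a row of 5 pins is a loss for the player to move.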
Actually, if you read Winning Ways, you'll see that the N(n) function (the nimber of a row of n pins in Dawson's Kayles) is eventually periodic. So you could actually solve this problem in O(1) time by storing a pre-computed table of the first 69 values, along with the periodic block of 34 values.
–Danny
C. Simulating Move-to-Front
D. A Simple Game (Version 2)
Alpha-beta pruning is a powerful algorithm for pruning minimax search of a game tree. It is a key ingredient in all strong chess programs. (It seems it is no longer taught in courses at CMU, and it does not seem to be well known in the competitive programming community. Go figure.)
Check out my lecture notes on it here.
Consider a tree of depth D and branching factor B. Brute force will evaluate B^D leaves. Using alpha-beta, and assuming good move ordering, the number of leaf evaluations is roughly B^(D/2). This allows you to solve this problem with about sqrt(10^11), roughly 3 x 10^5, leaf evaluations, which is plenty fast.
See this Wikipedia article for some of the analysis of the algorithm.
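Here is a generic sketch of the algorithm (the `children`/`evaluate` callback interface is my own illustration, not the interface used in the notes):

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    # children(state) lists successor states; evaluate(state) scores a leaf.
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        best = float('-inf')
        for child in moves:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # cutoff: the minimizer will never allow this line
        return best
    best = float('inf')
    for child in moves:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break  # cutoff: the maximizer already has something better
    return best
```

On the toy tree [[3, 5], [2, 9]] the root value is 3, and the pruning never evaluates the leaf 9 at all.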
–Danny
E. Making n Matchings
Just messing around with counting matchings, I tried this kind of
graph, where each node i on the left is connected to nodes i-1, i, and
i+1 on the right. (We ignore out-of-bounds indices, so every node
connects to 3 others except the ones at the top and the bottom.) It's
easy to prove by induction that the number of perfect matchings is a
Fibonacci number.
This leads to the idea of considering what happens if you take a graph
G and add two more nodes to it. Put the two new nodes on top, connect
them to each other, and also to the two top nodes of G. The number of
matchings of the new graph is a + b, where a is the number of matchings
of G, and b is the number of matchings of G with its top two nodes
deleted (or matched to something else).
So let's denote by [a,b] a graph that has a matchings and if you
remove the top two nodes it has b matchings. Define the operator
A to mean adding a new connected pair to the top of the graph.
Then A[a,b] = [a+b,a].
On the other hand, suppose we added a pair of nodes to the top of
[a,b] (connected to the two top nodes of the graph) that were not
connected to each other. Call this operator B. Then B[a,b] = [b,a].
Let's start with a single edge, [1,1]. Let's see what we can get. By
applying A repeatedly we get [2,1], [3,2], [5,3], etc.: the Fibonacci
numbers. But if we throw in some B operators we can, it turns out,
get anything we want.
To get any desired number, we need to reverse this process. Say we
want to get [n,a] for n>a. Well if we could get [a,n-a] then we can
apply A to it A[a,n-a] = [n,a]. How do we get [a,n-a]? Well, if a >
n-a, then we can do A inverse again. So A[n-a,2a-n] = [a,n-a]. If
a<n-a then we can construct [n-a,a], and apply B as in B[n-a,a] =
[a,n-a]. If n and a are relatively prime then this will eventually
result in [1,1]. All we're doing, essentially, is running the
subtractive version of Euclid's GCD algorithm.
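The reduction can be sketched as follows (function names are mine). `build_ops` inverts operators until it reaches [1, 1], recording which one was inverted at each step, and `apply_ops` replays the sequence forward, so applying it to the output of `build_ops` reconstructs [n, a]:

```python
from math import gcd

def build_ops(n, a):
    """Operator sequence taking [1, 1] to [n, a]; needs n > a, gcd(n, a) == 1."""
    assert n > a >= 1 and gcd(n, a) == 1
    ops = []
    x, y = n, a
    while (x, y) != (1, 1):
        if x > y:
            ops.append('A')   # invert A: A[y, x - y] = [x, y]
            x, y = y, x - y
        else:
            ops.append('B')   # invert B: B[y, x] = [x, y]
            x, y = y, x
    ops.reverse()             # now ordered for forward application
    return ops

def apply_ops(ops):
    """Start from a single edge [1, 1] and apply the operators in order."""
    a, b = 1, 1
    for op in ops:
        a, b = (a + b, a) if op == 'A' else (b, a)
    return a, b
```

For example, build_ops(5, 3) is just ['A', 'A', 'A'], matching the Fibonacci chain above.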
Other than being relatively prime to n, how do we pick a? If we pick
a=1 then this process will be too slow (takes n-1 steps). So we want
an a that works fast. Pick a close to n/1.618..., then the first few
steps make huge progress (no swaps are necessary for a while).
So my program just tests a few numbers near n/1.618 and finds the
choice of a that leads to the smallest number of steps to [1,1]. The
number of iterations (nodes on a side) is usually quite close to log n
(base 1.618). The resulting matrix is tridiagonal.
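The selection of a could be sketched like this (the +/-10 search window and the function name are my own choices, not necessarily what the actual program uses):

```python
from math import gcd

def pick_a(n):
    """Pick a coprime to n near n/phi minimizing the reduction length."""
    phi = (1 + 5 ** 0.5) / 2
    best_steps, best_a = None, None
    for a in range(max(1, round(n / phi) - 10), min(n, round(n / phi) + 11)):
        if gcd(n, a) != 1:
            continue
        steps, x, y = 0, n, a
        while (x, y) != (1, 1):
            # run the subtractive-Euclid reduction and count its steps
            x, y = (y, x - y) if x > y else (y, x)
            steps += 1
        if best_steps is None or steps < best_steps:
            best_steps, best_a = steps, a
    return best_a
```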
—Danny
There is a bijection between perfect matchings of a bipartite graph with the same number of vertices in both parts, and permutations (p_1, p_2, ..., p_n) such that M[i][p_i] = 1 for all i, where M is the adjacency matrix. Therefore, the number of perfect matchings is the number of such permutations.
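For intuition, this bijection can be checked by brute force on small graphs; counting the qualifying permutations computes the permanent of the 0/1 adjacency matrix (hopelessly slow for large n, purely illustrative):

```python
from itertools import permutations

def count_perfect_matchings(M):
    """Permanent of a 0/1 adjacency matrix = number of perfect matchings."""
    n = len(M)
    return sum(all(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# The band graph from Danny's construction: left vertex i adjacent to
# right vertices i-1, i, i+1 (out-of-bounds ignored).
M = [[1 if abs(i - j) <= 1 else 0 for j in range(4)] for i in range(4)]
```

For this graph with 4 vertices on a side the count is 5, the next Fibonacci number after 1, 2, 3.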
If we construct the matrix like this: define i_x as the i-th vertex on the left side (row-wise) and j_y as the j-th vertex on the right side (column-wise), both 0-indexed. Then e_i denotes the edge (0_x, {i+1}_y).
Setting e_i to 1 will contribute 2^i to our final answer: if the edge (0_x, {i+1}_y) is in our matching, then vertices 1_x ~ i_x have 2 options each, and vertices {i+1}_x ~ n_x have 1 option each.
For example, consider setting e_2 = 1 and let (0_x, 3_y) be an edge in our perfect matching. We have the following options:
1_x -> 0_y, 1_y
2_x -> 0_y, 1_y, 2_y
3_x -> 0_y, 1_y, 2_y
Therefore, after we choose (1_x, ??), we have 2 options left for 2_x; and then after we fix (2_x, ???), only 1 option remains for 3_x. That gives 2 × 2 = 2^2 matchings, which is exactly the contribution of e_2. Vertices 4_x ~ n_x each have exactly 1 option.
–Ivan
F. Backward Analysis?
Notice that the order of elements doesn't matter. Therefore, for each player we only care about the number of cards in each residue class 0-4 modulo 5. This is equivalent to putting 8 balls into 5 buckets, which gives {12 choose 4} = 495 states per player. So the whole game has at most 495^2 states, and some are unreachable.
View the whole game as a graph where each state is a vertex. Every vertex has <50 outgoing edges. Then we can apply the minimax idea on this graph.
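A quick sanity check of the per-player state count (brute-force enumeration, illustrative only):

```python
from itertools import product
from math import comb

# One player's state: how many of their 8 cards lie in each residue
# class mod 5, i.e., 8 indistinguishable balls into 5 buckets.
states = [c for c in product(range(9), repeat=5) if sum(c) == 8]
assert len(states) == comb(12, 4) == 495
```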