A little while ago I wrote about the Reconnaissance Blind Chess tournament and how my bot worked.
What you might notice is that for this game of RBC I used a chess engine, and had to build code around it to make up for the fact that RBC isn't really straight chess. Because of that, the approach has some notable flaws.
If you were to try to write a tree search that took the sets of potential boards into account, you would run into another problem: the imperfect information version of chess has an exponentially larger game tree than regular chess, which is already a massive game.
In order to even approach searching that tree you would need some way of accelerating your tree search.
To do this I developed a small algorithm which, in my very preliminary testing, gives around a 3x speedup, with logarithmic time and poly-logarithmic space complexity.
The full writeup can be found here, but I will give a quick overview.
The method came out of observing that you can lift state valuations from the perfect information variant of a game, and that the valuation of a set of states should be the minimum valuation over the states in it. All this means is that if the opponent plays optimally, you will end up in your worst case scenario.
This has a nice property if the sets of states are finite. If we have two sets of states X and Y, and X is contained in Y, then the value of X is greater than or equal to the value of Y. In math:
$$ X \subseteq Y \implies val(X) \ge val(Y)$$
$$ X \subseteq Y \subseteq Z \implies val(X) \ge val(Y) \ge val(Z)$$
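As a toy sketch of this lifting (the evaluation function and numbers here are hypothetical, just to make the inequality concrete):

```python
def set_value(info_set, evaluate):
    """Lift a perfect-information evaluation to a set of states:
    assuming optimal opponent play, the value of the set is the
    minimum value among its members (the worst case)."""
    return min(evaluate(state) for state in info_set)

# Toy example: state values from a made-up lookup table.
evals = {"a": 5, "b": 3, "c": -1}
X = {"a", "b"}         # X is a subset of Y ...
Y = {"a", "b", "c"}
# ... so set_value(X) >= set_value(Y), as the inequality says.
```

Adding states to a set can only pull the minimum down, which is exactly why a superset's value lower-bounds a subset's.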
This is really useful, because actually computing val(X) means traversing the entire game tree below X, which is an extremely expensive operation. Many tree searches can shortcut a path if a better one is already known to be available, so if we can find tight enough bounds on the value of X, we might not need to search past that point in the game tree at all.
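As a sketch of how such bounds would be derived (names and structure here are mine, not from the writeup): every previously valued superset gives a lower bound, and every previously valued subset gives an upper bound.

```python
def bounds_for(candidate, known_values, is_subset):
    """Bound val(candidate) using previously valued information sets:
    a superset gives a lower bound, a subset gives an upper bound."""
    lower, upper = float("-inf"), float("inf")
    for other, value in known_values.items():
        if is_subset(candidate, other):   # candidate is a subset of other
            lower = max(lower, value)     # val(candidate) >= val(other)
        if is_subset(other, candidate):   # other is a subset of candidate
            upper = min(upper, value)     # val(candidate) <= val(other)
    return lower, upper
```

If, say, `lower` already beats the best alternative line available at that node, the search below it can be skipped entirely.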
There are two major issues here though.
First, even figuring out whether X is a subset of Y can be expensive. Computing that takes time linear in the size of the sets involved, and in RBC those sets can contain millions or tens of millions of states.
Secondly, we encounter and value millions upon millions of different information sets; storing all of their values and iterating across them all to find the smallest superset and largest subset would be incredibly expensive.
For those two issues there are two approximate solutions:
1) Computing subsets and supersets
To fix this first problem we will weaken our requirements slightly: we don't care about finding an exact subset or exact superset, and in fact we don't fully need one.
All we need for the bound to hold is that the minimally valued state lies in the intersection of the two sets. We can estimate the likelihood of this happening if we know the size of the intersection and the sizes of our sets. Storing a set's size costs almost nothing, as it is a single integer, so all we need is a way to estimate the size of the intersection of two sets.
That is where Jaccard Similarity and MinHash come in.
Jaccard Similarity is the ratio of the size of the intersection of two sets to the size of their union. MinHash is a clever algorithm that can quickly approximate the Jaccard Similarity.
MinHash works by hashing every element of a set with a single hash function and keeping the k smallest hash values as a sketch. To estimate the Jaccard Similarity of two sets, take the k smallest values across both sketches and check what fraction of them appear in both sketches; that fraction approximates the Jaccard Similarity.
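A minimal bottom-k MinHash sketch in Python (the hash function and k are arbitrary choices here, not the writeup's):

```python
import hashlib

def sketch(items, k=128):
    """Bottom-k MinHash sketch: the k smallest hash values of the set."""
    hs = sorted(
        int.from_bytes(hashlib.blake2b(repr(x).encode(), digest_size=8).digest(), "big")
        for x in set(items)
    )
    return set(hs[:k])

def jaccard_estimate(sk_a, sk_b, k=128):
    """Estimate Jaccard Similarity from two bottom-k sketches: take the
    k smallest values across both sketches and count what fraction of
    them appear in both."""
    union_bottom_k = sorted(sk_a | sk_b)[:k]
    both = sk_a & sk_b
    return sum(1 for h in union_bottom_k if h in both) / k
```

The standard error of this estimator shrinks like 1/sqrt(k), so a sketch of a few hundred integers stands in for a set of millions of boards.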
Once we have the Jaccard Similarity and the sizes of our sets, a tiny bit of algebraic manipulation gives an estimate of the size of their intersection, and from that we can find the probability that a given set's value is an upper or lower bound on another set's value.
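The algebra is short. Writing $J$ for the Jaccard Similarity of sets $X$ and $Y$:

$$ J = \frac{|X \cap Y|}{|X \cup Y|} = \frac{|X \cap Y|}{|X| + |Y| - |X \cap Y|} $$

and solving for the intersection gives

$$ |X \cap Y| = \frac{J \, (|X| + |Y|)}{1 + J} $$

If we then make the (admittedly strong) modeling assumption that the minimally valued state is equally likely to be any member of a set, the chance it lies in the intersection is just the intersection's estimated size divided by that set's size.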
2) Finding which information sets to keep
Just as above, once we break the problem into smaller pieces, we find that they have already been tackled very adeptly.
All we need here is to find the sets that most often appear in our tree, or most often serve as an upper or lower bound. That is exactly the APPROX-TOP problem: finding the approximate most frequent elements in a stream of data.
A very clever and cool solution to that problem is called Count Sketch. A Count Sketch keeps a matrix of counts; each row has two hash functions, one to assign elements to a column in that row and one to give them either a positive or negative sign.
When we encounter an element, we go through each row and find the corresponding column given by the hash function, and we increment or decrement the count there depending on the sign given by the row's second hash function.
To estimate the count of an element, we go to each row, find the corresponding column, multiply the count there by the sign given by that row's second hash function, and take the median of these signed counts across all rows.
Each row's signed count has expected value equal to the element's true frequency, and taking the median keeps the estimate robust to the rows where heavy elements happen to collide.
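A minimal Count Sketch in Python, using seeded hashing as a stand-in for a proper pairwise-independent hash family (row and column counts are illustrative, not tuned):

```python
import hashlib

class CountSketch:
    def __init__(self, rows=5, cols=1024):
        self.rows, self.cols = rows, cols
        self.counts = [[0] * cols for _ in range(rows)]

    def _hash(self, row, item, salt):
        h = hashlib.blake2b(f"{salt}:{row}:{item}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big")

    def _col(self, row, item):
        return self._hash(row, item, "col") % self.cols

    def _sign(self, row, item):
        return 1 if self._hash(row, item, "sign") % 2 == 0 else -1

    def add(self, item, count=1):
        # Each row adds the signed count into the item's column.
        for r in range(self.rows):
            self.counts[r][self._col(r, item)] += self._sign(r, item) * count

    def estimate(self, item):
        # Undo the sign per row, then take the median across rows.
        signed = sorted(self.counts[r][self._col(r, item)] * self._sign(r, item)
                        for r in range(self.rows))
        return signed[len(signed) // 2]
```

Feeding each information set (via some stable key for it) into a sketch like this keeps memory fixed at rows x cols counters, no matter how many distinct sets stream past.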
Using this we can quickly identify the sets we encounter most frequently, and keep a list of only those elements most likely to serve as upper or lower bounds.
Using these two solutions in conjunction, I obtained a roughly 3x speedup when searching the RBC game tree.
While the algorithm as developed still has quite a few restrictions and is tied to certain classes of games and tree searches, I believe it can be extended to a wider class, and that the speedup seen in these preliminary trials can be improved further.