March 24th, 2014

I am preparing to release a new major version of Arasan, version 17.0. It is not a great deal stronger than the last release, but it includes quite a bit of code cleanup and bug fixing. Some significant search changes have also been made, so overall I expect this version to play quite differently from the previous release, 16.3.

I have recently been investigating some new approaches to automatic optimization of the program's algorithms. There was a discussion of this topic on talkchess.com not long ago. Like many others, I have been using CLOP by Rémi Coulom for some time to tune evaluation function parameters, but it is a local optimizer that cannot handle more than a small number of variables, and even then, my experience is that it often does not produce a tuned parameter set that passes verification tests.

Still, CLOP has basically the right idea, and I think there is a lot of potential merit in automatic tuning. Unfortunately, tuning chess evaluation parameters is a hard problem. In technical terms, it is a non-linear, possibly non-convex, high-dimensional, constrained, mixed-integer, noisy black-box optimization problem. Many optimization algorithms do not work well for this class of problem. However, there is active research in this area, partly because there are quite a few real-world applications in areas like biotech, finance, and industrial design. There is a recent review article I found (see also Belotti et al.). Some of the algorithms I have looked at include StoSOO, MADS, Particle Swarm Optimization, Ant Colony Optimization, and various surrogate-based methods (see, for example, Thomas Hemker's research). These are global optimization algorithms and can potentially handle simultaneous tuning of many more parameters than CLOP can. There are quite a few software implementations of these and related algorithms, some of them expensive commercial systems like OptQuest. Some that are free and/or open source include NOMAD, Coin-OR, SCIP, OpenOpt (Python), and CONDOR, among others.
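
To make the problem concrete, here is a toy Python sketch (not Arasan code, and not any of the packages above) of the kind of black-box objective such solvers are given: a candidate parameter vector goes in, a noisy match score comes out. The play_match function is a hypothetical placeholder; it simulates the match result from a made-up fitness landscape plus sampling noise, just to show why the objective is noisy and expensive to evaluate.

    import random

    def play_match(params, num_games=200):
        # Hypothetical: pretend to run num_games between an engine built
        # with `params` and a fixed baseline. The result is simulated from
        # a made-up fitness landscape plus sampling noise.
        strength = -sum((p - 50) ** 2 for p in params) / 10000.0
        win_prob = min(max(0.5 + strength, 0.0), 1.0)
        wins = sum(1 for _ in range(num_games) if random.random() < win_prob)
        return wins / num_games

    def objective(params):
        # Most solvers minimize, so return the negated match score.
        return -play_match(params)

    if __name__ == "__main__":
        # Two evaluations at the same point give different values: the
        # objective is noisy, and each evaluation is costly in practice.
        print(objective([48, 52, 50]), objective([48, 52, 50]))

A global optimizer would be handed this objective plus bounds and integrality constraints on each parameter, and would have to cope with the noise by re-sampling or by modeling it.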

I am planning to do some experimentation in this area. One interesting point from talkchess is that, while the best objective function to maximize is the win percentage from actual games, that is very time-consuming, since a large number of games is needed to get results with reasonably small error bars. Some programmers have instead been using either the evaluation function itself or a short-depth search, and minimizing the least-squares error between that output and the known results of the games. That is much faster, although the computation still takes non-negligible time.
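
As a sketch of that second approach: one common way to compare an evaluation with a game result is to map the centipawn score through a logistic curve to an expected score and take the squared difference. The `evaluate` callable, the position list, and the scaling constant k below are all assumptions for illustration, not a description of what any particular program does.

    def predicted_score(eval_cp, k=1.13):
        # Map a centipawn score to an expected game result in [0, 1] via a
        # logistic curve; k is a scaling constant that would normally be
        # fit to the data set (the value here is just a placeholder).
        return 1.0 / (1.0 + 10.0 ** (-k * eval_cp / 400.0))

    def tuning_error(positions, evaluate):
        # Mean squared error between predicted and actual results.
        # `positions` is a list of (position, result) pairs, result being
        # 1.0 / 0.5 / 0.0 from White's point of view; `evaluate` is the
        # engine's static eval (or a short-depth search) in centipawns.
        total = 0.0
        for pos, result in positions:
            total += (result - predicted_score(evaluate(pos))) ** 2
        return total / len(positions)

An optimizer then adjusts the evaluation parameters to minimize tuning_error over a large set of positions drawn from real games, which is much cheaper per evaluation than playing whole matches.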

Yet another consideration is that these techniques only optimize existing parameters. If you know that several factors influence the result, but some may be redundant, and you don't know exactly how to combine them (linearly or non-linearly), then you need some other kind of analysis, such as nonlinear regression or perhaps a neural network, to find the significant relationships. I think this approach might be helpful for something like king safety, where you know intuitively that damaging the pawn cover around the king is bad, but you don't know exactly how much, or how this interacts with the pieces attacking the king, the escape squares it has, etc. Something to think about.
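
As a rough illustration of that last idea, here is a small logistic-regression sketch that learns weights for some hypothetical king-safety features (pawn shield holes, attacker count, open files, escape squares) from labeled positions. The feature names, the toy data, and the fitting code are all assumptions for illustration; none of this is Arasan's actual king safety evaluation.

    import math, random

    FEATURES = ["shield_holes", "attackers", "open_files", "escape_squares"]

    def predict(weights, bias, feats):
        # Logistic model: estimated probability of a bad outcome for the
        # side whose king is being evaluated.
        z = bias + sum(w * f for w, f in zip(weights, feats))
        return 1.0 / (1.0 + math.exp(-z))

    def fit(samples, epochs=200, lr=0.05):
        # Plain stochastic gradient descent on log-loss; `samples` is a
        # list of (feature_vector, outcome) pairs with outcome in {0, 1}.
        weights = [0.0] * len(FEATURES)
        bias = 0.0
        for _ in range(epochs):
            for feats, y in samples:
                err = predict(weights, bias, feats) - y
                bias -= lr * err
                weights = [w - lr * err * f for w, f in zip(weights, feats)]
        return weights, bias

    if __name__ == "__main__":
        # Toy data: label positions with many shield holes and attackers
        # as losses (1), everything else as non-losses (0).
        random.seed(1)
        samples = []
        for _ in range(500):
            f = [random.randint(0, 3), random.randint(0, 5),
                 random.randint(0, 2), random.randint(0, 4)]
            samples.append((f, 1 if f[0] + f[1] > 4 else 0))
        w, b = fit(samples)
        print("weights:", [round(x, 2) for x in w], "bias:", round(b, 2))

The learned weights would then suggest how much each factor ought to count, and whether some of them carry little independent information.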

March 25th, 2014

Other interesting free optimization software packages are HOPSPACK from Sandia National Laboratories, based on some recent research by Griffin et al. (Hemker is one of the co-authors), and Minotaur, developed at Argonne National Laboratory and the University of Wisconsin-Madison. According to some benchmarks, Minotaur is quite competitive with commercial solvers.

See also SMAC (Sequential Model-based Algorithm Configuration), which is designed specifically for parameter tuning; the software is written in Java.