The way in which artificial intelligence systems make decisions is often drastically different from the approach a human would take, so the operation of an AI system can be difficult to interpret and to learn from. The search algorithms in traditional chess engines, for example, are easy for a human to understand (algorithmic transparency), but they lack simulatability: a human cannot reproduce the calculations that lead to the result, so understanding the algorithm does not translate into understanding why a particular move (or tactical line) is good.
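To make the contrast between transparency and simulatability concrete, the sketch below shows a minimal alpha-beta search (assuming the python-chess library and a crude material-only evaluation, neither of which is prescribed by the project). The algorithm fits in a few dozen lines and is easy to follow, yet even a shallow search from the initial position visits far more positions than a human could re-trace by hand.

```python
# Minimal sketch, assuming the python-chess library and a material-only
# evaluation. Illustrative only: the algorithm is transparent, but the
# volume of calculation it performs is not humanly simulatable.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

nodes = 0  # count of positions examined

def alphabeta(board: chess.Board, depth: int, alpha: float, beta: float) -> float:
    global nodes
    nodes += 1
    if depth == 0 or board.is_game_over():
        return material(board)
    if board.turn == chess.WHITE:          # maximising side
        best = -float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = max(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:              # beta cutoff
                break
        return best
    else:                                  # minimising side
        best = float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = min(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            beta = min(beta, best)
            if alpha >= beta:              # alpha cutoff
                break
        return best

board = chess.Board()
alphabeta(board, 4, -float("inf"), float("inf"))
# Already far more positions than a human would recalculate by hand,
# and the count grows exponentially with depth.
print(f"Positions examined at depth 4: {nodes}")
```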
This project will combine modern deep-learning techniques with traditional tools for chess analysis to build an inherently interpretable chess AI. By construction, the system will operate at several levels of abstraction -- mimicking the way humans conceptualise chess -- and will therefore be able to "explain" its decisions in terms a human can follow.
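As a purely hypothetical illustration of what "operating at several levels of abstraction" could look like (this is not the project's prescribed architecture, and the concept set is an assumption), the PyTorch sketch below routes a board encoding through a small number of named, human-level concepts before producing an evaluation, so every score comes with a concept-level breakdown.

```python
# Hypothetical concept-bottleneck sketch in PyTorch; architecture, concept
# names and board encoding are illustrative assumptions, not the project's
# actual design.
import torch
import torch.nn as nn

CONCEPTS = ["material", "king_safety", "mobility", "pawn_structure"]

class ConceptBottleneckEvaluator(nn.Module):
    def __init__(self, board_planes: int = 12 * 64):
        super().__init__()
        # Low-level perception: raw board encoding -> named concept scores.
        self.encoder = nn.Sequential(
            nn.Linear(board_planes, 256), nn.ReLU(),
            nn.Linear(256, len(CONCEPTS)),
        )
        # High-level decision: a linear combination of the named concepts,
        # so the head's weights are themselves human-readable.
        self.head = nn.Linear(len(CONCEPTS), 1)

    def forward(self, board_encoding: torch.Tensor):
        concepts = self.encoder(board_encoding)
        evaluation = self.head(concepts)
        return evaluation, concepts

model = ConceptBottleneckEvaluator()
x = torch.randn(1, 12 * 64)                # placeholder board encoding
score, concepts = model(x)
explanation = dict(zip(CONCEPTS, concepts.squeeze(0).tolist()))
print(f"evaluation = {score.item():+.2f}, concept scores = {explanation}")
```

The design point is that the explanation is not produced after the fact: the evaluation is forced, by construction, to pass through quantities a human already thinks in terms of.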
This is a family of projects, each of which addresses one of the many sub-challenges on the way to this overall goal. Please contact me to discuss the details of a particular sub-project.
Prerequisites: interest in machine learning and chess, good programming skills, and a degree of ingenuity.