Maia doesn’t try to make the perfect moves
AI has been kicking our collective butt in just about every classic board game imaginable for years now. That's no surprise, though: when you tell an AI to learn from the best with no checks or balances, that's precisely what it will do. Now, researchers are looking for a way to handicap chess-playing AI and teach a new model to make more human-like decisions.
It's a novel concept: most chess- and board-game-playing AIs seek to beat the best of the best. Indeed, in some cases, AI players have been so dominant that they've driven some pros out of competitive play entirely.
Maia, on the other hand, is a new chess engine that seeks to emulate, not surpass, human-level chess performance. As researchers point out, this could lead to a more “enjoyable chess-playing experience” for any humans an AI is matched up against, while also allowing those players to learn and improve their skills.
“Current chess AIs don’t have any conception of what mistakes people typically make at a particular ability level,” University of Toronto researcher Ashton Anderson explains. “They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can’t separate out what you should work on.”
If you're a novice or mid-tier player, it can be difficult to identify your weak points when you're getting crushed by your opponent. When the challenge is fair and the playing field is level, though, it's easier to spot those small moments where you could have done better.
“Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn’t, because they are still too difficult,” Anderson continues.
So far, Maia has been able to match human moves more than 50 percent of the time. That’s not a great number yet, but it’s a start.
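That "match rate" metric is simple to state: it's the fraction of positions in which the engine's predicted move is exactly the move the human actually played. A minimal sketch, with a hypothetical helper and made-up UCI move strings (none of this is Maia's actual code):

```python
# Hypothetical move-matching accuracy: the share of positions where the
# engine's predicted move equals the move a human actually played there.
def move_match_accuracy(predicted_moves, human_moves):
    matches = sum(p == h for p, h in zip(predicted_moves, human_moves))
    return matches / len(human_moves)

# Toy example: four positions, one disagreement (made-up moves).
predicted = ["e2e4", "g1f3", "f1c4", "d2d4"]
actual    = ["e2e4", "g1f3", "b1c3", "d2d4"]
print(move_match_accuracy(predicted, actual))  # 0.75
```

By this measure, Maia's reported "more than 50 percent" means it picks the same move a human would in over half of all positions it sees.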
Maia was introduced to lichess.org, a free online chess service, a few weeks ago. In its first week of availability, the model played a whopping 40,000 games, but that number has risen to 116,370 games now.
Breaking that figure down, the bot has won about 66,000 games, drawn 9,000, and lost 40,000. Before its lichess debut, the model was trained on nine sets of 500,000 "positions" drawn from real human chess games.
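A quick sanity check on those reported counts (note that the rounded per-result figures sum to 115,000, slightly under the 116,370 total games cited, so the percentages below are approximate):

```python
# Win/draw/loss breakdown from the rounded totals reported in the article.
wins, draws, losses = 66_000, 9_000, 40_000
total = wins + draws + losses  # 115,000 (slightly under the cited 116,370)

win_rate = 100 * wins / total
draw_rate = 100 * draws / total
loss_rate = 100 * losses / total

print(f"games:     {total:,}")
print(f"win rate:  {win_rate:.1f}%")   # about 57.4%
print(f"draw rate: {draw_rate:.1f}%")  # about 7.8%
print(f"loss rate: {loss_rate:.1f}%")  # about 34.8%
```

So even while aiming to play at human level rather than above it, the bot wins well over half of its games.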
It’s allegedly possible to play against the bot, though I cannot figure out how to do so, since its profile doesn’t appear to have a “challenge” button of any kind. However, since “maia1” appears to be constantly playing at least 20 games at any given time, you can spectate whenever you like.