Researchers are working on a chess-playing AI that emulates human-level skill

In brief: AI has been kicking our collective butt in just about every classic board game imaginable for many years now. That's no surprise: when you tell an AI to learn from the best with no checks or balances, that's precisely what it will do. Now, however, researchers are looking for a way to handicap chess-playing AI and teach a new model to make more human-like decisions.

This is certainly a novel concept: most chess and board game-playing AIs seek to beat the best of the best. Indeed, in some cases, AI players have been so strong that they've driven some pros out of competitive play entirely.

Maia, on the other hand, is a new chess engine that seeks to emulate, not surpass, human-level chess performance. As researchers point out, this could lead to a more "enjoyable chess-playing experience" for any humans an AI is matched up against, while also allowing those players to learn and improve their skills.

"Current chess AIs don't have any conception of what mistakes people typically make at a particular ability level," University of Toronto researcher Ashton Anderson explains. "They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can't separate out what you should work on."

For a novice or mid-tier player, it can be difficult to identify your weak points when you're getting crushed by your opponent. When the challenge is fair and the playing field is level, however, it's easier to spot the small places where you could have done better.

"Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn't, because they are still too difficult," Anderson continues.

So far, Maia has been able to match human moves more than 50 percent of the time. That's not a great number yet, but it's a start.
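For context, a figure like that is typically measured by replaying human games move by move and counting how often the model's top choice matches the move the human actually played. Below is a rough sketch of that bookkeeping; python-chess is assumed for PGN parsing, and predict_move() is a hypothetical stand-in for whatever model is being tested, not part of Maia.

```python
# Rough sketch: count how often a model's predicted move matches the
# move a human actually played, over a file of PGN games.
# python-chess is assumed; predict_move() is a hypothetical placeholder.
import chess
import chess.pgn

def predict_move(board: chess.Board) -> chess.Move:
    """Stand-in for the model being evaluated (e.g. a policy network)."""
    raise NotImplementedError

def move_match_rate(pgn_path: str) -> float:
    matches = total = 0
    with open(pgn_path) as f:
        while (game := chess.pgn.read_game(f)) is not None:
            board = game.board()
            for human_move in game.mainline_moves():
                if predict_move(board) == human_move:
                    matches += 1
                total += 1
                board.push(human_move)  # follow the game as actually played
    return matches / total if total else 0.0
```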

Maia was introduced to lichess.org, a free online chess service, a few weeks ago. In its first week of availability, the model played a whopping 40,000 games, and that number has since risen to 116,370 games.

Breaking that figure down, the bot has won 66,000 games, drawn 9,000, and lost 40,000. Before its lichess debut, the model was trained on 9 sets of 500,000 "positions" in real human chess games.
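The article doesn't say exactly how those nine sets were assembled, but a natural reading is that games are pooled by the players' rating band so that each set reflects one skill level. The sketch below shows that kind of bucketing; the nine 100-point bands and the WhiteElo header are assumptions for illustration, not a description of the researchers' actual pipeline, with only the 500,000-position cap taken from the article.

```python
# Illustrative only: pool positions from human games into rating bands,
# capping each pool at the article's figure of 500,000 positions.
# The nine 100-point bands are an assumption, not the researchers' setup.
import chess.pgn
from collections import defaultdict

RATING_BANDS = [(1100 + 100 * i, 1200 + 100 * i) for i in range(9)]

def band_for(rating: int):
    for lo, hi in RATING_BANDS:
        if lo <= rating < hi:
            return (lo, hi)
    return None

def collect_positions(pgn_path: str, cap: int = 500_000):
    pools = defaultdict(list)  # band -> list of (FEN, move played)
    with open(pgn_path) as f:
        while (game := chess.pgn.read_game(f)) is not None:
            try:
                band = band_for(int(game.headers.get("WhiteElo", 0)))
            except ValueError:
                continue  # missing or malformed rating header
            if band is None or len(pools[band]) >= cap:
                continue
            board = game.board()
            for move in game.mainline_moves():
                pools[band].append((board.fen(), move.uci()))
                board.push(move)
    return pools
```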

It's allegedly possible to play against the bot, though I cannot figure out how to do so, since its profile doesn't appear to have a "challenge" button of any kind. However, since "maia1" appears to be constantly playing at least 20 games at any given time, you can spectate whenever you like.

Middle image credit: Andrey Popov


 
I play speed chess regularly. The only way to beat the computer is to make completely counterintuitive moves in order to throw off the AI.

I would say an average player does better against the AI than a master player.

The computer can see what's coming based on mathematical equations.
 
Where are you playing speed chess, and at what level?

Computers are at a level where it's basically impossible for any human who ever lived to beat them, no matter their rating or skill level, unless you lower the computer's difficulty to something closer to a human level.

Anand said that a cellphone from around 2007 already played better chess than he did, and at the time he was a super grandmaster and the World Champion.

Back in the day, grandmasters could trick certain programs by setting up a particular position that the software couldn't calculate properly, but that was usually fixed in the next update. It's not possible anymore.

Here's Hikaru Nakamura going over the game where he beat the computer by exploiting a flaw in the software of a chess engine called Rybka. The game is from back in 2008 and took 270 moves, which is insane. Skip to 2:35 to hear him talk about the flaw and how they fixed it in the next update.


But yeah, what I find most fascinating about chess is that it's a reality-based kind of puzzle. I once read a comment from somebody describing how they would create a chess game, and he was basically treating it like a fighting video game where he could make certain moves more powerful. It just doesn't work that way, because chess is strictly cause and effect; you're literally making reality-based decisions.
 
First of all, an AI's limitation is CPU processing power. When you play speed chess, a brilliant AI can still lose simply because it runs out of time.

AIs are limited to mathematical logic, and once again, it is possible to throw traps at them that they haven't anticipated.

Put a supercomputer against another supercomputer of the same power and the outcome should always be a draw.
 
I disagree. The "computer" you can play against on most chess websites is based on algorithms and planning, where researchers have "taught" it things. Those can absolutely be fooled by unexpected traps. They won't time out, but this is where you hit the horizon limit (mentioned in the video), which is how far ahead the computer can calculate possibilities.

Modern AI doesn't take strategies as input; it just messes around with random moves (an oversimplification), plays a huge number of games, and sometimes uses a little extra data, but not always. These AIs don't really have exploits, and although they can be limited by processing power, they can be designed to use very little of it.
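To make that "horizon limit" concrete, here's a bare-bones depth-limited search of the kind those classic website bots use: anything beyond the fixed depth is simply invisible to it. This is a minimal sketch assuming python-chess for the board mechanics, with a material-only evaluation as a deliberate simplification.

```python
# Minimal depth-limited negamax: a toy version of the classic search whose
# fixed depth is the "horizon" discussed above. python-chess assumed.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from the side-to-move's point of view."""
    score = 0
    for piece, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece, board.turn))
        score -= value * len(board.pieces(piece, not board.turn))
    return score

def negamax(board: chess.Board, depth: int) -> int:
    if board.is_checkmate():
        return -10_000                      # side to move is mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)              # the horizon: search stops here
    best = -10_000
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    best_score, best = -10_001, None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_score, best = score, move
    return best
```

Raising the depth pushes the horizon further out, which is also why these engines slow down so quickly without pruning.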
 
Why are they making machines mimic human error instead of making machines do machine things?
 
It's very difficult to beat modern AI at chess unless you handicap the program severely. If you play completely counterintuitive moves, those moves are probably just wrong. A few years back I wrote a chess program that plays the way humans play, and I was fairly pleased with the results. If you're stronger than 1850 you'll probably smash it; by default it plays like a fairly standard club player. It's called Fun Chess and you need Java to run it. http://www.bikesandkites.com/Chess/
 
Humans with no experience in chess play local battles; they can't wrap their minds around the macroscopic state of the game (the big picture). That's why experienced players, when they play each other, will resign over a lost position or a lost pawn: they know that even if the local situation looks viable, they will lose at the macroscopic scale.

So a good engine with a shallow decision tree, some blur from randomness when the position gets more complicated, and slightly different weights on the piece values (humans usually prefer playing with one type of piece more than others) will play like a human.
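For what it's worth, here's a toy sketch of that recipe: a shallow one-ply look, Gaussian noise that grows with how busy the position is, and slightly skewed piece values. Everything here (the particular value skew, the noise scaling, python-chess for the mechanics) is made up for illustration; it isn't Maia's method.

```python
# Toy "human-like" move picker: skewed piece values, a shallow one-ply look,
# and more random blur in busier positions. Illustration only, not Maia.
import chess
import random

# Slightly biased values, standing in for a human's stylistic preferences
# (this hypothetical player overrates knights and underrates rooks a bit).
PIECE_VALUES = {chess.PAWN: 1.0, chess.KNIGHT: 3.4, chess.BISHOP: 3.0,
                chess.ROOK: 4.6, chess.QUEEN: 9.0, chess.KING: 0.0}

def material(board: chess.Board, color: bool) -> float:
    return sum(v * len(board.pieces(p, color)) for p, v in PIECE_VALUES.items())

def human_like_move(board: chess.Board, base_noise: float = 0.5) -> chess.Move:
    mover = board.turn
    # Crude proxy for how complicated the position is: number of legal moves.
    complexity = board.legal_moves.count() / 20.0
    sigma = base_noise * (1.0 + complexity)
    best_score, best = float("-inf"), None
    for move in board.legal_moves:
        board.push(move)
        score = material(board, mover) - material(board, not mover)
        board.pop()
        score += random.gauss(0.0, sigma)   # the "blur"
        if score > best_score:
            best_score, best = score, move
    return best
```

The same knobs (noise and value skew) would apply just as well on top of a deeper search instead of the one-ply look used here.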
 