Human Go player thoroughly beats AI after a computer program finds its weakness

midian182

What just happened? It seems the world doesn't have to worry about artificial intelligence dominating all aspects of our lives just yet. A human player has thoroughly beaten a top AI at the board game Go by exploiting a weakness in the system, though humanity's triumph could be tarnished by the fact that it took another computer to identify the flaw.

The current AI revolution can trace many of its roots back to 2016 when Google's DeepMind AI beat a top-ranked Go player in five straight matches. While computers had been beating humans at chess for a long time before then, Google said the Go victory was significant as the possible moves in the game outnumber the atoms in the universe.

The human player defeated by DeepMind in 2016 retired from professional play three years later over the increasing dominance of AI, calling it "an entity that cannot be defeated." But Kellin Pelrine, an American amateur player who's one level below the top amateur ranking, won 14 of 15 games against the KataGo AI system.

While Pelrine won the games without the direct help of a computer, it was a program developed by a research firm called FAR AI that showed how KataGo could be defeated. Pelrine said it's "not completely trivial, but it's not super-difficult" to learn the method, which he also used to beat Leela Zero, another top AI-powered Go player.

The Financial Times writes that FAR AI played more than one million games against KataGo to find a weakness that could be exploited by an intermediate-level or better player.
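The approach FAR AI used — probing a frozen opponent over many games until a reliably winning strategy emerges — can be caricatured with a toy example. This is a minimal sketch, not KataGo or FAR AI's actual method: the "victim" here is a hypothetical fixed policy with a deliberately planted blind spot, and the "adversary" is a simple epsilon-greedy bandit that discovers it through repeated play.

```python
# Toy sketch of adversarial probing against a frozen policy (illustrative only,
# not KataGo). The victim has a planted weakness; the adversary finds it by
# tracking per-move win rates over many games.
import random

random.seed(0)

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def victim(adversary_move):
    # Frozen policy with a flaw: it always answers "rock" with "scissors".
    if adversary_move == "rock":
        return "scissors"          # the exploitable blind spot
    return random.choice(MOVES)

def play(adversary_move):
    # Returns 1 if the adversary's move beats the victim's reply.
    return 1 if BEATS[adversary_move] == victim(adversary_move) else 0

wins = {m: 0 for m in MOVES}
plays = {m: 0 for m in MOVES}
for _ in range(10_000):
    if random.random() < 0.1:      # occasionally explore a random move
        move = random.choice(MOVES)
    else:                          # otherwise exploit the best-known move
        move = max(MOVES, key=lambda m: wins[m] / plays[m] if plays[m] else 0.0)
    plays[move] += 1
    wins[move] += play(move)

best = max(MOVES, key=lambda m: wins[m] / max(plays[m], 1))
print(best)  # converges on "rock", the victim's blind spot
```

The real attack is vastly more involved (KataGo is a deep network playing full-board Go), but the principle is the same: a fixed policy's systematic mistakes can be surfaced by an opponent that searches specifically for them rather than for generally strong play.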

Go involves two players alternately placing black and white stones on a 19 x 19 board in an attempt to encircle their opponent's pieces and enclose the largest amount of space. Pelrine's strategy involved creating a loop of stones to encircle one of his opponent's groups while distracting the AI with moves in other sections of the board. "As a human, it would be quite easy to spot," Pelrine said, but the AI didn't realize what was happening, even when the loop was almost complete.

Stuart Russell, a computer science professor at the University of California, Berkeley, said Pelrine's victory illustrated the flaws in deep learning systems behind many of today's AIs, in that they are limited by what they're trained on and cannot think for themselves; it's what's caused some of the recent weird responses from ChatGPT and other AI services.

The flaw has been exploited by Go players for several months now. Engadget reports that Lightvector, the developer of KataGo, said it has been working on a fix for various attacks that use the exploit.

Masthead: HermanHiddema

AI - A bad fad.
I think ChatGPT shows we are at the cusp of an AI revolution. I would agree that before it, AI in the everyday world was little more than marketing. ChatGPT has shown that the bizarre fictions of AI might no longer be fiction. We've been developing deep learning for almost 15 years now, and I'd say we're finally starting to see something of substance come from it.

I’ve been impressed, but also not impressed, by ChatGPT tbh. It essentially requires a degree in the topic you’re asking it questions about to sort things out when it makes egregious mistakes, and it absolutely sucks at some topics, such as anything that relies on visual information, as well as legal topics.

It’s cool, and somewhat useful, but it’s still far too prone to giving very convincing, entirely wrong, answers.
My argument is that I'm impressed by it compared to what existed even a year ago, and that we're only going to see tech like it improve. Take IPv4 as an example: its designers could never have predicted the world we live in today, so imagine what could be possible with this tech given the same span of time.


That's exactly how modern media and search engines work. You search for articles on a topic and they show you the most popular ones, not necessarily the most accurate. But you won't know the difference unless you're an expert. Though sometimes it's enough to just have common sense.