AI doesn't just beat humans in Quake III, it's more cooperative, too

mongeese

Staff
Why it matters: Having taken the crown in board games, DeepMind has shifted to something a little more ambitious. Gone are the simple rule sets, two dimensions, and defined grids; in their place are uncontrolled 3D movement, randomly generated maps, and teamwork. Given just a single metric, victory or defeat, DeepMind’s "FTW" AI managed to secure victory after victory in a tournament against human players.

Even in a game as complex as Go, DeepMind was able to teach its AI, AlphaGo, the set of rules, the possible moves, and what each position on the board means.

In the classic shooter Quake III's Capture the Flag mode, the AI had to analyze the raw pixels of the rendered game screen to figure out the rules and how to win. Matching an “average human” took 140,000 training games, matching a “strong human” took 175,000, and by the time the researchers stopped at 450,000 games the AI was significantly better than all human players.

Over the course of the tournament, human teams captured an average of 16 fewer flags than AI pairs, and a pair of professional gamers who could talk to each other managed to beat the AI only 25% of the time, even after 12 hours’ practice. To rub salt in the wound, the forty human players in the tournament rated the AI as more cooperative than their fellow humans. What does that say about us?

Despite never being told the rules, nor being trained on a dataset of human games, the AI taught itself much as a human would. After grasping the basic concepts, that it had its own base, that the enemy had one too, and that carrying the enemy flag back to its own base scored the points that decide the match, the AI slowly figured out how to kill enemy players and claim flags. Trained against offshoots of itself, it discovered basic strategies, like following the other player and camping the enemy spawn point. Much like a human, though, it abandoned some strategies as it improved, in favor of new ones such as self-defense.
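The self-play setup described above, training against frozen "offshoots" of itself, can be sketched in miniature. This is a toy illustration, not DeepMind's code, and every name in it is hypothetical:

```python
import random

random.seed(0)

def train_with_self_play(n_iterations=5):
    """Sketch of self-play: the learner periodically snapshots itself,
    and each game's opponent is drawn from those past snapshots, so the
    agent keeps facing slightly older versions of itself."""
    snapshots = ["agent_v0"]  # frozen copies of earlier selves
    history = []
    for i in range(1, n_iterations + 1):
        opponent = random.choice(snapshots)      # play against a past self
        history.append((f"agent_v{i}", opponent))
        snapshots.append(f"agent_v{i}")          # snapshot the improved agent
    return history

games = train_with_self_play()
```

Sampling opponents from the whole population of past snapshots, rather than always mirroring the newest agent, is what lets old strategies get rediscovered and punished, which is roughly why the agent outgrows tricks like spawn camping.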

The researchers built two layers into the AI: a ‘thinking’ layer responsible for high-level strategy, and a ‘doing’ layer that translated those strategies into specific actions. The network developed dedicated neurons for checking whether it had the flag, whether its teammate had the flag, whether an enemy was in sight, and where the enemy base was, for example.
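That two-layer split can be illustrated with a toy sketch. This is not the actual FTW architecture (which uses recurrent networks running at two timescales); the class, constants, and the hash-style action rule below are all stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoLayerPolicy:
    """Toy two-timescale policy: a slow 'thinking' layer picks a strategy
    every SLOW_EVERY steps; a fast 'doing' layer maps (observation,
    strategy) to a concrete action on every single step."""

    SLOW_EVERY = 10  # strategy layer updates 10x slower than the action layer

    def __init__(self, n_strategies=4, n_actions=6):
        self.n_strategies = n_strategies
        self.n_actions = n_actions
        self.strategy = 0
        self.step_count = 0

    def think(self, observation):
        # Stand-in for the slow layer: just sample a strategy id.
        return int(rng.integers(self.n_strategies))

    def act(self, observation):
        if self.step_count % self.SLOW_EVERY == 0:
            self.strategy = self.think(observation)  # rethink occasionally
        self.step_count += 1
        # Fast layer: the action depends on both the current observation
        # and whichever strategy the slow layer last committed to.
        return (self.strategy * 31 + int(observation)) % self.n_actions

policy = TwoLayerPolicy()
actions = [policy.act(obs) for obs in range(30)]
```

The point of the split is that the slow layer can hold a goal ("defend our flag") steady across many frames while the fast layer handles moment-to-moment aiming and movement.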

For the tournament, the researchers slowed the AI’s reaction time by 267 milliseconds, the figure they calculated as the average human player’s reaction time, and it made very little difference to the AI’s performance. The AI also originally had a shooting accuracy of 80%, compared to humans’ 50%, but the researchers reduced that, too. Our AI overlords are simply smarter than us.
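A reaction-time handicap like this can be approximated by routing observations through a fixed-length delay buffer: at 60 frames per second, 267 ms is roughly 16 frames. This is a minimal sketch under that framing, not the researchers' implementation:

```python
from collections import deque

class DelayedObservations:
    """Feed the agent frames delayed by a fixed number of steps,
    approximating a human-like reaction time."""

    def __init__(self, delay_frames=16):
        # Holds the current frame plus delay_frames older ones.
        self.buffer = deque(maxlen=delay_frames + 1)

    def push(self, frame):
        self.buffer.append(frame)
        # Until the buffer fills, the agent sees the oldest frame
        # available; afterwards it always sees the frame from
        # delay_frames steps ago.
        return self.buffer[0]

delay = DelayedObservations(delay_frames=3)
seen = [delay.push(f) for f in range(6)]  # → [0, 0, 0, 0, 1, 2]
```

Because the deque has a fixed maximum length, each new frame automatically evicts the oldest one, so the delay stays constant without any bookkeeping.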

One of the study’s most interesting findings was that the best team was one human and one AI. Despite being unable to communicate like a pair of humans, or to anticipate each other’s moves as two AIs might, the unlikely duo had a 5% higher probability of winning than a pure AI pair.

This shows that the AI can still be developed further, though it’s unclear what future training might entail. It’s also interesting to note that while the AI quickly crushed players in Go and chess, it won by a much slimmer margin in Quake III. Humans will likely retain the edge for some time in modern first-person shooters, where environments, character classes, and weapon options further complicate strategy.


 
I believe cooperative in the sense of an AI is a misnomer. The idea of cooperation is disregarding one's sense of self for the sake of the group, but in the case of an AI there is no sense of self. They don't have a personality, they aren't going to chase a frag because they want that kill, and they aren't going to throw because they don't like their teammates. These AIs can't not work together, so it's less cooperation and more being forced by their programming. Let me know when the AI actually has a choice to work with its teammates or go solo.
 
I beat XAERO in Quake 3 on the hardest difficulty simply by hiding behind a pillar and taking potshots at him with my Railgun. The computer is deadly accurate with any weapon, so beyond that, I have no idea who could beat a Nightmare difficulty bot with a railgun.
 
I'd like to face this opponent. I haven't played video games competitively for years, but I'm still confident I could channel enough energy to defeat this AI.

Never underestimate the power of human intuition.
 
It's a shame stupid AI can't figure out how not to direct me down a closed street in Google Maps, though.
 


AI is stupid. It's one reason Google's image recognition algorithms are horrible. Or why Alexa can't understand you or understand context (for example, I can say "Alexa, set thermostat to 72" and it might say "sorry, thermostat is off" or something equally stupid, and I can't reply "turn it on" and have it understand my obvious context. No, I have to tell it in exact syntax: "Alexa, turn thermostat to cool"). Or why self-driving cars are always crashing.

Sure, AI has its uses, but they're narrow.

AI can only win games that basically can be reduced to huge math problems, like Go or Chess. And that only with human minders.
 
You people really don't understand AI. You're used to so-called "bots", which you wrongly call "AI", and which only work for that specific game (they aren't general purpose). Ordinary bots can see the entire map data in a simple digital form, which is easy to process. Bots can run on an ordinary laptop, because they have only a few pre-programmed actions. They don't really learn, in the human sense of the word.

Google AI is totally different from any bot you've ever played against. It does NOT have access to the game internals. It CANNOT see the internal map data. It has to learn each map from scratch, just by looking at it. Like a human. Which is a lot harder. It doesn't even know the game tactics. It has to learn that too. Which means this AI is a lot smarter. It really learns. It doesn't have a few pre-programmed routines like your ordinary bot.

So, your tactics of "hiding behind a pillar" wouldn't work for very long. AI would figure out what you're doing, adapt and shoot you.

And yes, AI will be better than any human player, ever. Pick your best human hero, he's gonna lose the battle against AI. If not today, then in 2 months. And once he starts losing, he'll never recover. Because humans don't really evolve. While AI does. Constantly.
 
I believe cooperative in the sense of an AI is a misnomer. The idea of cooperation is disregarding one's sense of self for the sake of the group, but in the case of an AI there is no sense of self. They don't have a personality, they aren't going to chase a frag because they want that kill, and they aren't going to throw because they don't like their teammates. These AIs can't not work together, so it's less cooperation and more being forced by their programming. Let me know when the AI actually has a choice to work with its teammates or go solo.

Well, that depends where you stand on the idea of free will vs. determinism. It's been shown that humans' decisions are actually made subconsciously before we are even aware of them. In that sense you can argue we cannot "choose" what we do any more than a programmed AI can. We are completely driven by the programming (biological makeup) of our brains, which is bound by the laws of physics, unless you believe there's something else besides the brain making choices (a "soul"). Our brains are just so much more complex, and we have awareness of ourselves, so we create the illusion of choice even though we, arguably, have none.
But as AIs become more complex, and certainly as they develop better and better self-learning capabilities, they will start to resemble humans (or at least some kind of living being) more and more. Once you have multiple learning AIs faced with slightly different inputs and learning paths, they will surely develop different "personalities", just like humans or animals. And there's nothing that says AIs won't become "conscious" at some level of evolution, once they get sufficiently complex (though I'm sure that's still some way off).
 