The DeepMind division of Google has something to celebrate. Its AI just beat a top-ranked Go player in five straight games. Computers have been beating humans at chess and other games for a long time, but Go is different. According to Google's blog post, the possible board positions in the game outnumber the atoms in the universe - a search space "more than a googol times larger than chess."

The system behind the Go dominance is AlphaGo. DeepMind built the system and then trained its neural network on 30 million moves from professional players. The next step was for AlphaGo to learn to create its own strategies. How? By playing itself (the only worthy opponent) and using reinforcement learning (a kind of trial-and-error process). Not surprisingly, that took a ton of processing power, so Google leaned heavily on its cloud platform (a nice way to plug another part of the company). There's no mention of whether they played the Rocky soundtrack while AlphaGo was in training.
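AlphaGo's real pipeline pairs deep neural networks with tree search, so nothing that fits in a blog post can reproduce it. But the core idea in that paragraph - get better by playing yourself and nudging your strategy after each win or loss - can be shown on a toy scale. Here's a minimal, purely illustrative sketch that learns the simple game of Nim (take 1-3 stones from a pile; whoever takes the last stone wins) through self-play. Every name and parameter below is made up for the example, not taken from DeepMind's system.

```python
# Toy self-play reinforcement learning on Nim. NOT AlphaGo's method --
# just an illustration of "improve by playing yourself via trial and error."
import random
from collections import defaultdict

def train(episodes=30000, pile=10, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # (stones_left, move) -> value for the player to move
    for _ in range(episodes):
        state, history = pile, []
        while state > 0:
            moves = [m for m in (1, 2, 3) if m <= state]
            # epsilon-greedy: mostly play the best known move, sometimes explore
            move = rng.choice(moves) if rng.random() < eps \
                else max(moves, key=lambda m: Q[(state, m)])
            history.append((state, move))
            state -= move
        # the player who took the last stone won; credit each move backwards,
        # flipping the sign every ply because the two "players" alternate
        reward = 1.0
        for s, m in reversed(history):
            Q[(s, m)] += alpha * (reward - Q[(s, m)])
            reward = -reward
    return Q

def best_move(Q, state):
    return max((m for m in (1, 2, 3) if m <= state), key=lambda m: Q[(state, m)])
```

With these settings the greedy policy rediscovers the classic Nim strategy of always leaving your opponent a multiple of four stones - the same "invent your own strategy by trial and error" loop, just on a game a few googols smaller than Go.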

Next, Google invited reigning three-time European Go champion Fan Hui to its London office for a behind-closed-doors match. AlphaGo lived up to its name and won five games to zero. Google tried not to be too braggy, calling the victories "just one rung on the ladder to solving artificial intelligence." The computer's next match, in March, is against world champion Lee Sedol.