AlphaGo!


First chess, then Scrabble, then Jeopardy. Robots must have a real inferiority complex – they’re constantly challenging humans at games.

This week an artificial intelligence beat world champion Lee Sedol 4–1 at the board game Go. The 2,500-year-old Chinese game is exponentially more complex than chess, and was thought to demand not just brute-force calculation but the distinctly human trait of intuition. The event is significant because the victor, the AI AlphaGo by Google DeepMind, is built on a neural network – a technology modeled on the human brain that also powers the likes of Twitter and Facebook.
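For a back-of-the-envelope sense of that complexity gap, the usual ballpark figures are around 35 legal moves per chess position over an ~80-move game, versus around 250 legal moves per Go position over ~150 moves. A quick Python sketch (the numbers are rough conventional estimates, not exact counts):

```python
# Rough game-tree sizes from the usual ballpark figures:
# ~35 legal moves per chess position over ~80 moves,
# ~250 legal moves per Go position over ~150 moves.
chess_tree = 35 ** 80
go_tree = 250 ** 150

print(f"chess: ~10^{len(str(chess_tree)) - 1} possible games")
print(f"go:    ~10^{len(str(go_tree)) - 1} possible games")
# chess: ~10^123, go: ~10^359 – Go's tree is bigger by hundreds of
# orders of magnitude, far too big to search by brute force.
```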

If you show enough faces to a neural network, it can learn to recognise them. If you talk to it enough, it will learn to hold a decent conversation. If you feed it thirty million moves from expert Go players and make it play copies of itself, it will not only learn to play the game, but also learn how to anticipate opponents. AlphaGo didn’t just teach itself to play Go – it taught itself how to improve. The machines are learning, and it’s a sign of things to come.
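For the curious, here’s a loose sketch in Python of that two-stage recipe – imitate the experts first, then improve through self-play. Everything in it is a stand-in rather than DeepMind’s actual system: a toy flattened board, a one-layer softmax “network”, and a coin flip standing in for the game result.

```python
import numpy as np

BOARD = 9 * 9                        # toy 9x9 board, flattened
rng = np.random.default_rng(0)

# A minimal stand-in for the policy network: one linear layer plus
# softmax, mapping a board position to a probability for each move.
W = rng.normal(0.0, 0.01, (BOARD, BOARD))

def policy(position):
    logits = position @ W
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

# Stage 1: supervised learning – nudge the policy towards the move an
# expert actually played in a recorded position (cross-entropy step).
def supervised_step(position, expert_move, lr=0.1):
    global W
    p = policy(position)
    grad = -np.outer(position, p)      # gradient of log-softmax
    grad[:, expert_move] += position   # w.r.t. the weights W
    W += lr * grad

# Stage 2: self-play reinforcement – the policy picks a move in a game
# against a copy of itself; winning moves are reinforced, losing moves
# discouraged. The coin flip below is a placeholder for a real game.
def self_play_step(lr=0.05):
    global W
    position = rng.normal(size=BOARD)             # fake game state
    move = rng.choice(BOARD, p=policy(position))  # sample from policy
    won = rng.random() < 0.5                      # placeholder outcome
    sign = 1.0 if won else -1.0
    p = policy(position)
    grad = -np.outer(position, p)
    grad[:, move] += position
    W += sign * lr * grad

# Imitate one fake "expert" example, then run some self-play updates.
pos, expert = rng.normal(size=BOARD), 40
for _ in range(200):
    supervised_step(pos, expert)
print("policy's probability of the expert move:", policy(pos)[expert])
for _ in range(200):
    self_play_step()
```

In the real system, the linear layer is a deep convolutional network, the single fake example is thirty million recorded expert moves, and the coin flip is the actual outcome of a full game of Go against an earlier copy of the network.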

The next challenge for the team behind AlphaGo is to teach an AI to play the popular video game StarCraft. Yet The Guardian’s Michael Cook argues that for real advances in AI, researchers must look beyond games with fixed rules to more complex challenges – beyond beating humans and towards working with them. Could an AI learn to play a team sport, or improvise and co-create with a human? Those are the challenges that could deliver truly meaningful progress in AI.

Image credit: Darja Trifonova

This originally appeared in Moving World Wednesday 20160316.

Subscribe to Moving World Wednesday here.
