An anonymous reader quotes a report from VentureBeat: Google and Blizzard are opening up StarCraft II to anyone who wants to teach artificial intelligence systems how to conduct warfare. Researchers can now use Google's DeepMind A.I. to test theories about how machines can learn to make sense of complicated systems — in this case, Blizzard's beloved real-time strategy game. In StarCraft II, players fight one another by gathering resources to pay for defensive and offensive units. The game has a healthy competitive community known for a ludicrously high skill level. And considering that DeepMind has previously conquered the complicated turn-based game Go, a real-time strategy game makes sense as the next frontier. The companies announced the collaboration today at the BlizzCon fan event in Anaheim, California, and Google's DeepMind division published a blog post explaining the partnership and why StarCraft II is so well suited to machine-learning research.

If you're wondering how much humans will have to teach the A.I. about how to play and win at StarCraft, the answer is: very little. DeepMind learned to beat the best Go players in the world by teaching itself through trial and error. All the researchers had to do was define how to determine success; the A.I. could then play games against itself in a loop, reinforcing whatever strategies led to more of it. For StarCraft, that will likely mean asking the A.I. to prioritize how long it survives and/or how much damage it deals to the enemy's primary base. Or maybe researchers will find that defining success in a more abstract way leads to better results. Discovering the answers to all of this is the entire point of Google and Blizzard teaming up.
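The learning loop the article describes — define a success metric, then let the system play itself and reinforce whatever scores best — can be sketched in miniature. Everything below is a hypothetical illustration, not DeepMind's actual method or API: the strategy names, the reward weights, and the `simulate` hook are all invented for the sake of the example.

```python
import random

def reward(survived_seconds, base_damage, won,
           survive_w=0.01, damage_w=1.0, win_bonus=100.0):
    # A scalar "success" signal, combining the two measures the article
    # suggests: time survived and damage dealt to the enemy's primary base,
    # plus a large bonus for winning outright. The weights are arbitrary.
    return (survive_w * survived_seconds
            + damage_w * base_damage
            + (win_bonus if won else 0.0))

def self_play(strategies, simulate, episodes=1000, epsilon=0.1, seed=0):
    # Bandit-style reinforcement: track the average reward of each strategy
    # and increasingly prefer whichever has paid off best so far, while
    # occasionally exploring alternatives (epsilon-greedy).
    rng = random.Random(seed)
    totals = {s: 0.0 for s in strategies}
    counts = {s: 0 for s in strategies}
    for _ in range(episodes):
        if rng.random() < epsilon or not any(counts.values()):
            choice = rng.choice(strategies)          # explore
        else:
            choice = max(strategies,                 # exploit the best so far
                         key=lambda s: totals[s] / max(counts[s], 1))
        totals[choice] += simulate(choice, rng)
        counts[choice] += 1
    return max(strategies, key=lambda s: totals[s] / max(counts[s], 1))
```

A toy `simulate` that makes one strategy clearly stronger is enough to watch the loop converge on it; in the real project, that stand-in would be an actual game of StarCraft II.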