Patents

Activision Patents Pay-To-Win Matchmaker (rollingstone.com) 133

New submitter EndlessNameless writes: If you like fair play, you might not like future Activision games. They will cross the line to encourage microtransactions, specifically matching players in ways that both encourage and reward purchases. Rewarding the purchase, in particular, is an explicit and egregious elimination of any claim to fair play. "For example, if the player purchased a particular weapon, the microtransaction engine may match the player in a gameplay session in which the particular weapon is highly effective, giving the player an impression that the particular weapon was a good purchase," according to the patent. "This may encourage the player to make future purchases to achieve similar gameplay results." Even though the patent's examples are all for a first-person-shooter game, the system could be used across a wide variety of titles. "This was an exploratory patent filed in 2015 by an R&D team working independently from our game studios," an Activision spokesperson tells Rolling Stone. "It has not been implemented in-game." Bungie also confirmed that the technology isn't being used in games currently on the market, specifically mentioning Destiny 2.
AI

DeepMind's Go-Playing AI Doesn't Need Human Help To Beat Us Anymore (theverge.com) 133

An anonymous reader quotes a report from The Verge: Google's AI subsidiary DeepMind has unveiled the latest version of its Go-playing software, AlphaGo Zero. The new program is a significantly better player than the version that beat the game's world champion earlier this year, but, more importantly, it's also entirely self-taught. DeepMind says this means the company is one step closer to creating general-purpose algorithms that can intelligently tackle some of the hardest problems in science, from designing new drugs to more accurately modeling the effects of climate change. The original AlphaGo demonstrated superhuman Go-playing ability, but needed the expertise of human players to get there. Namely, it used a dataset of more than 100,000 Go games as a starting point for its own knowledge. AlphaGo Zero, by comparison, has only been programmed with the basic rules of Go. Everything else it learned from scratch. As described in a paper published in Nature today, Zero developed its Go skills by competing against itself. It started with random moves on the board, but every time it won, Zero updated its own system, and played itself again. And again. Millions of times over. After three days of self-play, Zero was strong enough to defeat the version of itself that beat 18-time world champion Lee Se-dol, winning handily -- 100 games to nil. After 40 days, it had a 90 percent win rate against the most advanced version of the original AlphaGo software. DeepMind says this makes it arguably the strongest Go player in history.
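The self-play idea described above -- start from random moves, credit the moves that led to wins, and repeat -- can be illustrated on a much simpler game. The sketch below is not DeepMind's algorithm (AlphaGo Zero pairs a deep neural network with Monte Carlo tree search); it is a toy tabular version, with all names and parameters chosen for illustration, that learns the game of Nim purely by playing against itself:

```python
import random

HEAP = 10  # starting stones; the player who takes the last stone wins

# Q[(stones, take)] -> estimated win rate for the player to move
Q = {}

def best_move(stones, eps=0.1):
    """Epsilon-greedy move choice: mostly pick the highest-valued move."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.5))

def self_play_game():
    """Play one game against itself; return the (state, move) history."""
    history, stones = [], HEAP
    while stones > 0:
        m = best_move(stones)
        history.append((stones, m))
        stones -= m
    return history  # whoever made the last move took the last stone and won

def train(episodes=20000, lr=0.1):
    for _ in range(episodes):
        history = self_play_game()
        # Walk back from the final move: the last mover won (reward 1),
        # the opponent's plies get reward 0, alternating up the history.
        for i, (s, m) in enumerate(reversed(history)):
            reward = 1.0 if i % 2 == 0 else 0.0
            old = Q.get((s, m), 0.5)
            Q[(s, m)] = old + lr * (reward - old)

random.seed(0)
train()
# In Nim the winning strategy is to leave the opponent a multiple of 4,
# so with enough self-play the move from 10 stones should be to take 2.
print(best_move(10, eps=0))
```

The agent is never told the winning strategy; like Zero, it only knows the rules and improves by updating its estimates after each game against itself -- though here the "network" is just a lookup table and the "search" is a one-step greedy choice.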
