
AI System Invents New Card Games (For Humans) 112

jtogel writes "This New Scientist article describes our AI system that automatically generates card games. The article contains a description of a playable card game generated by our system. But card games are just the beginning... The card game generator is part of a larger project to automate all of game development using artificial intelligence methods — we're also working on level generation for a variety of different games, and on rule generation for simple arcade-like games."
  • by clam666 ( 1178429 ) on Saturday May 04, 2013 @01:20AM (#43627281)

    On a tangentially related idea, we're working on a machine-learning project that takes games and their rules of play, then derives strategy based on the rules.

    Nothing particularly new, except that we don't define what winning is, just the rules of the game. No hint is given as to what constitutes good play, or even what "playing" is. Although it is a very slow process depending on game complexity (learning can take weeks and sometimes months of processing time), it requires no real programming effort, because we don't have to know what "good" play is or encode any particular algorithms; the system produces better and better tactics and strategies during the learning process by experimenting with the rules and with ways of playing.

    What's cool about this is that you can watch it teaching itself different strategies and tactics. Some of the "tactics" it creates are often counterintuitive or plain bizarre, but the overall strategies it develops allow for some really different playing experiences, since it doesn't follow human game logic based on experience with "similar" games or on "intuition".
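    A minimal sketch of that loop, assuming a toy game (single-pile Nim) and a much cruder learner than the poster describes; every name here is illustrative, not theirs:

```python
import random

PILE, MOVES = 10, (1, 2, 3)  # toy Nim: take 1-3 stones, taking the last stone wins

def random_strategy():
    # a strategy is nothing but a legal move choice for every reachable pile size
    return {n: random.choice([m for m in MOVES if m <= n]) for n in range(1, PILE + 1)}

def play(a, b):
    """Run one game; return 0 if strategy a (the first mover) wins, else 1."""
    pile, players, turn = PILE, (a, b), 0
    while pile > 0:
        pile -= players[turn % 2][pile]
        turn += 1
    return (turn - 1) % 2

def mutate(s):
    child = dict(s)
    n = random.randint(1, PILE)
    child[n] = random.choice([m for m in MOVES if m <= n])
    return child

def evolve(generations=100, pop_size=16):
    pop = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        # fitness is only the aggregate win count from a round-robin --
        # the system is never told what "good play" looks like
        wins = [0] * pop_size
        for i in range(pop_size):
            for j in range(pop_size):
                if i != j:
                    winner = i if play(pop[i], pop[j]) == 0 else j
                    wins[winner] += 1
        ranked = sorted(range(pop_size), key=lambda k: -wins[k])
        elite = [pop[k] for k in ranked[: pop_size // 2]]
        pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
    return pop[0]
```

    Run long enough, the champion tends to rediscover the classic "leave a multiple of four" rule without ever being told what winning means, only its aggregate rank.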

  • The Past, also: (Score:5, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Saturday May 04, 2013 @01:41AM (#43627341) Journal

    I see MMO expansions someday taking this route to expedite content generation. Players complain there's not enough content? Drag and drop your quest generator, add a bit of human tweaking, and you're good to go. I'm sure some of the systems in Eve were built partly through random generation.

    It turns out that procedural generation is conceptually pretty easy; but making it good is much harder. Early videogames (from the era where memory and storage constraints were Serious Business) and demoscene stuff (where the constraints are wholly artificial; but that's the point of the exercise) were pretty much forced to rely on it heavily because they simply didn't have the option of storing canned content.

    Today, though, you see games with substantially greater numbers of (not inexpensive) artists and designers thrown at them, and gigabytes of art assets, with hand-tweaking especially evident in places where the player is likely to look closely. E.g., generic NPCs will be thrown together from parts, giving the world a varied population without requiring the art people to hand-model 10,000 different 'bandit' characters; but the risk of output that looks a little off, or that hit a few branches of the ugly tree on the way down, means that the critical NPCs that follow you around for half the game have their appearance nailed down precisely. The fact that artists are slow and expensive has created a demand for procedural generation tools, and quite a few exist (I'll just mention SpeedTree, purely because the phrase "SpeedTree for Games has been the gaming industry's premier vegetation solution since 2002" amuses me); but the problem of creating really good environments continues to be vexing enough that titles that can afford it throw a lot of humans at the problem.
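    The parts-based NPC approach described above can be sketched in a few lines; the part pools here are invented for illustration:

```python
import random

# hypothetical part pools -- a real pipeline would reference actual art assets
PARTS = {
    "head": ["scarred", "hooded", "bearded", "tattooed"],
    "torso": ["leather vest", "chainmail", "ragged coat"],
    "weapon": ["club", "shortsword", "crossbow"],
}

def generate_npc(rng=random):
    # combinatorics do the work: 4 * 3 * 3 = 36 distinct bandits from only 10 parts
    return {slot: rng.choice(options) for slot, options in PARTS.items()}

npc = generate_npc()
```

    A handful of parts yields combinatorially many NPCs; the quality-control problem (the "ugly tree") is exactly why key characters still get hand-authored.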

  • by clam666 ( 1178429 ) on Saturday May 04, 2013 @01:48AM (#43627359)

    Let me clarify that, as that statement was misleading.

    We don't program what winning is as any function of the strategy. The system comes up with several strategies, which all play against each other. At the end of a series of competitions, a strategy is effectively told: "Hey, you played against a bunch of different opponents, and you won more than the rest." We don't define how it won or what winning means; we just tell the system that strategy 1532 was the best. The system learns which strategies work better than others, so it can learn which methods are more successful. It doesn't know why it won, just that when it made certain decisions it won more often. We don't even tell it after each game; we tell it how it did after an aggregation of multiple competitions. By comparing all the strategies it tried, it develops better and more complex ways to win (which we never told it how to do).

    Even more interesting is when it arrives at what are considered doctrinal tactics, the same ones humans have converged on to win (or to statistically increase the chances of winning), even though no such logic was included in the programming.

    The benefit of this is that although it takes a LONG time to develop "good" strategies, it comes up with completely unique and novel approaches to winning, even though it doesn't know exactly how it won, only that its strategy wins more often than everyone else's.

    The benefit to us is that we just give it the game rules; we don't have to come up with any specific playing algorithm, because the learning system figures that out. We just tell it the rules, whether they are concrete as in chess (bishops move diagonally, pawns move one square, or two from the start, etc.) or variable rules based on other complexity factors. Whether it's poker or chess or military tactics, the system's job is to come up with the strategy. How good or complex that strategy is allowed to be is a function of how much processing time we want to give the system to learn the best way to win.

  • by clam666 ( 1178429 ) on Saturday May 04, 2013 @03:58AM (#43627707)

    We use several forms of evolutionary programming in different sections of the learning system.

    There are hybridized genetic algorithms in the portions handling strategy blending and evolution, which apply a few different forms of selection pressure and evolution control; given the training times involved, this is critical to avoiding premature convergence or genetic instability.

    Additionally, we introduce factors such as genetic drift and migration, so that competing strategies can evolve independently as they explore the strategy space.
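    That combination reads like a standard island-model GA. A toy sketch on a stand-in objective (the poster's real strategy representation isn't public, so a scalar is evolved here instead):

```python
import random

def fitness(x):
    # stand-in objective; the real system scores strategies by tournament results
    return -(x - 3.0) ** 2

def evolve_island(pop, steps=50):
    for _ in range(steps):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: len(pop) // 2]
        pop = elite + [p + random.gauss(0, 0.3) for p in elite]  # mutate survivors
    return pop

def island_model(n_islands=4, pop_size=10, epochs=5):
    islands = [[random.uniform(-10, 10) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve_island(p) for p in islands]  # independent evolution (drift)
        # migration: each island's champion displaces a random member of its neighbour
        for i, pop in enumerate(islands):
            dest = islands[(i + 1) % n_islands]
            dest[random.randrange(len(dest))] = max(pop, key=fitness)
    return max((x for pop in islands for x in pop), key=fitness)
```

    Keeping the islands mostly isolated is what fights premature convergence: each subpopulation can follow its own path before champions mix.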

    There are macro-level evolution techniques to handle the complexity growth of the strategy species, so that the complexity can be altered depending on how "advanced" the system needs to be. In the simple case of a turn-based game, this would equate to the number of plies, or analysis depth, you search. For more complex multi-objective systems, like military tactics involving minimizing casualties and civilian losses, maximizing kills or captures of enemy units, minimizing structural damage to infrastructure, etc., it modifies the strategy complexity accordingly. For example, you could send everyone with guns to kill everyone, or you could pair that with intelligence gathering by drone units to direct fire, with long-range snipers or diversionary tactics, or with factoring in logistical support costs.

    A lot of the core work is maximizing the efficiency of the evolutionary strategies, as they are the biggest factor in learning time. It's really easy to write inefficient logic that takes much longer to arrive at good solutions, or that gets lost in too much noise or oscillation in the system.

    Another method we use is a version of PSO (particle swarm optimization), which optimizes subsections of the strategy (depending on what we are trying to solve) to push them further toward optimal solutions.
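    For reference, a bare-bones global-best PSO on a stand-in objective (again, the poster's actual sub-strategy encoding isn't public):

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with a standard global-best PSO."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position so far
    gbest = min(pbest, key=f)[:]         # the swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # pull toward own best
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # pull toward swarm best
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# sphere function as a toy objective
best = pso(lambda x: sum(v * v for v in x))
```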

    So a lot of bachelor's-level CS is used. Although a lot of customization has been done, the benefit is that it uses a lot of basic concepts and leans on processing power rather than trying to hand-craft algorithmic solutions. Also, it is continuously adaptable, so it adjusts to situational changes. The strategy isn't locked; it can react to changes at the frontier, so to speak. If your opponent changes what they're doing, or does something new, it can adjust itself to that.

  • by clam666 ( 1178429 ) on Saturday May 04, 2013 @04:14AM (#43627759)

    Portions of it were influenced by a couple of prior works.

    Chellapilla and Fogel's 2001 work on Anaconda, a completely evolved checkers program, used similar techniques at the broad level; in their case the checkers-playing strategies were neural networks that governed play. The similarities with our work are in the way the strategies evolved and in the fact that no game-specific knowledge was needed beyond the movement rules, plus our use of strategy fitness aggregated across competitions rather than individual competition scores.

    Another influence is Kewley and Embrechts' 2002 work on military planning, which was interesting in that the evolved strategies amounted to good military strategy (with emergent doctrinal tactics): they beat military experts' strategies in simulation, in addition to beating their own strategies after military experts modified them. This work also used evolutionary concepts to evolve its solutions.

    Unfortunately I can't divulge our own specific information above and beyond what I've discussed, but we certainly have been influenced by previous work on the subject, and made a few new additions to it in our own work.

  • by jtogel ( 840879 ) <julian@togelius.com> on Saturday May 04, 2013 @04:44AM (#43627835) Homepage Journal
    It's not "blind" search like Prolog's, or even depth-first search. It's objective-driven search using artificial evolution. Actually, almost all successful AI uses search in a prominent role.
  • Re:The Past, also: (Score:5, Interesting)

    by Internetuser1248 ( 1787630 ) on Saturday May 04, 2013 @05:48AM (#43627999)
    I don't entirely agree with this. I think the reason major development houses don't put resources into procedural content generation is lack of imagination and fear of taking risks. Several independent [shamusyoung.com] software [blogspot.de] researchers [bay12games.com] have recently solo-developed demonstration projects that hint at what can be achieved and how much work it takes; in terms of programmer hours vs. artist hours it actually looks very promising, as does the actual product quality. I think the big studios just have a winning formula that is making them millions, and they are afraid to step out of their comfort zone and risk trying something new.
  • by Anonymous Coward on Saturday May 04, 2013 @06:34AM (#43628083)

    Interestingly, Monopoly is a lot better when you play with the auction rule that everyone ignores. The official rules also include a couple of altered games with fixed time limits, to prevent the dragging-on that occurs when you omit the auction rule.

  • by loneDreamer ( 1502073 ) on Saturday May 04, 2013 @12:40PM (#43629765)

    It's AI the same way Prolog is AI. ... SearchIsNotAI or something.

    HUH? You definitely lost me there. First, Prolog is a programming language more than any kind of algorithm, just a more declarative one, suited to logic. A lot of AI has definitely been coded in Prolog.

    Second, how is search not AI??? Almost any AI algorithm I can think of is a search problem. Chess (or other game) AI is nothing more than a search for a close-to-optimal set of moves (based on a scoring function). SLAM, and path-finding in general, is also search. Watson performs a search for potential documents matching the query. Classifiers search for an optimal decision boundary to divide the data. Clustering searches for a stable configuration of centroids (for example). Object recognition searches for matches that maximize the likelihood between objects... etcetera, etcetera, etcetera. I mean, almost every algorithm I have been taught in Machine Learning and Robotics has been introduced as a search problem!
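    To make the point concrete, here is path-finding, one of the examples above, written as nothing but a search loop plus a goal test:

```python
from collections import deque

def find_path(grid, start, goal):
    """BFS over grid states: '#' is a wall; returns a shortest path or None."""
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:  # the "intelligence" is a goal test...
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # ...plus successor generation
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

maze = ["..#.",
        "..#.",
        "...."]
path = find_path(maze, (0, 0), (0, 3))
```

    Swap the queue for a priority queue and the goal test for a scoring function and you have A*, minimax-style game search, and so on; the skeleton stays the same.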
