Machine Learning Allows Actors To Create Games That Understand Body Language

ptresset writes "Goldsmiths College is developing technology with natural responses to human interaction. The technology enables video game characters to move in a more natural way, responding to the player's own body language rather than mathematical rules. The hypothesis is that the actors' artistic understanding of human behavior will bring an individuality, subtlety and nuance to the character that would be difficult to create in hand-authored models."
  • Re:Question (Score:4, Informative)

    by jxander ( 2605655 ) on Monday August 13, 2012 @10:23PM (#40980501)

    Two things:

    1) Human learning isn't special. "Machine Learning" is a buzzword that sells clicks, or whatever metric TFA is after...

    2) Humans already know what body language is natural. We might not know exactly how to express or quantify what is or isn't natural, but we sure as hell know it when we see it. Hence: the uncanny valley. If we can program some basic keys and triggers into a computer system - have it learn "yeah... that's too much eye contact, you're creeping me out" - we could not only make more realistic games (e.g. by not hand-programming every bit character in the background, but instead letting them mill about and follow the standard conventions of human interaction) but also take a big step toward real-world applications, like having our future droids not look like C-3PO.

    2a) Seriously though, play a BioWare game. While most of their games are fun and contain varying levels of good-to-great writing... the way characters' voices and faces sync up (or rather, don't) can be more than a bit unsettling.
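The "learn what's too much eye contact" idea above can be sketched in a few lines. This is a toy illustration, not anything from TFA: the durations, labels, and the `learn_cutoff` helper are all invented. Instead of a programmer hand-picking a creepiness threshold, the system picks the cutoff that best separates examples humans have already labeled.

```python
# Hypothetical sketch: learning a "too much eye contact" cutoff from
# labeled examples instead of hand-picking one. All data is invented.

# (eye-contact duration in seconds, was it judged creepy by a human?)
observations = [
    (1.0, False), (2.0, False), (3.0, False),
    (6.0, True), (7.0, True), (9.0, True),
]

def learn_cutoff(data):
    """Pick the cutoff that misclassifies the fewest labeled examples."""
    candidates = sorted(d for d, _ in data)
    return min(
        candidates,
        key=lambda c: sum((d > c) != creepy for d, creepy in data),
    )

cutoff = learn_cutoff(observations)  # anything above this reads as "creepy"
```

Add more labeled examples and the cutoff adjusts itself; no one has to re-tune a magic number in the code.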

  • Re:Question (Score:5, Informative)

    by hazem ( 472289 ) on Monday August 13, 2012 @10:55PM (#40980667) Journal

    A non-machine-learning method would entail specifying a set of reactions for the set of inputs: if A then B, if C then D... hardcoding the relationships between inputs and outputs. Machine learning, on the other hand, has the program go through a pile of data and determine the relationships between inputs and outputs itself.

    This problem is probably well suited to the technique because there are so many possible inputs and so many nuanced outputs. In that situation, it's difficult if not impossible to construct a programmatic flow-chart that performs well.

    There's tons of material on the web now to help teach these methods, including courses in AI, Machine Learning, and even Probabilistic Graphical Models.

  • Answer (Score:2, Informative)

    by Anonymous Coward on Monday August 13, 2012 @10:56PM (#40980675)

    It presumably requires machine learning because the inputs (all possible permutations of bodily motion) are so diverse that it becomes pretty much impossible to hand-engineer a decent response to them. Therefore you feed in some training data (motion capture of actors) of paradigmatic or common inputs and let the algorithm learn how to respond to the myriad other inputs that you haven't provided. The situation is exactly analogous to the handwriting recognition algorithms that the post office uses to sort and route your snail mail. There are a zillion ways to write the letter 'a', which makes a rigorous, engineered solution all but impossible, but you can feed a whole bunch of different examples of written 'a's to a learning algorithm and it will get really damn good at recognizing all the unseen, novel ones.
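The handwriting analogy above is the classic nearest-neighbour setup, and it fits in a few lines. This is a toy sketch with made-up numbers: the 2-D "features" (loop roundness, stem height) and the letter examples are invented, and real recognizers use far richer features and models. The point is only that an unseen 'a' gets labeled by similarity to known examples, not by hand-written rules.

```python
# Toy 1-nearest-neighbour sketch of the handwriting analogy.
# Features and training data are invented for illustration.

def distance(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def classify(sample, training):
    """Return the label of the most similar training example."""
    return min(training, key=lambda ex: distance(sample, ex[0]))[1]

# Invented 2-D features (say, loop roundness and stem height) per letter.
training = [
    ((0.9, 0.2), "a"), ((0.8, 0.3), "a"),  # variants of 'a'
    ((0.1, 0.9), "l"), ((0.2, 0.8), "l"),  # variants of 'l'
]

# A novel 'a' the algorithm has never seen still lands nearest its kin.
label = classify((0.85, 0.25), training)
```

Swap the letter features for body-motion features and the same generalization story is, presumably, what TFA's system is banking on.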
