AI Programming Games

Ubisoft is Using AI To Catch Bugs in Games Before Devs Make Them (wired.co.uk)

AI has a new task: helping to keep the bugs out of video games. From a report: At the recent Ubisoft Developer Conference in Montreal, the French gaming company unveiled a new AI assistant for its developers. Dubbed Commit Assistant, the goal of the AI system is to catch bugs before they're ever committed into code, saving developers time and reducing the number of flaws that make it into a game before release. "I think like many good ideas, it's like 'how come we didn't think about that before?'," says Yves Jacquier, who heads up La Forge, Ubisoft's R&D division in Montreal. His department partners with local universities including McGill and Concordia to collaborate on research intended to advance the field of artificial intelligence as a whole, not just within the industry.

La Forge fed Commit Assistant with roughly ten years' worth of code from across Ubisoft's software library, allowing it to learn where mistakes have historically been made, reference any corrections that were applied, and predict when a coder may be about to write a similar bug. "It's all about comparing the lines of code we've created in the past, the bugs that were created in them, and the bugs that were corrected, and finding a way to make links [between them] to provide us with a super-AI for programmers," explains Jacquier.
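
Ubisoft has not published how Commit Assistant works internally, so the following is only a rough sketch of the general idea described above: score a new line of code by how its tokens compare with lines that historically needed bug fixes. All names and training pairs below are invented for illustration.

```python
# Rough sketch of a learned bug predictor: score a new line by how often its
# tokens showed up in historically buggy vs. corrected lines.
# The history below is an invented stand-in for a real commit database.
import re
from collections import Counter

def tokens(line):
    return re.findall(r"==|<=|>=|!=|\+\+|[A-Za-z_]\w*|[^\sA-Za-z_]", line)

# (line as originally committed, whether it later turned out to contain a bug)
HISTORY = [
    ("if (count = items.size())", True),    # assignment instead of comparison
    ("if (count == items.size())", False),
    ("for (i = 0; i <= n; i++)", True),     # off-by-one over an n-sized array
    ("for (i = 0; i < n; i++)", False),
]

buggy, clean = Counter(), Counter()
for line, was_bug in HISTORY:
    (buggy if was_bug else clean).update(tokens(line))

def bug_score(line):
    """Crude likelihood ratio: above 1.0 means the tokens look more 'buggy'."""
    score = 1.0
    for tok in tokens(line):
        score *= (buggy[tok] + 1) / (clean[tok] + 1)  # add-one smoothing
    return score

for candidate in ["if (total = prices.size())", "if (total == prices.size())"]:
    print(f"{bug_score(candidate):5.2f}  {candidate}")
```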

  • I remember it being discussed in my software engineering class that trying to automate bug removal or detection could be shown to be isomorphic to solving Turing's halting problem.
    • by Megol ( 3135005 )

      So you say humans can solve the halting problem?

      The description doesn't make any claims that would make it impossible. It just detects patterns.

      And what do you mean by P=NP? Do you think that has something to do with the halting problem?

      • by rsilvergun ( 571051 ) on Monday March 05, 2018 @05:43PM (#56213031)
        For any given program f running on Microsoft Windows it will halt if you let it run long enough.
        • Re: (Score:2, Interesting)

          by mark-t ( 151149 )
          Technically speaking, all programs halt in actuality. I presented that point to my software engineering prof, who I seem to recall was less than appreciative of the insight.
          • by gweihir ( 88907 )

            Actually, your statement is incorrect. What is correct is that all program execution in physical reality halts eventually. That is something else.

            The difference is pretty much the one between theory and practice. You know, in theory, theory and practice are the same. An engineer understands that in practice they are not: practical systems are complex enough, and theories are incomplete, inaccurate or otherwise faulty enough, that what the theory calls unexpected does actually happen in practice.

            • by mark-t ( 151149 )

              What is correct is that all program execution in physical reality halts eventually

              That is *EXACTLY* what I had said.

              • by gweihir ( 88907 )

                No, you did not. "Actuality" can well be mathematical actuality.

    • I remember it being discussed in my software engineering class that trying to automate bug removal or detection could be shown to be isomorphic to solving Turing's halting problem.

      Many people misunderstand the halting problem. It doesn't mean that you can't tell if a program halts. It just means you can't do so for ALL programs.

      Likewise, automated bug detection is possible. It just won't find ALL bugs.

      If it finds even 10% of the bugs, it is still a huge win. False positives are unlikely to be a show stopper: as with existing static analysis, even if the false positives are not bugs, they often point to sloppy code that makes programs less readable.
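
To make the parent's point concrete, here is a toy checker (invented names, Python) that flags one narrow class of obviously non-terminating loops. It says nothing about halting in general, and even the loops it flags could still end via an exception from a called function; it is a heuristic, not a decision procedure.

```python
# Toy checker: flag `while True:` loops with no break/return/raise anywhere in
# their body. It decides nothing about halting in general; it catches one
# narrow class of bugs, which is exactly the parent's point.
import ast

def obvious_infinite_loops(source: str):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.While)
                and isinstance(node.test, ast.Constant)
                and node.test.value is True):
            escapes = [n for child in node.body for n in ast.walk(child)
                       if isinstance(n, (ast.Break, ast.Return, ast.Raise))]
            if not escapes:
                yield node.lineno

SAMPLE = """
while True:
    poll_events()

while True:
    if done():
        break
"""

print(list(obvious_infinite_loops(SAMPLE)))  # -> [2]
```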

      • by gweihir ( 88907 )

        If it finds even 10% of the bugs, it is still a huge win.

        I disagree. This is 10% less opportunity for the coders to increase their skills finding bugs on more benign mistakes (software cannot find the hard-to-find ones). Since humans are able to generalize, this also reduces their ability to find the harder mistakes. In the end, whenever something like this is done, you end up with more damage per remaining mistake, which may well make matters worse overall. As a second negative effect, this will lead PHBs to decide that they can now hire even cheaper coders.

        • I disagree. This is 10% less opportunity for the coders to increase their skills finding bugs ...

          If you truly believe that more bugs are better, then you could just train the AI to insert extra bugs instead of detecting existing bugs.

          As a second negative effect, this will lead PHBs to decide that they can now hire even cheaper coders.

          If you truly believe that more bugs are better, then you should see this as a good thing. Bad programmers who write crappy code will give others plenty of bugs to practice on. Right?

          Good luck.

          • by gweihir ( 88907 )

            You have failed to even begin to understand what I said.

          • I believe the parent is talking about spoiling programmers by making the process more convenient: they will then rely on the bug catcher program instead of being more careful when they code. Though I believe this is the trend that has been going on in human society. People rely on technology (for convenience) more and more which in turn make people dumber and dumber...

            • People rely on technology (for convenience) more and more which in turn make people dumber and dumber...

              Socrates believed that learning to read and write makes people dumber because they no longer need to remember things.

              Are illiterate people smarter?

              Are programmers that turn off compiler warnings smarter?

              • It all depends on how you view a situation. From your quote, Socrates was wrong because reading and writing are actually a kind of aid to remembering. At the time he didn't know that. Even though your analogy sounds plausible, I don't believe that it can be applied in this situation.

                I'm sorry, I used the wrong word. Instead of 'dumber' I should have said 'less careful'. When a tool is introduced to ease a process and people rely on the tool, they become more and more careless. Think about those programmers who used to program with punch cards and programmers nowadays, for example.

                • Think about those programmers who used to program with punch cards and programmers nowadays, for example.

                  I am not sure what your point is. I am old enough to remember programming on punch cards. If you actually believe that programmers back then were better or more productive, then you are delusional.

                  Sure, we were "more careful", in the sense that we spent hours scrutinizing code for syntax errors that a modern compiler could find in a millisecond. But it would be a pointless waste of time to do that today.

    • by GuB-42 ( 2483988 )

      There is a difference between solving the general case and a good-enough solution.
      I can write you an efficient travelling salesman problem solver without solving P=NP, and I can write a program that can tell if your code will enter an infinite loop without solving the halting problem. It just won't work every time. In some cases, that is all that's needed.

      In that case, the point is obviously not to solve the general case. There isn't even a formal definition of what a bug is.
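
A minimal sketch of the good-enough-solution point above, using a plain nearest-neighbour heuristic on invented coordinates: the tour is usually not optimal, but it is computed in polynomial time and is often all that's needed.

```python
# Nearest-neighbour sketch of the "good enough" TSP point: greedy, O(n^2),
# usually not optimal, no bearing on P vs NP. Coordinates are arbitrary.
from math import dist

CITIES = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (2, 1)}

def nearest_neighbour_tour(cities, start="A"):
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: dist(here, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour_tour(CITIES)
length = sum(dist(CITIES[a], CITIES[b]) for a, b in zip(tour, tour[1:] + tour[:1]))
print(tour, round(length, 2))
```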

    • But since they don't require it to detect all bugs for it to be useful, the fact that it is impossible to do so for all bugs is irrelevant.

    • by gweihir ( 88907 )

      It is. But you can make dumb, pattern-based systems that hide some of the obvious screw-ups of the incompetent (and thereby make these people more dangerous) and at the same time make the competent less productive, because they now have to conform to brain-dead rules. Sounds very much like this is what this system is doing.

    • That's a weird thing to discuss since that comparison makes no sense at all. You don't need to solve the halting problem to detect large classes of bugs in a useful way. Not to mention that in a software ENGINEERING class you ought to be concerned with completely different things than ivory tower speculations. For example, as Sussman writes, rejecting probabilistic prime tests on the basis of them being sometimes wrong but NOT rejecting "accurate" prime tests on the basis of the run time required being so l
  • by bobbied ( 2522392 ) on Monday March 05, 2018 @03:41PM (#56212325)

    The more things change, the more they stay the same....

    Anybody else remember LINT? I used to work on a project that required that all compiler warnings be dealt with and anything reported by LINT was documented and explained IN THE CODE. It certainly didn't catch everything, but it sure kept the code consistent and kept common logical issues from appearing too often.

    Now off my lawn....(snicker)
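
For a modern analogue of that discipline (using pylint as a stand-in for lint; this is the editor's example, not anything from the story), every finding is either fixed or suppressed with the justification written in the code:

```python
# Pylint standing in for lint: every finding is either fixed, or suppressed
# with the justification documented in the code itself.

def add_tag(tag, tags=None):
    """Fixed form of the classic mutable-default-argument finding (W0102)."""
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

def cached_upper(key, _cache={}):  # pylint: disable=dangerous-default-value
    # The shared default dict is intentional: it acts as a per-process memo
    # cache, so the warning is suppressed and the reason is recorded here.
    if key not in _cache:
        _cache[key] = key.upper()
    return _cache[key]
```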

    • by SirSlud ( 67381 )

      this is ..... not like LINT. Remember LINT? LINT is still in widespread use. This would be a complementary tool to catch higher-level bugs, based on code heuristics, in valid, correct code, beyond the safety checks that LINT looks for. This is not for doing what LINT does, and what LINT does is still very useful.

      • But it is the same concept as LINT.... It may be different from LINT but this really isn't a new idea, and it IS LIKE LINT...

        There have been static code analysis tools in use for decades now; LINT was among the first of these tools to find widespread use, and many have followed in its footsteps. This is NOT a new idea, even if the implementation method varies from the C program that the original LINT was (and is).

        Read the Mythical Man Month.... There is nothing new... Each generation thinks their stuff is better,

        • by ljw1004 ( 764174 )

          LINT is something that looks at your code for problems and reports them.

          A compiler is something that looks at your code for problems and reports them (and also generates output).

          This ubisoft tool is something that looks at your code for problems and reports them.

          Your colleague standing over your shoulder is something that looks at your code for problems and reports them.

          Your colleague who does code-review is something that looks at your code for problems and reports them.

          Your copy-editor is someone who look

          • Yea, I used to say that kind of thing when I was young too. Hubris lives on in the young. We had all the good ideas then too, we were better educated, fresh out of school and full of promise. But we were as stupid as those who came before us. Wisdom is hard won through experience, and I've personally learned that the grey beards of my day were right: there is really nothing new. Programming remains the same problem, though the names and faces have changed.

            Face it.. At the very best, this is just an extensio

            • No, it is not a static code analyzer.
              You seem too old to grasp new concepts.
              Why don't you read the article instead of continuing to make an idiot out of yourself?

              • No, it is not a static code analyzer. You seem too old to grasp new concepts. Why don't you read the article instead of continuing to make an idiot out of yourself?

                From the Article above:

                "It's all about comparing the lines of code we've created in the past"

                That sure sounds like "static code analysis" to me. Perhaps you don't understand what that term means?

                Static Analysis of source code is looking at the source code for interesting patterns, in most cases looking for common programming errors. LINT did this; this "new" program does the same thing. It looks at the source code for patterns, right? Then it's doing static analysis.

                • From the Article above:

                  "It's all about comparing the lines of code we've created in the past"

                  That sure sounds like "static code analysis" to me.

                  It IS static code analysis. But that doesn't mean it is "like Lint".

                  Lint uses a table of HUMAN GENERATED patterns. These patterns are labor intensive to produce, and only find bugs that humans thought to check for.

                  This new checker looks at a steadily expanding database of bugs, and the fixes for those bugs, and LEARNS THE PATTERNS ON ITS OWN. This means it can have a much bigger set of patterns, including many that a human might have never thought to include. It also means that the system can steadily i
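
A small sketch of the distinction drawn above: instead of consulting a hand-written rule table, mine recurring "before/after" shapes from past fixes and warn when new code matches a frequently removed shape. The normalisation scheme and example fixes are invented.

```python
# Mine recurring "removed -> added" shapes from past fixes instead of writing
# rules by hand. Normalisation and example fixes are invented for the sketch.
import re
from collections import Counter

def shape(line):
    """Map identifiers (keywords too, crudely) to ID and integers to N so that
    differently named variables produce the same pattern."""
    line = re.sub(r"\b[A-Za-z_]\w*\b", "ID", line)
    return re.sub(r"\b\d+\b", "N", line).strip()

# (line removed by a bug fix, line added in its place)
PAST_FIXES = [
    ("for (i = 0; i <= len; i++)", "for (i = 0; i < len; i++)"),
    ("for (j = 0; j <= count; j++)", "for (j = 0; j < count; j++)"),
    ("if (ptr = lookup(key))", "if ((ptr = lookup(key)) != NULL)"),
]

learned = Counter(shape(before) for before, _after in PAST_FIXES)

def warn(new_line, threshold=2):
    seen = learned[shape(new_line)]
    return f"pattern removed in {seen} past fixes" if seen >= threshold else None

print(warn("for (k = 0; k <= size; k++)"))  # matches the learned off-by-one shape
```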

            • by ljw1004 ( 764174 )

              Let me try again with a car analogy. Someone designs and builds a Tesla. You come along and say "Face it. At the very best, this is just an extension of the '87 Ford F150. There's nothing new. They've just re-invented the '87 Ford F150. Oh the hubris of the young, to think that they could do anything different."

              What's weird is that your "prior art" is (1) oddly specific almost in ignorance of the rest of the field that came before and after, (2) misses the point that Tesla has taken one existing technology

              • I like your analogy, but consider this small addition...

                Let's say I tell you that we discovered a whole pile of rules for designing cars. Don't try to stamp metal into this kind of shape, mount your glass in a way that the flexing of the body doesn't break it, don't use plastic for this kind of part or that, don't use 6V lamps or non-rechargeable batteries, don't put the gas tank too close to the bumper or put electrical wires and exhaust manifolds near it either.... We use these "rules of the road" to validate

        • You should read the article.
          It has absolutely nothing to do with LINT and is not the same concept.

          • Oh, grasshopper... This is not new. It's doing static code analysis and flags parts that may have issues based on where trouble has been seen in the past. This is EXACTLY what LINT is, advice from experience. It's basically saying "Um, you *might* not want to do this kind of thing because it's often a mistake. Are you sure?"

            Of course, you somehow think that because some fuzzy AI technique is used, it's somehow different? Cute.... If anything, it's less effective being AI, but that's another debate you wo

            • Ok, idiot.
              It does not do static code analysis.

              You still have not read the story or the linked article.

              • Ok, idiot. It does not do static code analysis.

                From the above article I shall quote:

                "It's all about comparing the lines of code we've created in the past"

                Um... "Static code analysis" is looking at the source code for probable errors. This program does that.

                You may have read the articles in question, but you obviously don't understand what you saw there.

                • It is not _code analysis_, hence it cannot be _static code analysis_.

                  But perhaps you only want to nitpick with words.

                  The tool knows nothing about the code or the programming language.

                  It only knows I fixed a bug in line 100 of a text file.

                  And bottom line, you could use the same technique for editing books. "The editor says 'this phrase is bad', the author changes it to 'this sounds better'". The AI checks for similarities in the rest of the book and flags them.

                  In other words: it knows nothing about the topic it

                  • If this tool doesn't understand the programming language of the code, then... it's pretty much worthless compared to LINT. Shall we send all our code to a group of English grammar experts for their comments before we allow it to be committed, too? What's the point of that?

                    In any programming language there are commonly used sequences which are prone to errors, which are syntactically correct, but likely wrong. Your AI based search would produce lots of useless garbage if it cannot tell the difference betw

      • by AmiMoJo ( 196126 )

        This sounds more like Clippy.

        "I see you are trying to write a state machine..."

        • by tlhIngan ( 30335 )

          This sounds more like Clippy.

          "I see you are trying to write a state machine..."

          Not a bad thing. That's actually quite useful. State machines and dates and other things seem to be items that programmers always re-invent, despite there being a half dozen of them in the libraries they already use.

          Imagine how useful it would be if the IDE simply said "it appears you're implementing a date function. Have you considered the date and time APIs already available to you? Here are some APIs and documentation."
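
A toy version of that hint, with made-up "date smell" heuristics; no claim that any real IDE or the Ubisoft tool works this way:

```python
# Made-up "date smell" heuristics behind such a hint; purely illustrative.
import re

DATE_SMELLS = [
    (r"\d{4}[-/]\d{2}[-/]\d{2}", "hard-coded date format"),
    (r"days_in_month|is_leap_year|leap\s*year", "re-implemented calendar logic"),
    (r"\b(86400|3600)\b", "magic seconds-per-day/hour constant"),
]

def date_hints(source: str):
    """Yield (line number, reason, suggestion) for lines that smell like
    hand-rolled date handling."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in DATE_SMELLS:
            if re.search(pattern, line):
                yield lineno, reason, ("It appears you're implementing a date "
                                       "function. Have you considered the "
                                       "datetime / zoneinfo APIs?")

print(list(date_hints("age_days = (now - birth) // 86400")))
```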

    • by enjar ( 249223 )

      Yes, indeed. We realized a lot of those compiler warnings were actually trying to tell us something (WOW!) and cleaned up our code base over the course of a number of years. We are now pretty much at the point where compiler warnings are generally viewed as errors, so the bar is very high and requires additional code review if you legitimately need to submit something that triggers a warning ... "the compiler is lying, I'm right" isn't good enough. We have a number of people on staff who are really good at

      • Do people really not use -Wall -Werror (or equivalent) by default? Aside from stuff being put on github which apparently has a requirement to spew megabytes of warnings when compiling...

        • by enjar ( 249223 )
          Compiler warnings are often viewed as "noise" and disregarded entirely, or "logged and fixed in the next release". We learned our lesson the hard way years ago and have moved to warning-free code as a checkin requirement, but it would not surprise me to find a lot of organizations with date-driven releases who let them slip, especially as the ship date gets ominously close. We pretty much use -Wall -Werror, but occasionally I need to deal with stuff from github and it's warning city. It's kind of like learni
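
For anyone outside the C world, the closest Python analogue of that -Wall -Werror policy is promoting warnings to hard errors; a two-line sketch:

```python
# Promote every warning to a hard error, the Python analogue of -Werror.
import warnings

warnings.simplefilter("error")

try:
    warnings.warn("deprecated API", DeprecationWarning)
except DeprecationWarning as exc:
    print("build would fail here:", exc)
```
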
  • by raftpeople ( 844215 ) on Monday March 05, 2018 @03:50PM (#56212385)
    AI: "Based on years of historical pattern matching, your commit has been flagged as 'needs review' for the following reasons:
    1 - First name of developer is 'Fred'"
    • by AmiMoJo ( 196126 )

      No need for AI, just search Stack Exchange to see if the code was copy/pasted from there. Throw in some heuristics to account for refactoring...

  • If the dev AI/Automation is getting that good, why is it not just doing the majority of the game development?

    I always hear the horror stories about how bad AI and Automation will affect burger flippers, etc.

    Seems to me the real jobs that will be affected are the white-collar elites' jobs: Bankers, Brokers, Developers, Judges, Lawyers, Programmers, etc. Imagine white collar professions minus the bias/corruption/prejudice because it is all just honest AI & Automation ;)

    Oh yea, honest AI/Automaton politicians, Sweet ;)
      • Yes, I read about the burger flippers, but you will still need a few people to clean and do maintenance in each kitchen, at least for a while. With the white collar SME (Subject Matter Expert) jobs, once the AI/Automation is ready, it will just sit someplace in the cloud and many of those elite jobs/employees will be gone, with a lot fewer real people needed to handle the special cases and do maintenance, etc.

        Just my 2 cents ;)
    • Well, high frequency traders are replacing stock brokers already. Nobody misses the people selling and buying. The machines and the algorithms are making life miserable for the rest of us.
    • The AI is not writing code.
      It is reviewing code.

      Small difference. AI will never write code, unless it is a near human level intelligent being.

    • Oh yea, honest AI/Automaton politicians, Sweet ;)

      No, they will just lie more efficiently. They are still politicians.

  • Brown ball means bug we can fix after the release, red ball means we can't ship until the precog AI is made happy.
  • Maybe this AI can finally convince them to give an option to NOT have the elevation angle of driving cameras try to reset to default after a couple of seconds? Drives me nuts in all their games, e.g. Watchdogs and Ghost Recon: Wildlands. Especially important if driving a taller vehicle.
    • by CODiNE ( 27417 )

      You'd be better off modding the game and putting a NOP on that function call.

      • by ELCouz ( 1338259 )
        Yeah... good luck with Denuvo DRM on Watchdogs. Damn thing runs inside a VM which decrypts part of the game on the fly.
  • Says it can detect 6 out of 10 bugs? Based on what? How would it even know what a "bug" was in the context of the game? The video also says it creates code signatures for bugs, but doesn't explain how. They explain the concept, but does it actually work? What specifically does it do? Without seeing examples, it's hard to imagine this tool does what they seem to be saying it does.
    • Oh man, you did not read the summary or the article? Only watched the video while being distracted by another web site?

      It is so super simple that I wonder why no one else had that idea before.

      A developer commits a piece of code to the source repository. Let's call that 'A'.
      Later there is a bug found, which is in the issue tracker.
      Some guy fixes it.

      Now we have a ticket in the issue tracker connected to the source code repository: to the fix, and thus also to the bug.

      Call that a rule or a hint. Or: a p
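
A minimal sketch of that linkage, assuming a made-up "fixes #N" commit-message convention and hard-coded commits standing in for a real repository and issue tracker:

```python
# Sketch of the linkage: commits whose message closes a ticket are fixes, and
# the lines they removed become "buggy" training examples.
import re

COMMITS = [
    {"msg": "add bill totalling",
     "removed": [], "added": ["total = sum(items[:-1])"]},
    {"msg": "fixes #42: last item missing from bill",
     "removed": ["total = sum(items[:-1])"], "added": ["total = sum(items)"]},
]

TICKET_RE = re.compile(r"\b(?:fixes|closes)\s+#(\d+)", re.IGNORECASE)

def buggy_examples(commits):
    """Yield (ticket, buggy line, corrected line) triples mined from fix commits."""
    for commit in commits:
        match = TICKET_RE.search(commit["msg"])
        if match:
            for before, after in zip(commit["removed"], commit["added"]):
                yield match.group(1), before, after

print(list(buggy_examples(COMMITS)))
# -> [('42', 'total = sum(items[:-1])', 'total = sum(items)')]
```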

      • by jetkust ( 596906 )
        You just basically repeated what was said in the article but don't seem to get the gaping plot holes.

        You seem to think the idea of identifying a pattern in source code that is directly responsible for creating a bug is a "simple" task. They are talking about creating code "signatures". How? If anything, the article should be on that alone. But it's completely ignored. Also, there is this implied assumption that all bugs can be identified eventually by connecting them to previous bugs. Really? So if
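
The article indeed never explains the "signatures". One plausible, entirely speculative reading is hashing normalised token n-grams, so that renaming a variable still yields the same signature:

```python
# Speculative reading of a code "signature": hashed, normalised token n-grams,
# so a renamed variable still produces the same signature.
import hashlib
import re

def normalise(code: str):
    """Integers become N, lower-case identifiers become id."""
    code = re.sub(r"\b\d+\b", "N", code)
    code = re.sub(r"\b[a-z_]\w*\b", "id", code)
    return re.findall(r"==|<=|>=|!=|\w+|\S", code)

def signature(code: str, n=3):
    toks = normalise(code)
    grams = zip(*(toks[i:] for i in range(n)))
    return {hashlib.sha1(" ".join(g).encode()).hexdigest()[:8] for g in grams}

bug     = "for (i = 0; i <= len; i++)"
renamed = "for (k = 0; k <= size; k++)"
print(signature(bug) == signature(renamed))  # True: same shape, same signature
```
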
        • 6 out of every 10 bugs, but more learning is gonna make it 100 percent!!!!
          It never will be 100% as it needs bugs to learn from first, bugs that escape it right now but will be analyzed in later development phases when they get fixed.

          You seem to think the idea of identifying a pattern in source code that is directly responsible for creating a bug is a "simple" task.
          Yes it is. Super simple.

          I get a ticket telling me to figure out why there is always one item missing from the bill.

          I check out the source code from t

          • by swilver ( 617741 )

            Unless of course it needed to be -2 because I'm doing something with pairs of elements...

            Anyway, good luck with this, let me know how it turns out.

            • I'm not working for Ubisoft.
              But I am actually strongly considering implementing something like this.
              However, at most companies I worked for, a bug fix was a partial rewrite.
              That would be pointless for such a system, I think.

  • Auto QA testing can just fail silently, or pass things that any real person would see as an error.

    Also, poor UIs / control setups are bugs that an automated system cannot find, but QA testers will.

    • Good to have Auto QA though, to free humans for what they're good at. Also good to have auto-play-testing with AI agents just running against walls etc. If any agents get stuck, or their position.z goes below -100 and they have fallen out of the game level, then you can replay their game route.

      Save your play testers for quality not random breaking of the game.
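
A toy sketch of that kind of auto-play-testing: randomly wandering agents log every position, and an agent whose z drops below -100 has fallen out of the level and gets its route returned for replay. The "level" is a stub invented for the example.

```python
# Toy auto-play-test: a biased random walk across a stub level, logging every
# position; falling below z = -100 means the agent left the level, and its
# recorded route is returned so the run can be replayed.
import random

def wander(level_has_floor, steps=300, seed=0):
    rng = random.Random(seed)
    x = y = z = 0.0
    route = [(x, y, z)]
    for _ in range(steps):
        x += rng.uniform(-0.5, 1.0)   # biased forward so the walk explores
        y += rng.uniform(-1.0, 1.0)
        if not level_has_floor(x, y):
            z -= 10.0                 # nothing underneath: keep falling
        route.append((round(x, 2), round(y, 2), round(z, 2)))
        if z < -100:
            return {"verdict": "fell out of level", "route": route}
    return {"verdict": "ok", "route": route}

# Stub level whose floor ends at x > 3 (the kind of hole a tester might miss).
result = wander(lambda x, y: x <= 3)
print(result["verdict"], "- route length:", len(result["route"]))
```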

  • by Gravis Zero ( 934156 ) on Monday March 05, 2018 @04:19PM (#56212551)

    How am I supposed to write myself a new minivan [dilbert.com] now?!

    • by q4Fry ( 1322209 )

      Get the AI to "correct" other people into writing it for you.

    • Once upon a time, our company decided to reward "innovation and OOB thinking" with a percentage of whatever savings we could produce. Within 3 weeks we had a proposal to drop our IBM mainframe and go with a competitor for a cool $600k savings in hardware, and more in licensing. Nothing ever came of it, of course. Undoubtedly they used our research in contract negotiations w/ IBM.

  • ... just rappelled into our cubicle. Should I worry?

  • It is so "eightyish".
  • Still no cure for Ubi's stupid design choices.
  • See, AI is replacing what we (consumers) used to do, especially for companies like Ubisoft. What do they expect me to do now, play the game without issue and enjoy myself? What am I going to complain about now?

  • by jeff4747 ( 256583 ) on Monday March 05, 2018 @04:50PM (#56212729)

    delete this

    "I'm sorry. I can't do that, Dave."

  • I believe that in the movie, they started using AI to lint their code, then it became self aware and just started writing the code and finally gave rise to the terminators which put all humans into a simulation to harvest their bioenergy. Or something like that...
  • ... and how does it compare to existing static code analyzers? It's not that static code analysis is a completely new technique.
  • There has been plenty of long-term research on this topic under the name "defect prediction". It is actually one of the best-suited (and few suitable) application areas for machine learning in the software engineering process. You get an obvious set of training data. Record all the bugs found and issues reported, match them to fixing patches (or whatever you call them; fix locations in code, in practice). Label those places/parts of code as "error-prone" before the fix. Maybe take the "after" part as an example of non-
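
For what it's worth, a "catches 6 out of 10 bugs" style figure in defect-prediction work is usually just recall measured against later fixes, with precision telling you how much reviewer time false alarms cost; a tiny sketch with invented file sets:

```python
# "Catches 6 out of 10 bugs" is recall; precision is the cost side. Both sets
# of file names below are invented.
flagged     = {"ai/nav.cpp", "ui/menu.cpp", "net/sync.cpp", "render/lod.cpp"}
later_buggy = {"ai/nav.cpp", "net/sync.cpp", "audio/mix.cpp", "ui/menu.cpp", "game/save.cpp"}

hits = flagged & later_buggy
recall = len(hits) / len(later_buggy)     # share of real bugs that were flagged
precision = len(hits) / len(flagged)      # share of flags that were real bugs

print(f"recall={recall:.0%} precision={precision:.0%}")  # recall=60% precision=75%
```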

  • This is called static analysis and has been around for YEARS. Tons of companies are doing this.
  • Okay, so I add this diagnostic Ubisoft Commit Assist [wired.co.uk] to a Neural Net (NN) that is being trained, using a Genetic Algorithm (GA) to change the topology of the NN. Part of the Neural Net is dedicated to the Commit Assist subnet. This combination means that the entire system gets better exponentially because the diagnostic gets better as the system uses it to find errors in both the base problem and the subnet problem, while the GA improves the ability of the NN to do both the base problem and the subnet probl
