Watch AI Grow a Walking Caterpillar In Minecraft (sciencemag.org) 22

sciencehabit shares a report from Science Magazine: The video in this story will be familiar to anyone who's played the 3D world-building game Minecraft. But it's not a human constructing these castles, trees, and caterpillars -- it's artificial intelligence. The algorithm takes its cue from the "Game of Life," a so-called cellular automaton. There, squares in a grid turn black or white over a series of timesteps based on how many of their neighbors are black or white. The program mimics biological development, in which cells in an embryo behave according to cues in their local environment.
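The Game of Life rule described above is purely local: each square's next state depends only on its eight neighbors. A minimal sketch in Python (the grid size and the "blinker" test pattern are just illustrative):

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life timestep: each cell counts its eight neighbors
    (with toroidal wrap-around) and turns on or off by Conway's rules."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbors; a dead cell with
    # exactly 3 live neighbors becomes live.
    born_or_survives = (neighbors == 3) | ((grid == 1) & (neighbors == 2))
    return born_or_survives.astype(np.uint8)

# The "blinker" oscillates with period 2: a horizontal bar of three
# live cells becomes vertical, then horizontal again.
blinker = np.zeros((5, 5), dtype=np.uint8)
blinker[2, 1:4] = 1
assert (life_step(life_step(blinker)) == blinker).all()
```

Despite the simplicity of the per-cell rule, global patterns like gliders and oscillators emerge, which is exactly the local-rules-to-global-structure idea the researchers exploit.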

The scientists taught neural networks to grow single cubes into complex designs containing thousands of bricks, like the castle or tree or furnished apartment building above, and even into functional machines, like the caterpillar. And when they sliced a creation in half, it regenerated. (Normally in Minecraft, a user would have to reconstruct the object by hand.) Going forward, the researchers hope to train systems to grow not only predefined forms, but to invent designs that perform certain functions. This could include flying, allowing engineers to find solutions human designers would not have otherwise foreseen. Or tiny robots might use local interactions to assemble rescue robots or self-healing buildings.
The researchers presented their system in a paper posted on arXiv.
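The growth mechanism can be sketched as a "neural cellular automaton": every cell carries a state vector, perceives its neighbors' states, and updates itself through a small neural network shared by all cells. The sketch below is simplified from the general idea (the channel count, the single random linear layer, and the 50% stochastic-update rate are illustrative assumptions, not the paper's actual architecture; in the real system the weights are trained so the pattern grows into a target shape):

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8          # state channels per cell
H = W = 16     # grid size

def perceive(state: np.ndarray) -> np.ndarray:
    """Each cell sees its own state plus x/y neighbor differences,
    mimicking local chemical sensing in an embryo. Output: (H, W, 3C)."""
    def shift(a, dy, dx):
        return np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    gx = shift(state, 0, -1) - shift(state, 0, 1)
    gy = shift(state, -1, 0) - shift(state, 1, 0)
    return np.concatenate([state, gx, gy], axis=-1)

# Tiny per-cell "neural network": one linear layer with random weights
# (untrained, just to show the update rule's shape).
Wt = rng.normal(0.0, 0.1, size=(3 * C, C))

def ca_step(state: np.ndarray) -> np.ndarray:
    update = np.tanh(perceive(state) @ Wt)
    # Stochastic update: each cell fires only about half the time,
    # so growth is asynchronous like real cell division.
    mask = rng.random((H, W, 1)) < 0.5
    return state + update * mask

state = np.zeros((H, W, C))
state[H // 2, W // 2, :] = 1.0    # a single "seed" cell
for _ in range(10):
    state = ca_step(state)
# Activity has now spread outward from the seed via local rules alone.
```

Because every cell runs the same local rule, damaging part of a grown pattern leaves cells whose neighborhoods look "unfinished," and the same rule that grew the structure regrows it, which is the regeneration behavior described above.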
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I remember when the paper on this technique came out a few years ago. It's pretty cool: learning a compact encoding that will robustly generate a complicated pattern. Cool that these guys have applied it in Minecraft.

    • Now, ML progresses quickly, but not THAT quickly. Don't make me feel older than I already am, please :) It was this article [distill.pub] which introduced the concept in 2020, and it was actually on distill.pub, not arXiv. Only about 13 months ago.

      • I think the algorithms were already known in the 70s.

        • by ceoyoyo ( 59147 )

          Fairly simple cellular automata algorithms that are either programmed or very simply trained have been. Just like reinforcement learning, if you stick in a capable deep learning network it makes for a bit of a leap in capability. If you haven't yet, have a look at the link Vintermann provided.

          One of the nice things about this paper is that it's motivated by explaining how multicellular organisms manage to grow complex bodies with only simple intracellular communication.

      • by ceoyoyo ( 59147 )

        Apparently 2020 was a long year for some reason. I thought you were crazy, but that is indeed it.

  • This isn't AI; for starters, significant input was made by the humans writing those programs. It's just like those AI-powered Go players: the AI may find patterns, but the code didn't learn to play Go from nothing. It had significant rules and understanding programmed in.
    • > The code didn't learn to play Go from nothing

      AlphaGo Zero did.
      • AlphaGo Zero did not learn the rules and objectives of the game from scratch; a mechanism to rate and score moves was programmed into it.
        • Neither did I nor most of the current Go players.
          • Except AlphaGo *didn't* learn it from scratch.

            They carefully chose what input to put into it. Not just random life experiences.
            Namely, they input only those where it succeeded. And didn't just do random shit.
            You know what that's called? Programming.

            There is no separate individual "it". That is the point.
            "It" is a tool. A hammer doesn't "learn how to hammer in a nail" either, just because you "trained it to move on top of the nail".
            It's not a person!

            Then again, looking at many humans today, no offense, that

          • MuZero, a successor of AlphaZero (itself a successor of AlphaGo Zero), became a master of Go without knowing its rules:

            "MuZero is a computer program developed by artificial intelligence research company DeepMind to master games without knowing their rules.[1][2][3] Its release in 2019 included benchmarks of its performance in go, chess, shogi, and a standard suite of Atari games. The algorithm uses an approach similar to AlphaZero. It matched AlphaZero's performance in chess and shogi, improved on its perfor

        • AlphaGo Zero did not learn the rules and objectives of the game from scratch; a mechanism to rate and score moves was programmed into it.

          I think "from scratch" is the obfuscating term here; no learning happens in a void, be it human, machine, or otherwise. There has to be *something* to scaffold off of. Architectures all the way down.

          • "I think "from scratch" is the obfuscating term here; no learning happens in a void, be it human, machine, or otherwise. There has to be *something* to scaffold off of."

            If we say a human learned from 'scratch' or by itself, that's always going to include the network of humans they're operating in or developed in; but if we say a computer program learned from scratch, it's obviously not the same thing: it didn't do it 'by itself' within a network of other 'by themselves' computer programs. It can't make sense in

        • Yes, you can't learn unless you have some built-in way to tell desirable things from undesirable things. That goes for all learning algorithms, whether it's AGZ or the built-in human one.

          It probably wouldn't have made a big difference for AGZ if it wasn't told the rules, but was simply given an instant loss if it made an illegal move. If it was given absolutely nothing, though, of course it would learn absolutely nothing too.
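The "instant loss for an illegal move" idea can be shown in miniature: the agent below is never told which of its nine actions are legal; an illegal action simply returns reward -1 and a legal one +1, and a bandit-style epsilon-greedy learner discovers legality from that signal alone. (A toy sketch of the principle under discussion, not AlphaGo Zero's actual training setup; the action count and reward values are made up for illustration.)

```python
import random

random.seed(0)

LEGAL = {0, 2, 4, 6, 8}          # hidden from the agent
Q = [0.0] * 9                    # estimated value of each action
ALPHA, EPS = 0.2, 0.1            # learning rate, exploration rate

def step():
    # Epsilon-greedy: mostly exploit the best-looking action,
    # occasionally explore a random one.
    if random.random() < EPS:
        a = random.randrange(9)
    else:
        a = max(range(9), key=lambda i: Q[i])
    r = 1.0 if a in LEGAL else -1.0   # illegal move: instant loss
    Q[a] += ALPHA * (r - Q[a])
    return a, r

for _ in range(500):
    step()

# Every explored illegal action now has a negative value estimate,
# so the greedy policy only ever picks legal actions.
best = max(range(9), key=lambda i: Q[i])
assert best in LEGAL
```

The point is the one made above: the rules never appear in the code, only in the reward signal, yet the learned policy respects them. Some signal in some form still had to be provided.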

          • My point, which was obviously poorly made, is that the intelligence of AGZ is actually a test of the developers' programming. If they make a mistake, then it will be hurt and dumb, or make mistakes. The only thing the computer program does is shift the data; it's not learning, it's just running the program.
            • You're right to hold the developers accountable. You're right to see it as an extension of what they want, not something that wants something in itself.

              But you're wrong to say that makes it not learning. All learning is like this. Human brains are "just running our programs" too.

              The decision to see humans as actually "wanting" something, and not merely mechanically moving towards it, is teleological. That doesn't mean it's wrong (indeed, it's right) but it can never be justified from experience alone.

              Howeve

              • > But you're wrong to say that makes it not learning. All learning is like this. Human brains are "just running our programs" too.
                Thats true, but 99% of the success of AGZ is because of the programmers; the learning part is hardly learning, it's just really fast processing of data and making decisions. My point is real learning would be actually learning Go and figuring out the rules itself.
                • As I said, AGZ can probably do that just fine, as long as it has some signal telling it what to do.

                  They could have trained it on a version where it got an instant loss on an illegal move. Or even one where it had to print a legal move as letters to not get an instant loss (e.g. "a5"). But, again as I said, you have to give it the signal in SOME form.

                  Given that you have to encode what you want the learning algorithm to learn in some form, i.e. you have to decide what you want, you have to start somewhere - y

  • (The joke is that YouTuber Mumbo Jumbo over-complicates the simplest of redstone circuits.)

  • And when they sliced a creation in half, it regenerated. (Normally in Minecraft, a user would have to reconstruct the object by hand.)

