Watch AI Grow a Walking Caterpillar In Minecraft (sciencemag.org) 22
sciencehabit shares a report from Science Magazine: The video in this story will be familiar to anyone who's played the 3D world-building game Minecraft. But it's not a human constructing these castles, trees, and caterpillars -- it's artificial intelligence. The algorithm takes its cue from the "Game of Life," a so-called cellular automaton. There, squares in a grid turn black or white over a series of timesteps based on how many of their neighbors are black or white. The program mimics biological development, in which cells in an embryo behave according to cues in their local environment.
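The "Game of Life" rule the summary describes is easy to state in code. The sketch below is a minimal, self-contained implementation of Conway's standard rules (not the researchers' system): each cell's next state depends only on its eight neighbors, the same purely local logic the neural version builds on.

```python
import numpy as np

def life_step(grid):
    """One Game of Life timestep: each cell's fate depends only on
    its eight neighbors (a purely local rule)."""
    # Count live neighbors by summing the eight shifted copies of the grid
    # (np.roll wraps around, giving a toroidal board).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Survival: a live cell with 2 or 3 live neighbors lives on.
    # Birth: a dead cell with exactly 3 live neighbors comes alive.
    return (neighbors == 3) | (grid & (neighbors == 2))

# A "glider" pattern, which travels across the grid over timesteps.
grid = np.zeros((8, 8), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1
next_grid = life_step(grid).astype(int)
```

The point of the analogy: complex global behavior (gliders, oscillators) emerges from one identical local rule applied everywhere, just as embryonic cells act on local cues.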
The scientists taught neural networks to grow single cubes into complex designs containing thousands of bricks, like the castle or tree or furnished apartment building above, and even into functional machines, like the caterpillar. And when they sliced a creation in half, it regenerated. (Normally in Minecraft, a user would have to reconstruct the object by hand.) Going forward, the researchers hope to train systems to grow not only predefined forms, but to invent designs that perform certain functions. This could include flying, allowing engineers to find solutions human designers would not have otherwise foreseen. Or tiny robots might use local interactions to assemble rescue robots or self-healing buildings. The researchers presented their system in a paper posted on arXiv.
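The actual system is described in the arXiv paper; as a rough sketch of the neural-cellular-automaton idea it builds on, consider the toy below. The channel count, network sizes, and random weights are illustrative assumptions, not the authors' code: every cell sees only its 3x3 neighborhood and applies the same small network, which is what lets a sliced pattern regrow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each cell carries a state vector; CHANNELS and the network shapes are
# illustrative choices, not the paper's actual hyperparameters.
CHANNELS = 8
W1 = rng.normal(0, 0.1, (CHANNELS * 9, 32))
W2 = rng.normal(0, 0.1, (32, CHANNELS))

def nca_step(state):
    """One neural-CA update on a 2D grid of cell state vectors.

    Every cell feeds its 3x3 neighborhood through the same tiny
    network and adds the result to its own state. Because the rule is
    identical everywhere and purely local, damage elsewhere doesn't
    break it -- the basis of the regeneration behavior."""
    h, w, _ = state.shape
    new = np.empty_like(state)
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y+3, x:x+3, :].reshape(-1)   # local perception
            update = np.tanh(patch @ W1) @ W2             # shared tiny network
            new[y, x] = state[y, x] + update              # residual update
    return new

# Grow from a single "seed" cell, as in the paper's setup.
state = np.zeros((16, 16, CHANNELS))
state[8, 8, :] = 1.0
for _ in range(10):
    state = nca_step(state)
```

In the real system the weights are trained so that iterating this update grows a target structure (and regrows it after damage); here they are random, so the pattern merely spreads outward from the seed.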
Seriously cool (Score:2)
I remember when the paper on this technique came out a few years ago. It's pretty cool: learning a compact encoding that will robustly generate a complicated pattern. Cool that these guys have applied it in Minecraft.
Re: (Score:2)
Now ML progresses quickly, but not THAT quickly. Don't make me feel older than I already am, please :) It was this article [distill.pub] which introduced the concept in 2020, and it was actually on distill.pub, not arxiv. Only about 13 months ago.
Re: Seriously cool (Score:1)
I think the algorithms were already known in the 70s.
Re: (Score:2)
Fairly simple cellular automata algorithms that are either hand-programmed or very simply trained have been around since then. Just like with reinforcement learning, swapping in a capable deep learning network makes for a bit of a leap in capability. If you haven't yet, have a look at the link Vintermann provided.
One of the nice things about this paper is that it's motivated by explaining how multicellular organisms manage to grow complex bodies with only simple intracellular communication.
Re: (Score:2)
Apparently 2020 was a long year for some reason. I thought you were crazy, but that is indeed it.
Not AI (Score:1)
Re: (Score:1)
AlphaGo Zero did.
Re: Not AI (Score:1)
Except AlphaGo *didn't* learn it from scratch.
They carefully chose what input to put into it. Not just random life experiences.
Namely, they input only those where it succeeded. And didn't just do random shit.
You know what that's called? Programming.
There is no separate individual "it". That is the point.
"It" is a tool. A hammer doesn't "learn how to hammer in a nail" either, just because you "trained it to move on top of the nail".
It's not a person!
Then again, looking at many humans today, no offense, that
Re: (Score:2)
MuZero, a successor of AlphaZero (itself a successor of AlphaGo Zero) became a master of Go without knowing its rules:
"MuZero is a computer program developed by artificial intelligence research company DeepMind to master games without knowing their rules.[1][2][3] Its release in 2019 included benchmarks of its performance in go, chess, shogi, and a standard suite of Atari games. The algorithm uses an approach similar to AlphaZero. It matched AlphaZero's performance in chess and shogi, improved on its perfor
Re: (Score:1)
AlphaGo Zero did not learn the rules and objectives of the game from scratch; those, along with a mechanism to rate and score moves, were programmed into it.
I think "from scratch" is the obfuscating term here; no learning happens in a void, be it human, machine, or otherwise. There has to be *something* to scaffold off of. Architectures all the way down.
Re: (Score:2)
"I think "from scratch" is the obfuscating term here; no learning happens in a void, be it human, machine, or otherwise. There has to be *something* to scaffold off of."
If we say a human learned from 'scratch' or by itself, that's always going to include the network of humans they're operating or developed in, but if we say a computer program learned from scratch it's obviously not the same thing, it didn't do it 'by itself' within a network of other 'by themselves' computer programs. It can't make sense in
Re: (Score:2)
Yes, you can't learn unless you have some built-in way to tell desirable things from undesirable things. That goes for all learning algorithms, whether it's AGZ or the built-in human one.
It probably wouldn't have made a big difference for AGZ if it wasn't told the rules, but was simply given an instant loss if it made an illegal move. If it was given absolutely nothing, though, of course it would learn absolutely nothing too.
Re: (Score:2)
You're right to hold the developers accountable. You're right to see it as an extension of what they want, not something that wants something in itself.
But you're wrong to say that makes it not learning. All learning is like this. Human brains are "just running our programs" too.
The decision to see humans as actually "wanting" something, and not merely mechanically moving towards it, is teleological. That doesn't mean it's wrong (indeed, it's right) but it can never be justified from experience alone.
Howeve
Re: (Score:2)
That's true, but 99% of the success of AGZ is because of the programmers; the learning part is hardly learning, it's just really fast processing of data and decision-making. My point is that real learning would be actually learning Go and figuring out the rules itself.
Re: (Score:2)
As I said, AGZ can probably do that just fine, as long as it has some signal telling it what to do.
They could have trained it on a version where it got an instant loss on an illegal move. Or even one where it had to print a legal move as letters to not get an instant loss (e.g. "a5"). But, again as I said, you have to give it the signal in SOME form.
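The "instant loss on an illegal move" idea is just a choice of reward signal. A hypothetical sketch (toy environment and made-up class names, nothing like DeepMind's actual code): instead of handing the agent a rulebook, the environment simply ends the game as a loss whenever a move is illegal.

```python
class ToyEnv:
    """Minimal stand-in environment (hypothetical, not DeepMind's API):
    moves 0-2 are legal, anything else is not."""
    def legal_moves(self):
        return {0, 1, 2}
    def state(self):
        return "board"
    def step(self, move):
        return "board", 0.0, False  # (observation, reward, game_over)

class IllegalMoveAsLoss:
    """Wraps an environment so an illegal move simply ends the game as
    a loss -- the agent is never told the rules, only punished for
    breaking them."""
    LOSS_REWARD = -1.0

    def __init__(self, env):
        self.env = env

    def step(self, move):
        if move not in self.env.legal_moves():
            # No rulebook: the move just ends the game with the worst reward.
            return self.env.state(), self.LOSS_REWARD, True
        return self.env.step(move)

env = IllegalMoveAsLoss(ToyEnv())
outcome = env.step(99)   # illegal move: immediate loss, game over
```

Either way, the signal has to come from somewhere; the wrapper just moves the rule knowledge out of the agent and into the reward.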
Given that you have to encode what you want the learning algorithm to learn in some form, i.e. you have to decide what you want, you have to start somewhere - y
Guess Mumbo Jumbo is out of work... (Score:2)
(The joke is that YouTuber Mumbo Jumbo over-complicates the simplest of redstone circuits.)
So, they created cancer! (Score:1)
And when they sliced a creation in half, it regenerated. (Normally in Minecraft, a user would have to reconstruct the object by hand.)