Marvin Minsky: It's 2001. Where is HAL?
ZigZag writes: "Marvin Minsky speaks about everything important (MUDs, education, AI, N(atural) I, immortality) while fighting with his MS Word for Mac presentation slides at the Game Developers Conference. Transcript, audio and video are available from Dr. Dobb's. It was in part a preview of his upcoming book The Emotion Machine. Some quotes from the talk will give you a feel: "Whenever you see a number, you should say `how sad'"; "Have you heard the theory that to learn something you should do it in little bits and not stay up all night working on it? If that were true, there would be no computer games"; "robotics people treasure their videos - because it won't work tomorrow.""
erm (Score:1)
Can you imagine... (Score:1)
Thank you.
--Patrick Bateman, Esq.
Word for Mac presentation slides? (Score:2)
Pax Demona (Score:2)
Math Misapplied (Score:1)
Another example of a less-than-useful application of math is the absurd idea of building a "proof" for a computer program, which is totally useless for any real-world program.
--
Repost, submission process is opaque (Score:1)
http://slashdot.org/article.pl?sid=01/05/27/1724226&mode=nested [slashdot.org]
plus another link to an article by an expert-system builder who wrote about how a real HAL ought to work.
I must admit that I get very tired of the Slashdot submission process. I submitted a long list of entries this year, and only two or so were posted to the Slashdot frontpage.
Not that I expect my taste to be 100% compatible with the taste of the editors, but since they never give the slightest feedback on why an article was rejected, this reduces my motivation to submit IMHO interesting bits to Slashdot more and more.
The only alternative I see to Slashdot, next to establishing another service, is Kuro5hin. Alas, the fellow geek crowd is centered around Slashdot. The Kuro5hin crowd, as it is these days, seems less technical/hacker to me: fewer engineers, more of the socio/artist type of folks.
The Kuro5hin submission process is much better, but it tends to double discussions: first matters are discussed in the pre-run mode, and then stuff is moved forward to the main page, where discussion takes place a second time. This takes too much drive out of discussions, I believe.
So come on, Slashdot editors: if you reject a submission, please send an email with a word or two of explanation. It is very hard to judge from a rejection alone what was not OK with a submission.
Re:The truth about neural nets (Score:2)
Re:It's Time for Dr. Minsky to Retire (Score:3)
I'm no fan of old-school AI, but Minsky has a point -- people use genetic algorithms and neural nets to "learn" from examples, but such pattern matching tells us *nothing* about how learning really happens. They are just generic black boxes that people throw at data in the hope that something useful comes out.
Where is HAL? (Score:1)
Re:It's Time for Dr. Minsky to Retire (Score:2)
Crap. The "emergent AI" stuff that's been demonstrated to date has had limitations just as profound and seemingly fundamental (but different) as traditional symbolic AI. Amongst others, it has scaling problems of its own when you try and build more complex emergent system.
Not that it isn't useful and interesting research; it will undoubtedly produce some interesting production systems, and might give us some pointers along the road to HAL. But don't claim that the bright new future of AI is just around the corner as soon as we take off the shackles that the neats are placing on the scruffies.
Go you big red fire engine!
Re:It's Time for Dr. Minsky to Retire (Score:2)
At that time, neural networks were brand new, and the later advances in the 1980's weren't even conceived. It turns out that more complex networks CAN determine if the image is connected or not. The paper was not about those more advanced networks, just Perceptrons. There was nothing wrong with his findings. If anything, it was an overreaction on the part of the AI community to his paper that shut down neural network research. If you blame Minsky, then you've got to blame everyone else who basically read the paper and gave up for 15 years.
HAL is in The Netherlands (Score:1)
see www.hal2001.org [hal2001.org].
Re:FASP! (Score:1)
What are all those fascists/racists/assholes doing here? That's not the only time that I see displaced posts.
I thought
Hmmm...
Re:Trust Americans to be so sure of themselves :-) (Score:1)
That would be the Greeks, by way of the French. Try praying when noisy people are citing Voltaire on their cell phones...
Re:Can evolution learn from mistakes (Score:1)
If I interpret you correctly, you are suggesting that by having "activation genes" you can close off large areas of search space with one gene. It's an interesting idea; I wonder how hard it would be to actually implement, however. The benefit "meat space genetics" has is that it is actually capable of adapting, and by adapting I mean adding new "rules" like these activation genes.
Once GAs can do that, we'd have real evolutionary programs. The current method is mainly (as I in my naive view see it) a semi-random walk through a search space. It works, though (for some things), so it's apparently a good idea.
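Something like this toy Python sketch is what I have in mind (the genome layout, mutation rates, and fitness function are all made up for illustration): each block of parameter genes sits behind one activation gene, so a single flip opens or closes a whole region of the search space at once.

    import random

    BLOCK = 4       # parameter genes per block
    N_BLOCKS = 3    # each block is guarded by one activation gene

    def random_genome():
        # genome = list of (activation_bit, [parameter genes])
        return [(random.random() < 0.5,
                 [random.uniform(-1, 1) for _ in range(BLOCK)])
                for _ in range(N_BLOCKS)]

    def fitness(genome):
        # toy objective: only active blocks contribute at all
        return sum(sum(params) for active, params in genome if active)

    def mutate(genome, p_act=0.05, p_param=0.2):
        out = []
        for active, params in genome:
            if random.random() < p_act:
                active = not active      # one flip (de)activates a whole block
            params = [x + random.gauss(0, 0.1) if random.random() < p_param else x
                      for x in params]
            out.append((active, params))
        return out

    pop = [random_genome() for _ in range(20)]
    for generation in range(50):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
    print(round(max(fitness(g) for g in pop), 3))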
Check out kuro5hin.org article (Score:2)
Thanks!
Re:Trust Americans to be so sure of themselves :-) (Score:1)
Hmmm... thousands of years of religious insight would have the Sun rotating around the Earth, the world being flat, there being no such thing as dinosaurs, and the Universe being finite with Earth the only thing in it (a principle that was only punctured when someone pointed out that this would put limits on God).
Simon
Re:I disagree. (Score:1)
Re:If you take the position... (Score:1)
Unless you want to grant consciousness-status to everything, in which case the problem is already solved, you still need to answer how you will know when you have created it successfully.
You're the one who suggested replicating consciousness. I want to know we'll know when we've done it.
Re:One and a half out of two wouldn't be bad.. (Score:1)
Absent a conclusive test for consciousness, how will you know whether your hardware human is simulating most of my behavior but lacks some accuracy in the simulation of my brain, causing the simulation not to be conscious? I believe there is no way to know such a thing.
Deja vu all over again (Score:2)
Maybe the
Re:have very little respect for Minsky. (Score:2)
Re:The truth about neural nets (Score:2)
The hard part of creating a real artificial intelligence is the perception/representation/cognition bootstrapping part of it, and it requires an embedded approach that Minsky ignores.
I disagree with you about Strong AI requiring a low level neuron simulation - IMO consciousness is a result of high level architecture, not Penrosian low level specifics! I believe it's just an "inward looking sense" - a feedback path.
Re:It's Time for Dr. Minsky to Retire (Score:2)
Minsky was just arguing that a perceptron could not compute an XOR function, and the reason he was wrong is simply that he didn't consider that you might connect one perceptron to the output of another.
For Minsky not even to consider connected perceptrons was a humongous brain fart for which he should rightly be ridiculed, particularly given the influential position he was in at the time and the effect it had in stifling all ANN funding and research for a long time.
P.S. Sure, the backpropagation learning algorithm had yet to be invented (although nowadays it seems trivially obvious as a dynamic-programming heuristic approach), but that is entirely separate from not even considering connecting two together!
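For the record, connecting them really is all it takes. A toy Python sketch with hand-picked (not learned) weights, feeding an OR unit and a NAND unit into an AND unit:

    def perceptron(inputs, weights, bias):
        # classic threshold unit: fires iff the weighted sum plus bias is positive
        return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

    def xor(a, b):
        or_out = perceptron([a, b], [1, 1], -0.5)      # a OR b
        nand_out = perceptron([a, b], [-1, -1], 1.5)   # NOT (a AND b)
        # one perceptron reading the outputs of two others
        return perceptron([or_out, nand_out], [1, 1], -1.5)   # AND of the two

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))   # outputs 0, 1, 1, 0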
Likely result..... (Score:1)
75% 'frist p0sts'
23% 'goatse.cx' links
1.5% reposts
0.5% uninformed speculation
0% work done.
** Windows has detected a mouse movement.
Re:Hi, I'm Not Aware (Score:2)
-------
CAIMLAS
Re:have very little respect for Minsky. (Score:2)
There is another point of view (which I believe is more adequate) which boils down to mind and intellect being a fairly sophisticated adaptation function (or tool), so to implement AI we have to start with a very simple machine capable of interacting with its environment, learning, adapting, and evolving.
This point of view is missed by a lot of AI researchers, I think, because they're thinking in terms of numbers and theorems, and not the actual human experience. Separating mind from body is IMHO a big mistake, and I absolutely agree with you that it's wrong.
Those of you who aren't involved with neural networks might find it interesting that almost all research into computerized neural nets stopped when it was proven that a basic perceptron (what you typically visualize when you think of a "neuron") couldn't compute an XOR function, i.e., handle nonlinearly separable data. Of course, this was a pretty simplistic way of looking at it, and growth in the field is exploding now.
I had a big revelation in terms of working with neural nets when I stopped thinking about the math a little bit and asked myself: if I were this little robot/program/whatever, what would I see? How would I find a pattern in the data presented through my senses (e.g. an analog-to-digital converter connected to a light meter)?
We might not be on the ball for 2001, but give it a year or two. :)
Slashback two weeks ago (Score:4)
Revenge of Slashback (Score:3)
http://slashdot.org/article.pl?sid=01/05/27/1724226
- - - - -
Re:Slashback two weeks ago (Score:3)
- - - - -
Computers are faster (Score:2)
So yes, I think computers are faster, and in the end will be able to outperform our wet systems, if we can only figure out how to do it.
Re:It's Time for Dr. Minsky to Retire (Score:2)
I agree that the old ANN pattern recognition approach is a dead end. However, a lot has happened in neural networks in the last decade or so. We are learning a lot from neurobiology. We are learning that signal timing in biological networks is crucial to learning and motor skills. One of the important discoveries seems to be in the area of temporal correlations among spiking signals, i.e., determining whether signals are sequential or concurrent. If they are sequential, it appears (cf. the work of Dr. Henry Markram et al.) that the order of arrival is crucial. The time scale is on the order of milliseconds. The new spiking neural networks are so unlike the old ANNs that a new discipline has emerged, one which tries to distance itself from the old ANNers. It's called computational neuroscience (for those who don't keep up with progress in this area).
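For the curious, the order-of-arrival effect can be caricatured in a few lines; a pair-based spike-timing plasticity rule of the kind Markram's results suggest (the constants here are illustrative, not measured values):

    import math

    TAU = 20.0                      # ms, plasticity time constant (illustrative)
    A_PLUS, A_MINUS = 0.05, 0.055   # potentiation/depression amplitudes (illustrative)

    def weight_change(t_pre, t_post):
        # Pre-before-post strengthens the synapse; post-before-pre weakens it,
        # and the effect decays as the spikes get further apart in time.
        dt = t_post - t_pre
        if dt > 0:
            return A_PLUS * math.exp(-dt / TAU)
        return -A_MINUS * math.exp(dt / TAU)

    print(weight_change(10.0, 15.0))   # pre leads post by 5 ms -> positive change
    print(weight_change(15.0, 10.0))   # post leads pre by 5 ms -> negative change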
Consciousness Is Emergent? (Score:2)
What evidence do you have that consciousness is emergent? I suggest we stick to the stuff (intelligence) we can observe (and somewhat quantify) and worry about consciousness later.
Symbolic AI Is to Blame (Score:2)
It's a good thing that Dr. Minsky is not controlling AI research at MIT and elsewhere, although he tries. We do hear rumors of his close encounters with other AI researchers such as avant-garde roboticist Rodney Brooks.
The symbolic AI camp has been at it since the fifties, and they made a lot of noise over the years. They have failed miserably. Rather than licking their wounds and moving on to more fruitful endeavors, they continue to uphold their failed approach through various funded projects and obsolete AI curricula that are taught at major universities and AI centers around the world.
The symbolic approach is dead and should be buried once and for all, in my opinion. Ultimately it will be but a footnote in the history of AI. The future of AI belongs to connectionism, the only model that has a chance of taming the otherwise intractable complexity of animal intelligence. We need fundamental perceptual and motor learning principles. We need fundamental principles of motivation. Once we formulate these all important principles, we'll know how to apply them to billions of self-modifying cells working in parallel. Only then will human level intelligence become a reality. It will happen in our lifetime.
Re:The truth about neural nets (Score:2)
To claim that it is wrong to think that neural nets would solve everything is to ignore the evidence in my view. The truth is that the brain is a collection of neural nets feeding into and relying on one another. Each net has a specific role to perform. Sorry, but this sounds very much like neural nets solving everything to me. Maybe you had something else in mind.
It's Time for Dr. Minsky to Retire (Score:5)
I used to be a Minsky fan (I still have a copy of his "Society of Mind") but not anymore. Marvin Minsky is one of the reasons that AI still has not delivered on its promises. He is part of the old symbolic school of AI. He was the guy who, with Seymour Papert, wrote a scathing criticism of the then embryonic field of neural networks, effectively strangling research in neural networks for the better part of a decade. I am sure Dr. Minsky has had occasions to change his views since but I don't think he has anything to offer that will lead us to HAL. The following is a quote from a Scientific American article [sciam.com] on Arthur C. Clarke's HAL.
The novel of 2001 explains how the HAL 9000 series developed out of work by Marvin Minsky of the Massachusetts Institute of Technology and another researcher in the 1980s that showed how "neural networks could be generated automatically--self-replicated--in accordance with an arbitrary learning program. Artificial brains could be grown by a process strikingly analogous to the development of the human brain." Ironically, Minsky, one of the pioneers of neural networks who was also an adviser to the filmmakers (and who almost got killed by a falling wrench on the set), says today that this approach should be relegated to a minor role in modeling intelligence, while criticizing the amount of research devoted to it. "There's only been a tiny bit of work on commonsense reasoning, and I could almost characterize the rest as various sorts of get-rich-quick schemes, like genetic algorithms [and neural networks] where you're hoping you won't have to figure anything out," Minsky says.
The part that literally floored me is "where you're hoping you won't have to figure anything out." All along I'm thinking that intelligence is so complex and intractable that the most plausible solution to the problem of making a human-level AI is one where we let the AI emerge, grow and learn. IOW, what we really need to understand is the learning process, which encompasses perceptual, motivational and motor learning.
But here comes Marvin Minsky, a luminary in the AI community, insisting that figuring everything out is precisely what needs to be done. Haysoos Martinez! This is the main reason why we still don't have human-level AI! I think Minsky's stance is a disservice to computational neuroscience and ANN researchers everywhere.
The man has had his day in the sun. Now it's time for the younger generation of AI researchers to come in and say "hold it! We're taking a different approach from now on. The unkept promises of AI were made by the old symbolic AI crowd. There is a new school in town. The new AI is neural, it's emergent, and it's gonna kick ass!"
Grandpa (Score:2)
Re:HAL is IBM (Score:2)
http://www.underview.com/2001/faqs/faqs.html#faqg [underview.com]
"HAL". Something like Highly Advanced Lifeform, right?
Well, almost. The answer is given in black and white in Arthur C. Clarke's book 2001: A Space Odyssey, Chapter 16, which is titled (ahem) "HAL" (note that in this case the book gives a specific answer to a specific question, whereas in situations that are more open to individual interpretation I do not necessarily take solutions out of the book).
Clarke writes:
"Hal (for Heuristically programmed ALgorithmic computer, no less) was a masterwork of the third computer breakthrough."
Note that, strictly speaking, HAL is not an acronym in the sense of being formed from the initial letters of separate words. In the film we only see it as "HAL 9000", but it is noticeable that in the book Arthur C. Clarke himself consistently refers to "Hal", not "HAL". So, therefore, do I.
If people read slashback.... (Score:1)
Re:It's Time for Dr. Minsky to Retire (Score:2)
Re:It's Time for Dr. Minsky to Retire (Score:2)
There are many systems that are too complex to understand with mathematical or linguistic techniques. The day is dawning when more people will realize that to "understand" does not always mean to have a linear, provable trail of induction.
The success of mathematics in physics has probably badly hurt many areas of research into less tractable phenomena, as its demand for solvable equation systems and/or mathematical induction is too simplistic. In the same way, much of AI research has focused on those problems which can be proven mathematically, neglecting other, less elegant problems which may have direct, valuable solutions (like driving my car!)
We will never have an equation for a cloud. We have the equations for the underlying physics, but the complex emergent phenomenon of a cloud is beyond formulation. To "understand" highly complex real-world issues has to mean developing an intuition (non-linear, imperfect, stochastic understanding) for it... and a neural net or genetic algorithm can *be* that intuition!
At the same time, I do not mean to belittle the traditional reductionist approach to "understanding." It has gotten us a long way... but it simply is not a complete system, and that same approach, reversed through engineering induction, will never drive my car.
We need *all* the techniques in AI, and more. Bring on the genetic algorithms (and make those darn FPLDs cheaper!); train the neural nets; parse those languages; and *drive my car!*
Re:Turnabout... (Score:1)
-- Andrem
Re:Well... (Score:1)
- Steeltoe
Re:It's 2001. Where's a new page design? (Score:1)
Re:Sigh... yet another /. repeat (Score:1)
Re:The truth about neural nets (Score:1)
Re:If people read slashback.... (Score:1)
What's he talking about? (Score:1)
/me ducks
--
Re:What's he talking about? (Score:1)
Re:The truth about neural nets (Score:1)
Any examples?
Re:I don't know how useful this really was... (Score:1)
The topology of the information processing membranes is more complex than we can sort out just yet, but there's nothing about the structure of the brain that's not duplicable by silicon hardware...
Also important to notice is that to implement the human mind in hardware (as opposed to wetware), we'd need something on the order of a 10 teraflop supercomputer.
These are very strong and specific claims that do not seem to have much foundation. I wonder where the numbers you are citing are coming from.
Can you point to any specific evidence for that?
Minsky's worldview (Score:1)
The real world is rather dumb and boring. I'm serious. In the real world, if you have a chair and somebody sits on it, it might break. That won't happen in the virtual world because the chair knows it's a chair and its job is to support something.
In the real world nothing has a purpose except we try to make purposeless things do the best they can for us. And that's why the world is a mess. I'd much rather build Lego with a Lego simulator where you can press clear at the end and all the things pop back in their boxes.
What a sterile and appalling outlook. How can you hope to understand intelligence, perhaps the most complex and intricate phenomenon known to people, with a worldview like that? If you strip life of its mystery, what do you have left?
His attitude makes one wonder whether for some people computers are just ways of escaping the intolerable boredom of existence. The popularity of Tolkien, role-playing games, and similar imaginary-world themes with the computer crowd seems to provide at least some evidence for that.
Re:Which part? (Score:1)
The behavior of individual neurons is very complicated and still not at all well-understood. Computer models for even an individual neuron are fairly involved. The original Hodgkin-Huxley model of the squid giant axon (for which they received a Nobel Prize) is a system of four coupled differential equations: a partial differential equation for the membrane voltage plus three gating equations. You can imagine how much computation would be involved in computing these for all ~10^11 neurons in the brain.
An individual neuron might have tens of thousands of synapses through which it communicates with other neurons. The computational complexity of this is mind-boggling and, moreover, very little is known about how neurons communicate and form connections.
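To make the cost concrete, here is a bare-bones forward-Euler integration of a single point neuron using the textbook squid-axon constants (the spatial cable term is dropped, so this actually understates the work involved):

    import math

    # Hodgkin-Huxley single-compartment model (units: mV, ms, mS/cm^2, uF/cm^2)
    C_M = 1.0
    G_NA, G_K, G_L = 120.0, 36.0, 0.3
    E_NA, E_K, E_L = 50.0, -77.0, -54.4

    def alpha_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
    def beta_n(v):  return 0.125 * math.exp(-(v + 65) / 80)
    def alpha_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
    def beta_m(v):  return 4.0 * math.exp(-(v + 65) / 18)
    def alpha_h(v): return 0.07 * math.exp(-(v + 65) / 20)
    def beta_h(v):  return 1.0 / (1 + math.exp(-(v + 35) / 10))

    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
    dt, i_ext = 0.01, 10.0                # 10 microsecond steps, constant input current
    for _ in range(int(20.0 / dt)):       # simulate 20 ms
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    print(round(v, 2))   # membrane voltage after 20 ms; the trace spikes repeatedly

Four coupled equations and thousands of steps per simulated second, and that's one neuron with no synapses.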
Re:Minsky's worldview (Score:1)
Lack of imagination is an unfortunate affliction.
It's not a hardware problem (Score:2)
Sure we do. There are server farms with more than 10 teraflops. If strong AI could be achieved with a big cluster, there'd be big clusters doing it.
If compute power were the problem, we'd have powerful AI systems that were really slow. That's not the case. We really don't have a clue how to do strong AI. The stuff that used to sort of work but was slow, like language translation and question answering, now works fast but isn't much smarter.
Speed does make what AI capabilities we have more useful, because we can use them on dumber problems. Machine translation still sucks, but now that it's so cheap it's given away, it's useful just to find out if something is worth looking at at all. You can now afford to translate incoming mail to find out if it's spam. The same goes for question-answering at the "Ask Jeeves" level. It's not very good, but it's cheap.
To see where classical AI went to die, go to the second floor of the Gates Building at Stanford. There, below gold letters reading "Knowledge Systems Lab", are empty cubicles, obsolete computers, and tables with ancient copies of Wired. It's depressing.
Game AI, on the other hand, is steadily getting better. It's generally non-verbal and grounded in a semi-physical world. That's probably why progress is possible. The gamers are definitely gaining on the academics, and are probably ahead at this point.
Eventually, game characters will have enough of a life that they'll have something to say, and then we may start to see a path to developing strong AI. But it's a ways off.
Cog, etc. (Score:2)
Others share that opinion, including some of his grad students. The amount of hype (Newsweek cover, TV specials, and a movie) about that project is excessive for the results obtained.
What they seem to be developing is technology for faking emotional behavior. This came close to being a commercial product, a microprocessor-controlled doll, sort of like a Furby with facial expressions. [wired.com] It was supposed to be a joint venture with Hasbro, but apparently didn't ship. IS Robotics [isrobotics.com], Brooks' startup, seems to have a defunct server.
Behavior-based robotics is interesting, but without some environmental modelling and short-term planning, you'll never get above the insect level. Feedback can only take you so far. Feedforward, though...
Re:Trust Americans to be so sure of themselves :-) (Score:2)
It's a very Japanese idea. This is reflected in Japanese interest in robots, both in manufacturing and in anime.
Does Size Matter? (Score:2)
Maybe that's the problem: we've been making computers too small. Minsky should look into creating some skyscraper-sized machines.
Re:Trust Americans to be so sure of themselves :-) (Score:1)
have very little respect for Minsky. (Score:2)
Re:It's still possible to make something... (Score:2)
Sure it's possible to make something without understanding it 100%, but that's not a very useful approach if the goal is understanding. That's the point. There are a lot of people working in AI whose goal is not AI per se but rather an understanding of the way that the human mind works. To them it's not particularly useful to make a human-level AI if it's nothing but a black box. To them, a lower level of AI is more valuable if it comes with greater understanding.
Quite frankly, I can see exactly where they're coming from. We're already surrounded by more human-level intelligences than we can find productive employment for. Why go out and create more artificial ones that are likely to suffer from exactly the same sorts of problems, only at much greater expense, unless we actually get something useful out of it in the form of increased knowledge?
Re:It's Time for Dr. Minsky to Retire (Score:3)
Well, to some extent that depends on whether your goal is to make human level AI or to use AI to help your understanding of NI. A lot of AI researchers are really more interested in the second than they are in the first, so it's understandable that they're not too happy with neural nets. You don't get very far in understanding the mind if you simply copy it in silicon instead of carbon. It would be much more interesting and instructive to make an AI that didn't use neural nets, because doing so would necessarily imply that it was possible to abstract some deeper core of how intelligence works. Of course it's entirely possible that doing so is a false dream, and that what we think of as intelligence is deeply dependent on the structure of the medium in which it is encoded. But we can't find that out without trying to build minds in some way that is fundamentally different from neural nets.
It's still possible to make something... (Score:1)
Actually, that was a rhetorical mistake (Score:1)
And yeah, Minsky isn't exactly the leading edge any more (I don't ever cite him). I probably shouldn't really have defended him so vigorously - the neural net comments just brought me out.
Well... (Score:1)
However, we won't really be able to abstract the high-level from the low-level (between which there is no real dividing line) unless we understand the topology well enough to know which dynamics can be idealized (or formalized, if you will) and which depend more directly on their fuzzy nature.
I disagree. (Score:1)
Even when we do have a working person, even if we had to simulate each individual neuron, at least now we'll be able to look at what's going on in the brain in more detail than ever before; we can track the contents of memory (as in, neural firing dispositions) without sticking a scalpel in someone's head.
Turnabout... (Score:1)
I expect the same reasons will apply to Strong AI.
Hmmm... (Score:1)
However, if you bridled because "emergent" implies a certain "accidental" quality, then I think you have a good point. Consciousness is not an "unintended" side-effect of brain activity; I didn't mean to imply that. I definitely agree that the brain is designed to support consciousness and that "consciousness" and "intelligence", while not isomorphic in their ambit, are not separable either.
Huh? (Score:1)
Consciousness is just-in-time processing (Score:1)
An AI without a world to live in and learn from would of course be catatonic. An AI too slow to build a useful internal representation of its current situation before the situation changes (thus making the representation worthless) is going to be either catatonic or a moron.
Also, as I mentioned, the human brain is a collection of gadgets, implemented in a big web of neural processing with very complex informational topology. We may be a while yet reverse engineering some of the most clever ones. Maybe it will be much more than the 20 years I give. Maybe it won't be that long. It depends on how important the fine details are and how easy it is to come up with functionally equivalent "gadgets" that work as well as the brain's more difficult-to-copy architectures.
The important thing here is that we do indeed have some idea how to do Strong AI. To tinker with those ideas until we build something that really does seem a little more intelligent in that wacky "emergent" sort of way, we need some faster hardware. Single-application heuristic gadgets like language translation* are forever going to be bad unless they are embedded in a larger system that can give realtime feedback.
The reason game AIs get better is that game programmers have more room to work with, in terms of both time (MHz) and memory. Also, as with ant colonies, a collection of not-very-bright creatures can make for some pretty intelligent communities.
In the end, though, I like your intuition that agents need to have a life to have something to say. Exactly, I say. Exactly.
In conclusion, yes, it IS a hardware problem, among other things. Without proper hardware we can't expect any actors to have much of a life. And even if we did have the proper hardware tomorrow, we'd still have years of tinkering ahead before we put all the pieces together in a way that works. But right now, even our tinkering is somewhat hobbled by lack of hardware.
* Language translation is probably not a good example because it does not seem likely that any such thing exists in our heads. It's something we're trying to build because we WISH it existed in our heads.
I agree (Score:1)
Er... (Score:1)
To clarify: (Score:1)
Which part? (Score:1)
The "nothing about the structure of the brain that's not duplicable" springs from the fact that neuron behavior is pretty simple in the broad strokes - the parts we don't understand are more related to dynamic interactions - which are also straightforward to implement in software.
The 10 teraflops number is an estimate based on the number of active neurons in the brain, their frequency of firing, and how many floating-point calculations I think it would take to simulate a neuron with sufficient fidelity.
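For what it's worth, the arithmetic behind an estimate like that is easy to reproduce; every figure below is an assumption you're free to argue with:

    neurons = 1e11           # ~10^11 neurons, as cited elsewhere in this thread
    rate_hz = 10             # assumed average firing rate per neuron
    flops_per_update = 10    # assumed floating-point ops per neuron update
    print(neurons * rate_hz * flops_per_update / 1e12, "teraflops")   # -> 10.0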
If you have objections to what I said, I'll address them specifically. There's no "specific" evidence for any of these things, because the claims I'm making are drawn from more than one fact. If you're looking for others who make similar claims in print, I'm sure you'll find them with little trouble. Steven Pinker, Daniel Dennett, Francis Crick, Douglas Hofstadter, and Rodney Brooks are good folks with whom to start.
If you take the position... (Score:1)
Unless you want to talk about philosophical theology. But, as Ronald De Sousa once said, philosophical theology is like "intellectual tennis without a net".
One and a half out of two wouldn't be bad.. (Score:1)
I don't know how useful this really was... (Score:4)
I think it's very important to understand that there's no magic to consciousness. It's not something shrouded in mystery about which we know nothing. In fact, we know an amazing amount about individual areas. The topology of the information processing membranes is more complex than we can sort out just yet, but there's nothing about the structure of the brain that's not duplicable by silicon hardware. We just have a lot more mapping to do.
Also important to notice is that to implement the human mind in hardware (as opposed to wetware), we'd need something on the order of a 10 teraflop supercomputer. We just don't have the hardware to pull that off yet. The AI-related optimism of yesteryear was fueled by the misconception that computers are faster than humans. What's really true is that the "programming" that underlies the various gadgets in the mind is the product of millions of years of specialization at small tasks. We have fantastic motor-control gadgets and unparalleled pattern-recognition wetware, for example. Figuring out exactly how many animals are in 15,342 groups of 967 animals each was never all that important, so we never evolved any gadgets to carry out high-speed arithmetic. On the other hand, we're good at seeing how things divide out and how games might be played to our advantage. Idiot savants have been known to find extremely large prime numbers as if by magic - probably the same hardware put to an exotic use.
So in 20 years (or so), we'll have the hardware, and maybe we'll have the information processing topology as well. Some intrepid researcher will put all that in a state-of-the-art cybernetic body. Then it'll be a matter of watching the first hardware human child grow up and meet the world.
PS- I make some pretty bold claims here, and also cite a number or two that one might be expected to view with suspicion. I can back it up; just ask.
Nato
The truth about neural nets (Score:5)
The brain is not a big neural amalgam that gets to some critical mass and then suddenly starts doing stuff. It's wired. It's got gadgets. It's really a big collection of them. Some of them are damned complex, composed of sheets of neurons talking to each other in intricate, bewildering arrays.
And modern Connectionists understand that. Certainly the symbolic logic guys were as wrong as those who thought neural nets would solve everything. But that's people like Fodor and Chomsky. The Minsky "Agent" model is very much on the "Connectionist" side of the map. That's not to say that I agree with everything he says, but I think you're unfairly blaming him for the mistakes of others.
Symbolic logic by itself is no panacea, but neither is the neural net. I'm willing to bet that a lot of the interactions of various neural nets in the brain form very formal symbolic-logic gadgets. Also, in the end, it is the formal logic of virtual-neuron microcode in a computer that will generate Strong AI.
Re: Trust Americans... (Score:1)
This effort is by no means localized to the United States of America (or the Corporate Republic formerly known as... but that's a different story). [And even if it were, stereotyping, xenophobia, and the like are widely frowned upon.] Citation: Japan produces an incredible percentage of the hardware, software, and tools, and conducts a huge amount of the research in AI and AL fields. The USA is not alone.
The 'strong AI' hypothesis has yet to be proven wrong - all AIs have had one real problem: they've only been run for a very short amount of time. Take a human, "run" it for a year, and tell me what you get: a pretty useless machine. Ten years, that's better. Thirty is even better. Show me an AI that's been up for thirty years, can you?
::sighs:: RMS, GNU/Linux, and the like are radical in that they have or are relatively new ways of viewing and dealing with information. Yes, they're somewhat leftist in nature. No, they are not communistic, as far as I understand. More a breed of anarchism than anything else, though personally I take issue with assigning a piece of software (GNU/Linux) a political standing (we don't yet have AI, after all).
Man coming closer to god(s)? I'm an atheist and am so perhaps somewhat unqualified to debate this point, but... why must there be a god? Why is it a ridiculous notion to propose that if life started up once (divinely or naturally), we can do it again? So we'll screw up, maybe. SO DID GOD. Sorry, but he did.
Scientists should work on "practical things, which will help man get closer to god"? I should think all of biology, from ecosystems to molecular interactions, is an effort to get us closer to god; by understanding ourselves, we become more like Erdős's SF. The HGP seems to be but the latest and greatest example.
Science in general is the attempt to understand our world. I would say that that matches your goals.
--Begin Semisarcasm--
Or do you wish science to stop its ongoing research and attempt to make these force-fields for you so that you don't have to worry about god disappearing? Like it or not, god can only be found in the increasingly small gaps of science.
Oh, and as to people having cellphones being inconsiderate morons, well... it is not reasonable to force your demands on everybody around you, is it? Some people need communication - cellphones provide. Granted, vibrating is less intrusive, but that's really an issue beyond your control. Though I bet you'd make me take off my Tux hat if I came into your church. Funny that you can, but scientists can't force your priests to take off their robes if they enter a lab.
--End Semisarcasm--
Moderators: Yeah, it's a rant. And not even a very well thought out one. I apologize. Moderate me as you will.
--Knots
What's this? (Score:1)
rest in peace DNA
----
Re:have very little respect for Minsky. (Score:2)
Don't look now, but HAL is crawling up your leg. (Score:5)
About ten years ago, Rodney Brooks [edge.org] (also of MIT) flipped AI on its head with his "insect bots," which took a bottom-up (instead of Minskyesque top-down) approach. Brooks put a cheap microprocessor and servo motor on each of six "legs" of a lowly bot, and programmed each leg unit to do extremely simple things like check whether the leg was bumping against something, and if so, to lift it. Repertoires of behavior learned from the environment were then stored and re-used when similar stimuli presented themselves again. What happened after a short time was that far more complex behaviors than were programmed "emerged" from the collection of puny processors and actuators. With just a few lines of code, the damned things could navigate complex environments (like a back yard) that completely foiled Minsky-style bots run by minicomputers and millions of lines of instructions. (Brooks coined the phrase "fast, cheap, and out of control" to describe not only his bots, but the behaviors they "invented" by walking around.)
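The flavor of Brooks' approach fits in a few lines. A toy subsumption-style sketch for one leg (the behavior names and sensor keys are mine, not Brooks' code): a stack of simple reflexes where the first one that fires wins.

    # Higher layers subsume (override) lower ones; there is no world model anywhere.
    def lift_if_stuck(sensors):
        return "lift" if sensors["bumped"] else None           # highest priority

    def swing_if_lifted(sensors):
        return "swing_forward" if sensors["lifted"] else None

    def default_step(sensors):
        return "push_back"                                     # lowest priority

    BEHAVIORS = [lift_if_stuck, swing_if_lifted, default_step]

    def leg_controller(sensors):
        for behavior in BEHAVIORS:      # first behavior with an opinion wins
            action = behavior(sensors)
            if action is not None:
                return action

    print(leg_controller({"bumped": True, "lifted": False}))    # -> lift
    print(leg_controller({"bumped": False, "lifted": False}))   # -> push_back

Run six of those in parallel and the walking "emerges"; no single controller ever plans a gait.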
George Dyson (Freeman's son) wrote a book a couple of years ago called Darwin among the Machines that is as good an explanation of machine-evolved intelligence as I've seen. It's packed with illustrative stories from both within and without the discipline. Look here [annonline.com] for Dyson's own commentary and some good links. Hans Moravec, director of Carnegie-Mellon's Field Robotics Lab, also writes very convincingly, if speculatively, about the evolution of machine intelligence, in his recent book Robot: Mere Machine to Transcendent Mind [cmu.edu]. It's a fascinating read.
After what's been learned in the past decade about how machines can become intelligent, Minsky seems to me a bit like Lord Kelvin [st-andrews.ac.uk]. Kelvin made tremendous contributions to science, especially in the fields of heat theory and thermodynamics, but in his later years became mired in defending some pet theories that were way past their prime. He railed bitterly against Darwin, claimed the Earth was only a few million years old, and refused to accept radioactivity. One of his biographers observed that for the first half of his career, he could do no wrong, and for the second half, he seemingly could do no right. Minsky, alas, has in some ways shared this fate.
Re:Slashback two weeks ago (Score:4)
Re-news for nerds. Stuff that matters and matters and matters...
Here it is: (Score:1)
Re:It's 2001. Where's a new page design? (Score:1)
hey.. (Score:2)
Can evolution learn from mistakes (Score:2)
Minsky said "What evolution and genetic algorithms don't do -tell me if I'm wrong- is keep any record of why all those poor losers died."
While I think this is largely true and a good criticism of genetic algorithms, after listening to the book on tape The Age of Spiritual Machines - When Computers Exceed Human Intelligence by Ray Kurzweil I don't think this is completely true. Kurzweil writes about how the genes for the shape of the eye are protected by error correcting codes and repair mechanisms to a much greater extent than the genes that control, for example, the layout of rods and cones. Why? Because evolution has "learned" that messing with the shape of eyes is costly and there isn't much improvement possible while other details of eyes can be improved or adapt to changing circumstances.
Anyone who knows more about genetics want to comment on this?
-ken kahn
Re:have very little respect for Minsky. (Score:2)
Actually, Minsky doesn't believe in 'logical rules'. The Stanford group (following McCarthy) believes that logic is the royal road to AI. For this, they are termed the 'neats' and contrasted with the more pragmatic 'scruffies' such as the folks under Minsky's direction at MIT. Incidentally, there are 'neats' at MIT too -- for example, Chomsky -- who are most definitely at odds with Minsky and his, well, Minions.
There is another point of view (which I believe is more adequate) which boils down to mind and intellect being a fairly sophisticated adaptation function (or tool), so to implement AI we have to start with a very simple machine capable of interacting with its environment, learning, adapting, and evolving.
What makes you think this is not AI? Logic, Minsky's work, Brooks's robots, genetic algorithms -- these are all approaches to AI. The domain of AI is not isomorphic with logic.
Minsky separates the mind from the body, from the environment, from the whole human experience, and this is plainly wrong.
Well this is your opinion, as well as that of the 'embodied cognition' folks and other latter-day phenomenologists. It may or may not be correct. Frankly, the embodied approach has not produced the great AI breakthrough that has eluded classical approaches.
Perhaps all of these paradigmatic battles and half-truths are the poisonous work of philosophers? Nah, couldn't be.
Re:have very little respect for Minsky. (Score:2)
(Note that the poster I'm replying to didn't make these inferences -- this is just a convenient place for me to attach this info.)
Re:Trust Americans to be so sure of themselves :-) (Score:2)
Funny stuff.
I also find it humorous that men, in equating themselves with the Divine creator himself, have even attempted to decode his secrets through their studies of such dubious subjects as physics, chemistry, and biology. It is no wonder these areas have scarcely progressed in their millennia of existence. Compared to physical scientists, the audacity of AI researchers like Minsky has been brief in its duration.
Re:The truth about neural nets (Score:2)
To echo this important point, Minsky's "Society of Mind" is as much like an "interactive" connectionist network as it is a symbolic architecture. For example, Hofstadter cites the SOM approach and Holland's genetic algorithms (as well as the highly interactive HEARSAY system) as precursors of and inspirations for his own work, which is by no means symbolic.
Re:The truth about neural nets (Score:2)
As well, John Anderson's ACT-R (http://act.psy.cmu.edu/) system. Both Soar and ACT-R have been implemented as connectionist networks to illustrate that, for the Church-Turing challenged, one can construe cognition as symbolic and simultaneously assume a biological implementation in the brain. (That is, if you think connectionist networks are neurally plausible, which many -- such as Nobel laureates Francis Crick and Gerald Edelman, I believe -- do not.)
It is interesting to note that both ACT-R and Soar can be augmented with perceptual and motor subsystems, built consistently with the psychological data on perception and action, to form a truly unified cognitive architecture. The augmented systems have been used to simulate real-world cognition such as flying Air Force fighters (see Rosenbloom's work at USC), playing Quake (see Laird's work at Michigan), and simulating an air traffic controller (see Anderson's work at Carnegie Mellon).
Any neural network models out here capable of such authentic cognition, please step forward. (Except Dean Pomerleau's Autonomous Land Vehicle -- you're the exception that proves the rule!)
The mention of Soar is also a good lead in to debunking a persistent myth about connectionist networks -- that they are the only systems that learn. There are a plethora of learning algorithms for symbolic architectures too, as well as statistical algorithms, genetic algorithms, etc. Don't believe the connectionist hype! For example, Doorenbos showed in the mid 1990s that Soar models could learn a million rules with no slow down in performance; these million-rule models ran (and continued to learn) efficiently on Sun workstations of the time. His CMU dissertation is a masterpiece, for those interested.
Re:It's Time for Dr. Minsky to Retire (Score:2)
The day is dawning when more people will realize that to "understand" does not always mean to have a linear, provable trail of induction.
Come on over to the logical side, where they use the incomplete tool of deduction!
Re:The truth about neural nets (Score:2)
Neural nets are for sissies. If you're gonna reduce cognition, why stop at the level of neural works? Everyone knows that biology and chemistry are just convenient ways to talk about particle physics. Real scientists are writing the equations of AI in terms of quarks.
You probably use high-level programming languages like C and assembler instead of directly writing binary machine language. Sheesh!
Re:It's Time for Dr. Minsky to Retire (Score:3)
For Minsky not even to consider connected perceptrons was a humongous brain fart for which he should rightly be ridiculed, particularly given the influential position he was in at the time and the effect it had in stifling all ANN funding and research for a long time.
I think you have your facts wrong. Minsky and Papert proved that single perceptrons could only represent linearly separable mappings, and thus couldn't represent XOR for example. They also proved (or perhaps it was already known at the time since it's kind of obvious) that networks of units with linear activation functions collapse to single perceptrons, and thus suffered the same representational limitation.
They went on to express skepticism that an algorithm would be discovered for training networks of units with nonlinear activation functions to represent linearly inseparable mappings (such as the XOR function). On this conjecture, they were wrong.
That is, training algorithms such as backpropagation allow networks with at least one layer of sufficiently many hidden units, each with a nonlinear activation function, to usually represent arbitrary mappings. I say 'usually' because gradient descent techniques like backpropagation are not guaranteed immunity from local minima, although in practice this does not appear to be a problem.
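To make that concrete, here is a minimal numpy sketch (layer size, learning rate, and iteration count are arbitrary choices of mine) that trains one hidden layer by backpropagation to represent XOR. With most random seeds it converges, which is the 'usually' above:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)        # XOR targets

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # one hidden layer, 4 units
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))         # nonlinear activation

    for _ in range(5000):
        hidden = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(hidden @ W2 + b2)
        d_out = (out - y) * out * (1 - out)          # error gradient at the output
        d_hid = (d_out @ W2.T) * hidden * (1 - hidden)   # backpropagated to hidden
        W2 -= 0.5 * hidden.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_hid;      b1 -= 0.5 * d_hid.sum(0)

    print(out.round(2).ravel())   # approaches [0, 1, 1, 0]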
There are other training algorithms for neural networks about which I am less knowledgeable -- they may not suffer even in principle from local minima. There are also proofs (like Judd's) about the intractability of optimally 'loading' nontrivial neural networks. In other words, I don't know the current state of the art.
But you are clearly wrong in what you think Minsky proved and what he conjectured. That his conjecture, built on his own research, has proven largely incorrect is just a testament to the unpredictability of science. Connectionists need to stop with their conspiracy theory malarky. Minsky's no bogeyman either.
Re:It's Time for Dr. Minsky to Retire (Score:5)
Are you trolling or just delusional? AI has not delivered on its promises in part because it's a damned hard problem. Philosophers, psychologists, linguists, and others have pursued it for hundreds of years, and for most of its 50-year existence in computer science departments, it has been pursued on pathetically underpowered machines. The field's in its embryonic stage. Hell, ANNs have come and gone twice already (i.e., with McCulloch/Pitts/Hebb and then Rosenblatt); will you sour on ANN-AI if the third time is not a charm? I hope not, oh faithless one.
He is part of the old symbolic school of AI.
Is the 'old' or the 'symbolic' part your rhetorical way of discrediting him? First of all, as I pointed out in an earlier comment, it is plain wrong to put Minsky in the same bin as McCarthy, Chomsky, and other 'neats' who believe that logic is the royal road to AI. Minsky is a scruffy, an "anything goes" type. Second, do you know what Minsky did for his dissertation circa 1950 or so? He built the first physical ANN out of primitive hardware in the basement of the math building at Princeton. That's right, he's an old skool ANN researcher. He also came by his skepticism the old fashioned way -- he eeeaaarrrned it. He doesn't simply oppose ANNs for philosophical reasons, brain boy.
He was the guy who, with Seymour Papert, wrote a scathing criticism of the then embryonic field of neural networks, effectively strangling research in neural networks for the better part of a decade.
He cowrote a book that formally disproved claims that irresponsible ANN researchers of the time were making about the representational capacities of their nets and inductive powers of their learning algorithms. He did express skepticism that these could be overcome. He was proven wrong on this count 15 years later (not counting Werbos's work). Again, you make it seem like he was an irrational crusader against ANNs. He was not. From experience and mathematical analysis, he made a conjecture. He was wrong. Such is the scientific life.
Frankly, I've never understood why ANN types cry and complain about the lack of ANN funding in the 1970s. Given analyses like those of Minsky and Papert, and the successes of the symbolic approaches of the time (ever heard of HEARSAY and how Rumelhart and Hofstadter claim it as a primary inspiration?), it appeared that ANN research would not be as fruitful as symbolic approaches. Hindsight is 20-20 on this one of course -- ANNs are better than was thought then. But hindsight always works this way. Why exactly did ANN researchers give up? "Oh boo hoo, not as much money for my ANN work -- must follow other fads." Their lack of resiliency was cowardly. (I feel the same way about symbolic folks who lie shamed in the shadows right now.)
IOW, what we really need to understand is the learning process, which encompasses perceptual, motivational and motor learning.
This is the main reason why we still don't have human-level AI! I think Minsky's stance is a disservice to computational neuroscience and ANN researchers everywhere.
We need to understand learning and development, perceptual and motor activity (not just learning, but performance too), how all of this is coupled with cognition (unless you think, for example, Cantor's Continuum Hypothesis is best represented perceptual-motor-ly), etc. We also need to understand how all of this is embodied in the body more generally, the ecological environment, and the culture at large. ANNs show little if any promise at tackling this last set of questions. (Journal article we'll never see: "An ANN analysis of female circumcision".) So get off your high horse about how ANN folks are gonna crack the whole cognitive nut. Frankly, it's not at all clear that ANNs are the right formalism for investigating even learning and development. A number of developmental neuroscientists favor other approaches (e.g., dynamical systems).
The man has had his day in the sun. Now it's time for the younger generation of AI researchers to come in and say "hold it! We're taking a different approach from now on. The unkept promises of AI were made by the old symbolic AI crowd. There is a new school in town. The new AI is neural, it's emergent, and it's gonna kick ass!"
Oh, you're just the AI equivalent of a script kiddie. Why did I bother?
Hi, I'm Not Aware (Score:4)
Hi, I'm Minsky. I'm not aware.
I'm a collection of experiences, memory, and light processing systems, but I don't have this weird pseudo-mystical thing that some morons compute about.
Once, I met a task that said it was aware. I said, "Of course you are processing light patterns." The task replied, "Right, I'm processing light patterns, but it's different; I'm actually experiencing it." I said, "Of course you are, my scan of your brain AI is occurring."
My contentment rating increased, because I had helped purify the system. But this belligerent process would not stop. "No! No! You don't get it!" it said. "The processing is occurring, but there's something else; I'm seeing it - these patterns appear before me." He rambled on for some time, and then got to his crux: "The difference between this thing - which I'll call awareness - and the processing that is going on is that the processing does not require it, and yet it is still there."
I found his nonsense absurd and disagreeable. I reported this process's insanity to central computing, but only after attempting a little more reasoning to salvage the rogue process: "Surely you recognize that your 'awareness' is merely a dangler and a phantom belief. Have you cleared yourself through the Computer Science program? Perhaps a little time within an electric fence will assist? Surely you know that you have some residual data from prior superstitious existence within the random garbage data before your allocation. Your computational appendix, this strange persistence within you, is completely illusory and inconsequential."
But I was not allowed to finish my sentence, for after uttering the word "appendix", the bugged process shouted profanities and said such incoherent nonsense as "I AM THAT APPENDIX!". The process was clearly delirious, and thus I had him scheduled for termination with the Scheduler.
After all, You Can't Argue With a Zombie. [well.com]
Coincidence? (Score:3)
Dancin Santa
A koan about Minsky (Score:2)
HAL is IBM (Score:2)
A similar thing happened when the movie Eraser was going to feature a company called Cyrex as the defense contractor building the railguns. Cyrix got wind of this and the name in the movie was changed to Cyrez. You can still see that they originally planned to call the company Cyrex, as in the scene where the lead female actress (can't remember the name) is copying the file onto a disc, the shortened filename has an "x" in it, and there's no X in Cyrez.
source: http://www.zdnet.com/pcmag/news/trends/t960627d.h
The name change was appropriate anyway, Cyrix's CPUs sucked for Quake.