Marvin Minsky: It's 2001. Where is HAL?

ZigZag writes: "Marvin Minsky speaks about everything important (MUDs, education, AI, N(atural) I, immortality) while fighting with his MS Word for Mac presentation slides at the Game Developers Conference. Transcript, audio and video are available from Dr. Dobbs. It was in part a preview of his upcoming book The Emotion Machine. Some quotes from the talk will give you a feel: "Whenever you see a number, you should say `how sad'"; "Have you heard the theory that to learn something you should do it in little bits and not stay up all night working on it? If that were true, there would be no computer games"; "robotics people treasure their videos - because it won't work tomorrow.""
  • by Anonymous Coward
    This is much better - and older. http://bots.internet.com/pcai/article8.htm
  • by Anonymous Coward
    ... a Beowulf cluster of these?

    Thank you.

    --Patrick Bateman, Esq.
  • by Anonymous Coward
    Using Word for Mac as a presentation program? How can one be so bright and at the same time be so dumb?
  • 666
  • "If all you have is a hammer, everything starts looking like a nail."

    Another example of a less-than-useful application of math is the absurd idea of building a "proof" for a computer program, which is totally useless for any real-world program.
    --

  • Yep, I submitted the Minsky lecture two weeks ago

    http://slashdot.org/article.pl?sid=01/05/27/1724226&mode=nested [slashdot.org]

    plus another link to an article by an expert system builder who wrote about how a real HAL ought to work.

    I must admit that I get very tired of the Slashdot submission process. I submitted a long list of entries this year, and only two or so were posted to the Slashdot frontpage.

    Not that I expect my taste to be 100% compatible with the taste of the editors, but as they never give the slightest feedback on why an article was rejected, this reduces my motivation to submit IMHO interesting bits to Slashdot more and more.

    The only alternative I see to Slashdot, next to establishing another service, is Kuro5hin. Alas, the fellow geek crowd is centered around Slashdot. The Kuro5hin crowd, as it is these days, seems less technical/hacker to me. Fewer engineers, more socio/artist type of folks there.

    The Kuro5hin submission process is much better, but it tends to duplicate discussions. First matters are discussed in the pre-run mode, and then stuff is moved forward to the main page, where discussion takes place a second time. This takes too much drive out of discussions, I believe.

    So come on, Slashdot editors, if you reject a submission, please send an email with a word or two of explanation. It is very hard to judge from a rejection alone what is not OK with a submission.

  • Indeed. People can make fun of Minsky and the other proponents of classical strong AI, but they should remember that connectionists also made such claims. I have a book from the mid 1980's, "Apprentices of Wonder: Inside the Neural Net Revolution", that made all sorts of silly predictions, like intelligent cars that drive themselves (not as research projects, but commercially available) by 2001.
  • by Jonathan ( 5011 ) on Friday June 08, 2001 @04:44PM (#165078) Homepage
    The part that literally floored me is "where you're hoping you won't have to figure anything out,"

    I'm no fan of old-school AI, but Minsky has a point -- people use genetic algorithms and neural nets to "learn" from examples, but such pattern matching tells us *nothing* about how learning really happens. They are just generic black boxes that people throw at data in the hope that something useful comes out.
  • HAL takes place 10-12 August 2001 on the campus of Twente University in the Netherlands. HAL is Hacking At Large, a gathering in the tradition of HEU and HIP. Camp outside, bring your PC, have a fast uplink to the Internet and a lot of nice people around (you can have both at the same time!). Website: http://www.hal2001.org/ [hal2001.org]. Spread the word [hal2001.org] and spread the banners [hal2001.org].
  • The man has had his day in the sun. Now it's time for the younger generation of AI researchers to come in and say "hold it! we're taking a different approach from now on. The unkept promises of AI were made by the old symbolic AI crowd. There is a new school in town. The new AI is neural, it's emergent, and it's gonna kick ass!"

    Crap. The "emergent AI" stuff that's been demonstrated to date has had limitations just as profound and seemingly fundamental (but different) as traditional symbolic AI. Amongst others, it has scaling problems of its own when you try to build more complex emergent systems.

    Not that it isn't useful and interesting research - it will undoubtedly produce some interesting production systems, and might give us some pointers along the road to HAL - but don't claim that the bright new future is just around the corner as soon as we take off the shackles that the neats are placing on the scruffies.

    Go you big red fire engine!

  • You leave out some important information about Minsky's paper with Seymour Papert. They found that a simple type of neural network called a Perceptron could not determine whether a type of image that looks like a spiral is connected or not (is the image made of one spiraled line, or two spiraled lines?)

    At that time, neural networks were brand new, and the later advances in the 1980's weren't even conceived. It turns out that more complex networks CAN determine if the image is connected or not. The paper was not about those more advanced networks, just Perceptrons. There was nothing wrong with his findings. If anything, it was an overreaction on the part of the AI community to his paper that shut down neural network research. If you blame Minsky, then you've got to blame everyone else who basically read the paper and gave up for 15 years.

  • HAL 2001 will take place in august in Enschede, the Netherlands.

    see www.hal2001.org [hal2001.org].

  • Fuck!

    What are all those fascists/racists/assholes doing here? That's not the only time that I see displaced posts.

    I thought the /. audience were educated, intelligent and tolerant people.

    Hmmm...
  • The idea that man is equal to God, and believes he can put himself on an equal footing with the Divine Creator himself is just the sort of ridiculous notion that could only come from the USA.

    that would be the greeks by way of the french. try praying when noisy people are citing Voltaire on their cell phones...
  • I think that Minsky's point was that a normal GA doesn't remember what it has done. There is nothing stopping it from evaluating the same point in the search space a million times, because it has no way of remembering that it has been there before.

    If I interpret you correctly you are suggesting that by having "activation genes" you can close off large areas of search space in one gene. It's an interesting idea; I wonder how hard it would be to actually implement, however. The benefit "meat space genetics" has is that it is actually capable of adapting - I mean adapting by adding new "rules" like these activation genes.

    Once GA can do that, then we'd have real evolutionary programs. The current method is mainly (as I in my naive view see it) a semi-random walk through a search space. It works, though (for some things), so it's apparently a good idea.
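    (A minimal Python sketch of the "remembering where you've been" idea - the fitness function and parameters are made up purely for illustration. The GA caches every genome it has already scored, so it never pays to evaluate the same point in the search space twice.)

        import random

        def fitness(genome):
            # Toy fitness: count the 1-bits (a stand-in for a real, expensive evaluation)
            return sum(genome)

        def evolve(pop_size=20, genome_len=16, generations=50):
            seen = {}  # genome -> fitness: the GA's "memory" of visited points

            def score(genome):
                key = tuple(genome)
                if key not in seen:            # only evaluate points we haven't visited before
                    seen[key] = fitness(genome)
                return seen[key]

            pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=score, reverse=True)
                parents = pop[:pop_size // 2]
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, genome_len)
                    child = a[:cut] + b[cut:]          # one-point crossover
                    i = random.randrange(genome_len)
                    child[i] ^= 1                      # point mutation
                    children.append(child)
                pop = parents + children
            return max(pop, key=score), len(seen)

        best, points_visited = evolve()
        print(best, points_visited)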
  • in which I've reviewed the talk Minsky gave. The same review has been posted on comp.ai and comp.ai.philosophy, too. You can find it in the Media section in k5, titled ' Minsky's "Programs, Emotions and Common Sense" '

    Thanks!
  • This Minsky guy seems to be promoting the discredited 'strong AI' hypothesis. Well, that would seem to render 1000's of years of religious insight redundant. But then, what would we expect from someone from MIT.

    Hmmm... 1000's of years of religious insight would have the Sun rotating around the Earth, the world being flat, there being no such thing as dinosaurs, and the Universe being finite and Earth the only thing in it (a principle that was only punctured when someone pointed out that this would put limits on God).

    Simon
  • Suppose you think you've created consciousness. How could you know?
  • With humans you can at least assume, based on behavioral and physiological similarity, that consciousness is present, fairly safely.

    Unless you want to grant consciousness-status to everything, in which case the problem is already solved, you still need to answer how you will know when you have created it successfully.

    You're the one who suggested replicating consciousness. I want to know we'll know when we've done it.
  • To what, exactly, will your hardware human be behaviorally identical? Other humans? Even wetware humans aren't behaviorally identical to one another.

    Absent a conclusive test for consciousness, how will you know whether your hardware human is simulating most of my behavior but lacking some accuracy in the simulation of my brain that causes the simulation not to be conscious? I believe that there is no way to know such a thing.
  • .. and it wasn't even that interesting the first time around.

    Maybe the /. editors can recommend a better tech news site, since they're obviously not reading their own!
  • Uh, that'd be Rodney Brooks. Nowadays he's working on a humanoid robot called Cog - which still uses his bottom-up subsumption architecture, and IMO seems to be a bit of an "anthropomorphic robotic grant troll"! I think he may have some plans for adding representation and cognition, or maybe I'm just thinking that he *should*!

  • Yeah, but the symbolic part of it itself isn't a hard problem - Allen Newell's SOAR already does pretty much everything you could hope. Who (other than a neurologist) cares if the implementation is itself symbolic rather than based on connectionist building blocks?

    The hard part of creating a real artificial intelligence is the perception/representation/cognition bootstrapping part of it, and it requires an embedded approach that Minsky ignores.

    I disagree with you about Strong AI requiring a low level neuron simulation - IMO consciousness is a result of high level architecture, not Penrosian low level specifics! I believe it's just an "inward looking sense" - a feedback path.
  • Minsky's put-down of perceptrons was incredibly short-sighted, and not simply reflective of the state of knowledge at that time.

    Minsky was just arguing that a perceptron could not compute an XOR function, and the reason he was wrong is simply that he didn't consider that you might connect one perceptron to the output of another.

    For Minsky to not even consider connected perceptrons was a humungous brain fart for which he should rightly be ridiculed, particularly given the influential position he was in at the time, and the effect it had stifling all ANN funding and research for a long time.

    P.S. Sure, the backpropagation learning algorithm had yet to be invented (although nowadays it seems trivially obvious as a dynamic programming heuristic approach), but that is an entirely separate issue from not even considering connecting two together!!!
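    (For the curious, here is roughly what that limitation looks like in code. This is only an illustrative Python sketch, not Minsky's construction: a single perceptron trained on XOR never gets all four cases right, while two perceptron-like units feeding a third - a tiny two-layer net trained with backprop - does fine, though it may occasionally need a re-run to escape a bad random start.)

        import math, random

        data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

        # Single perceptron: a linear threshold unit.  No straight line separates
        # XOR's classes, so perceptron learning keeps cycling forever.
        w, b = [0.0, 0.0], 0.0
        for _ in range(1000):
            for (x1, x2), t in data:
                y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                w[0] += 0.1 * (t - y) * x1
                w[1] += 0.1 * (t - y) * x2
                b += 0.1 * (t - y)
        print("single perceptron:", [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])

        # Two perceptron-like units feeding a third (a 2-2-1 sigmoid net with backprop).
        sig = lambda z: 1.0 / (1.0 + math.exp(-z))
        W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]   # hidden weights (+ bias)
        W2 = [random.uniform(-1, 1) for _ in range(3)]                       # output weights (+ bias)
        for _ in range(20000):
            (x1, x2), t = random.choice(data)
            h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + W1[j][2]) for j in range(2)]
            o = sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])
            d_o = (o - t) * o * (1 - o)                       # output error signal
            for j in range(2):
                d_h = d_o * W2[j] * h[j] * (1 - h[j])         # hidden error signal
                W1[j] = [W1[j][0] - 0.5 * d_h * x1, W1[j][1] - 0.5 * d_h * x2, W1[j][2] - 0.5 * d_h]
            W2 = [W2[0] - 0.5 * d_o * h[0], W2[1] - 0.5 * d_o * h[1], W2[2] - 0.5 * d_o]

        def predict(x1, x2):
            h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + W1[j][2]) for j in range(2)]
            return round(sig(W2[0] * h[0] + W2[1] * h[1] + W2[2]))

        print("two-layer net:", [predict(x1, x2) for (x1, x2), _ in data])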
  • You'd get a cluster that emulates slashdot :
    75% 'frist p0sts'
    23% 'goatse.cx'links
    1.5% reposts
    0.5% uninformed speculation
    0% work done.

    ** Windows has detected a mouse movement.
  • actually, I think he was trying to effect a semblance to the aura that is "Tron". I could be mistaken, however; it's been years since I've seen that film.

    -------
    CAIMLAS

  • There is another point of view (which I believe is more adequate), which boils down to mind and intellect as a fairly sophisticated adaptation function (or tool), so to implement AI we have to start with a very simple machine capable of interacting with its environment, learning, adapting, and evolving.

    This point of view is missed by a lot of AI researchers, I think, because they're thinking in terms of numbers and theorems, and not the actual human experience. Separating mind from the body is IMHO a big mistake, and I absolutely agree with you it's wrong.

    Those of you who aren't involved with Neural Networks might find it interesting that almost all research into computerized neural nets stopped when it was proven that a basic perceptron (what you typically visualize when you think of a "neuron") couldn't distinguish an XOR function, i.e. nonlinearly separable data. Of course, this was a pretty simplistic way of looking at it, and growth in the field is exploding now.

    I had a big revelation in terms of working with neural nets when I stopped thinking about the math a little bit, and asked myself: If I was this little robot/program/whatever, what would I see? How would I find a pattern in the data I was presented through my senses? (e.g. an analog-digital converter connected to a light meter).

    We might not be on the ball for 2001, but give it a year or two. :)

  • by QuantumG ( 50515 ) <qg@biodome.org> on Friday June 08, 2001 @04:26PM (#165098) Homepage Journal
    here [slashdot.org]. Thanks for playing.
  • by 1010011010 ( 53039 ) on Friday June 08, 2001 @05:19PM (#165099) Homepage
    I thought that article looked familiar. And yep, posted on Slashdot not too long ago.

    http://slashdot.org/article.pl?sid=01/05/27/172422 6 [slashdot.org]

    - - - - -
  • by 1010011010 ( 53039 ) on Friday June 08, 2001 @05:21PM (#165100) Homepage
    Slashdot: Reuse, Reduce, Regret. Er, Recycle.

    - - - - -
  • As much a fan as I am of nature and the wonders it's created, I can't help but think it could have done a better job on neural tissue. Specifically, the speed at which impulses are conducted could probably be much improved were our systems to be redesigned from scratch.

    So yes, I think computers are faster, and in the end will be able to outperform our wet systems, if we can only figure out how to do it.

  • I'm no fan of old-school AI, but Minsky has a point -- people use genetic algorithms and neural nets to "learn" from examples, but such pattern matching tells us *nothing* about how learning really happens. They are just generic black boxes that people throw at data in the hope that something useful comes out.

    I agree that the old ANN pattern recognition approach is a dead end. However, a lot has happened in neural networks in the last decade or so. We are learning a lot from neurobiology. We are learning that signal timing in biological networks is crucial to learning and motor skills. One of the important discoveries seems to be in the area of temporal correlations among spiking signals, i.e., determining whether signals are sequential or concurrent. If they are sequential, it appears (cf. the work of Dr. Henry Markram et al) that the order of arrival is crucial. The time scale is on the order of milliseconds. The new spiking neural networks are so unlike the old ANNs that a new discipline has emerged, one which tries to distance itself from the old ANNers. It's called computational neuroscience (for those who don't keep up with progress in this area).
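    (To make the timing point concrete, here is a toy Python sketch - nothing like Markram's actual models, just a leaky integrate-and-fire neuron made up for illustration. Two input spikes arriving nearly together push it over threshold; the same two spikes arriving far apart do not, because the first has leaked away by the time the second arrives.)

        def lif_fires(spike_times, tau=5.0, weight=0.7, threshold=1.0, dt=0.1, t_max=50.0):
            """Leaky integrate-and-fire: each input spike bumps the membrane potential
            by `weight`; between spikes the potential decays with time constant tau (ms)."""
            v, t = 0.0, 0.0
            remaining = sorted(spike_times)
            while t <= t_max:
                v -= (v / tau) * dt                  # passive leak
                while remaining and remaining[0] <= t:
                    v += weight                      # an input spike arrives
                    remaining.pop(0)
                if v >= threshold:
                    return True                      # output spike: inputs were (nearly) concurrent
                t += dt
            return False

        print(lif_fires([10.0, 11.0]))   # spikes 1 ms apart -> True (fires)
        print(lif_fires([10.0, 30.0]))   # same spikes, 20 ms apart -> False (no output spike)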
  • When you have emergent phenomena like consciousness

    What evidence do you have that consciousness is emergent? I suggest we stick to the stuff (intelligence) we can observe (and somewhat quantify) and worry about consciousness later.
  • And your implication that he is somehow holding back the field of AI is not too plausible, either. It's hardly as if he is controlling AI research all over the world. He's not even controlling AI research at MIT. If neural networks haven't yet taken over the world, you can hardly lay the blame at Marvin's doorstep.

    It's a good thing that Dr. Minsky is not controlling AI research at MIT and elsewhere, although he tries. We do hear rumors of his close encounters with other AI researchers such as avant-garde roboticist Rodney Brooks.

    The symbolic AI camp has been at it since the fifties and they made a lot of noise over the years. They have failed miserably. Rather than lick their wounds and move on to more fruitful endeavors, they continue to uphold their failed approach through various funded projects and obsolete AI curricula that are being taught at major universities and AI centers around the world.

    The symbolic approach is dead and should be buried once and for all, in my opinion. Ultimately it will be but a footnote in the history of AI. The future of AI belongs to connectionism, the only model that has a chance of taming the otherwise intractable complexity of animal intelligence. We need fundamental perceptual and motor learning principles. We need fundamental principles of motivation. Once we formulate these all-important principles, we'll know how to apply them to billions of self-modifying cells working in parallel. Only then will human level intelligence become a reality. It will happen in our lifetime.
  • Certainly the symbolic logic guys were wrong, as wrong as those who thought neural nets would solve everything.

    To claim that it is wrong to think that neural nets would solve everything is to ignore the evidence in my view. The truth is that the brain is a collection of neural nets feeding into and relying on one another. Each net has a specific role to perform. Sorry, but this sounds very much like neural nets solving everything to me. Maybe you had something else in mind.
  • by Louis Savain ( 65843 ) on Friday June 08, 2001 @04:31PM (#165106) Homepage
    It is clear that AI hasn't delivered on the promises made over thirty years ago. What happened? In a preview of his upcoming book, The Emotion Machine, Marvin Minsky examines the failures of AI research and lays out directions for future development in the field.

    I used to be a Minsky fan (I still have a copy of his "Society of Mind") but not anymore. Marvin Minsky is one of the reasons that AI still has not delivered on its promises. He is part of the old symbolic school of AI. He was the guy who, with Seymour Papert, wrote a scathing criticism of the then embryonic field of neural networks, effectively strangling research in neural networks for the better part of a decade. I am sure Dr. Minsky has had occasions to change his views since but I don't think he has anything to offer that will lead us to HAL. The following is a quote from a Scientific American article [sciam.com] on Arthur C. Clarke's HAL.

    The novel of 2001 explains how the HAL 9000 series developed out of work by Marvin Minsky of the Massachusetts Institute of Technology and another researcher in the 1980s that showed how "neural networks could be generated automatically--self-replicated--in accordance with an arbitrary learning program. Artificial brains could be grown by a process strikingly analogous to the development of the human brain." Ironically, Minsky, one of the pioneers of neural networks who was also an adviser to the filmmakers (and who almost got killed by a falling wrench on the set), says today that this approach should be relegated to a minor role in modeling intelligence, while criticizing the amount of research devoted to it. "There's only been a tiny bit of work on commonsense reasoning, and I could almost characterize the rest as various sorts of get-rich-quick schemes, like genetic algorithms [and neural networks] where you're hoping you won't have to figure anything out," Minsky says.


    The part that literally floored me is "where you're hoping you won't have to figure anything out,". All along I'm thinking that intelligence is so complex and intractable that the most plausible solution to the problem of making a human-level AI is one where we let the AI emerge, grow and learn. IOW, what we really need to understand is the learning process, which encompasses perceptual, motivational and motor learning.

    But here comes Marvin Minsky, a luminary in the AI community, insisting that figuring everything out is precisely what needs to be done. Haysoos Martinez! This is the main reason why we still don't have human-level AI! I think Minsky's stance is a disservice to computational neuroscience and ANN researchers everywhere.

    The man has had his day in the sun. Now it's time for the younger generation of AI researchers to come in and say "hold it! we're taking a different approach from now on. The unkept promises of AI were made by the old symbolic AI crowd. There is a new school in town. The new AI is neural, it's emergent, and it's gonna kick ass!"
  • What is a picture of Al "Grandpa" Lewis [pscelebrities.com] doing in that article? All Minsky needs now is a cape.
  • No no no, according to the author Clarke that was not the case. As is demonstrated here:
    http://www.underview.com/2001/faqs/faqs.html#faqg [underview.com]

    "HAL". Something like Highly Advanced Lifeform, right?
    Well, almost. The answer is given in black and white in Arthur C. Clarke's book of "2001: A Space Odyssey", Chapter 16, which is titled (ahem) "HAL" (note that in this case the book gives a specific answer to a specific question, whereas in situations that are more open to individual interpretation I do not necessarily take solutions out of the book).

    Clarke writes:

    "Hal (for Heuristically programmed ALgorithmic computer, no less) was a masterwork of the third computer breakthrough."

    Note that, strictly speaking, HAL is not an acronym in the sense of being formed from the initial letters of separate words. In the film we only see it as "HAL 9000", but it is noticeable that in the book Arthur C. Clarke himself consistently refers to "Hal", not "HAL". So, therefore, do I.

  • Then they would already have seen this here [slashdot.org] a couple of weeks ago.
  • Minsky has the best approach. Throw all of the approaches into the same box. However, it's a shame that he is such a lousy speaker. All of the important information in his speech could have been put into about six sentences.
  • Dr. Minsky seems to miss the point as badly as he did when he first squashed the perceptron research. His form of AI will never be able to drive my car!

    There are many systems that are too complex to understand with mathematical or linguistic techniques. The day is dawning when more people will realize that to "understand" does not always mean to have a linear, provable trail of induction.

    The success of mathematics in physics has probably badly hurt many areas of research into less tractable phenomena, as its demand for solvable equation systems and/or mathematical induction is too simplistic. In the same way, much of AI research has focussed on those problems which can be proven mathematically, neglecting other, less elegant problems which may have direct, valuable solutions (like driving my car!)

    We will never have an equation for a cloud. We have the equations for the underlying physics, but the complex emergent phenomenon of a cloud is beyond formulation. To "understand" highly complex real-world issues has to mean developing an intuition (non-linear, imperfect, stochastic understanding) for it... and a neural-net or genetic algorithm can *be* that intuition!

    At the same time, I do not mean to belittle the traditional reductionist approach to "understanding." It has gotten us a long ways... but it simply is not a complete system, and that same approach, reversed through engineering induction, will never drive my car.

    We need *all* the techniques in AI, and more. Bring on the genetic algorithms (and make those darn FPLD's cheaper!); train the neural nets; parse those languages; and *drive my car!*

  • In other words, you won't?
    -- Andrem
  • Pure logic always fails in the real world. For instance, there are many men that have XX chromosomes. Perfectly normal and healthy men, I might add.

    - Steeltoe
  • I wish it would keep track of which messages I'd already read and not redisplay those unless I choose. Doesn't seem like that hard of a thing to code.
  • that one was buried. Not everyone reads every word of every article. I for one am glad they reposted it in a more prominent position.
  • (not as research projects, but commercially available) so we're getting there?
  • I didn't read it; should I not be reading this now?
  • NT developers know that HAL has been around for quite a while.

    /me ducks

    --

  • You mean Direct3D developers. HAL stands for Hardware Abstraction Layer in Windows world.

  • Sure, there are some problems it seems that only they can solve.

    Any examples?

  • I think it's very important to understand that there's no magic to consciousness ...

    The topology of the information processing membranes is more complex than we can sort out just yet, but there's nothing about the structure of the brain that's not duplicable by silicon hardware...

    Also important to notice is that to implement the human mind in hardware (as opposed to wetware), we'd need something on the order of a 10 teraflop supercomputer.

    These are very strong and specific claims that do not seem to have much foundation. I wonder where the numbers you are citing are coming from.

    Can you point to any specific evidence for that?

  • From the article:

    The real world is rather dumb and boring. I'm serious. In the real world, if you have a chair and somebody sits on it, it might break. That won't happen in the virtual world because the chair knows it's a chair and its job is to support something.

    .....

    In the real world nothing has a purpose except we try to make purposeless things do the best they can for us. And that's why the world is a mess. I'd much rather build Lego with a Lego simulator where you can press clear at the end and all the things pop back in their boxes.

    What a sterile and appalling outlook. How can you hope to understand intelligence, perhaps the most complex and intricate phenomenon known to people, with a worldview like that? If you strip life of its mystery, what do you have left?

    His attitude makes one wonder whether for some people computers are just ways of escaping the intolerable boredom of existence. The popularity of Tolkien, role-playing games, and similar imaginary-world themes with the computer crowd seems to provide at least some evidence for that.

  • The "nothing about the structure of the brain that's not duplicable" springs from the fact that neuron behavior is pretty simple in the broad strokes - the parts we don't understand are more related to dynamic interactions - which are also straightforward to implement in software.

    The behavior of individual neurons is very complicated and still not at all well understood. Computer models for even an individual neuron are fairly involved. The original Hodgkin and Huxley model (for which they received a Nobel Prize) for the giant squid axon is a system of coupled nonlinear differential equations. You can imagine how much computation would be involved in computing these for all ~10^11 neurons in the brain.

    An individual neuron might have tens of thousands of synapses through which it communicates with other neurons. The computational complexity of this is mind-boggling and, moreover, very little is known about how neurons communicate and form connections.
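    (To give a flavor of what even a single, space-clamped neuron involves, here is a rough Python sketch of the classic Hodgkin-Huxley equations - four coupled differential equations, integrated with naive Euler steps. The parameters are the standard textbook squid-axon values, and spatial propagation along the axon is ignored entirely.)

        import math

        def hh_step(V, m, h, n, I_ext, dt=0.01):
            """One Euler step of the Hodgkin-Huxley model.  V is membrane potential in mV
            relative to rest; m, h, n are the sodium/potassium gating variables."""
            # Voltage-dependent rate constants (1952 squid-axon fits; a_m and a_n have
            # removable singularities at V = 25 and V = 10, ignored in this sketch).
            a_m = 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
            b_m = 4.0 * math.exp(-V / 18)
            a_h = 0.07 * math.exp(-V / 20)
            b_h = 1.0 / (math.exp((30 - V) / 10) + 1)
            a_n = 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
            b_n = 0.125 * math.exp(-V / 80)

            # Ionic currents (conductances in mS/cm^2, reversal potentials in mV)
            I_Na = 120.0 * m ** 3 * h * (V - 115.0)
            I_K = 36.0 * n ** 4 * (V + 12.0)
            I_L = 0.3 * (V - 10.6)

            V += dt * (I_ext - I_Na - I_K - I_L)     # membrane capacitance C = 1 uF/cm^2
            m += dt * (a_m * (1 - m) - b_m * m)
            h += dt * (a_h * (1 - h) - b_h * h)
            n += dt * (a_n * (1 - n) - b_n * n)
            return V, m, h, n

        # Drive the model with a constant current and watch it spike repeatedly.
        V, m, h, n = 0.0, 0.05, 0.6, 0.32            # approximate resting values
        for step in range(20000):                    # 200 ms at dt = 0.01 ms
            V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
            if step % 2000 == 0:
                print("t = %6.1f ms   V = %7.2f mV" % (step * 0.01, V))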

  • the 'mystery of life' is an excuse for those who wish to remain ignorant.

    Lack of imagination is an unfortunate affliction.

  • Also important to notice is that to implement the human mind in hardware (as opposed to wetware), we'd need something on the order of a 10 teraflop supercomputer. We just don't have the hardware to pull that off yet.

    Sure we do. There are server farms with more than 10 teraflops. If strong AI could be achieved with a big cluster, there'd be big clusters doing it.

    If compute power were the problem, we'd have powerful AI systems that were really slow. That's not the case. We really don't have a clue how to do strong AI. The stuff that used to sort of work but was slow, like language translation and question answering, now works fast but isn't much smarter.

    Speed does make what AI capabilities we have more useful, because we can use them on dumber problems. Machine translation still sucks, but now that it's so cheap it's given away, it's useful just to find out if something is worth looking at at all. You can now afford to translate incoming mail to find out if it's spam. The same goes for question-answering at the "Ask Jeeves" level. It's not very good, but it's cheap.

    To see where classical AI went to die, go to the second floor of the Gates Building at Stanford. There, below gold letters reading "Knowledge Systems Lab", are empty cubicles, obsolete computers, and tables with ancient copies of Wired. It's depressing.

    Game AI, on the other hand, is steadily getting better. It's generally non-verbal and grounded in a semi-physical world. That's probably why progress is possible. The gamers are definitely gaining on the academics, and are probably ahead at this point.

    Eventually, game characters will have enough of a life that they'll have something to say, and then we may start to see a path to developing strong AI. But it's a ways off.

  • "Cog" ... "anthromorphic robotic grant troll"

    Others share that opinion, including some of his grad students. The amount of hype (Newsweek cover, TV specials, and a movie) about that project is excessive for the results obtained.

    What they seem to be developing is technology for faking emotional behavior. This came close to being a commercial product, a microprocessor-controlled doll, sort of like a Furby with facial expressions. [wired.com] This was supposed to be a joint venture with Hasbro, but apparently didn't ship. The site of IS Robotics [isrobotics.com], Brooks' startup, seems to be a defunct server.

    Behavior-based robotics is interesting, but without some environmental modelling and short-term planning, you'll never get above the insect level. Feedback can only take you so far. Feedforward, though...

  • The idea that man is equal to God, and believes he can put himself on an equal footing with the Divine Creator himself is just the sort of ridiculous notion that could only come from the USA.

    It's a very Japanese idea. This is reflected in Japanese interest in robots, both in manufacturing and in anime.

  • My parents have an early 1970's edition of the World Book Encyclopedia. In the "Computer" entry, it states that computers will keep on getting larger and smarter. By the year 2000, we will have computers the size of skyscrapers, and they will be able to think and speak.

    Maybe that's the problem: we've been making computers too small. Minsky should look into creating some skyscraper-sized machines.

  • you know, this is funny, but I believe the whole notion of intellect or mind as a completely autonomous system (hence with a potential of being implemented as some super-complex program) has religious roots, in particular Christian roots. It is 'mind vs body'. Mind can exist outside the body. So having this as an axiom, why not try to implement the mind as AI? I wonder if there are religions which treat mind and body as inseparable (?)
  • Minsky's concept of mind and intellect is too simplistic and often plainly wrong. Of course slashdot is not the proper place for in-depth discussion, but from what I've read I have the impression that according to Minsky, mind is something completely autonomous, with its own absolute (logical) rules, something you can implement in a machine (hence the whole notion of AI). There is another point of view (which I believe is more adequate) which boils down to mind and intellect as a fairly sophisticated adaptation function (or tool), so to implement AI we have to start with a very simple machine capable of interacting with its environment, learning, adapting, and evolving. I believe someone (also with MIT roots) is doing just that, and quite successfully. Minsky separates the mind from the body, from the environment, from the whole human experience, and this is plainly wrong. No wonder AI (according to him) is a dead end. Incidentally, I remember reading one of his books where he tried to explain the notion of 'humor in music', in particular in some Beethoven composition, using mathematical analysis. Oy vey.
  • Sure it's possible to make something without understanding it 100%, but that's not a very useful approach if the goal is understanding. That's the point. There are a lot of people working in AI whose goal is not AI per se but rather an understanding of the way that the human mind works. To them it's not particularly useful to make a human-level AI if it's nothing but a black box. To them, a lower level of AI is more valuable if it comes with greater understanding.

    Quite frankly, I can see exactly where they're coming from. We're already surrounded by more human-level intelligences than we can find productive employment for. Why go out and create more artificial ones that are likely to suffer from exactly the same sorts of problems, only at much greater expense, unless we actually get something useful out of it in the form of increased knowledge?

  • The part that literally floored me is "where you're hoping you won't have to figure anything out,". All along I'm thinking that intelligence is so complex and intractable that the most plausible solution to the problem of making a human-level AI is one where we let the AI emerge, grow and learn. IOW, what we really need to understand is the learning process, which encompasses perceptual, motivational and motor learning.

    Well, to some extent that depends on whether your goal is to make human level AI or to use AI to help your understanding of NI. A lot of AI researchers are really more interested in the second than they are in the first, so it's understandable that they're not too happy with neural nets. You don't get very far in understanding the mind if you simply copy it in silicon instead of carbon. It would be much more interesting and instructive to make an AI that didn't use neural nets, because doing so would necessarily imply that it was possible to abstract some deeper core of how intelligence works. Of course it's entirely possible that doing so is a false dream, and that what we think of as intelligence is deeply dependent on the structure of the medium in which it is encoded. But we can't find that out without trying to build minds in some way that is fundamentally different from neural nets.

  • ...that one doesn't understand 100%. The brain is a design we know works. It's been stress tested for a million years. Maybe we should just go with that for now and work out the less tractable problems when we can. Because having hardware humans who don't have to die (and are thus concerned about the future), who don't have to eat (and thus use fewer natural resources), and yet can be friends (and more, one assumes) with us bio-humans might be really nice.
  • I agree totally. Implementing individual neurons in code is not likely to be necessary. But I was getting so wrapped up in trying to write a pretty paragraph (in a hurry) that I went overboard. I just hoped no one would call me out on it so I didn't have to explain myself.

    And yeah, Minsky isn't exactly the leading edge any more (I don't ever cite him). I probably shouldn't really have defended him so vigorously - the neural net comments just brought me out.
  • I should be clear - the familiar "I am a man. All men have Y chromosomes. Therefore, I have a Y chromosome" type of logic is of course uninterestingly tractable. The logic represented by the topology of some of our neural gadgets is not so trivial. Their linkages to one another are less than trivial as well. Still, those are just engineering problems.

    However, we won't really be able to abstract the high-level from the low-level (between which there is no real dividing line) unless we understand the topology well enough to know which dynamics can be idealized (or formalized, if you will) and which depend more directly on their fuzzy nature.
  • When you have emergent phenomena like consciousness, replicating it is a good way of making sure you haven't left out anything important. It means that anything you've changed from the original wasn't essential. It's a way to confirm or disconfirm guesses. Of course, the ugly truth of this is that we're almost sure to make a number of catatonic, retarded, or psychotic AIs before we make our first happy, well-adjusted hardware human. On the other hand, we are sure to learn from our failures.

    Even when we do have a working person, even if we had to simulate each individual neuron, at least now we'll be able to look at what's going on in the brain with more detail than ever before - we can track the contents of memory (as in, neural firing dispositions) without sticking a scalpel in someone's head.
  • How do you know your parents are conscious? Your best friend?

    I expect the same reasons will apply to Strong AI.
  • It depends on in what way you mean to challenge me. I can easily back up what I said if you mean to suggest that some homunculus-like structure could just as easily be responsible for consciousness. If you're simply expressing skepticism about my apparent functionalism regarding consciousness, then I will simply refer you to Daniel Dennett, since I couldn't really do that topic any justice here.

    However, if you bridled because "emergent" implies a certain "accidental" quality, then I think you have a good point. Consciousness is not an "unintended" side-effect of brain activity. I didn't mean to imply that. I definitely agree that the brain is designed to support consciousness and that "consciousness" and "intelligence", while not isomorphic in their ambit, are not separable either.
  • I won't what?
  • There's no such thing as "powerful AI systems that are really slow". An agent like a human or an AI interacts with the world in real-time, learns from it in real time.

    An AI without a world to live in and learn from would of course be catatonic. An AI too slow to build a useful internal representation of its current situation before the situation changes (thus making the representation worthless) is going to be either catatonic or a moron.

    Also, as I mentioned, the human brain is a collection of gadgets, implemented in a big web of neural processing with very complex informational topology. We may be a while yet reverse engineering some of the most clever ones. Maybe it will be much more than the 20 years I give. Maybe it won't be that long. It depends on how important the fine details are and how easy it is to come up with functionally equivalent "gadgets" that work as well as the brain's more difficult-to-copy architectures.

    The important thing here is that we do indeed have some idea how to do Strong AI. To tinker with those ideas until we build something that really does seem a little more intelligent in that wacky "emergent" sort of way, we need some faster hardware. Single-application heuristic gadgets like language translation* are forever going to be bad unless they are embedded in a larger system that can give realtime feedback.

    The reason game AIs get better is because the game programmers have more room to work in terms of both time(MHz) and memory. Also, as with ant-colonies, a collection of not-very-bright creatures can make for some pretty intelligent communities.

    In the end, though, I like your intuition that agents need to have a life to have something to say. Exactly, I say. Exactly.

    In conclusion, yes, it IS a hardware problem, among other things. Without proper hardware we can't expect any actors to have much of a life. And even if we did have the proper hardware tomorrow, we'd still have years of tinkering ahead before we put all the pieces together in a way that works. But right now, even our tinkering is somewhat hobbled by lack of hardware.

    * Language translation is probably not a good example because it does not seem likely that any such thing exists in our heads. It's something we're trying to build because we WISH it existed in our heads.
  • I don't mean to say that when we do get AI up and running that we'll never be able to get it to work better than the wetware mother nature gave us. I was just trying to explain why, even though even an 8MHz 286 proc can do math a million times faster than we can, it can't do what our brains do very fast at all. And as I mention below, doing what our brains do slowly is worthless.
  • make that, "...doing what our brains do, but slowly, is worthless."
  • Neural nets solve the problem of consciousness like microchips solve the problem of personal computing. You can't just throw a bunch of chips on a board, run electricity through them, and voila! A PC! They have to be chosen for their tasks, the data-paths arranged, and so on. The brain is the same way, but about a thousand times as complex.
  • The "no magic to consciousness" part is supported by Occam's razor, since there are alternate explanations that require fewer leaps of faith than "magic".

    The "nothing about the structure of the brain that's not duplicable" springs from the fact that neuron behavior is pretty simple in the broad strokes - the parts we don't understand are more related to dynamic interactions - which are also straightforward to implement in software.

    The 10 teraflop number is an estimate based on the number of active neurons in the brain, their frequency of firing, and how many floating point calculations I think it would take to simulate a neuron with sufficient fidelity.

    If you have objections to what I said, I'll address them specifically. There's no "specific" evidence for any of these things, because the claims I'm making are drawn from more than one fact. If you're looking for others who make similar claims in print, I'm sure you'll find them with little trouble. Steven Pinker, Daniel Dennett, Francis Crick, Douglas Hofstadter, and Rodney Brooks are good folks with whom to start.
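    (For what it's worth, here is one way that back-of-envelope arithmetic could go. The per-neuron numbers below are guesses added purely for illustration, not figures from the parent post.)

        # Back-of-envelope: flops needed to update every active neuron in real time.
        active_neurons = 1e10      # assumed neurons doing useful work at any moment
        firing_rate_hz = 100       # assumed average update/firing rate per neuron
        flops_per_update = 10      # assumed cost of one "good enough" neuron update

        total_flops = active_neurons * firing_rate_hz * flops_per_update
        print("%.0f teraflops" % (total_flops / 1e12))   # -> 10 teraflops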
  • ...that we don't know if other bio-humans are conscious, then yes, we won't know if hardware humans are conscious. But then, that's just a position of general skepticism, not an indictment of AI. It (the position) has its champions in professional philosophy, but not very many, and is just about dismissed by most of the field.

    Unless you want to talk about philosophical theology. But, as Ronald De Sousa once said, philosophical theology is like "intellectual tennis without a net".
  • Since I submit that hardware humans will be behaviorally identical, that's half of your grounds for assumption right there. If the computer simulates the behavior of your physiology (neurons and neural bundles), then that's worth at least a half point as well.
  • ...for game programming, but I'm always happy to have people read Minsky because he tends to crack peoples' preconceptions about what is "obvious" about consciousness and AI and etc. Even better might be Daniel Dennett, author of "Consciousness Explained". Less philosophically sound (and ultimately less satisfying) but still very interesting is Steven Pinker.

    I think it's very important to understand that there's no magic to consciousness. It's not something shrouded in mystery about which we know nothing. In fact, we know an amazing amount about individual areas. The topology of the information processing membranes is more complex than we can sort out just yet, but there's nothing about the structure of the brain that's not duplicable by silicon hardware. We just have a lot more mapping to do.

    Also important to notice is that to implement the human mind in hardware (as opposed to wetware), we'd need something on the order of a 10 teraflop supercomputer. We just don't have the hardware to pull that off yet. The AI-related optimism of yesteryear was fueled by the misconception that computers are faster than humans. What's really true is that the "programming" that underlies the various gadgets in the mind is the product of millions of years of specialization at small tasks. We have fantastic motor-control gadgets and unparalleled pattern-recognition wetware, for example. Figuring out exactly how many animals are in 15,342 groups of 967 animals each was never all that important, so we never evolved any gadgets to carry out high-speed arithmetic. On the other hand, we're good at seeing how things divide out and how games might be played to our advantage. Idiot savants have been known to find extremely large prime numbers as if by magic - probably the same hardware put to an exotic use.

    So in 20 years (or so), we'll have the hardware, and maybe we'll have the information processing topology as well. Some intrepid researcher will put all that in a state-of-the-art cybernetic body. Then it'll be a matter of watching the first hardware human child grow up and meet the world.

    PS - I make some pretty bold claims here, and also cite a number or two that one might be expected to view with suspicion. I can back it up, just ask.

    Nato
  • by hypermanng ( 155858 ) on Friday June 08, 2001 @04:54PM (#165148) Homepage
    Neural nets, on their own, are not very smart in many ways. Sure, there are some problems it seems that only they can solve. But complex, multi-stage problems generally baffle them nearly indefinitely.

    The brain is not a big neural amalgam that gets to some critical mass and then suddenly starts doing stuff. It's wired. It's got gadgets. It's really a big collection of them. Some of them are damned complex, composed of sheets of neurons talking to each other in intricate, bewildering arrays.

    And modern Connectionists understand that. Certainly the symbolic logic guys were wrong, as wrong as those who thought neural nets would solve everything. But that's people like Fodor and Chomsky. The Minsky "Agent" model is very much on the "Connectionist" side of the map. That's not to say that I agree with everything he says, but I think you're unfairly blaming him for the mistakes of others.

    Symbolic logic, by itself, is no panacea, but neither is the neural net. I'm willing to bet that a lot of the interactions of various neural nets in the brain form very formal symbolic logic gadgets. Also, in the end, it is the formal logic of virtual-neuron microcode in a computer that will generate Strong AI.
  • Ohkay... I'll bite.

    This effort is by no means localized to the United States of America (or the Corporate Republic formerly known as... but that's a different story). [And even if it were, stereotyping, xenophobia, and the like are widely frowned upon.] Citation: Japan produces an incredible percentage of the hardware, software, and tools, and conducts a huge amount of the research in AI and AL fields. The USA is not alone.

    The 'strong AI' hypothesis has yet to be proven wrong - all AIs have had one real problem: they've only been run for a very short amount of time. Take a human, "run" it for a year, and tell me what you get. A pretty useless machine. Ten years, that's better. Thirty is even better. Show me an AI that's been up for thirty years, can you?

    ::sighs:: RMS, GNU/Linux, and the like are radical in that they have or are relatively new ways of viewing and dealing with information. Yes, they're somewhat leftist in nature. No, they are not communistic, as far as I understand. More a breed of anarchism than anything else, though personally I take issue with assigning a piece of software (GNU/Linux) a political standing (we don't yet have AI, after all :)

    Man coming closer to god(s)? I'm an atheist and am so perhaps somewhat unqualified to debate this point, but... why must there be a god? Why is it a ridiculous notion to propose that if life started up once (divinely or naturally), we could do it again? So we'll screw up, maybe. SO DID GOD. Sorry, but he did.

    Scientists should work on "practical things, which will help man get closer to god" - I should think all of biology, from ecosystems to molecular interactions, is an effort to get us closer to god: by understanding ourselves, we become more like Erdos's SF. The HGP seems to be but the latest and greatest example.

    Science in general is the attempt to understand our world. I would say that that matches your goals.

    --Begin Semisarcasm--
    Or do you wish science to stop its ongoing research and attempt to make these force-fields for you so that you don't have to worry about god disappearing? Like it or not, god can only be found in the increasingly small gaps of science.

    Oh, and as to people having cellphones being inconsiderate morons, well... it is not reasonable to force your demands on everybody around you, is it? Some people need communication - cellphones provide. Granted, vibrating is less intrusive, but that's really an issue beyond your control. Though I bet you'd make me take off my Tux hat if I came into your church. Funny that you can, but scientists can't force your priests to take off their robes if they enter a lab.
    --End Semisarcasm--

    Moderators: Yeah, it's a rant. And not even a very well thought out one. I apologize. Moderate me as you will.

    --Knots
  • What's Marvin doing looking for HAL? I didn't think he cared about ANYTHING... shouldn't he be moping about off somewhere calling HAL stupid 'cause his name sounds stupid?

    rest in peace DNA

    ----

  • (reaches into the dark recesses of my memory banks) That would be Rodney Brooks, who from, say, 1985 onwards, started playing around with a behavioural approach to robotics, using a subsumption architecture in which more urgent behaviours (like avoiding collisions) could subsume control of the robot from higher order functions ... and none of the behaviours communicated, or did heavy AI stuff like try to represent their environment. Loads of papers, many with great titles, here [mit.edu]. Great days.
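    (A toy Python sketch of the subsumption idea - my own simplification using simple priority arbitration rather than Brooks' wire-level suppression, with made-up behaviours and sensor names. Each layer watches the raw sensors; the most urgent layer that wants control takes the motors away from everything below it in the list.)

        def avoid_collision(sensors):
            if sensors["bumper"]:
                return "back up and turn"
            return None                      # defer to less urgent layers

        def avoid_obstacle(sensors):
            if sensors["range_cm"] < 30:
                return "turn away"
            return None

        def wander(sensors):
            return "drive forward"           # default behaviour, never defers

        LAYERS = [avoid_collision, avoid_obstacle, wander]   # most urgent first

        def control(sensors):
            for layer in LAYERS:
                command = layer(sensors)
                if command is not None:      # first active layer subsumes the rest
                    return command

        print(control({"bumper": False, "range_cm": 200}))   # -> drive forward
        print(control({"bumper": False, "range_cm": 20}))    # -> turn away
        print(control({"bumper": True, "range_cm": 20}))     # -> back up and turn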
  • by xmark ( 177899 ) on Friday June 08, 2001 @10:25PM (#165152)
    A fail-proof method for creating intelligence has already been developed...by Nature. Intelligence is now thought to be an emergent property - it arises naturally in certain kinds of self-organizing systems (like life) in which the ability to acquire and process information increases survival, and natural selection sorts out the best ways of doing it. That you are reading this and understanding it is proof that this mechanism can deliver the goods.

    About ten years ago, Rodney Brooks [edge.org] (also of MIT) flipped AI on its head with his "insect bots," which took a bottom-up (instead of Minskyesque top-down) approach. Brooks put a cheap microprocessor and servo motor on each of six "legs" of a lowly bot, and programmed each leg unit to do extremely simple things like check whether the leg was bumping against something, and if so, to lift it. Repertoires of behavior learned from the environment were then stored and re-used when similar stimuli presented themselves again. What happened after a short time was that far more complex behaviors than were programmed "emerged" from the collection of puny processors and actuators. With just a few lines of code, the damned things could navigate complex environments (like a back yard) that completely foiled Minsky-style bots run by minicomputers and millions of lines of instructions. (Brooks coined the phrase "fast, cheap, and out of control" to describe not only his bots, but the behaviors they "invented" by walking around.)

    George Dyson (Freeman's son) wrote a book a couple of years ago called Darwin among the Machines that is as good an explanation of machine-evolved intelligence as I've seen. It's packed with illustrative stories from both within and without the discipline. Look here [annonline.com] for Dyson's own commentary and some good links. Hans Moravec, director of Carnegie-Mellon's Field Robotics Lab, also writes very convincingly, if speculatively, about the evolution of machine intelligence, in his recent book Robot: Mere Machine to Transcendent Mind [cmu.edu]. It's a fascinating read.

    After what's been learned in the past decade about how machines can become intelligent, Minsky seems to me a bit like Lord Kelvin [st-andrews.ac.uk]. Kelvin made tremendous contributions to science, especially in the fields of heat theory and thermodynamics, but in his later years became mired in defending some pet theories that were way past their prime. He railed bitterly against Darwin, claimed the Earth was only a few million years old, and refused to accept radioactivity. One of his biographers observed that for the first half of his career he could do no wrong, and for the second half he seemingly could do no right. Minsky, alas, has in some ways shared this fate.

  • by the_other_one ( 178565 ) on Friday June 08, 2001 @05:03PM (#165153) Homepage

    Re-news for nerds. Stuff that matters and matters and matters...

  • http://www.hal2001.org
  • I like the interface, it's simple and to the point. It's one of the easiest websites to use.
  • we have Clippy®©Tm., why the hell would we need HAL?
  • Minsky said "What evolution and genetic algorithms don't do -tell me if I'm wrong- is keep any record of why all those poor losers died."

    While I think this is largely true and a good criticism of genetic algorithms, after listening to the book on tape The Age of Spiritual Machines - When Computers Exceed Human Intelligence by Ray Kurzweil I don't think this is completely true. Kurzweil writes about how the genes for the shape of the eye are protected by error correcting codes and repair mechanisms to a much greater extent than the genes that control, for example, the layout of rods and cones. Why? Because evolution has "learned" that messing with the shape of eyes is costly and there isn't much improvement possible while other details of eyes can be improved or adapt to changing circumstances.

    Anyone who knows more about genetics want to comment on this?

    -ken kahn
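    (Not a genetics answer, but the software analogue is easy to sketch: a GA where each gene carries its own mutation rate, so heavily "protected" genes drift slowly while cheaper details keep exploring. The gene names and rates below are made up purely for illustration.)

        import random

        def make_genome():
            # Each gene carries its own mutation rate; the "protected" gene mimics the
            # extra error correction and repair described in the parent post.
            return [
                {"name": "eye_shape", "value": random.random(), "rate": 0.001},   # protected
                {"name": "cone_layout", "value": random.random(), "rate": 0.05},  # free to vary
                {"name": "lens_protein", "value": random.random(), "rate": 0.02},
            ]

        def mutate(genome):
            child = []
            for gene in genome:
                g = dict(gene)
                if random.random() < g["rate"]:          # per-gene mutation probability
                    g["value"] += random.gauss(0, 0.1)
                child.append(g)
            return child

        genome = make_genome()
        offspring = [mutate(genome) for _ in range(1000)]
        for i, gene in enumerate(genome):
            n_changed = sum(1 for child in offspring if child[i]["value"] != gene["value"])
            print(gene["name"], "mutated in", n_changed, "of 1000 offspring")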

  • I have the impression that according to Minsky, mind is something completely autonomous, with its own absolute (logical) rules, something you can implement in a machine (hence the whole notion of AI).

    Actually, Minsky doesn't believe in 'logical rules'. The Stanford group (following McCarthy) believes that logic is the royal road to AI. For this, they are termed the 'neats' and contrasted with the more pragmatic 'scruffies' such as the folks under Minsky's direction at MIT. Incidentally, there are 'neats' at MIT too -- for example, Chomsky -- who are most definitely at odds with Minsky and his, well, Minions.

    There is another point of view (which I believe is more adequate) which boils down to mind and intellect as a fairly sophisticated adaptation function (or tool), so to implement AI we have to start with a very simple machine capable of interacting with its environment, learning, adapting, and evolving.

    What makes you think this is not AI? Logic, Minsky's work, Brooks's robots, genetic algorithms -- these are all approaches to AI. The domain of AI is not isomorphic with logic.

    Minsky separates the mind from the body, from the environment, from the whole human experience, and this is plainly wrong.

    Well this is your opinion, as well as that of the 'embodied cognition' folks and other latter-day phenomenologists. It may or may not be correct. Frankly, the embodied approach has not produced the great AI breakthrough that has eluded classical approaches.

    Perhaps all of these paradigmatic battles and half-truths are the poisonous work of philosophers? Nah, couldn't be.
  • For those who would equate AI with symbolic approaches and Lisp, and the widely-believed failure of symbolic AI with the purported inadequacy of Lisp, note that Brooks is a Lisp guy and at least his original subsumption work relied on the Scheme programming language (a variant of Lisp), perhaps implemented directly in a chip.

    (Note that the poster I'm replying to didn't make these inferences -- this is just a convenient place for me to attach this info.)
  • The idea that man is equal to God, and believes he can put himself on an equal footing with the Divine Creator himself is just the sort of ridiculous notion that could only come from the USA.

    Funny stuff.

    I also find it humorous that men, in equating themselves with the Divine creator himself, have even attempted to decode his secrets through their studies of such dubious subjects as physics, chemistry, and biology. It is no wonder these areas have scarcely progressed in their millennia of existence. Compared to physical scientists, the audacity of AI researchers like Minsky has been brief in its duration. :-)
  • The Minsky "Agent" model is very much on the "Connectionist" side of the map.

    To echo this important point, Minsky's "Society of Mind" is as much like an "interactive" connectionist network as it is a symbolic architecture. For example, Hofstadter cites the SOM approach and Holland's genetic algorithms (as well as the highly interactive HEARSAY system) as precursors of and inspirations for his own work, which is by no means symbolic.
  • Yeah, but the symbolic part of it itself isn't a hard problem - Allen Newell's SOAR already does pretty much everything you could hope.

    As is John Anderson's ACT-R (http://act.psy.cmu.edu/) system. Both Soar and ACT-R have been implemented as connectionist networks to illustrate that, for the Church-Turing challenged, one can construe cognition as symbolic and simultaneously assume a biological implementation in the brain. (That is, if you think connectionist networks are neurally plausible, which many -- such as Nobel laureates Francis Crick and Gerald Edelman, I believe -- do not.)

    It is interesting to note that both ACT-R and Soar can be augmented with perceptual and motor subsystems, built consistently with the psychological data on perception and action, to form a truly unified cognitive architecture. The augmented systems have been used to simulate real-world cognition such as flying Air Force fighters (see Rosenbloom's work at USC), playing Quake (see Laird's work at Michigan), and simulating an air traffic controller (see Anderson's work at Carnegie Mellon).

    Any neural network models out here capable of such authentic cognition, please step forward. (Except Dean Pomerleau's Autonomous Land Vehicle -- you're the exception that proves the rule!)

    The mention of Soar is also a good lead in to debunking a persistent myth about connectionist networks -- that they are the only systems that learn. There are a plethora of learning algorithms for symbolic architectures too, as well as statistical algorithms, genetic algorithms, etc. Don't believe the connectionist hype! For example, Doorenbos showed in the mid 1990s that Soar models could learn a million rules with no slow down in performance; these million-rule models ran (and continued to learn) efficiently on Sun workstations of the time. His CMU dissertation is a masterpiece, for those interested.
  • Your post is fluff, but I found this line particularly errorful, and thus amusing:

    The day is dawning when more people will realize that to "understand" does not always mean to have a linear, provable trail of induction.

    Come on over to the logical side, where they use the incomplete tool of deduction!
  • The truth is that the brain is a collection of neural nets feeding into and relying on one another.

    Neural nets are for sissies. If you're gonna reduce cognition, why stop at the level of neural networks? Everyone knows that biology and chemistry are just convenient ways to talk about particle physics. Real scientists are writing the equations of AI in terms of quarks.

    You probably use high-level programming languages like C and assembler instead of directly writing binary machine language. Sheesh!
  • by sv0f ( 197289 ) on Saturday June 09, 2001 @09:30AM (#165165)
    Minsky was just arguing that a perceptron could not compute an XOR function, and the reason he was wrong is simply because he didn't consider that you might connect one perceptron to the output of another.

    For Minsky to not even consider connected perceptrons was a humongous brain fart for which he should rightly be ridiculed, particularly given the influential position he was in at the time, and the effect it had stifling all ANN funding and research for a long time.


    I think you have your facts wrong. Minsky and Papert proved that single perceptrons could only represent linearly separable mappings, and thus couldn't represent XOR for example. They also proved (or perhaps it was already known at the time since it's kind of obvious) that networks of units with linear activation functions collapse to single perceptrons, and thus suffered the same representational limitation.

    They went on to express skepticism that an algorithm for training networks of units with nonlinear activation functions to represent linearly inseparable mappings (such as the XOR function) would be discovered. On this conjecture, they were wrong.

    That is, training algorithms such as backpropagation allow networks with at least one layer of sufficiently many hidden units, where each unit has a nonlinear activation function, to represent essentially arbitrary mappings. I say 'essentially' because gradient descent techniques like backpropagation are not guaranteed immunity from local minima, although in practice this does not appear to be a problem.
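
    For the curious, here is a toy sketch (my own illustration in Python/NumPy, not anything from Minsky and Papert's book): no single perceptron can compute XOR, and stacking purely linear layers doesn't help because composing linear maps just gives another linear map, but a tiny 2-3-1 network of sigmoid units trained by plain gradient descent learns it.

        # Toy multilayer network learning XOR by gradient descent (backpropagation).
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input -> 3 hidden units
        W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> output

        lr = 1.0
        for _ in range(20000):
            h = sigmoid(X @ W1 + b1)                 # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)      # squared-error gradient at output
            d_h = (d_out @ W2.T) * h * (1 - h)       # error pushed back to hidden layer
            W2 -= lr * h.T @ d_out
            b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h
            b1 -= lr * d_h.sum(axis=0)

        print(np.round(out, 2).ravel())   # approaches [0, 1, 1, 0]

    As noted above, gradient descent carries no guarantee against local minima; if the outputs refuse to separate, reseed and rerun.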

    There are other training algorithms for neural networks about which I am less knowledgeable -- they may not suffer even in principle from local minima. There are also proofs (like Judd's) about the intractability of optimally 'loading' nontrivial neural networks. In other words, I don't know the current state of the art.

    But you are clearly wrong about what Minsky proved and what he conjectured. That his conjecture, built on his own research, has proven largely incorrect is just a testament to the unpredictability of science. Connectionists need to stop with their conspiracy-theory malarkey. Minsky's no bogeyman either.
  • by sv0f ( 197289 ) on Friday June 08, 2001 @06:01PM (#165166)
    Marvin Minsky is one of the reasons that AI still has not delivered on its promises.

    Are you trolling or just delusional? AI has not delivered on its promises in part because it's a damned hard problem. Philosophers, psychologists, linguists, and others have pursued it for hundreds of years, and for most of its 50-year existence in computer science departments, it has been pursued on pathetically underpowered machines. The field's in its embryonic stage. Hell, ANNs have come and gone twice already (i.e., with McCulloch/Pitts/Hebb and then Rosenblatt); will you sour on ANN-AI if the third time is not a charm? I hope not, oh faithless one.

    He is part of the old symbolic school of AI.

    Is the 'old' or the 'symbolic' part your rhetorical way of discrediting him? First of all, as I pointed out in an earlier comment, it is plain wrong to put Minsky in the same bin as McCarthy, Chomsky, and other 'neats' who believe that logic is the royal road to AI. Minsky is a scruffy, an "anything goes" type. Second, do you know what Minsky did for his dissertation circa 1950 or so? He built the first physical ANN out of primitive hardware in the basement of the math building at Princeton. That's right, he's an old skool ANN researcher. He also came by his skepticism the old fashioned way -- he eeeaaarrrned it. He doesn't simply oppose ANNs for philosophical reasons, brain boy.

    He was the guy who, with Seymour Papert, wrote a scathing criticism of the then embryonic field of neural networks, effectively strangling research in neural networks for the better part of a decade.

    He cowrote a book that formally disproved claims that irresponsible ANN researchers of the time were making about the representational capacities of their nets and inductive powers of their learning algorithms. He did express skepticism that these could be overcome. He was proven wrong on this count 15 years later (not counting Werbos's work). Again, you make it seem like he was an irrational crusader against ANNs. He was not. From experience and mathematical analysis, he made a conjecture. He was wrong. Such is the scientific life.

    Frankly, I've never understood why ANN types cry and complain about the lack of ANN funding in the 1970s. Given analyses like those of Minsky and Papert, and the successes of the symbolic approaches of the time (ever heard of HEARSAY and how Rumelhart and Hofstadter claim it as a primary inspiration?), it appeared that ANN research would not be as fruitful as symbolic approaches. Hindsight is 20-20 on this one of course -- ANNs are better than was thought then. But hindsight always works this way. Why exactly did ANN researchers give up? "Oh boo hoo, not as much money for my ANN work -- must follow other fads." Their lack of resiliency was cowardly. (I feel the same way about symbolic folks who lie shamed in the shadows right now.)

    IOW, what we really need to understand is the learning process, which encompasses perceptual, motivational and motor learning.

    This is the main reason why we still don't have human-level AI! I think Minsky's stance is a disservice to computational neuroscience and ANN researchers everywhere.


    We need to understand learning and development, perceptual and motor activity (not just learning, but performance too), how all of this is coupled with cognition (unless you think, for example, Cantor's Continuum Hypothesis is best represented perceptual-motor-ly), etc. We also need to understand how all of this is embodied in the body more generally, the ecological environment, and the culture at large. ANNs show little if any promise at tackling this last set of questions. (Journal article we'll never see: "An ANN analysis of female circumcision.") So get off your high horse about how ANN folks are gonna crack the whole cognitive nut. Frankly, it's not at all clear that ANNs are the right formalism for investigating even learning and development. A number of developmental neuroscientists favor other approaches (e.g., dynamical systems).

    The man has had his day in the sun. Now it's time for the younger generation of AI researchers to come in and say "hold it! we're taking a different approach from now on. The unkept promises of AI were made by the old symbolic AI crowd. There is a new school in town. The new AI is neural, it's emergent, and it's gonna kick ass!"

    Oh, you're just the AI equivalent of a script kiddie. Why did I bother?
  • by LionKimbro ( 200000 ) on Friday June 08, 2001 @04:44PM (#165168) Homepage

    Hi, I'm Minsk. I'm not aware.

    I'm a collection of experiences, memory, and light processing systems, but I don't have this weird pseudo-mystical thing that some morons compute about.

    Once, I met a task that said it was aware. I said, "Of course you are processing light patterns." The task replied to me, "Right, I'm processing light patterns, but it's different, I'm actually experiencing it." I said, "Of course you are, my scan of your brain AI is occurring."

    My contentment rating increased, because I had helped purify the system. But this belligerent process would not stop. "No! No! You don't get it!", it said. "The processing is occurring, but there's something else; I'm seeing it -- these patterns appear before me." He rambled on for some time, and then got to his crux: "The difference between this thing -- which I'll call awareness -- and the processing that is going on is that the processing does not require it, and yet it is still there."

    I found his nonsense absurd and disagreeable. I reported this process's insanity to central computing, but only after attempting a little more reasoning, to salvage the rogue process: "Surely you recognize that your 'awareness' is merely a dangler and a phantom belief. Have you cleared yourself through the Computer Science program? Perhaps a little time within an electric fence will assist? Surely you know that you have some residual data from prior superstitious existence within the random garbage data before your allocation. Your computational appendix, this strange persistence within you, is completely illusory and inconsequential."

    But I was not allowed to finish my sentence, for after uttering the word "appendix", the bugged process shouted profanities and said such incoherent nonsense as, "I AM THAT APPENDIX!". The process was clearly delirious, and thus I had him scheduled for termination with the Scheduler.

    After all, You Can't Argue With a Zombie. [well.com]

  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Friday June 08, 2001 @03:58PM (#165176) Journal
    I was just looking up Mirsky on the web the other day. It was cute, I found the Drunk Browsing Test [turnpike.net]. It's not the Worst of the Web or anything, but fairly amusing.

    Dancin Santa
  • In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-Tac-Toe." "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play." Minsky shut his eyes. "Why do you close your eyes?", Sussman asked his teacher. "So the room will be empty." At that moment, Sussman was enlightened.
  • They used HAL in the movie to avoid being sued by IBM... Each letter is one "behind" its corresponding letter - HI AB LM.
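
    A quick sanity check of the shift (my own throwaway snippet, obviously not from the film):

        # Each letter of "HAL", moved one place forward in the alphabet, spells "IBM".
        print("".join(chr(ord(c) + 1) for c in "HAL"))   # -> IBM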

    A similar thing happened when the movie Eraser was going to feature a company called Cyrex as the defense contractor building the railguns. Cyrix got wind of this and the name in the movie was changed to Cyrez. You can still see that they originally planned to call the company Cyrex, as in the scene where the lead female actress (can't remember the name) is copying the file onto a disc, the shortened filename has an "x" in it, and there's no X in Cyrez.

    source: http://www.zdnet.com/pcmag/news/trends/t960627d.htm

    The name change was appropriate anyway, Cyrix's CPUs sucked for Quake.
