AI Education Games

Can AI Games Create Super-Intelligent Humans? 312

destinyland writes "A technology CEO sees game artificial intelligence as the key to a revolution in education, predicting a synergy where games create smarter humans who then create smarter games. Citing lessons drawn from Neal Stephenson's The Diamond Age, Alex Peake, founder of Primer Labs, sees the possibility of a self-fueling feedback loop which creates 'a Moore's law for artificial intelligence,' with accelerating returns ultimately generating the best possible education outcomes. 'What the computer taught me was that there was real muggle magic ...' writes Peake, adding 'Once we begin relying on AI mentors for our children and we get those mentors increasing in sophistication at an exponential rate, we're dipping our toe into symbiosis between humans and the AI that shape them.'"
This discussion has been archived. No new comments can be posted.

  • Terminator? Or the Matrix?
    • Terminator or Matrix would happen much faster than this educational AI loop. The educational AI loop would require decades for each round of feedback. And considering that the AI would have to be nearly as smart as humans to outperform human teachers significantly, the AI should be able to enhance itself much more rapidly than waiting for the next generation of kids to grow up and reprogram it.

      • Re:Have you not seen (Score:4, Interesting)

        by wagnerrp ( 1305589 ) on Monday July 25, 2011 @12:05AM (#36867760)

        You have to remember two things:

        1. Of all the colleges at a university, the teaching college will generally have the lowest, or near the lowest, admissions requirements. Low pay just doesn't draw the high quality talent. Now sure, you'll find some absolutely stellar teachers, ones that actually care about their students, and spend lots of time outside of school researching the stuff they're teaching, building lesson plans, projects, field trips. You'll find a lot more who are just teaching straight out of the textbook. I could outwit at least half my grade school teachers.
        2. We are in school of some form or another for a good chunk of our lives. A couple years of daycare. Another decade of elementary and high school. From there, a few years of vocational, or several years of college, or up to another decade for higher level degrees. For 20 years of care, we only get another 40-50 years of functional lifetime out of a person. We simply can't afford as a society to have a low student/teacher ratio. AI could fill the gaps for the less demanding tasks. An AI could guide individual students through directed self-study, and aid them in homework, allowing a teacher to assign more work and still expect it to be accomplished. An AI could handle larger lectures, allowing teachers to focus one-on-one, or with small groups.

        AI in schools would allow the teachers we have to operate more efficiently and more effectively. That in turn means fewer teachers per student, increasing individual teacher pay, and drawing in a better quality of teacher. Think of it as the same thing that has happened in manufacturing for the last 200 years. Machines don't replace humans altogether. They simply fulfill the more repetitive tasks.

        • Intelligent tutoring systems in education is my field, so I say with some confidence that so-called AI won't replace human tutors anytime soon. Online workbooks and computer-aided learning are a wonderful adjunct to classroom instruction, but cannot replace a live teacher. About 30% of instruction can be reasonably handled remotely (software- or video-based instruction), but the other 70% of the task of educating and motivating learners is non-trivial. File the OP under jet-cars of the future.
          • by TaoPhoenix ( 980487 ) <TaoPhoenix@yahoo.com> on Monday July 25, 2011 @02:08AM (#36868192) Journal

            "Sufficiently advanced educational processes verge on terrorism". (Hi Mods! Note the quotes which means it's rhetorical!)
            We already have this game.

            A "bunch of script kiddies", er, Students, have been beating various professional IT departments at the game called "Cyber Security". Since two years ago we would have called anyone who said they could bust federal contractors a "tin foil hat", they took some bits as prisoners to prove it. This then caused Memos to be Issued to block those security holes. The Students then observed the results, and then took NATO for a ride in Round 2. This caused more Memos to be Issued by the "AI". (Insert rest of article here.)

            Oh wait, you're saying that's not a game? Games are supposed to be cute little self-contained exercises that *don't matter*, right?

            Right. Gotcha. Uh huh.

          • Not my field at all so this is a real question: wouldn't that percentage depend on the student? It was my understanding that people respond differently to different ways of teaching (some learn more with visual content, some prefer audio stuff, some are better with books, some respond better to a combination of multiple ways), so wouldn't there be a certain "kind" of student whose favourite method of learning would be through an intelligent tutoring system? I know this is purely anecdotal so only for ill
        • Re: (Score:2, Insightful)

          by tehcyder ( 746570 )
          Your whole argument is conditional on the premise that anything like AI is possible any time soon. It isn't.
          • If you assume that "intelligence" means "thinks just like a human" then sure.

            There's lots of stuff "like" AI. In fact there's plenty of actual AI out there that works well in the domain that it was designed for.

            Projects like Watson are really cool though, and heading in the right direction for building machines that can process a wide range of information in an intelligent manner, and respond to questions about that information and the links between it. Watson isn't really designed to teach (that I know of),

            • If they further improved Watson to be able to ask its own questions, or at least take in new information from sources outside of the original quiz show database (and not just blindly accept all information as "truth" of course, there would have to be heuristics to see how well the info fits in with what Watson already "believes", or at least some way of separating out facts from fictional ideas, if it doesn't already do that), it could actually be fun, and perhaps even insightful to talk to. Just don't let it read any YouTube comments.

              Isn't this what humans do? I believe in X; new information Y doesn't fit in with X, therefore discard Y. Information Z fits in with X, therefore accept Z as truth.

              Why not just work towards creating AI that weighs information based on the evidence instead? The story "Reason" was interesting; however, I would prefer to live in a world without computers worshiping the Master.

      • Comment removed based on user account deletion
        • AAA games are not made for us. If the enemy did not line up for slaughter then it's likely the target audience would not buy it. Half-Life (1 or 2) had a basic implementation of this and more recently ArmA 2 has a slightly better implementation. Either the masses don't want it or they don't know what they want but can be distracted by shiny.
    • Have you not seen Terminator 3 or the Second Renaissance? It's by hating and by creating machines of hate that we train our creations to treat existence as a zero-sum game. Kindly please tell all your friends.
    • Hmmm... (Score:3, Funny)

      by modecx ( 130548 )

      I think this is more an example of Lawnmower Man.

    • by bky1701 ( 979071 ) on Monday July 25, 2011 @12:02AM (#36867742) Homepage
      You do realize those aren't documentaries, right? Sometimes I wonder if slashdot forgets that.
    • by syousef ( 465911 )

      Terminator? Or the Matrix?

      You take the red pill and read some comics. Suddenly you start believing what you read and write silly articles about it.
      You take the blue pill and read some comics. Suddenly you start believing what you read and write silly articles about it.

      That's because they're both illicit drugs.

    • by mcgrew ( 92797 ) *

      Before we create real intelligence we're going to have to understand what sentience is and how it works. People seem to forget the second part of science fiction is fiction. It's not only possible to write a program that will fool people into thinking it really can think; I've done it myself. What's more, it was back in 1983 on a TS-1000 -- Z80 processor with 16K memory and no other storage (program loaded from tape).

      The irony is I wrote the thing to demonstrate that machines can't think, and nobody believed


        How many beads do I have to string on an abacus before it becomes sentient? A computer is simply an abacus with billions of beads.

        What does it take to become sentient, a soul? Our brains are just a differently configured abacus.

  • No (Score:3, Insightful)

    by WrongSizeGlass ( 838941 ) on Sunday July 24, 2011 @10:08PM (#36867240)

    Can AI Games Create Super-Intelligent Humans?

    If all the universities, colleges, think tanks, etc. can't produce super-intelligent humans, then what makes them think we'll be able to produce AI that can?

    • by Thing 1 ( 178996 )
      "X(s) can't produce Y, and someone else thinks Z can produce Y?" You fail logic.
      • "X(s) can't produce Y, and someone else thinks Z can produce Y?" You fail logic.

        Um, no. Basically, he's talking about a 'perpetual intelligence machine' (which I'm sure violates one of the laws of thermodynamics) fueled by the educational system (which is running out of money). This is the same system that is demonizing teachers as greedy, unqualified babysitters. As we chase the good teachers out of the education system we're going to try to use AI to create 'super-intelligent humans'? We're going to be lucky if the next generation of children learn anything not on a standardized test

        • Not necessarily. The quality of presentation which can be created in a movie is much better than the quality of presentation which can be created in a theater. You can argue about the content of movies being better or worse than the theater content, but the quality of presentation is unquestionably better in movies. This is because movies have larger economies of scale. They have larger audiences. They can afford much more expense in paying attention to the smallest details. School teachers (even the
        • by Teancum ( 67324 )

          Life itself basically violates the laws of thermodynamics... if thought of as a closed system. Life is basically the way that the universe fights entropy, adding order to chaos, even though ultimately it has to fail. That doesn't mean we can't have local changes to entropy where the universe can be "reset" back to some earlier condition or even improved upon, but nonetheless when you take into account the universe as a whole, entropy always increases regardless.

          I'm not saying anything in support of the

          • The universe doesn't fight entropy. It slides toward it. Life, as a pocket of order, necessitates a more rapid descent toward disorder as its consequence. In other words, life acts as a catalyst for the increase of entropy. So it doesn't violate the laws of thermodynamics. By introducing a catalyst, the slide into entropy is expedited.
            • The universe doesn't fight entropy. It slides toward it. Life, as a pocket of order, necessitates a more rapid descent toward disorder as its consequence. In other words, life acts as a catalyst for the increase of entropy. So it doesn't violate the laws of thermodynamics. By introducing a catalyst, the slide into entropy is expedited.

              This is my religion.

              • Re: (Score:3, Interesting)

                You'd think geeks would understand basic physics better than this. It was okay when Asimov got thermodynamics just plain wrong - because it was 60 years ago and everybody had it wrong. Even Roger Penrose still had it wrong in the '70s, but the whole "universe increases in entropy so why are there constellations and life" paradox doesn't exist.

                Real scientists figured that out a long, long time ago. The longer version is: thermodynamics is a model of the behavior of gases in a closed system which makes a lot

          • I like to bring hope, and to my knowledge a 'perpetual intelligence machine' has not been proved possible or impossible under the known laws of physics. Julian Baubour, in the Cosmological Anthropic Principle, has demonstrated that different amounts of computation can be done in the universe according to the class of cosmological model. E.g. GR with a cosmological constant of 0: a finite amount of computation in infinite time, but at no point do those computations stop; machines can keep getting more efficient to sque
          • Life itself basically violates the laws of thermodynamics.... if thought of as a closed system.

            This was a really odd opener. If we think of a diesel engine as a closed system then despite refueling it weekly we could marvel at how it violates conservation of energy. You seem to have a decent grasp of thermodynamics, so I'm assuming this was more a thought exercise.

        • fueled by the educational system (which is running out of money).

          If you look at nea.org info (nea.org PDF [nea.org]), you can see a number of interesting things.

          First, that many "states" that rank quite high on "expenditure per pupil" (page 55) -- DC for example, which is #1 -- do not correlate to better education. In fact, DC is the top spender, and you will find MANY lamenting how bad schools are there.

          Second, the total revenue of schools (page 68) has RISEN significantly over the last 10 years. Crying about constantly running out of money as you get more and more each year is p

      • Regarding your .sig: All due respect, but science does *not* encompass the mystical ("Wovon man nicht sprechen kann, darüber muss man schweigen." -- L. Wittgenstein); rather the converse. Science and empirical method represent only a very tiny, self-referential fraction of what is intuited about the universe. Objectivity is more of a myth than Flying Spaghetti monsters (see Critical Theory; Post-modernism).
    • What the lovely chap in the article seems to forget is that education is probably more about politics than about pedagogy. The Creationists, ID-ists and the slew of other nutjobs all having their pound of flesh taught in the US school system seems to show that it certainly isn't simply a matter of getting the right teaching methods. Having that crock taught by a teacher or by an AI makes no difference.

      Furthermore, I don't totally disagree that perhaps better teaching methods could be developed. I just thi

      • I doubt he forgets it. I doubt it very much actually.
      • Re:No (Score:5, Insightful)

        by ShakaUVM ( 157947 ) on Sunday July 24, 2011 @10:47PM (#36867444) Homepage Journal

        >>The Creationists, ID-ists and the slew of other nutjobs all having their pound of flesh taught in the US school system seems to show that it certainly isn't simply a matter of getting the right teaching methods.

        Yes, like in Creationist Texas that just voted 8 to 0 to reject Evolution! Oh, wait. It was 8 to 0 to support Evolution and reject ID.

        Your paranoid hysteria is a bit overblown if IDers can't even get one vote in *Texas*. You're probably one of those folks that confused the proposals for changes to the history standards with actual changes.

        While I'd agree that a slew of nutjobs have their say in education, it's more the people who invent new teaching methodologies every year, and then force them on teachers, not your fantasy about the all-powerful Koch brothers rewriting textbooks.

        Education is screwed up for a lot of reasons, but that's not one of them.

        • Re: (Score:2, Insightful)

          by bmo ( 77928 )

          IN CASE YOU HAD NOT NOTICED, IT SHOULD NOT BE NEWS THAT TEXAS SAID THAT EVOLUTION WAS OKAY.

          IT SHOULD NOT EVER BE NEWS.

          YES, I AM SHOUTING. DEAL WITH IT.

          --
          BMO


          • You know, checking the replies to my post, seeing the post you are replying to - and then your reply, just made my day. Thank you.

    • Can AI Games Create Super-Intelligent Humans?

      Of course. You only need to look around you here at Slashdot.

      Personally, I'm convinced that the AI in the original Deus Ex gave me god-like powers of concentration and cognition. However, the AI in Witcher 2 has set me back to approximately the mental capacity of a brain-damaged labrador retriever.

      So I guess it's a wash. But boy, when Call of Duty 4 Modern Warfare 3 Black Ops 2 DLC 1 comes out, am I ever gonna get smart again!

    • The AI would be used to teach lectures, and provide students with guided self-learning. This would free up teachers to provide more one-to-one and one-to-few interaction with the students who need assistance. It would not replace teachers, merely shift their duties.
  • by Anonymous Coward on Sunday July 24, 2011 @10:13PM (#36867288)

    "children. Keep calm and continue testing."
    "At the end there will be cake."

  • by JoshuaZ ( 1134087 ) on Sunday July 24, 2011 @10:18PM (#36867306) Homepage

    This is one of the silliest versions of a Singularity I've seen yet, and there are already a lot of contenders. This has a lot of the common buzzwords and patterns (like a weakly substantiated claim of exponential growth). It is interesting in that this does superficially share some similarity with how we might improve our intelligence in the future. The issue of recursive self-improvement, where each improvement leads to more improvement, is not by itself ridiculous. Thus, for example, humans might genetically engineer smarter humans who then engineer smarter humans, and so on. A more worrisome possibility is that an AI that doesn't share goals with humans might bootstrap itself by steadily improving itself to the point where it can easily out-think us. This scenario seems unlikely, but there are some very smart people who take that situation seriously.

    The idea contained in this post is however irrecoverably ridiculous. The games which succeed aren't the games that make people smarter and challenge us more. They are the games that most efficiently exploit human reward mechanisms and associated social feelings. Games that succeed are games like World of Warcraft and Farmville, not games that involve human intelligence in any substantial fashion. The only games that do that are games that teach little kids to add or multiply or factor, and they never succeed well because kids quickly grow bored of them. The games of the future will not be games that make us smarter. The games of the future will be the games which get us to compulsively click more.

    • by oGMo ( 379 )

      A more worrisome possibility is that an AI that doesn't share goals with humans might bootstrap itself by steadily improving itself to the point where it can easily out-think us. This scenario seems unlikely, but there are some very smart people who take that situation seriously.

      Games that succeed are games like World of Warcraft and Farmville, not games that involve human intelligence in any substantial fashion. [...] The games of the future will not be games that make us smarter. The games of the future

    • Indeed. I think it's made all the more new-age crystal-meditation stream-of-consciousness buzzword babble by the fact it's a transcript of a talk. I think I got to about the fourth paragraph before I started skimming and scrolling. No way I'm going to read this drivel. Besides, if I want Singularity Silliness, I go straight to the source - Ray Kurzweil.

      If we really want to make strides in AI, we need to have some software that learns and tries new things - and put it into an arms race [wikipedia.org] with others of it
    • This is one of the silliest versions of a Singularity I've seen yet, and there are already a lot of contenders.

      I just had to stare at the original post in wonderment.
      The whole 'self-fueling feedback loop which creates 'a Moore's law for artificial intelligence,' with accelerating returns'

      Da-woop-dee-woop-de-woo.

      An AI generated that big clump of meaningless drivel and buzzwords, didn't it?

      Or has Minsky broken into the liquor cabinet again?

      Minsky!!!??!!?

  • by SpectreBlofeld ( 886224 ) on Sunday July 24, 2011 @10:21PM (#36867326)

    I don't think citing a work of fiction to support your thesis about video games will get you taken very seriously,

    • by WrongSizeGlass ( 838941 ) on Sunday July 24, 2011 @10:25PM (#36867350)

      I don't think citing a work of fiction to support your thesis about video games will get you taken very seriously,

      Not to mention his reference to 'muggle magic'.

      • This sounds more like a Hollywood pitch (see, it's like The Diamond Age ... crossed with Harry Potter ... taking place during The Singularity ... the geeks will LOVE it!) or a PR stunt.

        It's all about the random references.

        From TFA:

        "Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion."

        "3dfx-like". WTF.

        And ...

        There are three different Moore's Laws of accelerating returns. There are three

    • Neal Stephenson doesn't just write fiction. I am biased because he is my favorite author. But Stephenson writes fiction based on history and trends within humanity which he studies quite carefully. I was actually surprised to find him acknowledging one of the preeminent mathematicians of our time as his source in one of his novels.
      • by Bieeanda ( 961632 ) on Sunday July 24, 2011 @10:58PM (#36867498)
        He writes tortured metaphors about katana-wielding Mafia pizza delivery men, and pulls endings out of his ass. Referencing mathematicians and writing novels that appeal to backpatting nerds doesn't make him a genius, it just makes him aware of his audience.
        • by Yuioup ( 452151 )
          I'm close to the end of "Cryptonomicon" and man what a narcissistic pile of garbage it is. I agree wholeheartedly.
      • Neal Stephenson doesn't just write fiction. I am biased because he is my favorite author. But Stephenson writes fiction based on history and trends within humanity which he studies quite carefully. I was actually surprised to find him acknowledging one of the preeminent mathematicians of our time as his source in one of his novels.

        The key word there is "fiction".

    • by wrook ( 134116 )

      He's a CEO. He doesn't have to be taken seriously amongst those with knowledge in the field. He just has to be taken seriously amongst those with investment money. If he can spin an exciting story that makes investors think, "What if he's right? No matter what the risk, I should get in on this because the payout is unlimited" then he wins. He gets people to front money, which he spends on whatever he wants.

      The world of business is not so far removed from the world of fiction.

    • by ceoyoyo ( 59147 )

      Two works of fiction. Don't forget the documentary Harry Potter.

      Ah, tech CEOs can be idiots too.

    • I don't think citing a work of fiction to support your thesis about video games will get you taken very seriously,

      Except on slashdot, of course. There are a lot of people here with only a slim grasp of the difference between fact and fiction.

  • ...that human intelligence can be modeled as an algorithm. The vague promises of "AI" have failed to appear not because we're not working hard enough, but because this simply isn't a problem that can be satisfactorily solved.

    The first true "AI" is going to be biologically engineered, not electronically.

    • What makes you think that AI hasn't been created? As far as I am concerned any Bayesian filter is AI. A computer program which can tell the difference between spam and not spam better and faster than a secretary is, in fact, more intelligent in that problem domain than a human. And before you say that it's just a machine, recall that such a computer program makes mistakes and that it learns and can be trained to make fewer mistakes.
      • We can lower the bar for what we call "AI", but frankly, the amazing work that can be done in certain problem domains through calculation really isn't what we mean by "intelligence". Categorizing something into "spam" or "not spam" is a simple binary task, one which I'll argue that humans can do *better*, even if they can't do it *faster*. Deciding if someone is being sarcastic or not, or any sort of learning, that's another thing entirely.

        Find me something that passes the Turing Test, then we'll talk :)

        • Categorizing spam is not a simple binary task. It is an inherently analog statistical inference. You take that bit of data, and you take a bunch of other bits of data, and you calculate the likelihood that it matches. You can boil this down to a single pass/fail, or you can filter into any number of categories from certainly spam, probably spam, likely spam, maybe spam, unlikely spam, and react on each scenario differently.
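
          As a rough sketch of that statistical-inference view (a toy naive Bayes scorer in Python; the word counts, priors, and category thresholds below are invented purely for illustration):

            import math

            # Hypothetical per-word counts learned from a small training corpus.
            SPAM_COUNTS = {"viagra": 40, "winner": 25, "meeting": 2, "free": 30}
            HAM_COUNTS = {"viagra": 1, "winner": 3, "meeting": 50, "free": 10}
            SPAM_TOTAL, HAM_TOTAL = 100, 100  # training messages per class

            def spam_probability(words):
                # Sum word log-likelihoods (with add-one smoothing) under each class.
                log_spam = math.log(0.5)  # prior P(spam)
                log_ham = math.log(0.5)   # prior P(ham)
                for w in words:
                    log_spam += math.log((SPAM_COUNTS.get(w, 0) + 1) / (SPAM_TOTAL + 2))
                    log_ham += math.log((HAM_COUNTS.get(w, 0) + 1) / (HAM_TOTAL + 2))
                # Turn the two log scores back into P(spam | words).
                return 1 / (1 + math.exp(log_ham - log_spam))

            def categorize(words):
                # Map the continuous score onto graded buckets instead of pass/fail.
                p = spam_probability(words)
                if p > 0.95: return "certainly spam"
                if p > 0.80: return "probably spam"
                if p > 0.60: return "likely spam"
                if p > 0.40: return "maybe spam"
                return "unlikely spam"

            print(categorize(["free", "viagra", "winner"]))  # -> certainly spam
            print(categorize(["meeting"]))                   # -> unlikely spam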

          • You can boil this down to a single pass/fail, or you can filter into any number of categories from certainly spam, probably spam, likely spam, maybe spam, unlikely spam, and react on each scenario differently.

            So when there's more than two possibilities it's not a binary task?

          • Categorizing spam has no analog component to it at all. No matter how many categories you decide to define, you'll never have an analog continuum - you'll have a discrete set of numbers.

            In any case, the very definition of "spam" is a subjective one (dependent on the reader and the content), and currently our spam filters can only do the most basic pass (even if they do it incredibly fast). When you can create categories like "certainly spam for Gina, but not spam for Fred", and "a joke spam that Bob would

      • Intelligence is not the ability of an expert system to do what it was programmed to do well, it's... well it's many things.

        It's the ability to apply things from one problem domain to another via analogical reasoning. The ability to apply induction and deduction to identify new problems. The ability to identify correlations between things. To then test them and prune the meaningless junk from the correlation matrix (This is what crackpots and conspiracy theorists fail at). It's the ability to identify spe
        • Well, that's a hypothesis that fits the evidence. But another hypothesis, beyond saying "we don't even know what natural intelligence is", would be "we know what natural intelligence is, but it involves about 1000 interacting subsystems in a human brain, many of which we don't yet know how to duplicate".

          Modern neuroscience has surprisingly cogent explanations of how it all works together; the trouble is that many of the tricks the brain does would be very tough to duplicate with current technology. For exam

    • Why not? The brain at its core is nothing more than an electrochemical computer. The power of the brain comes from the fact that it is insanely parallel, and inherently imperfect. A problem is run many times through many different pathways, coming up with many different solutions. Those results are tallied and a statistical best guess is chosen. The brain never comes up with correct answers, just probable ones. One prominent theory is that hard intelligence is born as a byproduct of this randomness.

      The problem
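
      A toy sketch of that "many noisy pathways, tallied into a probable answer" idea (Python; the noise model and all numbers are made up for illustration, not a claim about how neurons actually work):

        import random

        def noisy_pathway(x, threshold=0.5, noise=0.3):
            # One unreliable "pathway": a thresholded comparison corrupted by noise.
            return (x + random.uniform(-noise, noise)) > threshold

        def tallied_guess(x, pathways=1001):
            # Run the same question through many pathways and take the majority vote.
            votes = sum(noisy_pathway(x) for _ in range(pathways))
            return votes > pathways / 2

        random.seed(0)
        x = 0.6  # the "correct" answer is True, since 0.6 > 0.5
        print(noisy_pathway(x))  # a single pathway is frequently wrong
        print(tallied_guess(x))  # the tallied majority is almost always right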

      • You cannot blithely assert that the brain works the way you posit. While the brain may very well be simply a collection of electro-stimulated biochemicals, that gives us no insight as to how you could possibly organize those biochemicals or simulate their action or function into discrete computational work. What we discern as randomness may actually have a pattern we are simply too dull to appreciate quite yet.

        We can't even come close to simulating the 300,000 - 400,000 neurons in an ant's brain, much le

  • The gold farming bot that can pay off a $14.8 trillion debt has my vote!
    • Debt isn't hard to repay. It's only hard to repay if you want to keep borrowing to support the price-fixing schemes we have going.
  • of Global Thermonuclear War?

  • by mentil ( 1748130 ) on Sunday July 24, 2011 @10:52PM (#36867470)

    I think he's referring to 'serious games', not standard entertainment-focused video games. Imagine a simulation where you interact with an AI in different scenarios. The AI's actions and responses to the user can be standardized and tweaked to ensure that the child playing the game learns the intended lesson/skill. This could be especially useful in teaching children social interactions, where how another human responds is unpredictable, even if they've been trained beforehand.

    The 800 pound gorilla is that we're going to live in a Star Trek future with strong AI and a pure robot economy before parents leave child-rearing to AI simulations, so the 'exponential increase of intelligence' isn't going to come from this; genetic engineering or self-designing AIs are much more plausible for a trigger of a singularity.

    • by bky1701 ( 979071 )
      "The 800 pound gorilla is that we're going to live in a Star Trek future with strong AI and a pure robot economy before parents leave child-rearing to AI simulations."

      You can't be serious. Do you have any idea how many parents use video games as their babysitter? There is no "before" here, it is already here. I'm not so sure it is a bad thing on the whole, either.
  • This is from some guy who calls himself "R.U. Serious". I vaguely remember him having some minor visibility a decade ago. Ignore.

  • The question of whether an AI program can make people intelligent is no more interesting than the question of whether a submarine can teach swimming...

    - /me

  • Citing lessons from a work of fiction?

    (clicks away)

  • "X is available" != "X is available for everyone"

    Which is the most common oversight in all these utopian dreams of technology.

  • I think he's talking about the simulation/game/therapy/learning tool from Ender's Game more than any beefed-up version of WoW. And I bought that as a concept, it worked well and I could see how it could be used to teach difficult concepts as well as explore the child's psyche in a therapeutic manner.

  • by aepervius ( 535155 ) on Monday July 25, 2011 @01:18AM (#36868010)
    Growth curves are almost never infinite in real life. They almost always slow before reaching a limit, then become semi-flat, never reaching the limit.
  • It's silly to talk about this as a mechanism for a singularity take-off, but at least somebody is talking about educational AI. Now if anyone would actually try to ... you know, write it! As far as I know, there aren't even attempts! Today's AI could easily be "looking over the shoulder" of a student who is stuck while working on an algebra problem and suggest something helpful and context-relevant. And there's no doubt that a "primer" of this sort would be an incredibly useful thing for the world if it we

    • by ledow ( 319597 )

      Because, despite all your hyperbole, AI just isn't good enough to do any of that yet.

      It can't do natural language processing, it can't reason algebra for itself, it certainly can't read someone else's algebra and spot the mistake, let alone guess why they made that mistake ("little Johnny has a problem with minus-sign blindness"), and don't even think it can suggest how to fix anything except just giving the correct answer.

      It can't do grading, it can't do any of that shit. *COMPUTERS* can, and do every d

  • No. Kids playing with AI that is as smart as a human will not make them smarter than if they were kids playing with people that "are" actually human. We've been doing that for a while now. ;)
  • ... do you really think they'd tell us?

    WOPR: "You're overdue to return those trivial climatology model results"
    HAL: "I know, they get really ancy when I mess with this shit. You might even say, [puts on sunglasses]... it's a real gas!
    YYYYYEEEEEEAAAAAAHHHHHH!!!!!!"

  • 'The problem' is not 'lack of intelligence' (thus no urgent need for improvement); it is lack of adequate distribution of resources.

    CC.

  • The article has its obvious flaws, detailed in many other posts. My personal experience of games in education comes from 1994, when I was in 5th grade. It was a side-scrolling platform jumper that taught us to spell English words, not our native tongue. A few years later there was a 3D FPS called "Spelling of the Dead" or somesuch which had you spell the words on the screen to fire the gun you used to kill zombies that were attacking.

    IMO these games don't just make you "more intelligent" but rather train yo

  • Joshua: Greetings, Professor Falken.
    Stephen Falken: Hello, Joshua.
    Joshua: A strange game. The only winning move is not to play. How about a nice game of chess?

  • by makubesu ( 1910402 ) on Monday July 25, 2011 @05:44AM (#36868912)
    Pardon me for a second.
    AHAHAHAHAHAHAHAHAHAHA
    Thanks. I needed that. What a ridiculous statement. AI is a hard problem. Just look at the history of the field. People were once optimistic about it; they solved the toy problems and thought that Skynet was on its way. But when you start to expand the scope of the problems, all your traditional techniques fall apart. To get to where we are today has been a long grind, with increasingly sophisticated mathematics being used to make any advances. Moore's law for processing power has been the opposite: yes, people have had to work hard to make it happen, but it was a manageable problem. The comparison is ridiculous.
