
2000x GPU Performance Needed To Reach Anatomical Graphics Limits For Gaming?

Vigile writes "In a talk earlier this year at DICE, Epic Games' Tim Sweeney discussed the state of computing hardware as it relates to gaming. While there is a rising sentiment in the gaming world that the current generation of consoles is 'good enough' and that the next generation of consoles might be the last, Sweeney thinks that is way off base. He disputes the claim with some interesting numbers, including the amount of processing and triangle power required to match human anatomical peaks. While we are only a factor of 50x away from the necessary level of triangle processing, a 2000x increase is required to meet the 5000 TFLOPS Sweeney thinks will be needed for the 8000x4000-resolution screens of the future. It would seem that the 'good enough' sentiment is still a long way off for developers."
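To see where the multipliers come from, here is a quick back-of-the-envelope check in Python. The 5000 TFLOPS and 8000x4000 figures are from the talk; the ~2.5 TFLOPS baseline for a 2012 high-end GPU and the 60 FPS target are my own assumptions for illustration.

```python
# Back-of-the-envelope check of Sweeney's numbers (illustrative only).
target_tflops = 5000          # Sweeney's estimate for "anatomical" fidelity
baseline_tflops = 2.5         # assumed high-end 2012 GPU (my estimate)

print(f"Required speedup: {target_tflops / baseline_tflops:.0f}x")  # ~2000x

# The display he targets:
width, height = 8000, 4000
pixels = width * height
print(f"Pixels per frame: {pixels / 1e6:.0f} million")  # 32 MP

# FLOPS available per pixel per frame at an assumed 60 fps:
fps = 60
flops_per_pixel = target_tflops * 1e12 / (pixels * fps)
print(f"FLOPS per pixel per frame: {flops_per_pixel / 1e6:.1f} million")
```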
  • Development costs? (Score:5, Interesting)

    by oakgrove ( 845019 ) on Thursday March 08, 2012 @03:22PM (#39291823)
    My question is this: how much more will games have to cost to support the development to this level of detail?
    • by SlightlyMadman ( 161529 ) <slightlymadman AT slightlymad DOT net> on Thursday March 08, 2012 @03:28PM (#39291943) Homepage

      It shouldn't make a huge difference, actually. Things like trees and faces are already rendered to a complexity beyond where it's reasonable to create them by hand. That's why there are 3rd-party utilities to render these things easily, with some simple inputs, like plugging a formula into a fractal generator. You don't have to hand-design an NPC's face any more than their parents had to piece their fetus together. You plug in the DNA and the code does the rest.

      • by Dahamma ( 304068 )

Yeah, but you are now limited to those algorithms, which will be way inadequate when other objects, textures, etc. are photorealistic. All of the new R&D needed to come up with something better is an extra cost as well.

Not to mention you have to improve everything to match when you improve the poly count and lighting - making perfectly lifelike characters does no good if their animations look robotic... yet more dev costs (probably in algorithms, mo-cap, and hand-animated/tweaked animations/keyframes/

        • by nomel ( 244635 )

          Libraries man. Make a single cloth simulator, use it for all cloth. Make a single skin texture builder, use it for all skin. Make a single face constructor, use it for all faces. Trees, rocks, pavement, buildings...everything. If you've ever played with a proper 3d rendering platform where you don't have to draw everything by hand, it becomes impressive what you can make with a small algo and a random number generator to feed to it.
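As a toy sketch of the "small algo plus a random number generator" idea - everything below is invented for illustration, not taken from any particular engine:

```python
import math
import random

def branches(x, y, angle, length, depth, rng):
    """Recursively yield line segments for a simple 2D tree."""
    if depth == 0 or length < 1.0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    yield (x, y, x2, y2)
    # Two child branches with randomly jittered angle and shrinkage.
    for spread in (-0.5, 0.5):
        jitter = rng.uniform(-0.2, 0.2)
        shrink = rng.uniform(0.6, 0.8)
        yield from branches(x2, y2, angle + spread + jitter,
                            length * shrink, depth - 1, rng)

# The same seed always regenerates the same tree, so only the seed
# needs to be stored, not the geometry.
rng = random.Random(42)
tree = list(branches(0, 0, math.pi / 2, 100, depth=8, rng=rng))
print(f"{len(tree)} segments from one integer seed")
```

A few dozen lines like this, fed different seeds, yields an endless forest of distinct trees with no per-tree authoring cost.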

      • by justforgetme ( 1814588 ) on Friday March 09, 2012 @02:08AM (#39297941) Homepage

No, actually they are not. I dabbled a lot with 3D in 2007/2008 and I can tell you no engine whatsoever delivers accurate foliage.

What state-of-the-art engines do is return a good approximation by filtering obstructed objects out of the computation. Transforms are not live, and lighting is a very rough estimate, ignoring subsurface scattering and calculating shadows from a reduced mesh.

Want to go even further? Fur, and then cloth. Fur is at the moment nonexistent in real-time engines (creating real, tangible fur in a Max scene can introduce thousandfold increases in computation), and don't even get me started about cloth.

So yes, graphics hardware isn't anywhere near a plateau. The 5000-fold estimate is a reasonable one, if not optimistic. IMO hardware will continue to leap forward until state-of-the-art processing is able to simulate the realtime physics of high-density meshes just by knowing the material properties of each mesh (which has never been so much as suggested).

As for displays, those will keep growing both in physical dimensions and resolution, because there just are uses for that (and before anybody argues, think how many people thought `17" 1024x768 is all you need`).

        • by BetterSense ( 1398915 ) on Friday March 09, 2012 @10:33AM (#39300427)
          I don't know shit about graphics.

          But the PS2 game "ICO" taught me a few things. It's hard to explain the impact the graphics had when the game came out. Particularly the trees...they look absolutely amazing for a PS2 game which was actually developed for the PS1 (it fits on a CD, rather than DVD).

I tried to get a close FPV on the leaves, and I realized there weren't any leaves. Just simple shapes that shimmered, glittered and moved en masse like a tree. The PS1-era developers didn't have anywhere near enough polygons to actually generate leaves; they didn't have raytracing hardware to simulate light glittering off millions of leaves, and they didn't have subsurface scattering to model light going through the leaves. But it didn't matter, because they managed to hack something that looks just like a fucking tree from any reasonable distance. They didn't synthesize a TREE...they synthesized something that looked like a tree, using minimal primitive elements arranged to give a stunning impression of a tree--some real Bob Ross shit.

In other parts of the game, there are what appear to be very realistic dust effects and lighting effects (in the cathedral area). These effects were just amazing at the time...beautiful. A closer inspection shows that they just hand-placed luminescent polygons to construct every shaft of light in the cathedral, and the apparent dust effects are just moving textures on the polygons. Again...no ray tracing, no particle effects, but they made something that looked absolutely convincing, the way a good painter can give an impression of light with paint and canvas--basically human visual cortex hacking.

          There is no point to this post, except that there is more to creating good graphics than technology.
    • by Hentes ( 2461350 )

      Don't worry, most studios don't spend much on salaries, the big costs are marketing and management.

My suspicion would be that the level of detail that is commercially viable will depend largely on the availability of tools to generate it more efficiently (and recycle what you've already generated without being too blatant about it)...

Something like a high-resolution 3D laser scan and motion capture isn't cheap; but if you have the capability to take a library of captured actors and then programmatically mix-and-match and slightly randomize certain parameters to generate unlimited NPCs, the start-up cost is high but the marginal cost of each additional NPC is negligible.
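A minimal sketch of what that mix-and-match might look like; the library entries, parameter names, and blend scheme are all hypothetical:

```python
import random

# Hypothetical "captured actor" library: each entry is a parameter
# vector a scanning pipeline might produce (all values invented).
LIBRARY = {
    "actor_a": {"jaw_width": 0.62, "nose_length": 0.48, "brow_depth": 0.55},
    "actor_b": {"jaw_width": 0.40, "nose_length": 0.60, "brow_depth": 0.47},
    "actor_c": {"jaw_width": 0.55, "nose_length": 0.52, "brow_depth": 0.70},
}

def generate_npc(rng):
    """Blend two captured actors and jitter the result slightly."""
    a, b = rng.sample(sorted(LIBRARY), 2)
    t = rng.random()  # blend weight between the two source actors
    face = {}
    for key in LIBRARY[a]:
        blended = (1 - t) * LIBRARY[a][key] + t * LIBRARY[b][key]
        face[key] = blended + rng.uniform(-0.02, 0.02)  # small random tweak
    return face

rng = random.Random(7)
crowd = [generate_npc(rng) for _ in range(1000)]  # unlimited NPCs, one library
print(crowd[0])
```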
    • by DahGhostfacedFiddlah ( 470393 ) on Thursday March 08, 2012 @03:32PM (#39292005)

      Same as any other new technology.

      In 2028, James Cameron will spend 3.2 trillion dollars on Avatar: Reloaded. You'll spend $50 to see it once in theatres.

      The technology will be ported to games about 5 years after that, costing $60/game (top-tier game prices haven't changed since 1980).

      5 years after that, $2 flash games will all include photo-realistic graphics at 200 fps.

    • by mcgrew ( 92797 ) * on Thursday March 08, 2012 @03:37PM (#39292087) Homepage Journal

      It's not the cost of the games, but cost of the hardware. That's one reason I got out of the gaming scene -- to play a new game you had to have the latest, greatest, fastest, most expensive hardware.

Sweeney and company need to get a clue. I'm a nerd, but I'm not Steve Wozniak. I have bills to pay and much better things to do with my time and money than to spend half a C-note on hardware, take the time to install the hardware, just to play a $50 game I might not even enjoy that much.

I mean, it's a GAME. I don't care that every hair on Duke Nukem's head is perfectly rendered. I just want it to be FUN.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        I have bills to pay and much better things to do with my time and money than to spend half a C-note on hardware

        Fifty dollars on hardware doesn't sound that bad, honestly. Presumably that will last for at least a couple years.

      • by Bengie ( 1121981 )

IGPs can play Crysis now. GPUs aren't the bottleneck anymore; it's how many commands you can issue to the GPU. That part is limited by IPC x MHz and the number of threads.

      • by thesandtiger ( 819476 ) on Thursday March 08, 2012 @04:23PM (#39292823)

        A gaming rig that can more than handle medium settings of any modern game shouldn't cost you more than $1000 to build, and almost certainly wouldn't be getting used JUST for gaming by anyone who is budget conscious.

The rig I currently game on (and use for a lot of work and other personal computing) was just over $1000 when I set it up 2 years ago, and I can play modern games at medium settings no problem, with decent frame rates.

        It came with:
        - i7 860 @ 2.8GHz
        - 8GB RAM
        - ATI Radeon 5850
        - Win 7 64
        - Running 2 displays at 1920x1080

        I added:
        - 8GB more RAM (I do a lot of work via virtualization - I highly doubt any games I'm playing even use the base 8 that I had originally). $200 when I got it - and though I wrote it off, I'll add it to my "gaming" cost
        - 64 GB Intel SSD that I install games I'm actively playing on; $90 on Newegg when I got it.

        I'll use this system for 2 more years before replacing it and turning it into a server, but let's pretend I'll throw it in the garbage, so it comes out to $325/year invested.

        I won't even try to pro-rate the cost due to work I do with it or personal, non-gaming use, so let's pretend that my gaming hardware costs me $325 a year with no other benefits.

        If you're saying that a $325/year investment in multi-purpose hardware is too much, but dropping $60 at a shot to play modern games is cheap, you have very, very weird budgeting.

And I'm assuming I'm right on the $60 price, because that's the price of the bleeding-edge games that I assume you imagine require insanely expensive hardware to run (but that I'm quite capable of playing on my rig).

        Or maybe your machines cost sub $500 to put together, in which case, yeah, you're not going to be having a very fluid gaming experience.

If anything, while the hardware requirements of being a true nut have climbed a bit (SLI/Crossfire and motherboards with 4 or more PCIe x16 slots certainly make it possible to go overboard in fine style, along with your $1k processor and SSD array...), the price premium of adequate 'gamer' hardware vs. 'a boring computer' hardware has been extremely modest of late.

The total cost of the computer is still a bit higher than the cost of the console (though the games are often cheaper, if you don't insist on havi
      • Comment removed based on user account deletion
      • by Sir_Sri ( 199544 ) on Thursday March 08, 2012 @04:31PM (#39292915)

Well, that's why there are consoles. If you can't afford PC hardware, or don't want the hassle of PC hardware, you buy a console. Since they're fixed development targets, you in some ways get a better experience, because the developers knew exactly how your hardware would behave with their software and tuned accordingly.

If you have absolutely no money, well, sucks to be you? Sorry, but in a world where people spend 1000 bucks on a TV and 25000 on a car, 500 dollars in disposable income on a console that lasts for 5 years is targeting anyone who makes 35K/year or more. It's not perfect, but what else do you expect? We're not going to resell PS2s for 30 bucks here. There are about 100 million consoles sold at price points of 700-300 dollars (launch price to current price), which is a pretty wide distribution given that not everyone even likes games, and lots of consoles serve a lot more than 1 person.

        Sure, if you don't live in a first world country consoles are insanely expensive, no doubt, but then you'd have a stratification of consoles for the 2nd and third world and consoles for the first world, since people who *can* spend 500 bucks on a console will want a better experience than you're griping about at 50.

The idea that graphics don't matter is a misleading one. Graphics matter in their absence. Go play Mass Effect 3 (at a friend's house, since obviously you don't have it, and can't afford it), and then compare it to Final Fantasy 7. That's about a factor of 2000 difference in performance. Sure, Final Fantasy 7 is still fun, but you're overlooking the shitty graphics because of nostalgia; if you tried to release that today you'd be laughed out of publisher and retailer offices. Minecraft gets away with it by being uniquely quirky, but Minecraft is one game in a world of AAA titles launching about 1 a week on average. When the other guy has dragons that look like dragons, and boobs that look like boobs, and you have dragons that look like a collection of 25 triangles (compare http://zam.zamimg.com/images/i/d/id3283.png, the original version of Lord Nagafen in EQ1, to http://images.wikia.com/elderscrolls/images/3/31/Ancient.jpg, an ancient dragon in Skyrim) or boobs that are just spheres, I'm sorry, but it detracts from the immersion of the experience. Especially for younger people who are used to better-quality graphics.

      • I won't be able to afford the electricity bill alone at this rate.

      • by tom17 ( 659054 )

Well, I solved the problem for me: I am only just now playing Half-Life 2. What a great game!!!

        Yes, I know... http://xkcd.com/606/ [xkcd.com]

        Oh good grief! I didn't realise the comic was *actually* HL2! :)

        Tom...

    • Developers used to start off with high resolution models and have to pare down the triangle count and adjust textures to meet memory and processing requirements. In the future, they won't have to do all of that tweaking and will be able to use full resolution models, so it will probably be cheaper.

      Also, not all games aim for realistic depictions, many (most?) are stylized, and won't necessarily need to be highly detailed. The extra processing power could go to effects, deformation, physics, etc.
    • by jd ( 1658 )

      Less than at present. Currently, they have to hard-code data and pre-render bumpmaps. That's expensive. The more realistic you want to make something, the more abstract you want the model*, which means less work for the designers and less computer time spent pre-generating things.

      *An abstract model can be rendered under a wider range of conditions and thus look real under them. A pre-generated bitmap only looks realistic under very specific conditions. At best. Letting the computer do the work, rather than

Less, because you will just be able to scan real-world objects without doing any cleanup on the resulting meshes. When it comes to things that don't exist, like alien monsters and battle armor, well, you can model them using techniques that result in more wasted triangles without worry.
    • Figure $70-80 for five hours of game time, plus downloadable content.

  • by Bodhammer ( 559311 ) on Thursday March 08, 2012 @03:23PM (#39291839)
    better looking "anatomical peaks"!
    • better looking "anatomical peaks"!

      I prefer valleys.

    • by girlintraining ( 1395911 ) on Thursday March 08, 2012 @03:36PM (#39292065)

      better looking "anatomical peaks"!

Yeah, me too. To date, I have yet to see a video game character with a realistic-looking male crotch. Those poor, poor bastards. And yet, so many guys look at those video game characters as heroes in spite of their status as eunuchs. I hope that with this latest advancement in technology, men will finally get some anatomical upgrades so they can be, you know... men.

  • by elrous0 ( 869638 ) * on Thursday March 08, 2012 @03:25PM (#39291873)

    there is a rising sentiment in the gaming world that the current generation consoles are 'good enough' and that the next generation of consoles might be the last

    If developers can't find a way to improve games beyond the next generation, it's not because we've reached some peak of gaming possibilities, it's just because those particular developers have reached the peak of their imaginations.

Somewhere right now there is a young guy sitting somewhere who has an idea in the back of his head which will become the next great innovation in gaming. It will require a lot more computing power than the current generation of PCs, much less consoles. If he were to pitch it at EA, he would be laughed at. If he tried to explain it at a Game Developers Conference, he would be greeted by blank stares and derision. He's probably already used to hearing responses like "That can't be done", "Who would want THAT?", "That could never be done on a console", etc. But one day people will look back and say "Wow, how could they *not* have seen that that was the future?" and "How could they have been so arrogant as to think that gaming had peaked with the millionth variation of the FPS?".

    What's more, I suspect that even Sweeney is off-base. The next real innovation won't be about improving resolution or framerates to some theoretical max, or making an even prettier FPS. It will be some whole new way of thinking about gaming that is just in the mind of that weird guy right now. Most of us can no more imagine it now than some guy playing Pacman could have foreseen Half-Life 2. But it's coming.

    Every generation thinks it's special. But never be so arrogant as to think your generation has somehow reached the pinnacle of achievement in ANY area.

Somewhere right now there is a young guy sitting somewhere who has an idea in the back of his head which will become the next great innovation in gaming. It will require a lot more computing power than the current generation of PCs, much less consoles. If he were to pitch it at EA, he would be laughed at. If he tried to explain it at a Game Developers Conference, he would be greeted by blank stares and derision. He's probably already used to hearing responses like "That can't be done", "Who would want THAT?", "That could never be done on a console", etc. But one day people will look back and say "Wow, how could they *not* have seen that that was the future?" and "How could they have been so arrogant as to think that gaming had peaked with the millionth variation of the FPS?".

      68% of you won't re-post this, but the 42% of you with VISION will. Our voices will be heard! No fees for gaming, or we'll QUIT VIDEOGAMES!

      • by rgbrenner ( 317308 ) on Thursday March 08, 2012 @03:33PM (#39292031)

        68% + 42% = 100% eh? Maybe quitting video games would be a good thing for you. It would give you more time to study math.

        • by aevan ( 903814 )
          Nono, totally unrelated: 68% of the people won't repost. 42% who have vision will. So 32% of the people who repost are of the 76% that can see, meaning he considers 24% of the populace to be blind. Apparently there is a high level of head injury in his area resulting in eye trauma.
For the humor-impaired, this was an intentional parody of the moronic chain posts on Facebook, complete with terrible math, Ambien-level hyperbolic drama, and random capitalization, inspired by the dumbass Facebook-post-esque quoted paragraph.

        You'd think I'd know better than to try a gag like this on Slashdot (or the internet in general) by now.

    • Why is an innovation inherently going to make use of more computing power?

      And yes, there are pretty clearly areas where there is no practical room for improvement. For example, we have digital audio quality that can exceed the perception of even the best humans, so for humans, there is no reason to go further. That's not to say that there isn't room for improvement, but rather, for such an improvement to be useful, we'll need a better human.
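For reference, the audio claim falls out of the sampling theorem: a sample rate just over twice the highest audible frequency captures everything we can hear. A quick sanity check using the standard figures (nothing here is from the thread):

```python
# Nyquist sanity check: CD audio vs. the limits of human hearing.
human_hearing_max_hz = 20_000   # ~20 kHz, the usual upper bound for young ears
cd_sample_rate_hz = 44_100      # Red Book CD audio

nyquist_limit_hz = cd_sample_rate_hz / 2
print(nyquist_limit_hz >= human_hearing_max_hz)  # True: 22.05 kHz > 20 kHz
```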
      • by jdgeorge ( 18767 )

        Why is an innovation inherently going to make use of more computing power?

Didn't the post you're "responding" to say that the innovation "will be some whole new way of thinking about gaming", rather than just higher resolution, FPS, etc.? And did it mention anything at all about using more computing power?

If you're just looking for a place to post your two cents on a subject, you could at least make it a reply to something vaguely related to what you're talking about.

It's not that I think what you're saying is wrong; it's just a non sequitur in this thread.

        • by Trepidity ( 597 )

          The post he was replying to said:

Somewhere right now there is a young guy sitting somewhere who has an idea in the back of his head which will become the next great innovation in gaming. It will require a lot more computing power than the current generation of PCs, much less consoles.

          That was also my reaction on reading that--- why should we assume that the next great innovation in gaming will necessarily involve "a lot more computing power"? It's possible that there exist such innovations, but I'm also pr

Well, there are some small, limited-scope audio baubles that could be improved.

For example, having audio recorded at 192 kHz allows you to do slow-motion effects without the audio turning into bass sludge (you'd get to hear all that neat stuff you normally can't).

        Better HRTF and simulation algorithms would allow you to directly generate audio based on geometry interactions, media density, temperature etc - instead of using all pre-recorded sounds and pre-defined characteristics (such as room size, simplified ge
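To put a number on the 192 kHz slow-motion point (my arithmetic, not the poster's): slowing playback by a factor divides the effective sample rate by that same factor, so the extra capture rate is headroom for time-stretching.

```python
# Why high sample rates help slow motion: slowing playback by a factor
# divides the effective sample rate by that factor. (My illustration.)
capture_rate_hz = 192_000

for slowdown in (2, 4, 8):
    effective_rate = capture_rate_hz / slowdown
    # A 40 kHz effective rate still covers the ~20 kHz hearing limit (Nyquist).
    ok = effective_rate >= 2 * 20_000
    print(f"{slowdown}x slow motion -> {effective_rate / 1000:.0f} kHz "
          f"effective, full audible band preserved: {ok}")
```

So 192 kHz material survives a 4x slowdown with the full audible band intact; at 8x it starts to lose the top end.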

        • Well, that's what I get for not closing my anchor tag. Slashdot extended that URL to the whole phrase and lopped out some words while it was at it.

          I meant to say it "simulates light emissions from sources"

Somewhere right now there is a young guy sitting somewhere who has an idea in the back of his head which will become the next great innovation in gaming.

But it will never see the light of day, because it is genuinely innovative, rather than a rehash of previous ideas that is easily marketed thanks to technological stats.

      People value what they can measure.

You're ignoring Sweeney's entire point and arguing that a different proposition - "gaming is as good as it could ever be!" - is false. So what?
    • by vux984 ( 928602 ) on Thursday March 08, 2012 @04:05PM (#39292535)

      Most of us can no more imagine it now than some guy playing Pacman could have foreseen Half-Life 2. But it's coming.

      The guy playing pacman (released in 1980) only had to move a couple cabinets over to play Battlezone (also released in 1980) to foresee Half Life 2 and FPS's in general.

Somewhere right now there is a young guy sitting somewhere who has an idea in the back of his head which will become the next great innovation in gaming.

      And somewhere behind him is a woman throwing all of his stuff out of a bedroom window because he hasn't turned around from his gaming in 7 hours....

    • by AJH16 ( 940784 )

Nice theory, but in the days of Pacman, people COULD and DID envision a future with things like HL2. More realism has always been the goal. The problem is that now we are getting to a point that many people consider to be "good enough," and there is a lot of questioning as to what the future will hold. Most likely, the answer is a combination of incremental upgrades in realism coupled with increased focus on either a) marketing for big titles or b) different ways of thinking of gameplay, though

      • At the end of the day, that is what separates a game like HL2 or Mass Effect from a game like Angry Birds.

        I can almost guarantee you that Angry Birds will stand the test of time far better than Half-Life 2 or Mass Effect. In fact, if there's one single game released in the past 5 years that will be considered a classic two or three decades from now, it's Angry Birds.

        • by AJH16 ( 940784 )

Sorry, let me clarify: I didn't mean to imply that Angry Birds doesn't have staying power, but rather that there is more depth to the characters of a game like HL2. People who are Half-Life fans are fans because of the story. People who are Angry Birds fans are fans because it hit the right balance of a simple mechanic with a likeable enough character that caught on culturally (in the same way giga-pets did). It will still be well remembered in the future, but how many people really wish for more gigape

    • If developers can't find a way to improve games beyond the next generation, it's not because we've reached some peak of gaming possibilities, it's just because those particular developers have reached the peak of their imaginations.

Somewhere right now there is a young guy sitting somewhere who has an idea in the back of his head which will become the next great innovation in gaming. It will require a lot more computing power than the current generation of PCs, much less consoles. If he were to pitch it at EA, he would be laughed at. If he tried to explain it at a Game Developers Conference, he would be greeted by blank stares and derision. He's probably already used to hearing responses like "That can't be done", "Who would want THAT?", "That could never be done on a console", etc. But one day people will look back and say "Wow, how could they *not* have seen that that was the future?" and "How could they have been so arrogant as to think that gaming had peaked with the millionth variation of the FPS?".

      I actually have an idea like this. I'll go ahead and practice my elevator pitch here:

      Zombies have been a staple of FPS games since the beginning. You can't go much further back than Doom. But what makes a zombie scary?

      It's not that they're undead. It's not that they're brainless, or that they can't always be killed. It's the numbers. A gamer can go one-on-one with anything. Look at how many JRPGs end with a boss fight against god, for crying out loud.

      No game captures the sheer numbers of zombies. Killing Fl

      • I want millions. I want the entire population of New York City, all eight million people, turned into shambling, flesh-eating monsters.

        Sounds like you haven't been to New York lately.

    • Most of us can no more imagine it now than some guy playing Pacman could have foreseen Half-Life 2. But it's coming.

      People still play Pac-Man today. Do you think anyone will be playing Half-Life 2 in 2036?

      The truth is that game design has for the most part regressed since the NES/SNES era, focusing too much on flashy effects at the expense of gameplay, and dominated by one crappy genre (FPS). The only 3-D games I ever found worth playing were the Zelda series.

  • Anatomical? (Score:3, Insightful)

    by girlintraining ( 1395911 ) on Thursday March 08, 2012 @03:27PM (#39291931)
    When the article's authors have shoehorned a word so obviously not related to the subject matter into the subject line, and then go on to repeat it over and over again, only one of two things can be true:

    1. There were no better words in the dictionary, and rather than taking the sensible approach of creating a new one, they opened to a page at random, stuck their finger on it, and started using whatever their finger touched.

    2. Author was trying to sound trendy and interesting.

    As a footnote, salahamada is a made-up word waiting patiently for its debut. Give it a little love?
    • by Pope ( 17780 )

      And a salahamada to you as well on this glorious day!

    • I interpreted "human anatomical peaks" in the sense that we have anatomically-caused limits of visual resolution and color that we can perceive. The "peaks" part may also communicate that some of us see better than others (*adjusts eyeglasses*). The overall limit or "peak" is directly related to the scale of our bodies and how our eyes are put together. The phrase is a slightly unusual shorthand in this summary (of course I didn't read the article), but it makes sense to me.
  • Ha (Score:2, Informative)

    by Anonymous Coward

    Tim's explanations of first- and second- and third-order approximations are somewhat bizarro. Unreal doesn't use second-bounce in its lighting. All game engines are first-bounce only unless they contain some realtime radiosity simulation, and very very few do. This has been true since Wolf 3D and is true today.

    And once you have a system for second-bounce, third- and fourth-bounce can be trivially computed (over multiple frames if need be), and the results are hardly different to second-bounce.

    I wish I kn
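The AC's "further bounces are cheap once you have one" point can be sketched abstractly: if one bounce is a transport step applied to the current radiance, extra bounces are just repeated applications, each contributing geometrically less, and they can be amortized one per frame. A toy 1D version, with the "scene" entirely invented for illustration:

```python
# Toy multi-bounce accumulation: treat "one bounce" as a linear transport
# step and apply it repeatedly, adding each bounce's contribution.

def one_bounce(radiance, reflectance=0.5):
    """One transport step: each patch receives half of its neighbors' light."""
    n = len(radiance)
    out = [0.0] * n
    for i in range(n):
        left = radiance[i - 1] if i > 0 else 0.0
        right = radiance[i + 1] if i < n - 1 else 0.0
        out[i] = reflectance * 0.5 * (left + right)
    return out

emission = [0.0, 0.0, 10.0, 0.0, 0.0]   # a single light in the middle
total = emission[:]                      # direct light
bounce = emission
for k in range(2, 6):                    # bounces 2..5, e.g. one per frame
    bounce = one_bounce(bounce)
    total = [t + b for t, b in zip(total, bounce)]
    print(f"after bounce {k}: max added {max(bounce):.3f}")
# Contributions shrink geometrically, so the 3rd and 4th bounces barely
# change the image -- matching the AC's "hardly different" observation.
```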

  • Anatomical Peaks (Score:5, Insightful)

    by Archangel Michael ( 180766 ) on Thursday March 08, 2012 @03:36PM (#39292083) Journal

However detailed the layers and textures get, it is going to be offset by what is known as the "uncanny valley". There is a point at which the realism is flawed because it looks too real for the context.

    I'm even starting to see uncanny valley on magazine covergirls after they've been photoshopped till they are almost unrecognizable. There is a point where you stop fixing flaws and start making them.

    • It's not so much about being too perfect, but more closely related to what I call "uneven reality". If one aspect seems less real than the rest, like for example picture-perfect facial detail but choppy motion, that tends to trigger the uncanny valley response. It's our brain going "I recognize this as human, but something is very very wrong with them". The same can occur with sounds, as if a non-realistic image or machine is paired with a human voice, we perceive it as a disembodied human, which can be

  • What about AI? (Score:5, Insightful)

    by i_ate_god ( 899684 ) on Thursday March 08, 2012 @03:37PM (#39292105)

    Everyone talks about how far we can push graphics.

    But what about pushing the AI?
    What about procedural generation of the game?
What about vastly improved physics, including a destructible world?

    I'd rather see these things pushing hardware development than how many polygons you can crunch in a second.

I completely agree that there are more important things to gaming than pretty graphics. I'd love to see self-generating environments, NPCs with voice acting, and even generated quests.

      Skyrim (and perhaps other games) tried implementing a random element to quests where the quest line remains static but the location of items is dynamic. But this still falls far short, and after visiting the same cave a couple of times it makes no difference if I need to find item X or Y in there.

      • by Sir_Sri ( 199544 )

Self-generating environments are largely a graphics problem. Probably you'll need a lot of science to keep the workflow for artists at that level from being unmanageable. You'll have to algorithmically generate characters, faces, etc.; an artist will just set the parameters for the particular character they want and then tweak a little bit for uniqueness/flavour. And yes, there's some AI in making sure your core game logic will support whatever is generated.

        Real time speech synthesis might be interesting, th

    • I'm not interested in two thousand times the processing power if it's still the same crap gameplay. I still like some of the vintage games with chunky block graphics because they're fun to play.
Humans are visually stimulated, men even more so than women. Do you get it now?
    • by MobyDisk ( 75490 )

      Unfortunately, many people stopped pushing AI when games went online. Single player games seem to use poor dumb AI + scripted actions.

    • Turing Test (Score:5, Funny)

      by DarthVain ( 724186 ) on Thursday March 08, 2012 @05:27PM (#39293641)

      Best test for gaming AI...

      Some day Xbox or PS will sell a multiplayer game, but simulate all the other players. When no one notices, I will say we have achieved the pinnacle of gaming AI. Of course we may have to train an AI to chronically swear and make racist slurs, but if that is what progress takes so be it!

I enjoy reading the responses from armchair know-it-alls who seem to think Sweeney is some sort of lightweight when it comes to knowledge about rendering.
  • by Twinbee ( 767046 ) on Thursday March 08, 2012 @03:46PM (#39292249)
I think 2000x GPU power very much underestimates what's needed, for a number of reasons:

    1: Raytracing / global illumination. In comparison to games with true global illumination [skytopia.com], current technology 3D worlds with only direct illumination (or scanline rendering) look crude and unconvincing. Objects appear 'cookie-cutter' like and colours tend not to gel with the overall 3D landscape.

Toy Story 3 took around 7 hours to render each frame [wired.com]. To render in real-time for a video game (say 60 FPS), you would need a processor around 1.5 million times faster than what we have today. And AFAIK, that's mostly using Reyes rendering (which incorporates mostly rasterization techniques with only minimal ray tracing).

    2: Worlds made of atoms, voxels or points. This makes a world of difference for both the user and the designer. Walls can be broken through realistically, water can flow properly, and explosions will eat away at the scenery.

    2000x? Pah, try 2 TRILLION as a starting point.
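For what it's worth, the render-time arithmetic above checks out, using only the comment's own 7-hours-per-frame figure and a 60 FPS target:

```python
# Checking the Toy Story 3 render-time arithmetic (the 7 hours/frame
# figure is from the comment; the rest is unit conversion).
seconds_per_offline_frame = 7 * 3600      # 7 hours
target_fps = 60
frame_budget = 1 / target_fps             # seconds available per game frame

speedup_needed = seconds_per_offline_frame / frame_budget
print(f"{speedup_needed:,.0f}x")          # ~1,512,000x: "around 1.5 million"
```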
    • by Anonymous Coward

      Exponential improvement in technology is the historical norm, yet it can still be difficult to fathom.
      2000X should be achievable by 2024, at 2x improvement per year; or by 2029 at 2X every 18 months.
      Some of us should see 2 trillion-fold improvement in about 40+ years at 2X per year; or by 2075 at 2X every 18 months.
      Barring the occurrence of any variety of manmade and natural disasters, of course.
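The AC's timelines can be reproduced directly from doubling counts; the 2012 baseline is assumed to match the story's date:

```python
import math

# Reproducing the AC's timeline arithmetic from doubling counts.
for factor in (2_000, 2_000_000_000_000):
    doublings = math.log2(factor)         # ~11 for 2000x, ~41 for 2 trillion
    for period_years, label in ((1.0, "2x per year"),
                                (1.5, "2x per 18 months")):
        year = 2012 + doublings * period_years
        print(f"{factor:,}x at {label}: ~{year:.0f}")
# ~2023/2028 for 2000x and ~2053/2073 for the 2 trillion-fold case --
# close to the post's 2024/2029 and "40+ years"/2075 figures.
```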

    • by Sir_Sri ( 199544 )

Keep in mind the resolution he targeted as well: 8000x4000. Toy Story (or any movie) is aiming for a screen 20 metres across at a somewhat higher resolution, and they can just brute-force it, because, well, they have the CPU time.

Don't underestimate the value of dedicated hardware either. Pixar is probably stuck using variants of the same hardware the rest of us use, which isn't great for ray tracing (GPUs don't play nice with out-of-order memory, and CPUs have shitty floating point performance). Hardwar

  • Okay, I like the sound of this. Get me four of those graphics cards so I can SLI the tits out of my hydro bill.

    Sure, there is only so much data my eyeballs can process, but larger displays do serve a purpose. For example, I would love to have a 4k projector shooting at my wall, instead of two 27" monitors. Actually, I'd like two, stacked on top of each other. Why ? Because then my wall becomes a giant display surface. Even right now, I can't really mentally process the entirety of my pixel space at on

  • Eh.. (Score:5, Interesting)

    by Quiet_Desperation ( 858215 ) on Thursday March 08, 2012 @04:04PM (#39292513)

    To me the real problem is focusing on the wrong details. Take Skyrim for example. Is it really a big thing if they, say, tripled the detail on the existing characters? Do the NPCs need pores or drops of sweat?

Or would it be more interesting to walk into Whiterun and there are 100 NPCs walking around, or you assault a fort with the Stormcloaks and there are 100 other soldiers at your side attacking the 100 Imperials in the fort, with clouds of arrows raining down [nice knowing ya, shieldless dual wielders :-) ]? It's a "more detailed objects" versus "more objects in the world" sort of argument, I guess. I'd rather see the power applied to "more objects" at this point, IMHO.

I think the main challenge is the interaction between player and environment. In something like MW3 that is limited to blowing up the odd chicken, window or set piece designed into the game.

    I want to swish my ( virtual ) hand through a river and see ( and feel ) the water flow around it.

Any true physics model would require awesome CPU capacity. We have at one end Minecraft (where the atoms are decidedly blocky) and at the other Second Life (where behavioural programs can be attached to objects).

    My dream would be a virtual u

  • by roemcke ( 612429 ) on Thursday March 08, 2012 @04:14PM (#39292679)

By using eye tracking, we don't really need to render the whole screen at high resolution.
We only need to render, at high resolution, the part the eyes are looking at.

The eye's ability to perceive high resolution is limited to a very small area, and the brain fakes the rest by moving the eyes around.
By superimposing a small image with high dpi on top of a larger image with low dpi, we get a high-resolution window into the larger image.
If this high-res window follows the eyes around, the brain will perceive a large high-resolution image.

Naturally, for this to work, the smaller image has to be updated to show the same part of the scene that it is replacing.

This can also be used to emulate a high-resolution screen by keeping an area of your screen black, and using a projector to project the smaller high-dpi image onto the black area.

    Oh, and by the way. Remember this post and use it as prior art in case some troll patents "A method of simulating high resolution images by combining multiple images of different scales and resolution"
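A minimal sketch of the compositing step being described; the render_view() stand-in, resolutions, and inset size are all invented for illustration:

```python
import numpy as np

H, W = 1080, 1920          # display size (assumed)
INSET = 256                # size of the high-res foveal window (assumed)

def render_view(h, w):
    """Stand-in for a renderer: returns an h x w RGB image."""
    return np.random.rand(h, w, 3).astype(np.float32)

def foveated_frame(gaze_x, gaze_y):
    # Cheap pass: render at quarter resolution, upscale by pixel repetition.
    low = render_view(H // 4, W // 4)
    frame = low.repeat(4, axis=0).repeat(4, axis=1)

    # Expensive pass: full resolution only inside the small foveal window,
    # re-rendering the same part of the scene it replaces.
    x0 = np.clip(gaze_x - INSET // 2, 0, W - INSET)
    y0 = np.clip(gaze_y - INSET // 2, 0, H - INSET)
    frame[y0:y0 + INSET, x0:x0 + INSET] = render_view(INSET, INSET)
    return frame

frame = foveated_frame(gaze_x=960, gaze_y=540)
# Fraction of the frame shaded at full resolution vs. a naive render:
print(f"{INSET * INSET / (H * W):.1%} of the frame shaded at full resolution")
```

With these numbers only about 3% of the pixels get the expensive treatment, which is the whole appeal of the scheme; the hard part in practice is that the inset must track the eye with very low latency.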

  • by brainzach ( 2032950 ) on Thursday March 08, 2012 @04:20PM (#39292771)

    Good enough does not mean that you have to match the anatomical limits of human perception. That is asking for perfection.

Unless the increase in graphics performance leads to radical new ways of gaming, current GPU performance is good enough.

It is not like it was 10 years ago, when 3D graphics opened the door to new types of gameplay, like the creation of Grand Theft Auto III. Now 3D gaming has matured, and there aren't any more frontiers to discover other than just better graphics, which are marginal improvements.

    I am thinking innovation will come from input devices like touch screens, Kinects or another technology that no one has thought of yet.
