Games Entertainment

Distributed Playstation (303 comments)

withinavoid writes "News.com has a story up about the next-generation PlayStation 3. Apparently the game developers are asking for a 1,000-fold performance increase, and since that's just not possible, they are looking at distributed computing as a possibility."
This discussion has been archived. No new comments can be posted.

  • Tekken 4 (Score:1, Funny)

    by A.Soze ( 158837 )
    Does this mean it will take Tekken 4 even longer to come out?
  • by Indras ( 515472 ) on Friday March 22, 2002 @10:22AM (#3206843)
    See here, if you don't believe me: http://www.misinformer.com/archive/01-01/15.html [misinformer.com]
  • Distributed (Score:2, Interesting)

    by yatest5 ( 455123 )
    is obviously the way forward. They can link them through the new Wireless networks that are propagating all over the US - won't that be an exciting prospect!
  • If they put some thought into it they could get what they want... Modularity. Make it possible for me to buy video RAM and system RAM, make add-ons available. Granted, that makes the "all systems are the same" assumption for the programmers go out the window... but if I had the ability to switch from the standard video chipset to the ultra-insane-fast $399.95 video upgrade option that adds the physics module for jiggly breasts to the fighters in Virtua Fighter 73, then dammit I'll buy it!

    By the time they even get the spec done the technology will be there... (SMP for the playstation 3... that would be a no-brainer to really jack the performance in there.)
    • If you want modularity, just buy a PC.

      Why does everyone want to turn game consoles into PCs? I enjoy the simplicity of the modern console game; just pop in a cartridge or CD, and play. That's it. No sysfiles to configure, no add-ons to buy (at least none necessary to play most games; the N64 had a memory upgrade needed for certain games, most notably the latest Zelda release).

      I just wanna play dammit!


    • Distributed consoles? No. I can believe we'd have a distributed OS, a distributed computer, but those machines could never be used for games.
    • Modularity is exactly what you don't want in a console. The appeal for the consumer and the developer is common ground. If you want to play game X on console Y, all you need is game X and console Y. You do not need to upgrade your memory, video card, etc. if you are the consumer, and you don't need to code for every possible configuration if you're the developer.

      Look at peripherals for consoles. The only highly successful add-on for a console is basically the memory card (and probably also the Sony Dual Shock controller for the PSOne). Other peripherals don't have anywhere near the market penetration to make it worth coding for. For examples of this, look how many 4-player games came out for the Dreamcast, N64 and GameCube (which had 4-player support built in) versus the PSOne/PS2 (which required an add-on). Because of this, modularity is not going to happen.

      but if I had the ability to switch from the standard video chipset to the ultra-insane-fast $399.95 video upgrade option that adds the physics module for jiggly breasts to the fighters in Virtua Fighter 73, then dammit I'll buy it!

      You might, but history shows that most people wouldn't. Even games that required a measly memory upgrade on the N64 didn't sell nearly as well as games that did not. People (read: parents) don't want to have to figure out what needs to go with a game to play it. They want to know what system the game is on, and if their kid has that system.
  • Atari (Score:3, Funny)

    by Decimal ( 154606 ) on Friday March 22, 2002 @10:26AM (#3206863) Homepage Journal
    Distributed console computing... interesting! Does anybody have a link on how I can hook up my Atari 2600 consoles for something like this?

    Just think how much faster PONG will run!
  • Fantastic! (Score:4, Funny)

    by BoBaBrain ( 215786 ) on Friday March 22, 2002 @10:26AM (#3206864)
    Does this mean we can look forward to playing "CORBA Command"?
  • Does that mean that if no one else happens to be playing their PlayStation 3 at the same time you are, you can't play at all, or the game will look and play terribly?

    I'm not so sure this is a great idea.

  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 22, 2002 @10:26AM (#3206866) Homepage Journal
    Apparently the game developers are asking for a 1000 times performance increase...

    Developers: We want a 1000 times speed increase

    Sony: Would you settle for a press release containing a bunch of buzzwords?

    Developers: Which ones?

    Sony: Let me think: "distributed computing", "biotechnology", "linux", "grid computing" and "Moore's Law"

    Developers: OK, if you throw in some hookers at the next Comdex in Vegas

    Sony: Deal
    • by Anonymous Coward

      Developers: OK, if you throw in some hookers at the next Comdex in Vegas

      It would have been funnier if you said "more hookers", or "midget hookers", or "hookers with donkeys". The base hookers is a given.

    • by RFC959 ( 121594 ) on Friday March 22, 2002 @12:54PM (#3207768) Journal
      Developers: We want a 1000 times speed increase
      Marketing guys: Yeah, and we want a 1000 times raise and an office 1000 times as big. Get bent.

      Hardware guys: *kick the developers in the nerts* Give us games 1000 times better and we'll think about it, you pathetic freaks.

  • Kewl .... (Score:2, Interesting)

    by fr0zen ( 528778 )
    This sounds great... ermmm, but how are they going to implement that?

    Previous experience with an 8x P3 cluster rendering via PVM over 100BaseT already shows signs of message-passing problems; I guess they won't be using it for rendering then ;)

    But it's still interesting to think about what the Internet could offer them... I think they're overzealous on the news of SETI over-performing, hehe...

    Maybe the distributed net might be "neighbourhood"-area rather than Internet-based, hehe...

    I really hope they'll lower their prices no matter what they come up with... but as we all know, Sony was never known for being a "cheap" brand ;(
  • Sony Hype Machine (Score:5, Informative)

    by Spankophile ( 78098 ) on Friday March 22, 2002 @10:27AM (#3206874) Homepage
    Hmm, the last time Sony announced their "Next Generation Console" (aka PS2), it was still over a year away, while the PS1 was still selling well. The real kicker (aka purpose) though was that it was enough to keep a large number of gamers away from the DreamCast, which was a great system.

    It would appear to me that hyping a PS3 while the PS2 is still selling strong would be an attempt at keeping people from getting an Xbox or GameCube.
    • Just like Microsoft leaking plans for the XBox2, like _three_ months after the Xbox debut. (link: http://www.theinquirer.net/21030211.htm and http://www.theinquirer.net/18030204.htm)

      You're just seeing the market in action. Calculated pre-release information helps keep people talking, and in some cases can't be helped.
  • by skippy5066 ( 563917 ) on Friday March 22, 2002 @10:28AM (#3206878)
    Forgive me if my math is off, but if Moore's law states that processing power roughly doubles every 18 months, wouldn't a 1000-fold increase occur in about 15 years?
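
    Rough arithmetic, in case anyone wants to check (a quick Python sketch, assuming a clean doubling every 18 months):

      import math

      doublings = math.log2(1000)   # ~9.97 doublings needed for a 1000x increase
      years = doublings * 1.5       # one doubling every 18 months
      print(round(doublings, 1), "doublings, roughly", round(years, 1), "years")
      # prints: 10.0 doublings, roughly 14.9 years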

    Maybe they're designing the next generation Playstation on a Pentium machine. Did they ever fix that bug?


    -Jeff
    • Actual performance of machines improves much faster than Moore's "Law" * would predict. Moore's Law really only applies to how fast you can flip a series of logic gates back and forth. The rest of the improvement comes from research into things like better algorithms, better processor design, faster buses, etc.

      *"Moore's Interesting Trend" would be more technically correct.
  • How would this work? (Score:5, Interesting)

    by martinmcc ( 214402 ) on Friday March 22, 2002 @10:30AM (#3206892) Homepage
    It's always good to see technology being pushed, but I really can't see the need for 1000 times the power for games. There is so much untapped power in the current generation of consoles at the minute - compare early PlayStation games with the most recent, Tekken 1 compared to Tekken 3 for example. In a year or so's time, when developers have much more experience with the hardware, I expect to see the same sort of leap. Developers wanting the hardware sped up so much just sounds like laziness to me.

    My other concern is how they would achieve the distributed network. The thing I like about my consoles is that I stick in the disc/cartridge and play, no pissing around. I hack on my PC, I play on my console, and that's the way I like it. If I have to start configuring and debugging (which, as things get more complex, is bound to happen) then the whole reason for the console goes out the window.
    • What this guy said.^

      Yeah, some machine in Japan is going to render my video for me and get it back to my box quick enough so I have more than 5 FPS? I don't know what the hell this article is trying to achieve, but it isn't accuracy.

      Distributed systems work on problems like Seti or cracking codes, not real-time game playing! 1000 times power? That would put the PS3 onto the supercomputer 500 list, wouldn't it? Or are they talking 1000 times graphics power- that would be enough for photorealistic VR, for God's sake. (Excuse me if I'm wrong, but my bullshit meter started hitting 100 around the 2nd paragraph and I couldn't make myself finish the article)

      "We can't wait for Moore's Law"- then make it multiprocessor, jackass. If, for some stupid reason, they did make the thing distributed, you'll have millions of people/machines trying to do work on everyone else's machines, meaning it will be just as slow + all of the network latency. Either these people are dumb as shit (which I doubt) or they are just trying to bang the drum a bit for sales.
    • The distributed processing could only be used for certain things. It couldn't be used for rendering graphics, for example, because for every machine you connect you have to display more graphics.

      What this power *could* be used for is shared world processing. In all current games, if something is offscreen it either doesn't exist or it exists in a "frozen" state. If you have a thousand machines, each one can handle a little piece of the background processing and hand the information off to anyone who needs it. It can add quite a bit of realism when the rest of the world keeps working normally even when you aren't looking at it.

      It can be a problem if anyone mods their game, though. Not only could they cheat, but they could potentially wreak havoc for every connected player. Probably one of the best ways to deal with this is to do all calculations on 2 or 3 different machines and make sure the results match. This also helps prevent issues when some machines disconnect without warning.
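
      A minimal sketch of that cross-checking idea, in Python (all the names here are hypothetical, nothing Sony has actually described):

        from collections import Counter

        def simulate_region(seed):
            # stand-in for the real background-world calculation a console would run
            return (seed * 31337) % 65521

        def replicated_result(work_unit, peers, quorum=2):
            # run the same work unit on several peers and accept the majority answer
            results = [peer(work_unit) for peer in peers]
            value, votes = Counter(results).most_common(1)[0]
            if votes < quorum:
                raise RuntimeError("peers disagree - reassign the work unit")
            return value

        honest = simulate_region
        cheater = lambda work_unit: 42    # a modded console returning junk
        print(replicated_result(7, [honest, honest, cheater]))   # the two honest answers win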

      -
  • by s4m7 ( 519684 ) on Friday March 22, 2002 @10:31AM (#3206901) Homepage
    I've heard way too much talk about consumer applications of distributed computing lately. The trick is they are not really consumer apps at all, but merely a front. If you have a distributed network with an installed base of 10 Million machines, that's a lot of idle time you can sell off to other companies. And you can bet that that's exactly what Sony has in mind.

    Think about it: the memory bandwidth on the PS2 is at least twice as fast as the FASTEST network connections available. That's to say nothing of the pitiful 2-3Mb cable pipe that is available to most people.

    The graphics are what the developers want to see the improvement in. I don't think you are going to see any improvement in performance in this area brought about by distributed computing. If it's possible, I'm really curious as to how.
    • I don't get it.

      How are they going to sell these cycles to other companies? Especially since these cycles would not be Sony's to sell - but mine.

      And if the memory bandwidth is so great, yet the network bandwidth is so pitiful - how does that make it a suitable topology for selling distributed bandwidth/cycles?

      • I think the intent of the distributed network is to handle background events in a large shared world. I describe it in this post. [slashdot.org]

        On the other hand, when someone is playing a multiplayer card game, for example, 99.9% of the cycles and the network connection go to complete waste. It would make sense to use the excess to sell distributed processing. It could easily make the game network a free service rather than a pay-for-play service. Heck, they could potentially let you earn credit of some sort. Leave your PS3 hooked up to the network during your vacation and come home to a pre-release coupon: 10% off on a hot new game - and you get to start playing it 2 weeks before it is even available in stores.

        how are they going to sell these cycles to other companies? especially since these cycles would not be sony's to sell - but mine.

        Whenever you play a game they have full control of your cycles. Whenever you connect to their network they have full control over all the data you upload and download. They could do anything they like without telling you, but it would probably be safer for them to include some wording in the licence about it - "by connecting to our game network you agree to receive, process, and transmit distributed network data".

        if the memory bandwidth is so great, yet the network bandwidth is so pitiful - how does that make it a suitable topology for distributed sellable bandwidth/cycles?

        For graphics you are moving huge amounts of data, and you have to update the screen many times a second. A distributed network is useless for this kind of data.

        For some projects you only need to send/receive a few hundred or a few thousand bytes, but they can take an hour to process - SETI signal analysis or molecular protein folding, for example. Distributed networks are great for these kinds of problems.
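
        To put rough numbers on that contrast (the figures below are illustrative, not measured):

          frame_bytes = 2 * 1024 * 1024        # data a renderer might need per frame
          frames_per_second = 60
          realtime = frame_bytes * frames_per_second   # bytes needed per second of work

          workunit_bytes = 350 * 1024          # a SETI-style work unit
          workunit_seconds = 10 * 3600         # hours of crunching per unit
          batch = workunit_bytes / workunit_seconds

          print("real-time rendering:", realtime, "bytes per second of compute")
          print("batch analysis:     ", round(batch), "bytes per second of compute")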

        -
  • by Ami Ganguli ( 921 ) on Friday March 22, 2002 @10:33AM (#3206906) Homepage

    Unless I'm misunderstanding something about the article, this makes no sense at all. Rendering a video game isn't nearly the same kind of workload as rendering a movie. The former requires low-latency, whereas the latter can be farmed out and done in batches.

    There's no way you're going to get a 1000x performance boost by distributing a video game over the Internet.

    I would bet that the real idea is to build in support for distributed multi-player games, and somewhere between the engineers and the marketroids things got horribly twisted.

    • by iamr00t ( 453048 ) on Friday March 22, 2002 @11:47AM (#3207351) Journal
      Basically, we all know that it was hard for developers to make games for the PS2 initially.
      The reason was the Emotion Engine in the PS2, which is explicitly multithreaded, i.e. you have to make your program use all the threads yourself (unlike a PPro, for example, where the CPU does it for you).

      It's really a whole new way to program.

      Now that Sony seems to have convinced some developers to learn it, there's nothing stopping them from making more threads (there are 16 in the Emotion Engine, if I am not mistaken).

      Oh, and it has nothing to do with distributed computing over the Internet. The application architecture is similar, but that's it. And yeah, no batches here :)

      As for IBM's involvement, here is the article in Wired Magazine about their cell computer [wired.com]

      Oh, and another one about the PS2 and PS3 [wired.com]; that one is quite old, but explains where Sony is going.
      • by voronoi++ ( 208553 ) on Friday March 22, 2002 @02:41PM (#3208583)
        Err, the reasons the PS2 is hard to work with is:

        1) Poor tools
        2) Arcane DMA alignment issues
        3) Misguided selection of VU integer instructions
        (no imul, you have to jump through hoops to do xor, only 16-bit, yet the flags are in the upper 16 bits)
        4) Hard to single-step the VUs
        5) Very limited blending modes in the GS
        6) Very limited iterator precision in the GS
        7) No hardware clipping
        8) Weird GS rules where rendering horizontal triangles is much slower than large vertical ones
        9) Non-perspective-correct iterated vertex colour
        10) Limited vram
        11) 1.0 * 1.0 == 0.999999999 in the VUs

        I.e. it's cheap and flawed (but hey that's a challenge and some people seem to like it)

        Then you go

        "It's really a whole new way to program."

        You are not a programmer are you? Didn't think so.

        "Now it seems that Sony convinced some developers to lean it there's nothing stopping them from making more threads (there are 16 in Emotion if I am not mistaken)."

        What are you talking about? I can't possibly imagine why you would want 16 threads in a PS2 game.
        Generally, CPU multithreading is quite costly. Most games are written to run at a solid 60fps*, so you can often get away without multithreading
        stuff like the AI, the renderer or some trickle loader.

        BTW, the only console that really had seriously multithreaded games was the N64.

        * Due to field rendering you have 2x the fillrate (and more vram) at 60fps than at 30fps. Dropping to 30fps is bad!

    • It makes sense to me.

      Don't just think distributed graphics processing, think distributed storage, and distributed AI.

      This would allow for P2P massively multiplayer RPGs.
      Worlds could span and grow endlessly, as you could download details of the landscape from the people who virtually hang out in said landscape.

      This amounts to a nearly infinite amount of storage for creating huge, complex, and detailed worlds. Of course the problem would be synching so everyone sees the same world, but some games might not require as much synching as others.
      These online worlds would be rapidly evolving.

      AI NPCs could evolve by learning how other users play, and by learning from other AIs they meet traveling from PlayStation 3 to PlayStation 3.

      Basically we are talking about a gigantic computer on which to run genetic-type algorithms, allowing for worlds that might actually grow in depth and realism over time.
      The possibilities are mind-boggling, really.

      Compared to today's MMORPGs, I would say such an advancement would open up the possibilities for video games at least 1000x.
    • IMHO this makes sense on the server side for massive on-line role playing games. But they could also make reference here to a sort of P2P equivalent to share information about what is going on in the virtual world.

      Or of course they could also be talking about having more than one CPU, like on an SMP box. A bit like the GPU is taking load off the CPU...

      Who knows, it might even just be a marketing ploy.

  • Why don't they just dip their nuts in the PC parts bin like MS? I don't see how they can compete when they have to put so much into R&D. If they used Linux, it would be *that* much better. Talk about the killer app.

    Remember, OpenGL only has the support it does at this point because of a video game.
  • Distributed computing would be impossible unless everyone had fibre optics; there's just too much latency.
    • How exactly is fibre optics going to reduce latency?

      Perhaps it'll make more bits travel across the line in a given time period, or allow longer distance runs, but it certainly won't reduce latency.
      • Actually, it will, because the speed of light through a fibre is faster than the speed of electricity through a wire.

        If somebody made a fully optical switch... :D
  • Wasn't the original intent of TAO [tao-group.com] to create an OS that ran in a distributed, heterogeneous (running on any mixture of processors) environment? This is now being used as the basis of the new Amiga OS, and by various mobile phone companies, as it allows them to use any chip (Motorola, ARM, etc.) without having to re-port their software (with the plus of having a very compact JVM on top of it). Did they continue the multi-processing aspect of their OS, or was it lost over the past few years in 'refocussing'? If Sony do go the Linux route and pour a lot of money into creating a parallel-processing set of libraries, it will be amusingly ironic to then use them to DivX one of their DVDs ultra-fast...

    Phillip.
  • Dumb (Score:5, Insightful)

    by rho ( 6063 ) on Friday March 22, 2002 @10:39AM (#3206930) Journal

    I got as far as "maybe the Playstation 6 or 7 will be based on biotechnology", or some such garbage.

    Please. This story is nothing more than a trumped-up press release targeted at the Xbox and GameCube in an attempt to either 1) slow their sales or 2) engender positive mindshare for the Playstation.

    Distributed computing? In other words, "imagine a Beowulf cluster of these..."

    • "maybe the Playstation 6 or 7 will be based on biotechnology", or some such garbage.

      Yeah. My reaction was "maybe the Playstation 8 or 9 will be based on warp field technology".

      I'll be there in a minute mom! I'm remodulating my PS9!

      -
  • by dimer0 ( 461593 ) on Friday March 22, 2002 @10:39AM (#3206932)
    If you wish to run Gran Turismo 4 in full resolution with highest graphics settings, 4 Playstation 3's are recommended.

    Well, I guess if they're rack-mountable, I'm game. Bring it on.
    • It's funny you should say that... When I got GT1 there was talk GT2 was going to support the Link Cable. I wrote a couple of gaming gurus about whether or not the second PSX could be used to handle a few extra cars but no human, bringing the 1-player opponent count up.

      I never got an answer, but I never heard any solid argument against that idea. I was hoping GT3, with its link capabilities (6-player or 6-monitor solo games) would make this dream a reality, but alas!

      GTRacer
      - Now if I could just get my Japanese PS2 to link with the U.S. one...

  • But how? (Score:2, Interesting)

    by lennygrafix ( 180262 )
    Can anybody explain to me how one could use distributed computing for real-time stuff?

    I mean, I can imagine (being NO expert) that distributing all the data, waiting for it to be processed and sent back, takes more time than actually doing it yourself... in such a case.

    or am I wrong...?
  • by jdavidb ( 449077 ) on Friday March 22, 2002 @10:45AM (#3206962) Homepage Journal

    That sounds like a practical solution. I'll just buy a beowulf cluster of PS3's and ...

    (Do these guys think I can offload the processing for my games to someone else's PS3? Won't that PS3 be busy trying to run someone else's game?)

    • I'll just buy a beowulf cluster of PS3's and ...

      With that thought in mind, maybe the idea is to have the consumers buy more than one PS3 and install them in a rack. Or maybe have in-box rack space to add in extra motherboards for multiple PS3s. With a custom bus/interconnect they could have fairly high bandwidth for distribution.
      Then you have distributed computing right in your own home. Just add more PS3s until your performance reaches tolerable levels, different for each game. Sony sells many more PS3s, multiple to each customer. What a marketing plan!

      • Until you consider the economics of such a situation...

        Isn't it the case that Sony/MS/Nintendo sell the hardware at what amounts to a net loss? Don't they only begin to make money once the user buys 'X' number of games, revenue derived from licencing deals, etc?

        If their plan is to have the user buy multiple platforms to be used in unison, then they had better figure out a way to manufacture these boxes at a dramatically lower cost per unit.

        From the consumer perspective, I can justify spending $300 to get the latest and greatest console platform, but having to shell out an additional $300 to get decent performance for my $60 Game Title? Or having to buy 4 $300 units?

        This is a business plan that is doomed to failure.

        Sony and Nintendo would indeed be wise to borrow Microsoft's idea, and assemble platforms based on mostly commodity hardware. Farm out the R&D to people like NVidia or ATI for graphics accelerators and bus architectures.

        This would allow the title developers to create games that could conceivably run on 3 platforms, PLUS PCs, with only minor differences.

        Seems to me that would have a lot of appeal, for consumers and developers.

        Increasing performance by a multiple of 1000 is ridiculous in the span of a single generation. Doubling, tripling, or quadrupling? Maybe.

        NVidia now owns all of the 3dfx SLI patents... They could do some "distributed processing" inside the single chassis if they thought this was the way to go. Effectively doubling the number of graphics chips needed/used may allow them to gain some economies of scale to keep prices within reason, while boosting performance by factors of 2 or 4.

  • How about..... (Score:2, Interesting)

    by 8127972 ( 73495 )
    Linux support just like the PS2? It would be one hell of a cheap Beowulf cluster.

    (And you can run GT3 when you're bored!)

  • by Erich ( 151 ) on Friday March 22, 2002 @10:53AM (#3207001) Homepage Journal
    Generating fast traditional processors is getting harder and harder to do. Look how fast a P4 is compared to a P3 or a P2, in terms of actual performance per transistor count. It sucks. In fact, per transistor count, smaller, simpler chips (386) do better. Since most of the performance improvement in chips comes from process migration instead of architecture (386s would run a lot faster in a .13 micron process...) one idea is to put a bunch of simple processors on a single chip.

    There are several problems with this. Memory bandwidth, power consumption, etc... but the main one is that most normal applications are written for a single thread.

    Imagine how many MIPS 4K cores you can fit in 300mm^2 in 4-5 years. That's a lot of power. Sure, they might only run at 1-2Ghz, but there will be 64 of them on a die. If you can harness that power, it might give your game developers much of that huge performance boost they want.

    Think beowulf-cluster-on-a-chip. As with multiple-workstation distributed computing clusters, the trick is not in setting the thing up, but in figuring out how to distribute your work.
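
    A toy illustration of the easy case - perfectly independent work units spread across a pool of cores (plain Python, nothing PS3-specific). The moment those units need to talk to each other every frame, you hit the synchronisation problems the replies below describe:

      from multiprocessing import Pool

      def update_entity(entity_id):
          # stand-in for per-entity game work (AI, physics) with no shared state
          return entity_id, (entity_id * entity_id) % 251

      if __name__ == "__main__":
          with Pool(processes=8) as pool:       # pretend these are 8 of the 64 cores
              results = pool.map(update_entity, range(10000))
          print(len(results), "entities updated in parallel")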

    • Imagine how many MIPS 4K cores you can fit in 300mm^2 in 4-5 years.

      AAAAAAAH! This is my livelihood. 300mm^2 makes me scream. If you think the average consumer will be able to afford a game console that has a CPU that's 17mm on a side, I want your credit rating. Are you related to the guy whose name appears to be "Object of Envy" [slashdot.org]?

      Half that size isn't bad. If you had mentioned about cramming CPUs into 150mm^2 or even 100mm^2 (I think the Game Cube processor is below 50mm^2), that would have been more realistic.

      • Aren't the P4's 460mm^2 or something? And everyone's at 300mm wafers now, you can get more yield on bigger wafers...

        Anyway, regardless of actual chip area, the theory is the same... in general, several tiny processors can be better than a single big one.

    • but the main one is that most normal applications are written for a single thread.

      Sorry, but current video games do not fit your definition of a 'normal application'. The PS2 is actually a highly parallel machine. It is also quite different from any platform that game developers had ever programmed before. In fact Sony's delays in getting out a good set of programming tools to developers so that the PS2 could be fully utilized is a large part of the reason why it took so long for games to start coming out for it. GT3 is a bit of an exception, but that one game had to carry the console for quite a while...

      Perhaps a few years ago I would have accepted your argument, but not today...
      • The PS2 is a highly parallel machine, however it's explicitly ILP and DLP-parallel. So, you have DLP-type instructions (like MMX or AltiVec or whatnot) and you also have it executing several instructions per execution set (VLIW).

        This parallelism is typically very different from thread-level parallelism, as it isn't as easy to communicate over a network to another processor as it is to just pass things from one instruction to another through a register.

        However, there is interesting research into doing fun stuff with multiprocessors. So who knows what will happen...

    • Imagine how many MIPS 4K cores you can fit in 300mm^2 in 4-5 years. That's a lot of power. Sure, they might only run at 1-2Ghz, but there will be 64 of them on a die. If you can harness that power, it might give your game developers much of that huge performance boost they want.

      Two reasons why this doesn't work so well in practice:

      • Amdahl's Law.

        Short version: 64 processors is never 64x faster than 1 processor. Things usually start to suck at 4 or 8.

        Long version: As you try to parallelize to more processors, the time spent on serial (unparallelizable) parts of the task and on communication starts to limit your performance (some worked numbers at the end of this comment).

      • Communications bandwidth.

        Think about graphics cards for a minute. Remember how much arse two SLI-rigged Voodoo 2 cards kicked back in days of yore? Ever wonder why 3dfx didn't just put 16 of them on a chip a few years down the road?

        Evolution of graphics cards' feature sets is only part of the answer. The other reason is memory bandwidth.

        Most graphics cards are memory-bound under common conditions. Clocking the GPU twice as fast, or having two GPUs, would accomplish little if you still have to wait for texel fetches from card memory. This is especially bad if you're trying to build a 64-core rendering engine. To send all of that texture data and all of that triangle data you're going to need not just completely insane bandwidth, but 64 or *more* (if multitexturing) _independent_ memory channels, each with a silly amount of bandwidth.

        Texel caches on-die don't help you. They'll be too small to do any good. What you need is 1000-way interleaved DRAM on your console's board, and a bus running fast enough to radiate leakage in far-infrared to transfer all of that data.


      In summary, while multiple cores are a good idea, and help up to around 4 or 8 cores, a massively parallel on-chip solution won't help you for game rendering, because the working set is too large to be cached per-core.

      The only approach I've heard of that even *might* reduce this problem is the PixelFusion approach, and I strongly suspect that that smacks into memory bandwidth and Amdahl's Law problems too (it renders many pixels in a tile in parallel, but you still need texel information for all pixels for blending; z filtering doesn't save you if you still need 4 texels per pixel and don't have very bad overdraw).
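
      To make the Amdahl's Law point above concrete (the 95% parallel fraction is just an illustration):

        # Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n), p = fraction of the work that parallelizes
        def amdahl(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        for cores in (4, 8, 64):
            print(cores, "cores ->", round(amdahl(0.95, cores), 1), "x speedup with 95% parallel code")
        # 4 -> 3.5x, 8 -> 5.9x, 64 -> 15.4x: nowhere near 64x, and that ignores communication costs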
    • Well, IBM is thinking the same way. Their Blue Gene supercomputer will consist of 1 million processors. The plan apparently is to cram 32 simple CPUs onto one chip, 36 chips onto a board, 4 boards into a tower, and a few hundred towers in total. Someone else posted the link already, but I found an article describing it so interesting I'll post it again; here [wired.com] you go.
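
      The arithmetic in that plan roughly checks out (a quick sanity check using the figures as quoted):

        cpus_per_chip, chips_per_board, boards_per_tower = 32, 36, 4
        cpus_per_tower = cpus_per_chip * chips_per_board * boards_per_tower   # 4,608
        towers = 1_000_000 // cpus_per_tower + 1                              # ~218, i.e. "a few hundred"
        print(cpus_per_tower, "CPUs per tower;", towers, "towers for a million CPUs")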
  • Which of the following processes do you want to run in the background?

    (a) A search for extraterrestrial intelligence.
    (b) A search for Mersenne Primes.
    (c) A rendering engine allowing the geek next door to play Tekken with really, really good graphics.

    Take your time.
  • Give it enough umph to play a single player or maybe 2-person game, and whenever any more buddies than that come over to play, I'll tell them to bring their PS3 with them. I don't see that as being too unreasonable. Of course it would also mean more sales for Sony.
  • More power would be cool in the PS3. Something like having the GS as powerful as a GeForce3. (Can we get a "real" stencil buffer in the PS3?!)

    Ignoring power for the moment, less complexity would be even better. It's a real b!tch having to manage ** 6 ** CPUs in parallel !! (EE, VU0, VU1, GS, IOP, and SPU) Throw in DMA transfers on top of that and it's enough to make a person pull their hair out.
    • Every game console launch I have ever cared about has had a more powerful graphics engine than the top consumer gaming PC card of the day. I don't expect this to change with the PS3, so the question is how it will stack up against a GF5.
    • Ignoring power for the moment, less complexity would be even better. It's a real b!tch having to manage ** 6 ** CPUs in parallel !! (EE, VU0, VU1, GS, IOP, and SPU) Throw in DMA transfers on top of that and it's enough to make a person pull their hair out.

      I don't know - I have very little problem with it... I put in the disc, and press the power button. Then I follow the instructions on the screen. Maybe you need to make sure your controller is plugged in.

      (For the sarcasm impaired, my point is that internal complexity does not affect consumer use. Most game companies now say that, once you get the hang of it (and now that there are code libraries available), the PS2 isn't that hard to program for, and offers nice flexibility).

      --
      Evan

  • I find this especially interesting, because this article [xengamers.com] explains that Sony isn't even thinking about the PS3 yet.

    For those of you who don't wanna click the link, the relevant quote is:

    More intriguing however, when asked about the status of development on PlayStation 3, Kutaragi-san responded, "Nothing has been started yet."

    Yet another wonderful CNet SNAFU.

  • Yeah, riiiiight.

    Let's just say, for the sake of argument, that the hardware they manage to scrounge up (allowing Sony to keep the PS3 price in line with the initial retail price of the PS2) is 4x as powerful. So they still need to get 250x more power to satisfy the game developers. OK, fine. But distributed computing??

    The way I see it, some Sony brainchild figures that most PlayStations are left "off" most of the time. So why not use that time to let other people around the world use *their* PS at a higher level? While Japan is sleeping, US PS3s could be using the spare Japanese processing power to improve the gaming experience.

    Two problems:

    1. Bandwidth. How much, and how?? Are they assuming everyone has broadband and that the PS3s are always hooked into it?
    2. "Slow" times. I seriously doubt that PS3 use doesn't have spikes, even averaged worldwide. So some countries are going to be the losers, with limited or no boost from distributed computing.
    It's all moot, anyways. I'd need a MUCH better television (and stereo) to be able to appreciate 1000x more gaming power.
  • The idea is a nice one, but the way the article presents things is just silly. Here's a good example:

    Looking further ahead, Okamoto saw even bigger changes for Sony's game business. "Maybe the PlayStation 6 or 7 will be based on biotechnology," he said.

    Yeah, and maybe it will still suck compared to Nintendo. Listen, by the time they're at PS6 or 7, they had better be marketing a damn Holodeck.
  • Wow. Pay $500 for a console, only to have it spend half its time working on someone else's games... On the other hand, plug it into a 56k line and watch someone playing Quake 7 have a stroke as their FPS drops to 3.

    "Sir, looking at our usage statistics it seems that your Sony(tm) Playstation(tm) 3(tm) has not been doing its fair share of our distributed computing."
  • This article definitely forgot to include Nintendo's comments. So, to be fair to them, I'll include my favorite quote from Nintendo's president:
    "It's not about the hardware, you stupids, it's about the game."

    BTW, as the end of the article reveals, M$ did an "excellent" job of localizing their HW and SW. Omitting to do international research on things like controller sizes and text size in dialogs: wow, no need to have expensive offices in Tokyo and Europe to end up making such stupid mistakes. Not to mention that they could have simply copied the way Sony did it, as they had done for the rest.

    PPA. the girl next door.
  • 1000x performance increase? I think somebody is smoking crack. An equivalent would be like setting a goal to put a supercomputer on everybody's desktop.

    What in the world could they do with that much computing power? Holodeck?

    I think a 4x increase would be mind-blowing. Although, Bill Gates was once quoted as saying that 640k should be more than anybody will ever need.
  • Get rid of those hideous jaggies. Damn, it pisses me off that everyone wants to release games for the system that makes everything look the ugliest.

    Shorter load times couldn't hurt either...

    Seriously, I wonder what the heck they would do with distributed computing. Obviously, it's not going to give you any better graphics at all. Maybe in multiplayer games you could split up collision detection/physics work. Maybe this means they want to make p2p massively multiplayer games. Maybe they want to make insanely cool new AI systems.

    This could really kick ass...but it's probably just hype.

    BUT FIX THE DAMN JAGGIES FIRST!!!! ; )
  • Speaking at the Game Developers Conference (GDC), an annual trade show for the creative and technological sides of the game industry, Shin'ichi Okamoto, chief technical officer for Sony Computer Entertainment, said research efforts for the PlayStation 8 are focusing on neurological implants, a method for allowing the game player to control the game merely by thought.

    Okamoto said the method also appears to hold the most promise for dramatically boosting the performance of the PlayStation. Instead of being reliant on a hardware "processor", all game computations would be performed in the user's own cerebrum. Unfortunately, this means that game developers can not work on a strictly "fixed platform" basis any more, considering performance will greatly improve with intelligence.

    "I think we can easily overcome this barrier," Okamoto said, "Instead of hardware requirements as we see them now, we could instead have IQ requirements. Like, we would say that the minimum requirement for Gran Turismo XII is a high-school diploma, but we would recommend at least a college-level education to get any decent performance. But then for games like Resident Evil, well, any idiot could play that."

    The gaming industry was reeling with excitement by this announcement, and Okamoto was further pressed for details on how this technology would actually be implemented. After a few minutes of uncomfortable shuffling and avoiding eye contact, he eventually admitted that he was merely "making shit up".

  • Let's keep in mind that the PS3 is probably still some time away; 1000 times the performance is not as stupid as it sounds, only almost. There's also the question of what Mr. Okamoto (Sony's CTO in case you didn't read the article) really means by "performance" - CPU speed? HDD capacity? Screen resolution? Frames-per-sec? Or some mysterious combination of them all? Most likely, he was just trying to build up some hype - same as the fantasy that PS6/7 will be based on biotechnology. Yeah, right.

    Also, I don't see distributed computing as something which will be very useful for playing games; sure, with a high-speed link between several PS3's you might be able to fake SMP, but the games would have to be optimized for it, or the performance increase would be abysmal compared to the extra cost of having to buy two PS3's. You might as well just get yourself a PC and have a gaming rig that's easier to upgrade, runs a wider variety of apps, has a decent-resolution monitor and gives you a choice of what OS you want to run. Of course, the PS3 might have all this, but don't bet on it.

    Btw, I wonder what Pete Isensee (the Xbox developer guy) means by saying that Microsoft can't get stuff right until version three. Windows is WAY beyond 3.0, and there's still plenty of room for improvement (note the careful wording there).
  • PS9 (Score:3, Funny)

    by llamalicious ( 448215 ) on Friday March 22, 2002 @11:43AM (#3207336) Journal
    Come on... PS9 was already released, with the telepathic interface. Surely it's 1000x faster than PS2

    Don't any of you watch T.V. ????
  • "Looking further ahead, Okamoto saw even bigger changes for Sony's game business. "Maybe the PlayStation 6 or 7 will be based on biotechnology," he said."

    When they come to install Crash Bandicoot in my sternum, I am running the other way.

  • by asv108 ( 141455 )
    Saddam is behind this push for distributed computing; he bought 4,000 PS2s and realized that he couldn't cluster them together. [psxnation.com]
  • Uh huh. (Score:2, Insightful)

    This is a ridiculous hype-fest. The very fact that the guy followed up the distributed comment with some random buzzwordism about biological computing should tip you off.

    Here are some problems with a distributed gaming console that I can think of off the top of my head:

    - Latency: The main reason you'd want a lot of processor power in gaming is to calculate physics and graphics. This needs to be done on a damn-near-real time basis. No distributed computing network can provide this. High end clustering, maybe, but nobody is going to pay for multiple PlayStatia to play one game.

    - Availability: Sony KNOWS that they are making a device akin to a toaster. When you turn on the console you should be able to play your game. Without worrying about your network connection, whether your neighbor's microwave is disrupting the Super National Ultra Wireless Grid, etc.

    - Infrastructure: Don't even get me started. Sony would have to build millions of wireless POPs in a grid across the entire country. Or wire everyone's house when they buy a PlayStation.

    - System Load: Say the PS3 is 10x more powerful than the PS2. That means you still need 100 of them to reach the "1000x" figure they are blathering about. This means that if America has a million networked, always-on PS3s, only 1% of them can be in use at any given time. During peak hours this is probably not possible.
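
    Putting rough numbers on the latency and system-load points above (illustrative figures only):

      frame_budget_ms = 1000 / 60          # ~16.7 ms to produce each frame at 60 fps
      wan_round_trip_ms = 80               # an optimistic broadband round trip
      print("frame budget:", round(frame_budget_ms, 1), "ms vs. one network round trip:", wan_round_trip_ms, "ms")

      helpers_per_player = 100             # the "need 100 consoles per player" figure above
      active_share = 100 / (helpers_per_player + 1)
      print("at most", round(active_share, 1), "% of consoles could be playing at any one time")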

    In other words, this is dumb. Tell me if I'm wrong.
    Justin
  • Just to clarify.... (Score:2, Interesting)

    by aqu4fiend ( 528775 )

    ... As far as I can tell from the article, they're talking about *internally* making the PS3 a multi-processor system.


    They are looking into basing the architecture on some of IBM's research into distributed computing (specifically, something called grid computing [ibm.com]).


    They are *not* talking about *actual* distributed computing using the PS3 - this is purely about the internal design being based on a distributed model to get more performance.

    • I think you're wrong. I don't think they are even considering doing anything of the sort.

      Grid computing, as defined in that IBM article, implies geographic separation. Getting 1000x or even 100x of the PS2's processing power into the PS3 within 1 or 2 years is unrealistic. The price of the system simply does not allow it.

      Even with internal multiprocessing, you'd still need a huge number of processors.

      Justin
  • by Iron Chef Japan ( 531022 ) on Friday March 22, 2002 @12:18PM (#3207560) Homepage Journal
    Well, I was excited about all the Cell development and this PlayStation 2 stuff, but Ken Kutaragi (the guy behind the PlayStation and SCEI president) recently made some very grim statements at the South Korean PlayStation 2 launch. On the topic of the PS3, Kutaragi-san said, "Nothing has been started yet." He made some very grim statements about online gaming too, saying: "If broadband connections capable of delivering 10Mb/s are affixed to game consoles, the industry as we know it will be over. By that time, perhaps 2005 or later, games would be available for download rather than sold in stores." This news [gamefu.org] came right after many analysts came out saying how skeptical [gamefu.org] they were about Sony's online plans. It also comes right after the Nintendo-Square and Nintendo-Capcom deals, which by the way made Kutaragi mad enough to summon top Square officials [gamefu.org] to the SCE headquarters to explain the deal, as he was out of town when the Square deal was made and had no prior knowledge of it. The memory card shortage [gamefu.org] doesn't help much either.
  • I'd "like" a trillion times increase, the hell with this 1000-fold stuff. That's chump change!

    Maybe what they really meant to say was they're investigating parallel processing, not distributed computing. If they wait long enough that they can get a 10 times increase in graphics processing power and design the system such that it can run 100 of those processors in parallel, well then there's a 1000 times increase (of sorts, it's not really that easy, nor would that likely turn out to be a reasonable proposition for consoles that are meant to cost <$300!). But otherwise I think this is just marketing being out of touch with reality.
  • Crazy XBox fans already have done this. They take a PS2, put in GTA3, and smash it to bits with a hammer. Distributed all over!
  • I find this to be quite amusing. Distributed computing? Biotechnology? Developers demanding 1,000 times more computing power? All of the developers I know who have touched the PS2 have demanded fewer processors and an architecture that makes a semblance of sense. You can't distribute the computing on a game system when .05 seconds is a nauseating lag. Maybe if you were running an MMORPG, you could use each console to compute the region of space it was in. Even then, the most computationally costly part of gaming, the rendering, needs to be done locally in real time. The only way this could even make sense is if Sony were focusing on massively multiprocessor systems, an idea that seems unlikely considering A: the relative costs and B: Sony's claim of shared memory. Did the Blue Meanies spike the water supply?
  • I can most assuredly state that what developers want isn't even 10x more power. It's better libraries. The PS2 is a royal stiffy to use because the interfaces are archaic and in some cases, simply lacking. Compared to the XBox (just a windows PC in fancy wrapping) or even the GameCube (which is proprietary, but relatively easy to work with), the PS2 is somewhat underpowered and difficult to develop on. This role reversal from N64 vs. PS1 days makes it much more challenging to produce quality titles.

    The interesting thing to note is that artists have neither tools capable of managing billions of polygons per model nor time in the schedule to spend making them. Increased content means longer development time. As it is, the graphics chips on the current consoles take away so much of the real work that games have little to worry about when fitting in gameplay CPU requirements. Even memory constraints are fairly relaxed these days. I mean, I never thought I'd see the day when the STL is used for console games. :-)
  • All of the developers I know would much rather have developer libraries that don't suck.
  • by Jeppe Salvesen ( 101622 ) on Friday March 22, 2002 @01:11PM (#3207898)
    The lead developer for "Microsoft's Xbox Advanced Technology Group", Pete Isensee, said something interesting:
    "Microsoft has this stigma about not getting it right until version three. We didn't have a choice with Xbox. If we didn't get it right with version one, Sony and Nintendo would eat us alive."
    What is the implicit message? I would say: "As long as we have direct, real competition, we will produce quality products on time."
    • Isn't that what everyone wants? Microsoft came in as the underdog and the outsider in the console arena, and they wanted to be able to compete, so they set about creating a machine that was better than either of its competitors. Maybe they're setting an example for anyone wishing to compete with them: start by creating a better product, then complain about being the underdog.
  • "Moore's Law is too slow for us," Okamoto said, referring to the long-held truism that semiconductor power doubles roughly every 18 months. "We can't wait 20 years" to achieve a 1,000-fold increase in PlayStation performance, he said.

    Hey, Sony! How about getting your head out of the tech closet and thinking about making games today that don't play like ass?

    How much do you want to bet that even with a playstation three hojillion the Resident Evil series still won't have "custom features" like the friggin' ability to sidestep or save your games anywhere but the God forsaken typewriter?

    Honestly, can we get some late 80's gameplay dynamics up in this thing?

    With that kind of power, I can only imagine that it is just that much more easy to make a game series like Resident Evil or Syphon Filter look and play like total doo-doo.


  • This brings up two points.

    Point one is that this guy is wrong; the Japanese Xbox disc drive scratched up discs. While the US release went great, the same cannot be said for overseas.

    The other point is that the software itself had to be changed for different regions in unpredicted ways. Not only were languages different, but so was the Xbox start-up screen, which had to be redesigned for the Xbox's European launch because nobody realized that the German "einstellungen" wouldn't fit in the same text space as "settings."

    So with all of these differences (using the Xbox as an example), how is Sony going to make a distributed world-wide game? Everyone would have to be using basically the same software, right? Unforeseen changes needed for different regions could cause problems.

  • I like my PS2, but if they're even considering the possibility of doing a distributed network of PS3s over the Internet, they need to make sure they've designed a more solid unit.

    The PS2 is already notorious for having problems when the cooling fan gets clogged up and fails, and that's often with use by people who turn it off when they're done playing.

    Ideally, you want a low power consumption unit that doesn't really ever power off completely. It should be designed to stay on all the time, so it can share CPU resources with other gamers whenever you're not actively playing on it.

    Of course, this won't really go over so well unless/until broadband prices drop and it becomes more commonplace. Right now, I think even a lot of DSL customers would unplug a box designed this way because they only have 14K per second or so of upload bandwidth, and they might want to use it for other things besides an idle PS3.
  • Apparently the game developers are asking for a 1000 times performance increase...


    Why 1000x? Is this anything other than a number they just pulled out of their ass?


    Q

  • Dreams or Reality? (Score:2, Interesting)

    by YT ( 79213 )

    Is there any harm in aspiring for these things to come true? What if Sony pulls off distributed computing for the PS3? Will the people here still be saying "that's stupid"? What if Sony has biotech running on the PS6 or PS7? If it wasn't for people coming up with crazy ideas, would anything get invented? Innovation is an important part of pushing things forward. If nobody tried the crazy things, how would we know what is possible?

    When Kennedy said "let's go to the moon," what if everybody had listened to the people shouting "It can't be done, it's stupid, it's a dumb idea"? There are people out there working on fusion, anti-matter, FTL travel, a grand unified theory, cures for cancer, etc... Are these people stupid and dumb? Hell, all Sony wants is a 1000-fold increase over the PS2. If they want to put biotechnology in their PS6, fine; what's the problem with that?

    You want to hear about something stupid and dumb? What about a "next-day delivery service"? Or "being able to hear actors talk in movies"? Or "going to the moon"? Frick! Now those are stupid and dumb ideas.

  • All this talk of latency and bandwidth is assuming the distributed computing is across a WAN. This is not necessarily the case. Imagine if the PS3 looks like the PS2 but is "stackable"? Want to upgrade your PS3? Just buy another and stack it on top of your current ones.

    As for the real-time arguments, a lot of pre-rendering can be done before it gets to the point of being displayed. The renderer could even learn some lessons from the micro-processor world with super-scalar architectures and branch prediction.

    Finally, the old "how much power do you really need" and "what's the point if I just have a standard tv/monitor" arguments: imagine how much power rendering an interactive movie with life-like characters real-time would take. It's WAY beyond anything we can do in the home today.

    Phillip.
  • I can just imagine the back of the box... "This game requires a PS3Cluster(tm) of 16 Playstation 3's to function at playable speed."


    Although, I figure that they're just planning on making one box with 16 processors on a single die.


    Cryptnotic
