PlayStation (Games) Programming Hardware Technology

The Art of PS3 Programming 99

The Guardian Gamesblog has a longish piece talking with Volatile Games, developers of the title Possession for the PS3, about what it's like to make a game for Sony's next-gen console. From the article: "At the end of the day it's just a multi-processor architecture. If you can get something running on eight threads of a PC CPU, you can get it running on eight processors on a PS3 - it's not massively different. There is a small 'gotcha' in there though. The main processor can access all the machine's video memory, but each of the seven SPE chips has access only to its own 256k of onboard memory - so if you have, say, a big mesh to process, it'll be necessary to stream it through a small amount of memory - you'd have to DMA it up to your cell chip and then process a little chunk, then DMA the next chunk, so you won't be able to jump around the memory as easily, which I guess you will be able to do on the Xbox 360."
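To make that concrete, here is a minimal sketch of the chunked streaming pattern the developer describes, assuming the spu_mfcio.h DMA interface from IBM's Cell SDK; the chunk size and the process_chunk() helper are hypothetical stand-ins:

    /* Stream a large mesh through an SPE's 256K local store one chunk at
       a time. Real code must keep DMA addresses and sizes 16-byte aligned;
       16K is the maximum size of a single DMA transfer. */
    #include <spu_mfcio.h>

    #define CHUNK_SIZE 16384

    static char chunk[CHUNK_SIZE] __attribute__((aligned(128)));

    extern void process_chunk(char *data, unsigned size); /* hypothetical */

    void stream_mesh(unsigned long long mesh_ea, unsigned total_bytes)
    {
        unsigned offset;
        for (offset = 0; offset < total_bytes; offset += CHUNK_SIZE) {
            unsigned size = total_bytes - offset;
            if (size > CHUNK_SIZE)
                size = CHUNK_SIZE;

            /* DMA the next chunk from main memory into local store. */
            mfc_get(chunk, mesh_ea + offset, size, 0 /* tag */, 0, 0);

            /* Block until the transfer completes, then do the work. */
            mfc_write_tag_mask(1 << 0);
            mfc_read_tag_status_all();
            process_chunk(chunk, size);
        }
    }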
This discussion has been archived. No new comments can be posted.

The Art of PS3 Programming

Comments Filter:
  • It uses OpenGL (Score:5, Informative)

    by AKAImBatman ( 238306 ) <akaimbatman@gmaiBLUEl.com minus berry> on Friday January 27, 2006 @02:53PM (#14582150) Homepage Journal
    Apparently, the machine's use of OpenGL as its graphics API means that anyone who's ever written games for the PC will be intimately familiar with the set-up.

    As a programmer, I can attest to OpenGL being a God-send. Not only are programmers intimately familiar with the technology, but it was designed from the beginning with portability in mind. Direct3D, OTOH, tends to follow Microsoft's practice of hiding what's really going on behind the scenes. It's been a little while since I've bothered with Direct3D, but one of Microsoft's biggest features used to be their own scene graph, known as "Retained Mode". For some reason, Microsoft believed that everyone would want to use their scene graph alone, technological progress be damned. Most programmers who were in the know immediately bypassed this ridiculousness and went straight for the "Immediate Mode" APIs, which weren't as well documented. (Thanks, Microsoft.)

    Wikipedia has a comparison of Direct3D vs. OpenGL here: http://en.wikipedia.org/wiki/Direct3D_vs._OpenGL [wikipedia.org]

    Other than that, a computer is a computer, and game programming has always required a strong knowledge of how computers operate. So it's not too surprising that it would be "just like any other programming +/- a few gotchas".
    • Re:It uses OpenGL (Score:2, Informative)

      by HeavyMS ( 820705 )
      "It's been a little while since I've bothered with Direct3D"
      We can tell; Immediate Mode/Retained Mode is ancient history...

      • We can tell; Immediate Mode/Retained Mode is ancient history.

        Thank God. It was stupid to begin with, yet Microsoft kept pushing it version after version. I know it was still there at least as high as DirectX 5.0.

        With OpenGL having been ported to just about every platform in existence, it just doesn't make that much sense to bother with DirectX unless you really have to. (Which thanks to Microsoft's interference, does happen to professional developers. Poor bastards.)
        • Re:It uses OpenGL (Score:1, Flamebait)

          by drinkypoo ( 153816 )
          OTOH every 3D graphics card manufacturer provides an OpenGL library. There's really no reason you have to be tied to Direct3D. There are, however, good reasons to use other parts of DirectX, specifically DirectInput. I hear SDL has something like DirectInput now, but I haven't looked at it yet. SDL definitely provides a layer over audio that will pipe stuff to DirectSound, though, and you can use OpenGL for cross-platform graphics, which is pretty much the method that I think makes sense.
          • Re:It uses OpenGL (Score:2, Informative)

            by Trinn ( 523103 )
            As someone who has done a lot of programming in SDL, yes it has an input layer that is fairly good, integrated into the main SDL event loop. The major missing piece as far as myself and my fellow developers are concerned is a library for mapping SDL input events to some sort of internal game events (control customization). Eventually we plan to write one, but we are...lazy.
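            A minimal sketch of the missing mapping layer described above, using SDL 1.2's event API; the GameEvent enum and the bindings table are hypothetical:

              /* Translate raw SDL input events into abstract game events,
                 so bindings can be customized at runtime. */
              #include <SDL.h>

              typedef enum { GE_NONE, GE_JUMP, GE_FIRE, GE_QUIT } GameEvent;

              static GameEvent key_bindings[SDLK_LAST]; /* indexed by keysym */

              static void bind_defaults(void)
              {
                  key_bindings[SDLK_SPACE]  = GE_JUMP;
                  key_bindings[SDLK_LCTRL]  = GE_FIRE;
                  key_bindings[SDLK_ESCAPE] = GE_QUIT;
              }

              /* Returns GE_NONE for unbound input. */
              static GameEvent translate(const SDL_Event *ev)
              {
                  if (ev->type == SDL_KEYDOWN)
                      return key_bindings[ev->key.keysym.sym];
                  if (ev->type == SDL_QUIT)
                      return GE_QUIT;
                  return GE_NONE;
              }

            The game loop would then poll SDL_PollEvent() and dispatch on the translated GameEvent instead of on raw keysyms, so rebinding a key is just a table write.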
        • Re:It uses OpenGL (Score:5, Insightful)

          by Haeleth ( 414428 ) on Friday January 27, 2006 @04:36PM (#14583312) Journal
          [Retained Mode] was stupid to begin with, yet Microsoft kept pushing it version after version. I know it was still there at least as high as DirectX 5.0.

          Look, you're welcome to hate Microsoft if you choose, but your memory is rather inaccurate.

          Get this: Retained mode was not meant for games. Microsoft never "pushed" it for games. Immediate mode was always there for games to use. Games were always supposed to use immediate mode.

          It's been a while since I read the documentation for ancient DirectX versions, but IIRC it actually said, right there, quite explicitly, in the documentation, that retained mode was not meant for high-performance graphics and that games should use immediate mode.

          The idea of retained mode was that it provided a much simpler interface. It was intended for use by multimedia applications that did not require the power and flexibility of immediate mode, but just wanted to throw a few 3D meshes on screen and move them about a bit, without all the hassle of coding all the data structures and transformations by hand. It didn't catch on, and it eventually died, but it wasn't stupid by any means, and something very similar will be making a comeback in Windows Vista.

          At least, I say it wasn't stupid. Maybe it was stupid. I don't see how providing a simplified API for simple applications, and a complex API for complex applications, is "stupid", but then I use Microsoft software out of choice, so clearly I don't hate Microsoft badly enough yet for me to be able to judge their decisions objectively.
          • Immediate mode was not even available in DX until version 5.
            • So? 3D hardware wasn't supported AT ALL until DX5. DirectX was 2D-only until '97. Back in the day Microsoft was planning on using OpenGL for 3D games, but the standards body moved at such a glacial pace (and had such a snobbish "OpenGL is for high-end graphics only" attitude) that they said "screw it" and did their own thing.
    • Re:It uses OpenGL (Score:5, Informative)

      by amliebsch ( 724858 ) on Friday January 27, 2006 @03:11PM (#14582337) Journal
      The article you cite doesn't really support your conclusion of OpenGL being a "god-send." Instead, the article seems to conclude that at this stage, for all intents and purposes, the two APIs are functionally equivalent.
      • Re:It uses OpenGL (Score:3, Informative)

        by AKAImBatman ( 238306 )
        The article you cite doesn't really support your conclusion of OpenGL being a "god-send."

        OpenGL is a God-send for a few reasons, IMHO:

        1) The API is well known by developers, and has remained stable from version to version. This reduces the amount of R&D and training that needs to be done for a game.

        2) Use of OpenGL allows for portable code. While you can't completely get away with writing the same code between a PC version and a Console version, much of the rendering engine at least has a chance of getting reused.
        • Re:It uses OpenGL (Score:5, Insightful)

          by jinzumkei ( 802273 ) on Friday January 27, 2006 @03:46PM (#14582744)
          1) The API is well known by developers, and has remained stable from version to version. This reduces the amount of R&D and training that needs to be done for a game.
          Uh most games nowadays use D3D.

          2) Use of OpenGL allows for portable code. While you can't completely get away with writing the same code between a PC version and a Console version, much of the rendering engine at least has a chance of getting reused.
          If you write a flexible enough rendering engine this won't matter so much.

          3) Carmack says so. ;-)
          yeah, ok. good reason

          4) New features actually go through a standards process, meaning that they get more documentation than just "whatever Microsoft feels like telling you".
          Which also means it takes long YEARS for a new version to come out; how long have we been waiting on OpenGL 2.0? Some cool things have come out since, and OpenGL is always playing catch-up now.

          5) DirectX is a non-portable skill. It ties you to Windows and the X-Box(s). OpenGL "ties" you to the Gamecube, Windows, PS2, PS3, Linux, Macintosh, etc.
          Graphics programming is a portable skill; I've never met a good graphics programmer who couldn't switch between the two on the fly. Honestly, if you can only do graphics in one or the other, that's pretty worthless.

          I'm sorry, but the whole DX vs. OGL war is really old and really lame. Neither is a "god-send". They are both tools; use the one that is best for the job.
          • Uh most games nowadays use D3D.

            Most games for windows use D3D. Consoles are still a big business, and OpenGL rules the day on those. Also, major engines like Doom III and UnrealEngine 3 have Direct3D and OpenGL modes to help with portability.

            3) Carmack says so. ;-)
            yeah, ok. good reason


            That's a joke. Smile. :-)

            Which also means it takes long YEARS for a new version to come out; how long have we been waiting on OpenGL 2.0?

            I believe you mean, "how long were we waiting on OpenGL 2.0?" And you're right. Quite aw
            • Most games for windows use D3D. Consoles are still a big business

              And most independent games are for Windows.

              • Why do independent developers do this to themselves?

                Why not code in OpenGL?

                OpenGL drivers are freely available on every major platform, just as C compilers are. Why limit yourself to a single market when you can target them all?
                • For the same reason cross platform developers don't all use Java. You end up having to tweak things for every platform anyway. Just because something compiles doesn't mean it works right.
            • Correct me if I'm wrong, but I seem to recall, for example, Doom 3 having several different OpenGL 'paths': straight OpenGL, GeForce, GeForce 3+, ATI, and something-or-other else, mainly due to proprietary extensions.

          • If you write a flexible enough rendering engine this won't matter so much.

            True, but part of the reason why you use a graphics API is to spare programmers the work of rolling their own rendering engine in the first place, right? A wrapper layer shields the layers above (maybe), but somebody's still gonna have to write the layers under it. Far from trivial.
        • I agree for the most part, but just a couple points

          - There's the joke that goes "Don't buy anything from Microsoft until at least the third version." Direct3D definitely fits that pattern. The early versions of DirectX were apparently garbage; I think Direct3D v3.0 was the version that Carmack blasted when he opted to use OpenGL. I've read that nowadays he is much happier with the API, and he's even working on an Xbox 360 game - which is notable considering that the PS3 uses OpenGL.

          DirectX is a non-p
        • 3) Carmack says so. ;-)

          Read anything from Carmack in the last year or so? He's big into pimping XNA [microsoft.com], which means he's presumably gone all the way over to DirectX.
    • I can't believe this got modded to +5. Direct3D retained mode was generally frowned upon and completely ignored even in the days of DirectX 3. At least since version 6, Direct3D has been a perfectly usable API, and the TextureStageState setup introduced in DirectX 7 just beats the crap out of OpenGL's glTexEnv mess.

      Too bad it's not cross-platform.
    • No offense, but I think this is less informative than it looks.

      They didn't choose OpenGL as opposed to Direct3D. They chose it as opposed to a proprietary API. That's quite different.

      There are plenty of arguments for Direct3D being just as capable as OpenGL; a good number of people believe it's better. This, however, is largely irrelevant to Sony, who are in direct opposition to the people who make Direct3D.

      So instead of choosing OpenGL as a wise development move, it's far more likely they chose it because
  • 8 Threads? (Score:2, Informative)

    by Kent Simon ( 760127 )
    I'm still baffled as to how you can efficiently break up a game into 8 threads.

    ok, controller input on one...
    graphics on another...
    physics on a third... whoops, problem: need critical sections for this to operate with the graphics thread.
    networking on a fourth... whoops, problem: need critical sections for this to operate with the physics thread.
    sound... ok, no problems here. That's 5.


    See, even dividing it up into 5 threads causes problems; you need to make sure that you are allowed to do something on
    • Re:8 Threads? (Score:5, Informative)

      by Dr. Manhattan ( 29720 ) <sorceror171@nOsPAM.gmail.com> on Friday January 27, 2006 @03:27PM (#14582537) Homepage
      I'm still baffled as to how you can efficiently break up a game into 8 threads.

      TFA says they are contemplating a job-queue organization, with cores taking jobs as they become available. Provided the 'jobs' are small enough to fit comfortably within the overall time it takes to calculate a frame, it should work fairly well. A lot of physical-simulation problems are close to 'embarrassingly parallel' anyway.
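      A minimal sketch of that job-queue organization, using POSIX threads as a stand-in for the console's native threading; the Job type and queue layout are illustrative, not from TFA:

        /* Workers pull jobs off a shared queue as they become available;
           a condition variable wakes idle workers when work is pushed. */
        #include <pthread.h>
        #include <stddef.h>

        typedef struct Job {
            struct Job *next;
            void (*run)(void *);
            void *arg;
        } Job;

        static Job *queue_head = NULL;
        static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;

        void push_job(Job *j)
        {
            pthread_mutex_lock(&queue_lock);
            j->next = queue_head;
            queue_head = j;
            pthread_cond_signal(&queue_cond);   /* wake one idle worker */
            pthread_mutex_unlock(&queue_lock);
        }

        void *worker(void *unused)
        {
            (void)unused;
            for (;;) {
                pthread_mutex_lock(&queue_lock);
                while (queue_head == NULL)
                    pthread_cond_wait(&queue_cond, &queue_lock);
                Job *j = queue_head;
                queue_head = j->next;
                pthread_mutex_unlock(&queue_lock);
                j->run(j->arg);                 /* execute outside the lock */
            }
            return NULL;
        }

      Keeping each job small relative to the frame budget is what keeps all the cores busy.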

    • by hobbit ( 5915 )

      6) Monsters
      7) Aliens
      8) Baddies

    • Re:8 Threads? (Score:5, Informative)

      by karnal ( 22275 ) on Friday January 27, 2006 @03:31PM (#14582591)
      One thing to think about though, regarding threading.

      Just because one thread has critical sections where it may have to wait on another thread doesn't mean the two threads can't execute simultaneously at times when neither needs data from the other. At times like that, you get speedup (especially since you have separate cores/processing units/whatever).
    • Re:8 Threads? (Score:3, Informative)

      by AKAImBatman ( 238306 )
      I can think of a few ways off the top of my head, but none I'd actually like to try coding. For example, you can divvy up the collision detection process across different threads to have each processor test a given percentage of objects. Similarly, you can assign the physics handling for different objects across different processors.

      The article suggests that this be done by having a single "controller" processor rapid fire the tasks to the other processors. While this would work, it's also less efficient tha
    • Re:8 Threads? (Score:3, Interesting)

      by AuMatar ( 183847 )
      It's worse than that.

      Controllers - no reason to have a thread; you use interrupts and wake up when an input changes. No thread needed.

      Sound - unless you're doing heavy-duty software mixing (not hardware mixing or channels), you don't need a thread; it's all interrupt driven.

      Network IO - a thread probably isn't the best way to do it for the client. Just poll for IO once a frame, or use select to sleep if you have no other processing to do.

      AI - this one makes more sense; a smart AI can try and pr
      • Most games are not still single threaded. If you use DirectPlay you automatically have multiple threads. PC games use threads all the time. There are a lot more uses on the PC than on a console, e.g. mouse, AI, streaming. The developers will figure out a way to use them efficiently. In fact, I don't think I've worked on one single-threaded game.
        • Mouse? That alone proves you don't know what the hell you're talking about - why would the mouse need a thread? The mouse is interrupt driven. So is the keyboard. There's zero point in using a thread to control an interrupt-driven IO system - immediate processing happens on the interrupt, and delayed processing is handled by the main thread when it reaches the check-for-input phase of its work.

          Streaming? That's the very definition of what DMA is meant for. No need for a thread to stream content - you decode it to me
          • Actually the graphics on the mouse, the pretty little cursor you see in PC games, usually has its own thread. This is so that when the game slows down the user isn't frustrated with the mouse being really jerky. Windows is message based, so if you're not processing the messages nothing happens. Carmack is pretty keen on keeping his engines single threaded because he doesn't believe the overhead is worth it. So I'm not sure what games you're talking about unless they are old DOS games or something based on i
          • No, games usually use a second thread for sound. There's not much point in threading AI.
          • What games? I only have a few, Doom 3, Half Life 2, UT2K4, they're all multithreaded.

            But, I was under the impression, and I'm not some expert, that the benchmarks showed games like these running better on processors that don't necessarily have better threading capabilities, which leaves me with the impression that while these games are multithreaded the bulk of the work still relies on only a handful or even one thread. Again, no expert so I could be totally wrong.

            I know personally I love threading
          • You do know the difference between a thread and a process, right?
      • "I really don't see most games making use of 8 threads. Most games now are still single threaded"

        That's exactly the problem the industry is trying to fix: the move to more than one thread to make things efficient.

        Take a look at it this way. Processors are made on 65nm processes, which will go down to 45nm processes. We're reaching the limits of Moore's law. Their overclockability will increase as dies are stacked like PCBs, but that will reach an end too. The next step really is a break from the traditional sin
      • Most games now are still single threaded.

        At last year's Game Developers Conference, both Intel and AMD were saying that games should go multithreaded - that future CPU performance improvements were largely going to come from multiple cores, not clock rate. Intel and AMD were both demonstrating current games taking advantage of threading. I forget what the game was, but one racing game uses a second thread for optional effects. When running a single thread you get a small amount of dust, smoke, flames, etc. Ho
      • I assumed that you'd parallelize smaller jobs. For instance, suppose you're decoding MPEG. MPEG video is broken up into macroblocks, and you could get a theoretical 8x performance boost by dividing up that work across all the processors. Don't think in terms of big threads; think about little, task-specific threads, as in the sketch below.
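        A minimal sketch of that kind of split, with each thread decoding a contiguous slice of a frame's macroblocks; decode_macroblock() and the counts here are hypothetical:

          /* Fork/join: N threads each decode a contiguous slice of the
             frame's macroblocks, then the main thread joins them all. */
          #include <pthread.h>

          #define NUM_THREADS 8
          #define NUM_BLOCKS  1200

          extern void decode_macroblock(int index);  /* hypothetical */

          typedef struct { int first, last; } Slice;

          static void *decode_slice(void *arg)
          {
              Slice *s = (Slice *)arg;
              for (int i = s->first; i < s->last; i++)
                  decode_macroblock(i);
              return NULL;
          }

          void decode_frame(void)
          {
              pthread_t tid[NUM_THREADS];
              Slice slice[NUM_THREADS];
              int per = NUM_BLOCKS / NUM_THREADS;

              for (int t = 0; t < NUM_THREADS; t++) {
                  slice[t].first = t * per;
                  slice[t].last  = (t == NUM_THREADS - 1)
                                       ? NUM_BLOCKS : (t + 1) * per;
                  pthread_create(&tid[t], NULL, decode_slice, &slice[t]);
              }
              for (int t = 0; t < NUM_THREADS; t++)
                  pthread_join(tid[t], NULL);        /* barrier: frame done */
          }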
    • I'm still baffled as to how you can efficiently break up a game into 8 threads.

      You don't simply break up a game into 8 threads. Remember, 7 of those 8 processors are not really general-purpose CPUs but SPEs, more like vector processors. Moreover, the SPEs don't have direct access to main memory; they have to be spoon-fed from the main processor or some sort of DMA controller.

      So the main CPU would have to set up the SPEs and then handle all of the networking and I/O and stuff, while the SPEs would handle the
    • There are other ways to divy up work.

      If your intention is to put independent tasks out to different processors, you will run into huge issues like the ones you describe.

      Instead, consider the beginning of each logical step in the game loop as a "constriction/delegation" point: You constrict, meaning that only one thread is running right now. Then, say, it's time for particles. You now wake up your eight particle worker threads and divvy up the gargantuan 2000-emitter particle loop into 250 emitters each. You the
    • Re:8 Threads? (Score:5, Insightful)

      by astromog ( 866411 ) on Friday January 27, 2006 @03:55PM (#14582836)
      What I find interesting about the question of "What can I do with 8 threads?" is that most people seem to assume that you can only have one graphics thread. Why not have 2? Or 3? Or 6? The Emotion Engine's core design is based around having two parallel programmable units handling graphics at the same time, for example one animates the surface of a lake while the other makes the pretty refracted light patterns on the bottom. Yes, it's nastier to program than standard single-thread-for-each-task programming, but it makes for a very powerful architecture when used properly. Similar things can be done with other parts of a game, and if you design your data layout and flow correctly you minimise the need for synchronisation. You could draw your frame with 7 parallel threads, then flip all the SPEs over to handle the physics, input, etc update for the next frame. It's all just a matter of thinking about how you design your game.
      • That's great and all, except the Cell chip isn't responsible for the graphics; that would be why they stuck an Nvidia chip in there. (The original plan was to use the Cell to render the graphics as well; however, they found that it was not quite up to the task.)
        • The Nvidia chip is responsible for drawing the graphics, not for controlling and generating the geometry. The Emotion Engine used the same system, and it works very well.
    • Really not too hard. A lot of the time you can break the logic of game objects down to run in their own threads, so it shouldn't be too hard to schedule the engine to run as many game objects as you can while still processing physics and graphics and such. Not as easy as a linear program, but not so hard either, and it allows you to get more done in a given slice of time.
    • 1) cut the screen down the middle
      2) cut the screen horizontally 3 times
      3) ???
      4) profit from 8 cores
    • The thing I'm hoping those 8 cores will be used for is to fix those things I hate most. If you have that much extra power, spend some of it on the little things.

      You can make good ripples on water, and do other geometry things. Make the trees have REAL leaves (like that great Cell demo). Make more individual blades of grass and such. Just little things that behave correctly, so the world looks more "real" and less "here is a random bush so you don't notice there are no bushes".

      Hair, clothes, weapons. Make them a

    • I'd bet that controller input won't use much more than 10% of a CPU, so you still have to find other things to do for this CPU.
    • In theory it is easier than it sounds, just like the article says.

      For instance, take a simple pool simulator with 4 or 5 balls. You hit the cue ball into the triangle of pool balls. Collision detection isn't needed until after the balls have been moved, so you simply give each ball an SPE to work out where it has moved to, and you are processing them all at once. Not too hard.
      The same process can be used for testing actual collisions, so that can be done for each ball on its own SPE as well.

      Thing is there are som
    • I am NOT a programmer at all, but conceptually, why would you not have everything running round-robin style on each processor, or allocate whatever was needed processor-wise with some acting as a traffic router - say, 6 for all functions, 2 for traffic? Please don't flame me. Just link me out to a FAQ that explains it better, but in my mind that would make max use of the style of engine it had.
    • I'm a little late here, but TFA contains a very good suggestion that I've used: job queues. That guarantees that you don't have physics resources idling while graphics jobs are waiting to be done, and vice-versa. A perk of the jobs and job queues model is that it's simple to understand (as multithreaded models go), and there tends to be a much simpler relationship between the software design and the dependency/deadlock analysis than with other models. In general, job queues give you more rigor with less
  • SPE overhead (Score:4, Insightful)

    by ClamIAm ( 926466 ) on Friday January 27, 2006 @04:20PM (#14583122)
    As game developers use the 7 SPE chips more, I wonder how much of the main CPU's time will be taken up by things like managing threads and packing up work for the SPEs. It's similar to an operating system, where the main CPU would almost be like a kernel, managing memory and allowing different threads to talk to each other.

    (If the OS analogy is flawed, sorry).

    • No, your analogy is pretty much on target. Video games are a form of Real-Time system. Real Time Operating Systems [wikipedia.org] are the origin of our modern, pre-emptable, multi-tasking environments. The only difference between game programming and most RTOSes is that game programmers tend to prefer to manage the task splits in a manual, calculated-time-to-execute fashion rather than a pre-emptable fashion as games don't lend themselves well to being pre-empted.
    • The real issue here is that it seems the Cell is not an SMP http://en.wikipedia.org/wiki/Symmetric_multiprocessing [wikipedia.org] processor, but an asymmetric or at least NUMA one.

      So, you don't just get to create 8 threads all of which can do arbitrary jobs, but you must explicitly split and divide the jobs and assign them to specific processors (and do this from the thread that runs on the master processor). In an SMP system, the OS can move threads across processors freely, but not so here, and because of that pro

      • So, you don't just get to create 8 threads all of which can do arbitrary jobs, but you must explicitly split and divide the jobs and assign them to specific processors (and do this from the thread that runs on the master processor).

        Er, I wasn't trying to make it sound different than this. Sorry if I did.

        The need to constantly copy chunks of memory between individual processors would require either a very clever OS, or much hard work for the programmer.

        Yeah, it'll be interesting to see what tricks pe

        • Er, I wasn't trying to make it sound different than this. Sorry if I did.
          I'm tired - I probably need to apologize because I didn't pay attention :)
  • Missing the point. (Score:5, Informative)

    by Anonymous Coward on Friday January 27, 2006 @05:38PM (#14583951)
    A lot of people seem to be approaching the concept of the Cell processor improperly. The chip itself is not designed for the "design a game in 8 threads" approach people seem to be thinking of. It's designed around a foreman/worker metaphor: the main chip handles the work of figuring out what comes next, and the SPEs do the heavy lifting.

    Don't think
    Processor 1 = AI
    Processor 2 = Physics
    Processor 3 = ...
    etc.

    Instead picture the main CPU going through a normal game loop (simplified here)
    Step 1: Update positions
    Step 2: Check for collisions
    Step 3: Perform motion calculations
    Step 4: AI

    At the beginning of each step the main CPU farms out the work to the SPEs. So you have a burst of activity in the SPEs for each step, then a lull as the main core figures out what to do next.
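    A minimal sketch of that burst-and-lull pattern, assuming a push_job() helper in the spirit of the job queue sketched earlier in the thread; all of the names here are hypothetical:

      /* The "foreman" farms one step out as a batch of jobs, then blocks
         until the batch drains before deciding what the next step is. */
      #include <pthread.h>

      extern void push_job(void (*fn)(void *), void *arg); /* hypothetical */

      static int pending = 0;
      static pthread_mutex_t sync_lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  all_done  = PTHREAD_COND_INITIALIZER;

      /* Each worker must call this when it finishes a job. */
      void job_finished(void)
      {
          pthread_mutex_lock(&sync_lock);
          if (--pending == 0)
              pthread_cond_signal(&all_done);
          pthread_mutex_unlock(&sync_lock);
      }

      /* Burst: queue n jobs for this step. Lull: wait for all of them. */
      static void run_step(void (*fn)(void *), void *chunks[], int n)
      {
          pthread_mutex_lock(&sync_lock);
          pending = n;
          pthread_mutex_unlock(&sync_lock);

          for (int i = 0; i < n; i++)
              push_job(fn, chunks[i]);

          pthread_mutex_lock(&sync_lock);
          while (pending > 0)
              pthread_cond_wait(&all_done, &sync_lock);
          pthread_mutex_unlock(&sync_lock);
      }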
    • But is that a behaviour the programmer would have to explicitly define? Shouldn't something like the foreman/worker behaviour be built into the main chip already?
    • I hadn't really thought of designing in terms of foreman/worker for the PS3, and I rather like it, but then you get to this statement:

      At the beginning of each step the main CPU farms out the work to the SPEs. So you have a burst of activity in the SPEs for each step, then a lull as the main core figures out what to do next.

      Unless I completely misunderstood all of my processor architecture / optimizing compiler classes, to get the best usage of a processor's resources you want all of those resources to be

  • by Anonymous Coward on Friday January 27, 2006 @07:43PM (#14585010)
    At the end of the day, people who say "at the end of the day" just REALLY need to stop saying "at the end of the day".
  • The PS3 will surely have way more bus contention issues than a PC. While the high-level issues of concurrent programming will be comparable to any multi-processor architecture, once you get into the low-level details, the similarities will end.
    • by Anonymous Coward
      The PS3 will surely have way more bus contention issues than a PC

      You do realize that the memory in each SPE is *local* memory, right? The 7 SPEs can all run flat out without creating any bus contention whatsoever on memory access.

      And as for the DMA hardware and interconnect buses, those have immense bandwidth. I really wouldn't be concerned about contention on DMA to main memory.

      The more likely problem is DMA latency, since small DMA requests may get delayed by longer ones. However, even that is unlikel
  • by ulatekh ( 775985 ) on Saturday January 28, 2006 @04:40PM (#14589732) Homepage Journal

    The limiting factor on computing speed in the last several years has not been processor design or clock speed, but memory speed. Normal architectures feature two levels of fast SRAM to insulate the processor from the latencies inherent in accessing DRAM over a shared bus. That doesn't get rid of multi-cycle delays; it just tries to reduce their likelihood. Data cache misses are expensive, but instruction cache misses are even more expensive -- all the pipelining that modern processors use to handle large workloads efficiently breaks down every time the processor stalls loading instructions from main memory.

    The PS3's Cell processor offers a different solution to the problem -- sub-processors with fast local memory, and an explicitly programmed way to copy memory areas between processors (the "DMA" that the article mentions). The SPEs allow significant chunks of the batch-processing-style parts of a game to run on a processor that has no memory latencies, for data or instructions. Since memory-stall delays can run into the double digits (in cycles), you can expect the performance increase from fast memory to be in the double-digit range too. I've seen a public demo of some medical-imaging software that ran ~50x faster when rewritten for Cell. (The private demos I've seen are similarly impressive, but I can't describe those in detail. :-)

    A traditional multi-processing architecture, like the 3 dual-core chips in the X360, has no such escape from the memory latencies. All coordination of memory state between processors (i.e. through the level-2 cache) is done on demand, when a processor suddenly finds it has a need for it. Prefetching is of course possible, but the minor efficiency gains to be made from prefetching (when they can be found at all) are vastly outweighed by the inherent efficiency of explicitly programmed DMA transfers. Multi-buffering the DMA transfers allows the SPEs to run uninterrupted, without having to wait for the next batch of data to arrive -- something that isn't really possible with a traditional level-2 cache in a traditional multiprocessing system.
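    A minimal sketch of that multi-buffering, extending the chunked-DMA sketch earlier in the thread: while the SPE processes one buffer, the DMA engine is already filling the other. It assumes spu_mfcio.h from the Cell SDK; process() is hypothetical, and the total size is assumed to be a multiple of the chunk size for brevity:

      /* Double-buffered DMA: prefetch the next chunk into one buffer
         while computing on the other, ping-ponging between DMA tags. */
      #include <spu_mfcio.h>

      #define CHUNK 16384

      static char buf[2][CHUNK] __attribute__((aligned(128)));

      extern void process(char *data, unsigned size);    /* hypothetical */

      void stream(unsigned long long ea, unsigned total)
      {
          unsigned cur;
          int b = 0;

          mfc_get(buf[0], ea, CHUNK, 0, 0, 0);           /* prime buffer 0 */
          for (cur = 0; cur < total; cur += CHUNK, b ^= 1) {
              unsigned next = cur + CHUNK;
              if (next < total)                          /* prefetch ahead */
                  mfc_get(buf[b ^ 1], ea + next, CHUNK, b ^ 1, 0, 0);

              mfc_write_tag_mask(1 << b);                /* wait for current */
              mfc_read_tag_status_all();
              process(buf[b], CHUNK);   /* overlaps with the prefetch DMA */
          }
      }

    Done this way, the SPE never stalls waiting for memory, which is exactly the advantage over a demand-filled L2 cache described above.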

    In short, the very nontraditional setup of the PS3's Cell chip is capable of vastly outpowering the traditional multiprocessor setup of the X360, mostly due to successfully eliminating memory latency.

    Yes, writing code that can run like this is a major freaking pain in the ass. But so what? The biggest reason most code is hard to run on such an architecture is that the code was poorly thought out, poorly designed, and not documented. Any decently-written application can be re-factored to run like this. Besides, this is the future: Cell really does seem to solve the memory latency problem that's crippling traditional computing architectures, and the performance difference is astounding. If you can't rise to the level of code written for such a complex architecture, then your job is in danger of getting outsourced to Third World nations for $5 an hour...as it should be. So quit your whining.

    (First post in ten months. Feels good!)

    • The funny thing is, last I checked, IBM is also making the 360 and Revolution procs, and from the way Cell seems to be coming along, IBM seems to care about it a hell of a lot more than the others - mostly because Cell will be usable for more than just gaming (contrary to most people's opinion that "it's just another gaming proc" - yeah, a gaming proc that'll be used (in clusters somewhere) for rendering MRIs, simulating nuclear explosions, realtime weather and geological simulations - damn near everything in
    • Memory latency is a huge issue in desktop gaming, apparently. I spoke with someone from Blizzard about it, who said that it's one of the main factors in why World of Warcraft runs significantly faster on some PCs.

Technology is dominated by those who manage what they do not understand.
