
Carmack On 'Infinite Detail,' Integrated GPUs, and Future Gaming Tech

Posted by Soulskill
from the building-a-better-virtual-rocket-launcher dept.
Vigile writes "John Carmack sat down for an interview during Quakecon 2011 to talk about the future of technology for gaming. He shared his thoughts on the GPU hardware race (hardware doesn't matter but drivers are really important), integrated graphics solutions on Sandy Bridge and Llano (with a future of shared address spaces they may outperform discrete GPUs) and of course some thoughts on 'infinite detail' engines (uninspired content viewed at the molecular level is still uninspired content). Carmack does mention a new-found interest in ray tracing, and how it will 'eventually win' the battle for rendering in the long run."
This discussion has been archived. No new comments can be posted.


  • by walshy007 (906710) on Friday August 12, 2011 @02:24PM (#37072524)

    Years ago he posted here on occasion, and I even remember seeing small active discussions on the technical points of rendering techniques, etc.

    It has been ages since I last saw this, which makes me wonder whether he even reads Slashdot these days when topics related to him are posted.

  • by Suiggy (1544213) on Friday August 12, 2011 @02:29PM (#37072582)

    It should be noted that John Carmack believes that Ray Casting, not Ray Tracing, will win out in the long term.

    Unfortunately, many people outside the graphics field confuse the two. Ray Casting is a subset of Ray Tracing in which only a single sample is taken per pixel, or in other words, in which a single ray is cast per pixel into the scene and a single intersection is taken with the geometry data set (in the case of a translucent surface, the ray may be propagated in the same direction a finite number of times until an opaque surface is found). No recursive bouncing of rays is done. Lighting is handled through other means, such as traditional forward shading against a dataset of light sources, or a separate deferred shading pass that decouples the combinatorial explosion of overhead caused by scaling up the number of lights in a scene.
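
    In code, the distinction is just "one ray per pixel, first hit wins." A toy sketch (a hypothetical sphere scene, nobody's actual engine), with lighting deliberately left to a later pass:

```python
import math

def cast_ray(origin, direction, spheres):
    """Ray casting: a single ray, a single nearest intersection, no recursion.

    Lighting is intentionally absent here; it would be applied in a
    separate pass (e.g. deferred shading). Scene data is hypothetical."""
    nearest_t, nearest_color = float("inf"), None
    for center, radius, color in spheres:
        # Ray-sphere test: solve |o + t*d - c|^2 = r^2 (direction normalized).
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2.0 * sum(d * k for d, k in zip(direction, oc))
        c_term = sum(k * k for k in oc) - radius * radius
        disc = b * b - 4.0 * c_term
        if disc < 0:
            continue  # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2.0
        if 0 < t < nearest_t:
            nearest_t, nearest_color = t, color
    return nearest_color

# One primary ray per "pixel", looking down -z at a single red sphere.
scene = [((0.0, 0.0, -5.0), 1.0, "red")]
for x in (-0.2, 0.2):
    print(cast_ray((x, 0.0, 0.0), (0.0, 0.0, -1.0), scene))
```

    A full ray tracer would, at the hit point, recursively spawn reflection, refraction, and shadow rays; ray casting stops here.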

    John Carmack has been quoted as saying that full-blown ray-tracing just isn't feasible for real-time graphics due to the poor memory access patterns involved: casting multiple rays per pixel with multiple recursive steps ends up touching a lot of memory in your geometry data set, which thrashes the cache on CPU and modern/future GPU hardware alike.

    When people talk about real-time ray-tracing, they almost invariably are referring to real-time ray-casting.

    • by Vigile (99919) *

      I think he has come full circle on that though and thinks ray TRACING will win. Did you listen to the full interview?

      • by Suiggy (1544213)

        I admit, I made my post before watching the video. Now that I've watched it, I think my position still holds. He talked a lot about using ray-tracing in content preproduction and preprocessing tools during development, and he talked about building ray-tracing engines in the past to experiment with the performance characteristics. This is the same research he did where he came to the conclusion that ray-casting will be better than ray-tracing. Where his position has changed is that he used to think it wou

        • by Creepy (93888)

          I'd hope he is talking about ray tracing - I'm basically doing ray tracing inside of polygons in shaders today, and I'm sure he has, too. I really don't think ray casting has enough advantages over polygons to make it worth it, and it has significant disadvantages that would need to be worked around (no reflections, shadows, etc). Back in the 1980s/1990s, programmers used the painter's algorithm instead of the z-buffer because the z-buffer was significantly slower up to a certain number of polygons, and
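
          The painter's-algorithm trade-off mentioned above is easy to show in miniature (a toy 1-D framebuffer, purely illustrative):

```python
def painters(polys, width):
    """Painter's algorithm: sort back-to-front, let nearer polygons overdraw."""
    fb = [None] * width
    for depth, x0, x1, color in sorted(polys, reverse=True):  # farthest first
        for x in range(x0, x1):
            fb[x] = color  # no per-pixel depth test, just overwrite
    return fb

def zbuffer(polys, width):
    """Z-buffer: draw in any order, keep the nearest depth per pixel."""
    fb, zb = [None] * width, [float("inf")] * width
    for depth, x0, x1, color in polys:
        for x in range(x0, x1):
            if depth < zb[x]:  # per-pixel compare costs memory traffic...
                zb[x], fb[x] = depth, color  # ...but needs no global sort
    return fb

# Two overlapping "polygons" (depth, start, end, color) on an 8-pixel row.
polys = [(5.0, 0, 4, "far"), (1.0, 2, 6, "near")]
print(painters(polys, 8) == zbuffer(polys, 8))  # same visible result
```

          The z-buffer pays a depth compare and extra memory traffic per pixel; the painter's algorithm pays a global sort plus overdraw, which is why it won at small polygon counts.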

          • At 13:30 he seems to be talking about path-tracing. The accumulation step he describes, with an arbitrary cut-off to meet the frame deadline, plus the random jitter, makes it fairly certain. The random jitter in time, reusing previous pixel results, sounds like a very bizarre and trippy form of "motion" blur: it would look similar, but blurring would be proportional to the path length cast from that pixel. I might have to hack that up and try it...
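
            Per pixel, that accumulation might reduce to a running average that a camera cut (or the frame deadline) resets; this is a guess at the scheme, not Carmack's actual code:

```python
import random

def accumulate(prev_avg, prev_count, new_sample):
    """Fold one new jittered sample into a per-pixel running average."""
    count = prev_count + 1
    return prev_avg + (new_sample - prev_avg) / count, count

# A noisy estimator stands in for one jittered path per frame; the
# hypothetical pixel's true value is 0.5.
random.seed(0)
avg, n = 0.0, 0
for frame in range(1000):
    sample = random.uniform(0.0, 1.0)  # jittered sample, expected value 0.5
    avg, n = accumulate(avg, n, sample)
print(n, round(avg, 2))  # noise shrinks as samples accumulate
```

            Reusing the previous frame's average for moving pixels is what would smear results along the path, i.e. the trippy "motion" blur described above.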

          • by kvezach (1199717)
            It's possible to make a low-res realtime version of photon mapping. See https://www.youtube.com/watch?v=GckOkpeJ3BY [youtube.com] for an example. It isn't as good as proper photon mapping, but it *does* give indirect lighting.
    • When people talk about real-time ray-tracing, they almost invariably are referring to real-time ray-casting.

      Developers of dozens of realtime raytracers are lining up to disagree with you.
      e.g. VRay RT, Octane, etc.
      http://www.youtube.com/watch?v=6QNyI_ZMZjI [youtube.com]

      Raytracing is extremely straightforward and parallel. The only thing you have to do to make it feasible for use in games is either A. throw more power at it, B. cheat where you can, or C. a combination of A and B.

      Not to mention companies that make dedic

      • by Suiggy (1544213) on Friday August 12, 2011 @02:51PM (#37072898)

        The video you posted is not real-time frame rates, it's interactive frame rates. It takes a few seconds to fully recompute the scene once you stop moving the camera. And note how there's only a single car model. Imagine scaling up the amount of geometry to a full world. With ray-tracing, as you scale up the complexity of the geometry, you end up scaling up the required computational complexity as well due to radiosity computations. Full ray-tracing on huge worlds in real-time is a pipe-dream. What you will be able to do with ray-casting or rasterization with deferred shading composition to simulate things like reflections or radiosity will always be more than what you can do with ray-tracing, and so game developers will always choose the former.

        • by Sycraft-fu (314770) on Friday August 12, 2011 @03:18PM (#37073260)

          I think part of the problem is that you get a bunch of CS types who learned how to make a ray tracer in a class (because it is pretty easy and pretty cool) and also learn it is "O(log n)" but don't really understand what that means or what it applies to.

          Yes, in theory a ray tracing engine scales logarithmically with the number of polygons. That means that past a point you can do more and more complex geometry without a lot of additional cost. However, that forgets a big problem in the real world: memory access. Memory access isn't free, and it turns out to be a big issue for complex scenes in a ray tracer. You have to understand that algorithm speeds can't be taken out of the context of a real system. You have to account for system overhead. Sometimes it can mean a theoretically less optimal algorithm is better.

          Then there's the problem of shadowing/shading you pointed out. In a pure ray tracer, everything has that unnatural shiny/bright look. This is because you trace rays from the screen back to the light source. Works fine for direct illumination but the real world has lots of indirect illumination that gives the richness of shadows we see. For that you need something else like radiosity or photon mapping, and that has different costs.

          Finally there's the big issue of resolution that ray tracing types like to ignore. Ray tracing doesn't scale well with resolution. It scales O(n) in terms of pixels, and pixel count grows quadratically, since you increase both horizontal and vertical resolution as you go to higher PPI. Then if you want anti-aliasing, you have to do multiple rays per pixel. This is why when you see ray tracing demos they love to have all kinds of smooth spheres, yet run at a low resolution. They can handle the polygons, but ask them to do 1920x1080 with 4xAA and they are fucked.
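
          The pixel arithmetic is straightforward, counting only primary rays (bounces and shadow rays multiply it further):

```python
def primary_rays(width, height, samples_per_pixel):
    """Primary rays per frame: one (or more, with AA) per pixel."""
    return width * height * samples_per_pixel

demo = primary_rays(640, 480, 1)    # a typical low-res demo, no AA
hd = primary_rays(1920, 1080, 4)    # 1080p with 4x anti-aliasing
print(demo, hd, hd // demo)  # → 307200 8294400 27
```

          So the jump described above is a 27x increase in ray count before a single bounce is traced.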

          Now none of this is to say that ray tracing will be something we never want to use. But it has some real issues that people seem to like to gloss over, issues that are the reason it isn't being used for realtime engines.

          • by Suiggy (1544213)

            Yeah, not to mention that with full ray-tracing, as you add more lights to a scene, it increases the overall complexity per pixel per ray bounce linearly as well. That's why deferred shading is so nice: it decouples the lighting from the rasterization or ray-casting/tracing step and lets you scale up the number of lights independently, with a fixed amount of overhead and linear cost per light for the entire scene, instead of per pixel or per ray or per polygon.

            Going with pure ray-tracing does

            • by Rockoon (1252108)

              Yeah, not to mention that with full ray-tracing, as you add more lights to a scene, it increases the overall complexity per pixel per ray bounce linearly as well. That's why deferred shading is so nice: it decouples the lighting from the rasterization or ray-casting/tracing step and lets you scale up the number of lights independently, with a fixed amount of overhead and linear cost per light for the entire scene, instead of per pixel or per ray or per polygon.

              I highlighted the key thing that you overlooked. They are both linear to the number of lights.

              • by Suiggy (1544213)

                Again, your ability to comprehend fails you. Scaling linearly per ray cast (and you're doing multiple ray casts per pixel with ray-tracing) and scaling linearly with respect to the entire frame buffer are entirely different.

                Ray-tracing: O(2^b * L * log N)
                Ray-casting + deferred shading: O(log N) + O(L)
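
                Plugging illustrative numbers into those two cost models (b bounces, L lights, N primitives; the constants are made up) shows the gap widening with light count:

```python
import math

def ray_tracing_cost(b, L, N):
    """O(2^b * L * log N): rays can branch each bounce, each hit shaded per light."""
    return (2 ** b) * L * math.log2(N)

def raycast_deferred_cost(L, N):
    """O(log N) + O(L): one cast per pixel, then one deferred pass per light."""
    return math.log2(N) + L

# Per-pixel relative cost at b=3 bounces, N=1e6 primitives, growing L.
for L in (8, 64, 512):
    print(L, round(ray_tracing_cost(3, L, 10**6)), round(raycast_deferred_cost(L, 10**6)))
```

                The first model multiplies by L (per ray, per bounce); the second only adds L once per pixel, which is the distinction being argued here.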

                • by Rockoon (1252108)
                  Amazingly, you think that lighting can be handled without respect to scene geometry.

                  You do realize that the deferred lighting passes must deal with the actual scene geometry, and not just the frame buffer, right?
                  • by Suiggy (1544213)

                    Wrong again.

                    With deferred shading, lighting is done in a second pass (or multiple passes for different types of lights) directly against the g-buffer. The g-buffer contains a surface normal, depth value, and lighting & material coefficients for each pixel, and is populated by rasterizing or ray-casting/tracing your scene geometry into it. With rasterization, each of your polygonal meshes is rendered into the g-buffer once without regard for lighting; during the lighting stage, each light is applied to t
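
                    A minimal sketch of that two-pass structure (toy 1-D g-buffer, Lambert-only, hypothetical values):

```python
# Geometry pass: rasterize/cast once, storing per-pixel surface attributes.
# (Toy 1-D g-buffer; a real one also holds material coefficients.)
gbuffer = [
    {"normal": (0.0, 1.0, 0.0), "albedo": 0.8}
    for x in range(4)
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Lighting pass: each light touches every pixel once -- cost is
# pixels * lights, independent of how many polygons filled the g-buffer.
lights = [{"dir": (0.0, 1.0, 0.0), "intensity": 1.0},
          {"dir": (0.0, 1.0, 0.0), "intensity": 0.5}]

frame = [0.0] * len(gbuffer)
for light in lights:
    for i, px in enumerate(gbuffer):
        lambert = max(0.0, dot(px["normal"], light["dir"]))
        frame[i] += px["albedo"] * light["intensity"] * lambert

print(frame)  # each pixel lit by both lights: 0.8*1.0 + 0.8*0.5
```

                    The point of the pattern: polygon count affects only the geometry pass, and light count affects only the lighting pass.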

          • by ShakaUVM (157947)

            It gets even worse when you realize you have to do subsurface scattering to get realistic looks for a lot of surfaces (like, oh, skin). Then you can no longer terminate a photon when you reach most surfaces, but have to further reflect and refract photons from that point.

            It does make for nice looking materials, though... without it, you get those iconic hard and shiny surfaces in ray traced images, like the famous metal balls.

            (http://en.wikipedia.org/wiki/Subsurface_scattering)

          • by DrKnark (1536431)

            Then there's the problem of shadowing/shading you pointed out. In a pure ray tracer, everything has that unnatural shiny/bright look. This is because you trace rays from the screen back to the light source. Works fine for direct illumination but the real world has lots of indirect illumination that gives the richness of shadows we see. For that you need something else like radiosity or photon mapping, and that has different costs.

            Forgive my ignorance, but I don't quite understand why ray tracing can't hand

            • You basically have it right. You trace a ray for each pixel, starting at the monitor, and it bounces back to the light source. To deal properly with indirect illumination, and thus get good shadows and so on, you'd have to do it the other way: you'd have to trace rays from the light source, bounce them off materials, and see where they end up. That, of course, gets really CPU-intensive, since you can be tracing rays that never intersect with the display.

              That strategy is more or less what radiosity is. It simula

        • I won't go into what constitutes realtime vs interactive, as I can make even the fastest game engine out there 'interactive' as long as I run it on low enough hardware with all the features enabled and running it across multiple HD screens.

          But the converse is also exactly my point - raytracing is something that scales very well, you just throw more computational power at it.

          I did also mention 'cheats'; in that a game engine doesn't necessarily rely on any one single technology to begin with. We've now got

      • by Suiggy (1544213)

        I also forgot to mention that conventional rasterization and ray-casting are also just as parallel in nature as ray-tracing. In fact, more so, because they have much better memory access patterns, as I mentioned in a previous post. Memory access is the biggest limiting factor in building scalable, parallel systems. If you don't have good memory-access patterns, you might as well be doing sequential work, because it's getting serialized by the hardware memory controller anyway.

        And this situation isn't improving

      • The data access problem of backward rendering is unsolvable ... it will always access data without regard to object coherency. For primary and shadow rays forward renderers will always be able to be more efficient when efficient occlusion culling is possible and subsampling isn't needed.

        The video you linked has a realtime preview ... in 1/60th second it probably doesn't get that much further than the raycasting solution (primary rays).

      • by billcopc (196330)

        The only thing that makes it "realtime" is that it has a relatively high redraw rate (for a raytracer). That's why there is a lot of fuzz when the camera pans around. It might render only a few thousand rays between redraws, which does give fast feedback but also slows down the rendering process overall. Most raytracers will churn 100k rays before updating the preview.

        This is analogous to progressive jpeg decoding, where you start with a very chunky low-res preview and gradually work your way up to the f

      • Raytracing is extremely straightforward and parallel.

        If only.... In the real world that would be an exclusive or, so pick one of the two. On small uncomplicated scenes it is straight-forward to make the tracing of each ray happen in parallel. As you add more geometry though it begins to behave differently. You will need a giant cache of rays to handle all of the bounces otherwise all of the recomputation will kill your performance. This was well known in the graphics community prior to GPUs and the same sc

    • by Rockoon (1252108)
      Unfortunately, many non-programmers such as yourself don't understand algorithmic complexity, and as such fail to realize that O(P log N) will eventually beat O(PN) once N is large enough, even though the constants in the first are much larger than the constants in the second.

      Carmack knows that raytracing will eventually be superior in performance to rasterization; the asymptotics make it inevitable.

      The thing is that when the critical N is reached, O(P log N) isn't just going to be slightly better, it's going to be enormously better from then on out.
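
      That crossover can be made concrete with invented constants (a hypothetical per-primitive cost ratio of 1000:1 in rasterization's favor):

```python
import math

def crossover(c_trace, c_raster):
    """Smallest N where c_trace * log2(N) < c_raster * N (the P factor cancels).

    The constants are hypothetical; in practice they come from measurement."""
    n = 2
    while c_trace * math.log2(n) >= c_raster * n:
        n += 1
    return n

# Even a 1000x constant-factor handicap is eventually overcome by log N.
print(crossover(c_trace=1000.0, c_raster=1.0))
```

      Past that N the gap only widens; whether real scenes ever reach such an N is exactly what the replies dispute.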
      • by Suiggy (1544213)

        I can't tell if you're trolling or you just wrote up a reply without fully reading my post. I never said rasterization is better. I alluded to the fact that ray-casting will win out over rasterization, and in the very near future. What I said was that ray-casting will win out over ray-tracing. The algorithmic complexity of ray-casting + deferred shading is better than recursive ray-tracing.

      • by NoSig (1919688)
        Everything you wrote is mathematically accurate, yet the actually interesting thing is exactly how big N has to get. The mathematics of big-O notation tells you nothing about that, yet that is the crux of the matter. For example, if N has to be bigger than 10^123478234897298, the apparent better asymptotic complexity has no impact on the real world. The point is that if I tell you that one algorithm is O(1) and the other is O(2^n), you haven't actually learned anything useful about which algorithm you shou
        • by Rockoon (1252108)

          Everything you wrote is mathematically accurate, yet the actually interesting thing is exactly how big N has to get.

          You know that this very question has been researched, right? I am amazed that you are intent on discussing this issue without having actually done any research on the matter.

          You might want to start with this 2005 paper from Intel [intel.com] where they do some performance comparison for both hardware rasterization and software raytracing for various scene complexities.

          That paper in particular illustrates how close we are. We are approaching the crossover point with the number of on-screen primitives right now. GPU'

          • by NoSig (1919688)

            Everything you wrote is mathematically accurate, yet the actually interesting thing is exactly how big N has to get.

            You know that this very question has been researched, right? I am amazed that you are intent on discussing this issue without having actually done any research on the matter.

            What I'm discussing is what can and cannot be concluded from big-O asymptotic complexity. You were drawing a mathematically correct conclusion that "for big enough N, raytracing is better." You then made an incorrect further conclusion that "eventually, raytracing is better." Big-O notation never guarantees that you'll ever be able to solve an input so big that the complexity estimate becomes accurate as to which algorithm is better. You then chose to heed my advice and present data on what actually matte

      • The thing is that when the critical N is reached, O(P log N) isn't just going to be slightly better, it's going to be enormously better from then on out

        Obviously, but many commenters seem to gloss over the fact that polygon rendering can also be reduced to O(N log N) by multi-resolution rendering techniques.

    • by loufoque (1400831)

      John Carmack has been quoted as saying that full-blown ray-tracing just isn't feasible for real-time graphics due to the poor memory access patterns involved: casting multiple rays per pixel with multiple recursive steps ends up touching a lot of memory in your geometry data set, which thrashes the cache on CPU and modern/future GPU hardware alike.

      It's not good for a vector processor, but it's still pretty good for a many-core processor.

      • by Suiggy (1544213)

        You still run into the same problems regardless of whether you're using a vector processor with no branch prediction and no cache, a bunch of in-order cores with cache coherency, or full-blown out-of-order cores with cache coherency. You end up pulling in a combinatorial explosion (i.e. an exponential number) of cache lines or memory accesses per recursive ray, tied to the complexity of your scene.

        People like to talk about how you can just throw more cores and distributed computing architectur

        • by loufoque (1400831)

          You still run into the same problems regardless of whether you're using a vector processor with no branch prediction and no cache, a bunch of in-order cores with cache coherency, or full-blown out-of-order cores with cache coherency. You end up pulling in a combinatorial explosion (i.e. an exponential number) of cache lines or memory accesses per recursive ray, tied to the complexity of your scene.

          GPUs have no cache.
          (Fermi has one, but it doesn't really work, so we might as well not count it)

          • by Suiggy (1544213)

            GPUs eventually will get working caches. AMD's next-generation GPUs are shipping with caches comparable to or better than Fermi's, and nVidia's Kepler is getting something better than Fermi.

            Furthermore, Intel's Larrabee, Knights Ferry, and Knights Corner HPC compute products have caches, yet feature just a bunch of simple in-order, non-speculative x86 cores.

            There's a wide spectrum of what's out there, and yes, you can have vector processors with cache, there's nothing saying that you can't do that.

            • by loufoque (1400831)

              Anyway, the point is that of course ray casting is better suited to that hardware, but a lot of raytracing applications, like in medical or semiconductor imaging, already benefit from GPUs greatly.
              And those things actually run in real (or interactive) time.

          • What do you think memory access coalescing is, and why do you think it needs to be on aligned boundaries? Exploiting spatial locality of reference is still a cache, even if it only has a single line. Given the huge impact of non-aligned access within a warp, it seems silly to pretend that a GPU is not a cached architecture.

  • They rely too much on drivers; they should just attack the hardware and make their own renderer rather than relying on crappy OpenGL and DirectX...
    But of course, they don't really want to invest in R&D. Game development is more of a "do something dirty quickly and then throw it away" kind of thing.

    • by hedwards (940851)

      That would be a tremendous step backwards. You can get away with doing that if you're programming for a console, in fact that's how it used to be done. The problem is that as soon as you've got any variation at all in the hardware you very quickly start to have to code for every individual unit that you're going to support.

      Need multiple resolutions? Well, you're going to have to make sure you code for them rather than handing them off to a 3rd party library. Unit have extra RAM? Well, you're going to have t

      • Even a few years ago, hell, probably still, for all I know, there was the DX8 path, the DX9 path, the openGL-nvidia path, the openGL-ATI path, and so on.

        Or fifteen years ago, when part of the setup was picking exactly the correct video mode (hope your monitor and card support VESA 2.0 modes) and sound card, down to IRQ and DMA settings....

        • by hedwards (940851)

          I spent a very small amount of time playing around with hamlib for the GBA and that's how things were done on it. If you wanted to draw something on screen, there'd be a particular set of registers to write to, same for most other things that you might want to do. All in all it was a pretty nice set up.

      • by loufoque (1400831)

        The problem is that as soon as you've got any variation at all in the hardware you very quickly start to have to code for every individual unit that you're going to support.

        If your software is of good quality, it is generic and easily retargetable.

        Also, if you just consider GPUs, the interfaces to program them (CUDA in particular) have been there for quite a few generations and will probably stay for a long time still.
        And all NVIDIA (and probably also AMD) drivers use the same code, with relatively few low-l

    • by NoSig (1919688)
      That is the bad old days of computer graphics and sound - if a given game wasn't written for your particular hardware, too bad. It's hard to write to the hardware when there is a proliferation of distinct graphics cards out in the world and many more are added every year. On top of that, the way to talk to a given graphics card is often secret. There's a reason that people use OpenGL and DirectX.
      • by loufoque (1400831)

        You say this as if the market wasn't essentially restricted to two vendors, with one clearly preferred by gamers.

        • by 0123456 (636235)

          You say this as if the market wasn't essentially restricted to two vendors, with one clearly preferred by gamers.

          But a game written for OpenGL or Direct3D in 2001 still runs on modern hardware. A game written to write directly to 2001 hardware does not.

          Writing directly to hardware without a standardised API is retarded and pretty much guarantees that in ten years time the software won't work or performance will be lousy if it does.

          • by loufoque (1400831)

            But a game written for OpenGL or Direct3D in 2001 still runs on modern hardware. A game written to write directly to 2001 hardware does not.

            This is irrelevant, since the industry does not try to make money out of old games. What they want is to make money at launch, then milk the cash cow for some time with a couple of people working on DLCs, then move on.

            Also notice how your old Playstation games don't work without a Playstation (short of using an emulator). This is not a serious issue. Games are not meant

            • by gl4ss (559668)

              Cell was seriously overhyped as a platform for generic computing - as was the PS3, which is seriously short on memory. Dreamcast would make a fine example of using a shitty general SDK (the WinCE-based one) vs. doing native, because on it

              now, there's a difference between game engine developers and content-jockeys. content jockeys just create content on autodesk tools, totally dependent on old school artistic plot writing and art creation - thus, their games very rarely manage to amaze people on the virtual world si

        • by NoSig (1919688)
          They make more than 1 graphics card each. Btw Intel sells more graphics cards than anyone else. You've probably got one integrated on your computer without knowing about it. There are also many flavors of each graphics card that the big companies come out with.
          • by loufoque (1400831)

            All flavours of NVIDIA cards run the same base OpenGL implementation, likewise for ATI/AMD.
            Intel GMA cannot even start most recent games.

            • by NoSig (1919688)
              If you listen to the interview, you'll hear John Carmack saying that built-in cards might out-perform dedicated cards for some things in the future, if they gain better memory access by virtue of using system memory directly. How do you know that the hardware interface to graphics cards never changes? That doesn't sound right to me, but if you have an inside source, feel free to enlighten us.
            • by chammy (1096007)

              All flavours of NVIDIA cards run the same base OpenGL implementation

              Nope, sorry. There is a reason why there exists a "legacy" Nvidia driver.

    • by LWATCDR (28044)

      Let me make a guess. You don't program do you?
      1. AAA games already cost a lot to make. You want to spend $200 for a game?
      2. Hardware is changing fast. If you write for the hardware what hardware do you write for? Which card? All of them? What about the cards that come out while you are spending the three years developing the game?

      Now you may be confusing drivers with game engines, but even then you would be wrong, just not insane.

      • by loufoque (1400831)

        Let me make a guess. You don't program do you?

        I write software tools for high-performance computing that work on a variety of hardware, including all variations of x86, POWER, PowerPC and Cell, ARM, GPUs, multi-core, clusters... and other more confidential architectures (many-core, VLIW, FPGA, ASICs...)
        Supporting a lot of different hardware is not an insurmountable problem if you have a good design (and a good test farm).

        Now you may be confusing drivers with game engines but even then you would be wrong jus

        • At first I thought you were insane, because writing for individual vendors is kind of crazy. However, between Intel, nVidia and ATI, you've cast a pretty wide net.

          Then I realized it's even crazier to want nVidia, ATI and Intel to all make their hardware communicate the same way throughout the generations.

    • Really?

      I mean, the main criticism for id games has been that they are less games and more tech demos. Practically every game engine id ever sold has been used for much more interesting games by other people, but id still gets license fees.

      If they don't invest in R&D, why are they doing their own engine design at all, and more importantly, just what are they investing in?

      And even if you're right about OpenGL and DirectX being "crappy", which I highly doubt, the fact is that they are at least somewhat por

      • by loufoque (1400831)

        It wasn't a comment directed at id in particular, but at the game developer industry in general.

        We'd be back to the bad old days when you actually had to check the system requirements for your sound card and video card manufacturer.

        Don't you have to do this anyway?
        It's true consoles have stopped PC games from requiring new hardware, but a few years ago, you needed to replace your graphics card every other year to play new games.

      • Directx is only 'cross platform' if you count different video cards and the xbox360 as different platforms.
        OpenGL 'is' cross-platform: you can run it on your PC, it's used on Macs, it's used in Linux, it's used in most cellphones, it's used in most consoles (sans Xbox and Xbox 360).

        • Directx is only 'cross platform' if you count different video cards and the xbox360 as different platforms.

          I'm pretty sure that an Xbox 360 and a PC are different platforms. "Cross-platform" only means "runs on different platforms" it does not imply some minimum number of platforms. If I write something that works on both OS X and Windows it is "cross platform" even if it won't run on Linux or a cellphone.

          • It's not cross-platform, because the Xbox 360 runs Windows built for the PPC chip inside. Cross-platform would be DirectX being able to run on Mac OS X.

        • I didn't say "cross-platform", you did. I said "somewhat portable across hardware," which is true -- you can develop a game for nVidia, ATI, and yes, even Intel, and if you stick to the API, you don't need code specific to any of those cards.

    • by tempest69 (572798)
      I'm going to agree, at least partially. I think we do rely too much on drivers to overcome the problem of a very diverse video card system.

      This bugs me.
      I see no compelling reason for the video manufacturers to postpone a more modern standard for video cards. Video cards have simple compatibility at a ~20 year old standard. It is really depressing to see that when I install an OS, the video is set to some horrible resolution. When I go to download the driver it's a mess, because the site is nearly unu
  • Did anyone else catch that?

    That's probably the most interesting thing he said all day. Throw an ARM core on the GPU, provide some sort of standard API for using it, and you eliminate all those pesky latency issues. Modern GPUs have enough RAM that one could potentially push the entire renderer onto the GPU, with the CPU and main memory only being responsible for game logic.

    Of course, he also seems to be implying that this might go the other way, with integrated graphics winning out...

    • by Rockoon (1252108)
      I thought it was interesting that he said he's spent the last 1.5 years working on raytracers. Carmack is very much a research guy now, not a developer. He pays others to develop.
    • by gl4ss (559668)

      pyramid3d.

      shit idea, computer on a board. yo dawg..

      anyhow, graphics AND game logic (game physics) are very closely connected. when they're not, you end up with stuff like nwn-engine derived shit (where the game is like a game from the 386 era, but with plastered-on super fancy graphics, while the game still basically runs in a 2d field, or where a game is full of, let's say, grass that's shaded, but that grass has no meaning to gameplay or even the ai's field of view).

      • shit idea, computer on a board. yo dawg..

        Too late. My current video card has many times more RAM than my first computer. It also has two GPUs, which can perform certain tasks ridiculously faster than my CPU despite "only" operating at a few hundred megahertz. Tons of stuff can be trivially offloaded to the point where the CPU is just coordinating things, and we hardly even need such a low-latency connection, at least for non-game-related tasks.

        ARM chips are so small, cool, cheap, such a small power draw that they're insignificant next to everything

    • by mikael (484)

      That was done 20 years ago with TIGA graphics boards. There was a simple BSP tree demo (flysim) which downloaded the scenery database and renderer into the card's memory. All the host PC did was convert keyboard events into camera motion commands.
