
DirectX 'Getting In the Way' of PC Game Graphics, Says AMD

Bit-tech recently spoke with Richard Huddy, worldwide developer relations manager of AMD's GPU division, about why PC game graphics don't outstrip console game graphics by nearly as much as the hardware gap would suggest. Quoting: "'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good - DirectX is getting in the way.' Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.' 'I certainly hear this in my conversations with games developers,' he says, 'and I guess it was actually the primary appeal of Larrabee to developers – not the hardware, which was hot and slow and unimpressive, but the software – being able to have total control over the machine, which is what the very best games developers want. By giving you access to the hardware at the very low level, you give games developers a chance to innovate, and that's going to put pressure on Microsoft – no doubt at all.'"
  • Yeah right (Score:5, Interesting)

    by ggramm ( 2021034 ) on Saturday March 19, 2011 @07:02AM (#35541046)
    I worked for MicroProse in the '90s. Back then we had direct access to hardware, but the technology was limited. GFX power increased and new tricks came. Nowadays it wouldn't be possible to do all that.

    DirectX is the sole reason we have good games and graphics on PC. No one wants to reinvent the whole wheel, and Microsoft works closely with GPU manufacturers to come out with new technology.

    DirectX is not the reason; it's the lazy developers who just port the game from consoles to PC. They don't spend the time to make a PC version that uses DirectX and the newest graphics cards to their fullest capability, so why on earth would they do that if you removed DirectX?

    There is no DirectX on Linux, and just look at how laughable the situation is. Yeah, there's NetHack and some clone of Civilization 2 with worse graphics, but it's far from the console and PC games that gamers actually play. It's a joke.
    Microsoft has gone to great lengths to support PC gaming. We should all thank Microsoft that the situation is even this good. Who we should bitch at are the lazy developers and AMD, who has also been lagging behind. NVIDIA and Microsoft are basically doing all the innovation, and their hardware is miles ahead of AMD's. Microsoft, Intel and NVIDIA. All great companies with great products that are truly working for PC games.
  • by goruka ( 1721094 ) on Saturday March 19, 2011 @07:31AM (#35541158)
    Disclaimer: I am a pro game developer, wrote a few engines for commercial games, etc. I know what this guy means and I'll try to explain it a bit better. The biggest problem with the DX model (which was inherited from GL) is the heavy dependency on the CPU to instruct it what to do.
    State changes and draw commands are all sent from the CPU, buffered, and then processed in the GPU. While this speeds up rendering considerably (the GPU is always a frame or two behind the CPU), it makes it hard to get feedback from the GPU about the rendering state; since all the DX/GL commands are buffered, retrieving state or data means a flush/sync.
    Everything from modern algorithms for occlusion estimation and global illumination down to the overall reduction of state changes would benefit greatly if, for most tasks, the GPU could act by itself, running a user-made kernel that issues the commands and state changes instead of relying on DX. But for some reason this is not the direction GPUs are heading, and it really doesn't make sense. Maybe Microsoft has something to do with it, but since DirectX 9 became the standard for game development, the API has only become easier to program in versions 10 and 11; it hasn't had major changes.
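    A simple example of that feedback cost is an occlusion query. A rough GL sketch (assumes a valid GL 1.5+ context; the draw calls are just placeholders):

        GLuint query;
        glGenQueries(1, &query);

        glBeginQuery(GL_SAMPLES_PASSED, query);
        // ... draw the occludee's bounding box here ...
        glEndQuery(GL_SAMPLES_PASSED);

        // Naive readback: the driver has to flush and wait for the GPU to
        // chew through a frame or two of buffered work before it can answer.
        GLuint samples = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);

        // Less painful: poll, and consume the result a frame later at the
        // cost of acting on slightly stale visibility data.
        GLuint ready = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT_AVAILABLE, &ready);
        if (ready)
            glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);

    If the GPU could consume its own query results, none of that round trip to the CPU would be needed.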
  • Re:Unification? (Score:4, Interesting)

    by JackDW ( 904211 ) on Saturday March 19, 2011 @07:44AM (#35541192) Homepage

    This is a very good point; the overhead of API calls can be a significant bottleneck.

    I'd suggest that a good solution is to move applications to entirely managed code (e.g. C#), so that there is no need for any hardware-enforced barrier between the kernel and the applications (cf. Singularity [microsoft.com]). In the best case, you may end up with a situation in which a JIT compiler inlines parts of the kernel's graphics driver directly into the application code, effectively run-time specialising the application for the available hardware. We already see hints of this happening, for instance the use of LLVM bitcode in Apple's OpenGL stack [wikipedia.org].

  • Re:Yeah right (Score:4, Interesting)

    by Tapewolf ( 1639955 ) on Saturday March 19, 2011 @07:53AM (#35541208)

    Amen, mod parent up. Troll? wtf? what shill modded troll?

    Well, I suspect the reason it is considered a troll is that it rewrites history and ignores the facts in order to support its conclusion.
    Stuff like ignoring the thriving DOS games market prior to 1998 or so, when Windows finally took over, and brushing OpenGL and SDL under the carpet. I imagine that picking things like NetHack and Freeciv as a snapshot of Linux gaming, when you had Wolfenstein 3D, Sauerbraten and various other 3D-accelerated games, was what pushed the moderators over the edge. I certainly wouldn't pick Solitaire as an example of what Windows gaming looked like, and I loathe Windows.

  • by LordHavoc ( 1394093 ) * on Saturday March 19, 2011 @09:01AM (#35541474) Homepage

    The way things work on consoles is approximately similar to Windows/Linux/Mac, except for these important distinctions:
    1. The hardware is a known target, so the shader compilers and other components are carefully optimized for that hardware alone; they do not produce intermediate bytecode formats or make lowest-common-denominator assumptions about the hardware.
    2. The APIs allow injecting raw command buffers, which means you do not have to use the API to deliver geometry in any way, shape or form; the overhead goes away, but the burden of producing a good command buffer falls on the application when it uses these direct-to-hardware API calls.
    3. The APIs have much lower overhead, as they are not a middle-man on the way to the hardware but an API implemented (if not designed) specifically for the hardware. For example, Microsoft had the legendary Michael Abrash working on their console drivers.
    4. The hardware memory layout and access bandwidth are known to the developers, and certain optimization techniques become possible, for example rendering to a framebuffer in system memory for software processing (on Xbox 360 this is done for certain effects; on PS3 it is heavily utilized for deferred shading, motion blur and other techniques that run faster on the Cell SPE units). In some cases this has other special implications, like storing sound effects in video memory on PS3, because the Cell SPE units have a separate memory path to video memory and can thus tap into this otherwise "unused" bandwidth for their sound mixing.
    5. 3D stereo rendering is basic functionality on consoles.

    The article is making the argument that we should be able to produce command buffers directly and insert them into the rendering stream (akin to OpenGL display-lists but new ones produced every frame instead of statically stored).
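    For reference, the display-list analogy looks roughly like this in legacy OpenGL (a compile-once, replay-many sketch; the actual draw calls are placeholders):

        // Compile a fixed sequence of state changes and draw calls once.
        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);
        //   glBindTexture(...), glDrawElements(...), etc.
        glEndList();

        // Each frame just replays the driver's pre-baked copy.
        glCallList(list);

    The console model is the same idea without the API in the middle: the application writes an equivalent command buffer itself, every frame, and hands it straight to the GPU.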

    It is also making the argument that we should have explicit control over where our buffers are stored in memory (for instance rendering to system memory for software analysis techniques, like id Software's MegaTexture technology, which analyzes each frame to determine which parts of the virtual texture need to be loaded).

    There are more subtle aspects, such as knowing the exact hardware capabilities and designing for them, which are less of a "No API!" argument and more of a case of "Please optimize specifically for our cards!", which is a tough sell in the game industry.

    AMD has already published much of the information that studios will need to make use of such functionality, for example the Radeon HD 6000 series shader microcode reference manual is public already.

    Intel also has a track record of hardware specifications being public.

    However NVIDIA is likely to require a non-disclosure agreement with each studio to unlock this kind of functionality, which prevents open discussion of techniques specific to their hardware.

    Overall this may give AMD and Intel a substantial edge in the PC hardware market - because open discussion of graphics techniques is the backbone of the game industry.

    On the fifth point, it is worth noting that NVIDIA GeForce drivers offer stereo rendering in Direct3D but not OpenGL (despite OpenGL having had a stereo rendering API from the beginning); they reserve this feature for their Quadro series cards for purely marketing reasons. This restriction prevents the use of stereo rendering in many OpenGL-based indie games, another case of consoles besting the PC in functionality for ridiculous reasons.

  • by Anonymous Coward on Saturday March 19, 2011 @09:08AM (#35541506)

    Nope. Right now the GPU-CPU situation looks like my boss dictating an email to his secretary - it probably wouldn't take as long if he just told her to inform the recipient he's going to be late. The developers want all possible API ops moved to the GPU where the CPU doesn't get in the way. They still want a standard API and most certainly don't want to develop straight for the metal.

  • by Zevensoft ( 1784070 ) on Saturday March 19, 2011 @09:35AM (#35541584)
    I've programmed DS game engines as well as high-performance industrial OpenGL, and the frustrating thing about OpenGL (or DX, they're both just wrappers around NV or AMD) is the inability to send data in the other direction, i.e. from the GPU to the CPU, without killing performance. The DS didn't have that problem because the vertex processor was decoupled from the pixel processor, and even then you could redirect outputs wherever you liked, as well as having full access to the 4-channel DMA controller! We would do occlusion culling on the vertex processor before animation, and also reduce polygon counts for the rasteriser.
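    On the PC side, about the best you can do to hide that readback cost is an asynchronous transfer through a pixel buffer object. A rough sketch (assumes a current GL context with PBO support; width and height are placeholders):

        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);

        // With a pack PBO bound, glReadPixels returns immediately and the
        // copy happens asynchronously on the GPU/DMA side.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        // ... do other CPU work, ideally come back a frame later ...

        // Mapping is where the stall happens if the transfer isn't done yet.
        void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (pixels) {
            // consume the pixel data on the CPU
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    Even then you're only hiding the latency, not the fact that the data has to make a round trip over the bus.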
  • I suspect one of the reasons for this is that Microsoft has taken the view, in the last 6-7 years, that the GPU can be used for accelerating and enhancing the desktop experience (Aero, IE9). Their other goal, to a certain extent, is cross-platform compatibility: making it possible to write casual games for Windows, phone, and Xbox.

    Disclaimer: I wrote a game way back in 1994, directly interfacing with the VGA card, in straight x86 assembly. I was totally bare-metal 17 years ago. I haven't really kept up on game development much since then. However, I wrote a clone of it in XNA recently. It took me about 4 hours to replicate 9 months of work from 1994, and that includes the time to download, install, and learn XNA. My, how things have changed.

  • Re:Yeah right (Score:5, Interesting)

    by CastrTroy ( 595695 ) on Saturday March 19, 2011 @10:07AM (#35541750)
    Yes, things were so much better back in the day, when you had to have a very specific graphics card, or audio card, or joystick, otherwise the game wouldn't work. Developers had to code for each piece of hardware individually. If you bought a 3dfx Voodoo card, there was a bunch of games you could play and a bunch you couldn't. If you bought a Gravis Ultrasound, you were very much out of luck, because most stuff was coded for the Sound Blaster and a lot of stuff lacked support for your third-party sound card. Joystick support was a complete mess. Also, games don't look 10 times as good because then they could only run on 1% of the machines, and that is not a big enough market. Sure, faster computers exist, but the computers that most people own are probably about as powerful as a console, especially if you look at the graphics chip.
  • Re:Unification? (Score:4, Interesting)

    by Rockoon ( 1252108 ) on Saturday March 19, 2011 @11:04AM (#35542058)

    As for video, why can't you generate that into a texture and draw it as a quad?

    Textures aren't any different from framebuffers when you get right down to it. You still need to lock the buffer, etc.

    But in all honesty, the bus is so slow that you never want to write individual pixels over it anyway... once you have settled on shuttling millions of bytes at a time over the bus for efficiency reasons, it really doesn't matter what the boilerplate surrounding that operation is... aggregated over all those pixels, the overhead can only be minimal.
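    In GL terms, the usual pattern is one bulk upload per decoded frame and then a textured quad (a sketch; texture setup abbreviated, and width/height/decodedFramePixels are placeholders):

        // One-time setup: allocate a texture large enough for the video frame.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        // Per frame: one big transfer over the bus, then draw.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, decodedFramePixels);
        // ... draw a fullscreen quad sampling 'tex' ...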

    I think AMD's point, though, is that something like DirectX enforces the rasterization paradigm, when the hardware could be so much more if it wasn't forced to offer good performance for that specific API.

    We are at the point now where the number of computations per second performed by today's GPU hardware should be enough to handle realtime raytracing... nothing spectacular yet in the secondary-ray department, maybe just a few secondary rays per pixel, but interesting/unique stuff. But the hardware simply doesn't expose the functionality in a way that allows its horsepower to be leveraged that way effectively, and that could in fact be blamed on DirectX being the only API that matters. What if the hardware could be designed differently, so that fill rate (as an example: lots of triangles leading to lots of overdraw requires lots of fill rate) wasn't as important?

  • by rasmusneckelmann ( 840111 ) on Saturday March 19, 2011 @02:28PM (#35543240)
    I don't think many (if any) game developers are using either OpenGL 4 or DirectX 11 at their full potential yet. DirectX 11 in particular is designed to allow a lot of multithreading and decoupling of the GPU pipeline from the CPU. If you implement a naive rendering engine with OpenGL or DirectX, sure, you'll find that most of the time you're just sitting around waiting for synchronization and buffer flushes. But if you design your software around multithreading and the new API features, you can squeeze a lot more juice out of the system. Also, I'm sure there are a lot of geometry shader pipeline tricks waiting to be discovered, which will further decouple the GPU from the CPU. I wouldn't be surprised if we "soon" see the merging of the vertex and geometry shader pipelines, maybe even together with compute shaders. When that happens, the differences between OpenGL and DX are probably going to be very minor (and very, very close to the hardware layer).
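    The multithreading he's referring to is, roughly, Direct3D 11's deferred contexts: worker threads record command lists, and the main thread submits them. A minimal sketch (assumes an already-created ID3D11Device; error handling and the actual draw calls omitted):

        #include <d3d11.h>

        // Worker thread: record work without touching the GPU.
        ID3D11CommandList* RecordWork(ID3D11Device* device)
        {
            ID3D11DeviceContext* deferred = NULL;
            device->CreateDeferredContext(0, &deferred);

            // deferred->IASetVertexBuffers(...), deferred->Draw(...), etc.

            ID3D11CommandList* commandList = NULL;
            deferred->FinishCommandList(FALSE, &commandList);
            deferred->Release();
            return commandList;
        }

        // Main thread: replay the pre-recorded work in a single call.
        void SubmitWork(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
        {
            immediate->ExecuteCommandList(commandList, FALSE);
            commandList->Release();
        }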
