DirectX 'Getting In the Way' of PC Game Graphics, Says AMD
Bit-tech recently spoke with Richard Huddy, worldwide developer relations manager of AMD's GPU division, about why PC game graphics haven't pulled dramatically ahead of console graphics despite the hardware gap. Quoting:
"'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good, DirectX is getting in the way.' Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.' 'I certainly hear this in my conversations with games developers,' he says, 'and I guess it was actually the primary appeal of Larrabee to developers – not the hardware, which was hot and slow and unimpressive, but the software – being able to have total control over the machine, which is what the very best games developers want. By giving you access to the hardware at the very low level, you give games developers a chance to innovate, and that's going to put pressure on Microsoft – no doubt at all.'"
Yeah right (Score:5, Interesting)
DirectX is the sole reason we have good games and graphics on PC. No one wants to reinvent the whole wheel, and Microsoft works a lot with GPU manufacturers to come out with new technology.
DirectX is not the reason; it's the lazy developers who just port the game from consoles to PC. They don't spend the time to make a PC version that uses DirectX and the newest graphics cards to their fullest capability, so why on earth would they do that if you remove DirectX?
There is no DirectX on Linux, and just look at how laughable the situation is. Yeah, there's NetHack and some clone of Civilization 2 with worse graphics, but it's far from both the console games and PC games that gamers play. It's a joke.
Microsoft has gone to great lengths to support PC gaming. We should all thank Microsoft that the situation is even this good. Who we should bitch at are the lazy developers and AMD, who has also been lagging behind. NVIDIA and Microsoft are basically doing all the innovation, and their hardware is miles ahead of AMD's. Microsoft, Intel and NVIDIA. All great companies with great products that are truly working for PC games.
Re: (Score:3, Informative)
Funny, Steam games run just fine.
--
BMO
Re: (Score:2)
You are right. But don't forget that DOS was just an OS; it was the exe file, often assembly-coded with huge tables for graphics, collision detection, etc., that did all the work.
Re:Yeah right (Score:5, Insightful)
There is no DirectX on Linux, and just look at how laughable the situation is. Yeah, there's NetHack and some clone of Civilization 2 with worse graphics, but it's far from both the console games and PC games that gamers play. It's a joke.
Don't blame the lack of DirectX for the lack of games on Linux. OpenGL works just fine on it, as it does on Windows.
And Mac, much to the delight of the four people who want to play games under OS X.
As far as getting rid of graphics APIs goes, yeah, that's exactly what we need: to go back in time fifteen years and make devs write their games for every piece of graphics hardware under the sun. There's a damn good reason the industry started using them, and it's still as relevant today as it was back then.
Re:Yeah right (Score:5, Informative)
And Mac, much to the delight of the four people who want to play games under OS X.
Last I heard you are about 5 orders of magnitude off with respect to Mac users playing World of Warcraft. :-)
Re:Yeah right (Score:5, Funny)
Re: (Score:2)
Haha. Good answer! :D
Re:Yeah right (Score:5, Insightful)
Don't blame the lack of DirectX for the lack of games on Linux. OpenGL works just fine on it, as it does on Windows.
And Mac, much to the delight of the four people who want to play games under OS X.
Don't forget iOS! Pretty popular gaming platform these days, and it supports OpenGL ES 2.0.
Re: (Score:2)
Don't blame the lack of DirectX for the lack of games on Linux. OpenGL works just fine on it, as it does on Windows.
Really? I've been told that the proprietary OpenGL drivers on Linux aren't that good quality, especially AMD's.
Re:Yeah right (Score:4, Informative)
Really? I've been told that the proprietary OpenGL drivers on Linux aren't that good quality, especially AMD's.
Mozilla's list of Blocklisted Graphics Drivers [mozilla.org] is a reasonable guide to the reliability of drivers in general at this time, since they are currently working through the issue. They assert (in other sources as well) that only nVidia has a working OpenGL pipeline on Linux.
Re: (Score:2, Informative)
The NVidia one is feature-complete with the Windows one. Runs beautifully. The only thing it doesn't have is hybrid SLI/Optimus, and that could possibly be fixed when Wayland comes out.
NVidia has been putting a lot of love into Linux, even if it is tough love, like not giving us open source drivers or following standards like KMS.
Re: (Score:3)
Don't blame the lack of DirectX for the lack of games on Linux. OpenGL works just fine on it, as it does on Windows.
I completely agree! I have World of Goo, Braid, Osmos, Penumbra, Nexuiz/Xonotic, NetHack (of course), Scorched 3D, Battle for Wesnoth (get it, it's awesome), emulators for pretty much every console on the planet except the really powerful ones, and also a ton of games in Wine (Civilization IV with no DRM, pretty much every one of Telltale's adventure games, Baldur's Gate, Psychonauts, Portal/HL2) and they all work perfectly (in fact, sometimes better than on my Windows XP partition). OpenGL is more than wha
Re:Yeah right (Score:4, Informative)
Yup, been there. I recently tossed out 'direct to metal' CD versions of Descent, Tomb Raider, Motocross Madness, and many others that were chipset-specific, made for architectures like the Rendition Vérité [wikipedia.org], 3dfx Voodoo [wikipedia.org], S3 ViRGE [wikipedia.org], etc. Not because they aren't great games, or because I couldn't run them on a DOS virtual machine or boot to a DOS environment, but because I don't have the video card they were written for, or even a slot to plug one into. However, the majority of Windows DirectX 3 games from ~1996 are install-and-play even on Windows 7. ATI (now part of AMD) and NVidia were the graphics chipset makers that rode on DirectX instead of a native hardware API, and they are the winners. It's too bad that a cross-platform, cross-vendor API like OpenGL didn't come out ahead as well.
BTW, I worked for Diamond Multimedia (there's a Diamond card in each Wikipedia reference above) during the graphics good times of six-month upgrade cycles, and got to play with bleeding-edge 3D hardware while the public was still looking at a replica card in a CES glass box.
Re: (Score:2)
Microprose was an awesome company and I had all of their games from Silent Service up to and including Falcon 4.0. It's too bad that the company got swallowed whole and recycled so many times.
I agree that developer laziness is behind many development problems and it's not just limited to DirectX. Look at that steaming pile of horseshit called Bink, which was very popular at one point despite being a festering abscess of sloppy code. Look at some current (cough Miles cough) sound drivers that cause popular
Re: (Score:3)
I agree with some of your points but disagree with a few:
MS was largely successful with DirectX: the goal of letting developers mostly ignore the graphics hardware while concentrating on a standard API was achieved. For ill or good, it's driven game design on the dominant platform for years, and arguably kept OpenGL on the defensive.
MS may provide driver support with their OS because it is to their benefit to have out-of-the-box support, but they have never been best in driver support. They leav
Lowest common denominator is Intel's Graphics My (Score:2)
I think a large part of the lack of perceived difference between consoles and PCs these days has a bit to do with the least common denominator, which unfortunately also happens to be an aging dedicated gaming console
As I understand it, the lowest common denominator console for "grown-up" consoles (Xbox 360) is far more powerful graphically than the lowest common denominator PC (any PC with integrated video). Half a GB of RAM and an AMD Radeon X1900 beat 2 GB of RAM and an Intel "Graphics My Ass".
Re: (Score:2)
You do realize the 360 has a 500MHz ATI Xenos? Hardly top of the line. It's a 2005 graphics card.
You do realize that the GMA isn't even as powerful as a 2005 graphics card?
Re: (Score:2)
Actually, while the pixel processor might be slow, it has 10 MB of very, very fast EDRAM. This allows you to do render-to-texture (and thus post-processing) with very little load on the GPU. That makes a lot of effects very cheap (time-wise), and you can pull off a lot of nice things with it.
Re: (Score:3)
It's always been the responsibility of the hardware OEM. Outside of the Linux world, OSes have stable kernel ABIs that allow hardware vendors to write drivers without having to worry about next week's minor kernel patch breaking them.
Do not project the unusual and disadvantageous situation with Linux onto every other platform.
Re: (Score:3)
Am I the only one who remembers the demo scene? Pure DOS. No DirectX.
Stars, Wonders of the World (1995) [youtube.com] - (Contains brief cartoon nudity near start).
Re: (Score:2)
Addendum, for those too pressed for time to watch the entire 6:20 demo, the intro finishes at 1:11. Highlights include the face-through-the-wall at 3:13 and the hula-hoop scene at 4:35.
Re: (Score:3)
I remember the demo scene. I remember having to use QEMM to get enough ram for the demos to run, then having them crash. I remember some demos working on my gfx card and not my friends', I remember having drivers for specific sound cards, etc.
Re: (Score:2)
No AI. Color cycling. Fractals. What you saw in the demo scene in the eighties is now available as a visualisation plugin for Media Player or Winamp. It looked impressive back then, but they were mere hacks pushing the metal to its fullest. Still, they all used the same similar tricks. I watched a few of those again some time ago and was not impressed anymore.
Re: (Score:3)
Because, and this is the entire point of the article, you only have the API, you don't have the low-level access to the device. It's like only being able to run Java / CLR bytecode instead of native code on your CPU. If the abstractions picked by the people who designed the bytecode are a good match for what you're doing, then that's great, but if not then you're stuck.
The DirectX / OpenGL stack is optimised for certain uses. If you had complete access to the card at a low level (which can be done saf
Re: (Score:2)
Because, and this is the entire point of the article, you only have the API, you don't have the low-level access to the device.
ATI "could" change this any old time they wanted by exposing more of their own API within their driver. They are not doing this for one or more reasons which should be fairly obvious but I will take a stab at a couple of them now. One of them could be that some of their secret sauce would be exposed, the stuff in the driver that they claim it's so important to keep secret that they can't just give us code, but they have to dribble out partial specs for video cards so that OSS developers can do a half-assed
Re: (Score:3)
ATI "could" change this any old time they wanted by exposing more of their own API within their driver. ... The other reason of course is that nobody wants it. If you go back far enough then you will remember when games had a Mystique version, and a PowerVR version, and a 3dfx version... Those days are not anything we want to bring back.
I agree that we don't want to have a different API for every piece of hardware, but I don't think that's really the idea here. You don't want the GPU equivalent of assembly language, you want the GPU equivalent of C -- something as low level as possible without being hardware-specific, and then a compiler or equivalent with a back end for each different hardware architecture.
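That's roughly what shader languages already give you today: you hand the driver portable source and its compiler back end targets whatever silicon is actually installed. A minimal sketch (assuming a GL 2.0+ context and loaded entry points, e.g. via GLEW; nothing here is specific to any vendor):

// Hand portable GLSL to the driver; its back end compiles it for the installed GPU.
#include <GL/glew.h>
#include <cstdio>

GLuint compile_fragment_shader(const char* src)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &src, NULL);   // portable "C-like" source goes in
    glCompileShader(sh);                 // vendor-specific machine code comes out
    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(sh, sizeof(log), NULL, log);
        std::fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return sh;
}

The point being: the source is hardware-independent, the compilation isn't, and that split is exactly the "GPU equivalent of C" being asked for.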
Re:Yeah right (Score:5, Interesting)
Re: (Score:2)
The GUS Sound Blaster support (it was emulated, by the way) was terrible and didn't work properly in many games. When games started to go to Windows the GUS was basically dead, because Gravis never managed to write decent drivers for it, which had something to do with the fact that the GUS did not support DMA-streamed audio or something. Its wavetable/sequencer-based design was almost fundamentally incompatible with the way Windows sound drivers were supposed to work. I never managed to get decen
Re:Yeah right (Score:4, Interesting)
Amen, mod parent up. Troll? wtf? what shill modded troll?
Well, I suspect the reason it is considered a troll is because it rewrites history and ignores the facts in order to support its conclusion.
Stuff like ignoring the thriving DOS games market prior to 1998 or so when Windows finally took over. Brushing OpenGL and SDL under the carpet. I imagine that picking things like nethack and freeciv as a snapshot of linux gaming when you had Wolfenstein 3D, Sauerbraten and various other 3D-accelerated games was what pushed the moderators over the edge. I certainly wouldn't pick Solitaire as an example of what windows gaming looked like, and I loathe Windows.
Re: (Score:2)
Never attribute to malicious intent that which can be attributed to ignorance.
If you're like most people out there, you haven't really given *nix a try.
I'm not a dev. I don't know why OpenGL is not as popular as DirectX. I only know that DirectX has all the games I currently play. I can only guess why that is.
Re: (Score:2)
Any sufficiently advanced incompetence is indistinguishable from malice.
--
BMO
Re: (Score:2)
It starts out well, but practically everything in the last section is utter bullshit.
Re: (Score:2, Insightful)
As someone who has tried them and gotten pissed off at the APIs multiple times: yes, the APIs need to be much thinner.
Let's do a quick comparison of how stupidly inefficient game development is...
1. Xbox/Xbox 360 - DirectX, Managed C#
2. Wii/Gamecube - OpenGL, C/C++
3. PS2/PS3 - OpenGL, C/C++
4. PC - DirectX, Managed and Unmanaged C, C++, C#, OpenGL
5. Mac - OpenGL, C/C++, ObjC
6. Linux - OpenGL, C/C++
7. Android - OpenGL, Java/Native C/C++ maybe.
8. iOS - OpenGL, C/C++/ObjC
9. Windows Phone 7 - DirectX, Managed C#
10. All the other mobile phones and devices - Not DirectX
Re: (Score:2)
Re: (Score:2)
The big titles for the Xbox 360 are developed in C/C++, not managed C#.
In fact, it's kind of obvious, considering the massive work it would require to port games between the Xbox 360 and the PS3.
Re: (Score:2)
Only XBLA and indie games run in managed C# using XNA, since Microsoft wants some sort of protection against running full unsigned code on the device (which makes the Xbox the only console that hasn't been completely rooted).
Also, AMD can suck it. The reason for PC games to look the same as the console versions is not the fact that developers hate directx or opengl apis, it's the fact that it takes a buttload of work to make content in higher resolutions for PC only. Content budgets are already sky-high, and
Re: (Score:3)
Re: (Score:2)
And while we're at it, I'd like a pony! :D
Re:Yeah right (Score:5, Informative)
> Let's do a quick comparison of how stupidly inefficient game development is...
> 2. Wii/Gamecube - OpenGL,C/C++
> 3. PS2/PS3 - OpenGL, C/C++
Your facts are wrong. I've _shipped_ games on Wii, PS2, amongst other consoles. Currently, I do compiler support on the PS3 and am familiar with the rendering APIs that drive the RSX.
* The Wii does NOT use OpenGL. I personally know because I wrote an OpenGL implementation over _top_ of the native GX calls. While the GX*() API _is_ strongly _based_ on OpenGL, it is NOT OpenGL.
* The PS2 does NOT have OpenGL. You either
a) manually build a packet to set the GS registers,
b) use the sce*() calls, or
c) write your own API.
At one job, where I wrote the Wii-OpenGL, we had an in-house implementation of OpenGL running on the PS2, but that was, again, over _top_ of the native GS registers.
* There are 2 rendering APIs on the PS3: GCM and OpenGL. I could probably count on one hand the total number of developers that have shipped their game with OpenGL. Almost no one ships OpenGL because it is SLOWER and LESS EFFICIENT than GCM.
Please get your facts straight before looking like an ignorant fool.
Cheers
Re: (Score:2)
1. Xbox/Xbox 360 - DirectX, Managed C#
2. Wii/Gamecube - OpenGL, C/C++
3. PS2/PS3 - OpenGL, C/C++
4. PC - DirectX, Managed and Unmanaged C, C++, C#, OpenGL
5. Mac - OpenGL, C/C++, ObjC
6. Linux - OpenGL, C/C++
7. Android - OpenGL, Java/Native C/C++ maybe.
8. iOS - OpenGL, C/C++/ObjC
9. Windows Phone 7 - DirectX, Managed C#
10. All the other mobile phones and devices - Not DirectX
It looks to me like you can hit everything on your list but WP7 and Xbox with OpenGL and C/C++. And it's not like WP7 is a huge market at this point. So if you don't want to futz around with multiple APIs and languages, all you really have to give up is the Xbox.
Comment removed (Score:4, Interesting)
Re:Yeah right (Score:4, Insightful)
As a game developer you can bitch all you want (in fact I'm gonna bitch about you in a minute) but I sure as hell don't trust your coding skills which means letting you have "bare metal access" so you can make my PC as crashy as Win9x is a big DO NOT WANT.
There is a difference between exposing lower level instructions on a GPU to the programmer and doing away with protected mode and virtual memory.
Re: (Score:3)
Unification? (Score:5, Insightful)
Aren't DirectX and OpenGL there so that a developer can write an application using DirectX 10 and have it work with any card capable of DirectX and having enough memory? Are we gonna have "Works best in Internet Explorer 6" again, for graphics cards? I still remember that whole 3dfx thing and I didn't like it.
Re:Unification? (Score:5, Insightful)
The whole 3dfx era was horrific, and as someone has already pointed out below, DirectX made a huge positive impact on PC gaming. The article describes a real problem though: if I want to hit 50fps then my rendering needs to execute in under 20ms. Performing 5k system calls to draw chunks of geometry means that each syscall needs to take less than 4us, or about 12,000 cycles on a 3GHz processor. That is not a lot of time to do all of the internal housekeeping that the API requires and talk to the hardware as well.
The solution is not to throw away the API. The interface does need to change drastically, but not to raw hardware access. More of the geometry management needs to move onto the card and that probably means that devs will need to write in some shader language. It's not really lower-level / rawer access to the hardware. It is more that shader languages are becoming standardised as a compilation target and the API is moving on to this new target.
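Back-of-the-envelope version of that budget, as a standalone sketch using the same round numbers from above (nothing here is measured, it's just the arithmetic):

// Rough per-draw-call budget at 50 fps with 5000 calls per frame on a 3 GHz CPU.
#include <cstdio>

int main()
{
    const double frame_ms    = 1000.0 / 50.0;                   // 50 fps -> 20 ms/frame
    const int    draw_calls  = 5000;
    const double per_call_us = frame_ms * 1000.0 / draw_calls;  // ~4 us per call
    const double cycles      = per_call_us * 1e-6 * 3.0e9;      // ~12,000 cycles @ 3 GHz
    std::printf("%.1f us per call, ~%.0f cycles\n", per_call_us, cycles);
    return 0;
}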
Re: (Score:2)
Re:Unification? (Score:4, Interesting)
This is a very good point, the overhead of API calls can be a significant bottleneck.
I'd suggest that a good solution is to move applications to entirely managed code (e.g. C#), so that there is no need for any hardware-enforced barrier between the kernel and the applications (c.f. Singularity [microsoft.com]). In the best case, you may end up with a situation in which a JIT compiler inlines parts of the kernel's graphics driver directly into the application code, effectively run-time specialising the application for the available hardware. We already see hints of this happening, for instance the use of LLVM bit code in Apple's OpenGL stack [wikipedia.org].
Re: (Score:3)
I'd suggest that a good solution is to move applications to entirely managed code (e.g. C#), so that there is no need for any hardware-enforced barrier between the kernel and the applications
Superb idea! Why do something quickly in hardware, when you can do it slowly in software?
We already see hints of this happening, for instance the use of LLVM bit code in Apple's OpenGL stack
You realise that Apple only uses LLVM in the painfully slow case (i.e. when it has to execute shaders on the CPU, rather than the GPU), right? And that shaders are already JIT compiled for the target hardware on all OpenGL / Direct3D implementations? And that JIT compilation doesn't require managed code, nor does it require a VM?
Re: (Score:2)
Superb idea! Why do something quickly in hardware, when you can do it slowly in software?
I don't think you understand what I am getting at. I am not saying that memory protection and privilege levels should be enforced by software - that is not what Singularity does. The whole point of managed code is that memory protection does not need to be enforced at all. The result is that you can run everything in ring 0, in the same memory space. No matter how fast your hardware already is, removing these overheads makes it faster.
Re: (Score:3)
The whole point of managed code is that memory protection does not need to be enforced at all. The result is that you can run everything in ring 0, in the same memory space.
Yes, I understand, I've read the papers related to Singularity, and it remains a stupid idea. You're replacing a mechanism that the CPU can do in hardware, basically for free, with a software implementation. By running everything in ring 0, you turn any VM bug into a kernel-level hole. Still think this is a good idea? Pick your favourite VM from the JVM, Flash VM, and .NET VM. Now go and look at the number of exploitable bugs that have allowed code to break free of the sandboxing. For the JVM, the
Re: (Score:2)
Thanks for clarifying.
The argument here is not about security, but performance. The possibility of being able to "optimise out" an entire API, even across the system call barrier, is (I think) an interesting one, and that's what I was commenting on.
The thing is that hardware security does not actually come for free - there is a time cost, and the implication of the article we're commenting on here is that the time cost is significant.
You are of course right that VMs have not always been particularly secu
Re: (Score:3)
I think your brush might be a little broad, there. File permissions (even simple UNIX ones) are an additional layer of complexity, but clearly improve security. Similarly for privilege escalation facilities like sudo or UAC. A firewall (or even /etc/hosts.[allow|deny]) is more complexity, but also delivers clear security benefits. Etc.
Re: (Score:2)
It's a shame that you are being lambasted by posters who didn't understand your point. Yes - lifting the code to a higher level of abstraction would definitely enable specialisation. Managed code is one way to go; certification would be another. In either case, eliminating direct memory access, or proving that it is safe, would allow the removal of the barrier between hardware access and user-land code. That is precisely what is needed in this case.
Re: (Score:2)
It certainly wasn't intended that way and I can't imagine why you would think that.
One of the great things about the Singularity approach is that the overhead of system calls is reduced to almost nothing. I'd have thought that the benefits for high-end graphics would be obvious.
Re: (Score:2)
Since you seem knowledgeable:
How is this handled on the consoles? Do the programmers go direct to the hardware, as they did in the days of the N64 and PS1, or do the modern PowerPC-based consoles also have a DirectX-style interface?
Re: (Score:2)
The Xbox 360 is most often* programmed in C# using the XNA API, which is very much a managed counterpart to DirectX.
* I can explain what I mean by this.
Sheer quantity (Score:2)
Xbox games (aside from the indie scene) are usually programmed in C/C++
I was under the impression that indie games had surpassed non-indie games in quantity; therefore, a randomly selected game would "most often" be an indie game and therefore use XNA.
Re: (Score:2)
I just wish they gave us an easier way to access the gfx card's pixel buffer - you know, what ultimately comes out on the monitor. It's ridiculous the amount of code that's needed to write a simple pixel to a screen or window, especially if animation/video is involved.
Re: (Score:2)
Only pirates want that kind of direct access these days if you believe the FUD.
Re: (Score:2)
Re:Unification? (Score:4, Interesting)
As for video, why can't you generate that into a texture and draw it as a quad?
Textures aren't any different from frame buffers when you get right down to it. You still need to lock their buffer/etc.
But in all honesty, the bus is so slow that you never want to write individual pixels over it anyway... once you have settled on shuttling millions of bytes at a time over the bus for efficiency reasons, then it really doesn't matter what the boilerplate surrounding that operation is... aggregated over all those pixels, the overhead can only be minimal.
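For reference, the "video frame into a texture" route really is just a bulk transfer plus a draw per frame. A sketch, assuming a GL context, an already-created texture, and a decoded RGBA frame in system memory; video_tex, W, H, pixels and draw_textured_quad() are all placeholders, not real library names:

// Upload one decoded frame into an existing texture, then draw it on a quad.
// 'pixels' is assumed to be a W x H RGBA buffer produced by your video decoder.
glBindTexture(GL_TEXTURE_2D, video_tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // one bulk transfer per frame
draw_textured_quad(video_tex);                        // hypothetical helper that draws the quad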
I think AMD's point though is that something like DirectX enforces the rasterization paradigm, when the hardware could be so much more if it wasn't forced to offer good performance for that specific API.
We are at the point now where the number of computations per second performed by today's GPU hardware should be enough to handle realtime raytracing... nothing spectacular yet in the secondary-ray department... maybe just a few secondary rays per pixel... interesting/unique stuff. But the hardware simply doesn't expose the functionality in a way that allows leveraging its horsepower that way effectively, and that could in fact be blamed on DirectX being the only API that matters. What if the hardware could be designed differently so that fill rate (as an example... lots of triangles leading to lots of overdraw requires lots of fill rate) wasn't as important?
Re: (Score:2)
We're not quite there yet, but you are right that it should be close. Take a GTX 580, for example. It can sustain 800 GFlops on certain code sequences. If we assume that real-time means 50fps and 1080p is the target resolution, then averaging out the workload gives us roughly 8,000 flops per pixel per frame. That's certainly enough to do something interesting.
Sadly it doesn't work like that. We hit that huge performance number on a SIMD array with a really deep pipeline and partially manual cache management. A
Credit (Score:3, Insightful)
Re: (Score:2)
Aw come on, the DOS4GW era was great!
Re: (Score:2)
But isn't asking to have "more direct, low level access" to the hardware EXACTLY like asking for the DOS days again, in a way? That was the first thing I thought. In reality, it would allow for a faster game experience and better utilization of the hardware. Of course, this makes programming games a freaking nightmare, as there are a million possible combinations, which would mean fewer games in that mode.
I always thought a "dedicated game mode" for the OS would be interesting, where all other services are
Re: (Score:3)
You're ignoring the fact that we already had OpenGL and that it had been in development and use for many, many years before MS decided to fragment the market. The real question is whether or not it's better than what was the status quo of OpenGL prior to all those stupid specialized APIs for the various graphics accelerators.
A better explanation? (Score:2)
I RTFA and I still didn't understand why the API is the bottleneck, why the draw calls are one third of the draw calls possible on the consoles, and why going direct to metal gives you an orders-of-magnitude performance boost after considering both sets of hardware. Does DirectX reject the stream processors? Or what, exactly?
Hardware needs to change DX is obsolete. (Score:5, Interesting)
State changes and draw commands are all sent from the CPU, buffered, and then processed on the GPU. While this speeds up rendering considerably (the GPU is always a frame or two behind the CPU), it makes it hard to get feedback from the GPU about the rendering state, and since all the DX/GL commands are buffered, retrieving state or data means flushing/syncing.
For modern algorithms related to occlusion estimation or global illumination, through to overall reduction of state changes, it would help greatly if, for most tasks, the GPU could act by itself by running a user-made kernel that instructs it what to do (commands and state changes) instead of relying on DX. But for some reason this is not the direction GPUs are heading in, and it really doesn't make sense. Maybe Microsoft has something to do with it, but since DirectX 9 became the standard for game development, the API has only become easier to program in versions 10 and 11; it hasn't had major changes.
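A concrete example of that flush/sync cost is reading back an occlusion query result. Sketch only, assuming a GL 1.5+ context; drawBoundingBox() is a placeholder for whatever occluder-test geometry you submit:

// Reading results back from the GPU forces a pipeline sync.
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_SAMPLES_PASSED, query);
drawBoundingBox();                 // placeholder: draw the test geometry
glEndQuery(GL_SAMPLES_PASSED);

GLuint samples = 0;
// This call can't return until the GPU has finished the buffered work,
// so the CPU stalls -- exactly the feedback problem described above.
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);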
Re: (Score:2)
I'm not so sure that it's not the direction GPUs are heading. For example, NVidia's future Maxwell chip that combines a CPU and GPU into one will have true multitasking, and maybe take the pressure off the CPU completely. Perhaps AMD's Fusion already supports that kind of thing? You're right though, it would be great to have DMA to the GPU.
Re:Hardware needs to change DX is obsolete. (Score:4, Interesting)
Re: (Score:2)
Re:Hardware needs to change DX is obsolete. (Score:4, Interesting)
I suspect one of the reasons for this is that Microsoft has taken the view, in the last 6-7 years, that the GPU can be used for accelerating and enhancing the desktop experience (Aero, IE9). Their other goal, to a certain extent, is cross-platform compatibility: making it possible to write casual games for Windows, phone, and Xbox.
Disclaimer: I wrote a game way back in 1994, directly interfacing the VGA card. In straight x86 assembly. I was total bare metal 17 years ago. I haven't really kept up on game development much since then. However, I wrote a clone of it in XNA recently. It took me about 4 hours to replicate 9 months of work from 1994. That includes the time to download, install, and learn XNA. My, how things have changed.
Re: (Score:2)
For modern algorithms related to occlusion estimation or global illumination, through to overall reduction of state changes, it would help greatly if, for most tasks, the GPU could act by itself by running a user-made kernel that instructs it what to do (commands and state changes) instead of relying on DX. But for some reason this is not the direction GPUs are heading in, and it really doesn't make sense.
It makes perfect sense if you are trying to sell new generations of GPUs and have code which targeted the old generation of GPUs work on the new one. You're asking them to create a situation which continually breaks backwards compatibility or which requires them to go to great lengths to emulate old hardware to preserve it, which will necessitate added silicon. For a games console it makes sense to give you that kind of access to the hardware but only insofar as it does not make development more complicated
Re: (Score:2)
A for() loop drawing an object 5000 times is slow because there is a lot of CPU-GPU communication. Instancing fixes this but makes it less flexible (you can't change which arrays are drawn, or most of the state, between objects). Roughly the difference shown in the sketch below.
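Sketch, assuming a GL 3.1+ context with the mesh's VAO already bound; updateObjectUniforms() and vertex_count are placeholders:

// Naive: one API round trip per object -- 5000 calls' worth of CPU-GPU chatter.
for (int i = 0; i < 5000; ++i) {
    updateObjectUniforms(i);                          // per-object state change (placeholder)
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
}

// Instanced: one call, the GPU repeats the mesh 5000 times -- but per-instance
// variation is limited to whatever you can encode in instanced attributes.
glDrawArraysInstanced(GL_TRIANGLES, 0, vertex_count, 5000);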
Re: (Score:2)
(super newbie programmer, merely guessing)
So rather than having to go back to the CPU to ask what's next, you just give the GPU the data and a package of commands (in the form of algorithms, like the for loop), let it finish those, then come back to the CPU for the next set?
I assume right now it's: CPU does the for loop, GPU draws, CPU does the for loop, GPU draws,
and what you want is: CPU gives a package of stuff, GPU executes and draws.
Basically, have the GPU do some of the thinking.
Maybe?
Games Tied To Hardware? (Score:2)
Re: (Score:2, Interesting)
Nope. Right now the GPU-CPU situation looks like my boss dictating an email to his secretary - it probably wouldn't take as long if he just told her to inform the recipient he's going to be late. The developers want all possible API ops moved to the GPU where the CPU doesn't get in the way. They still want a standard API and most certainly don't want to develop straight for the metal.
Linux? (Score:2, Insightful)
Alright AMD. Make a game for Linux. That will give you the lower level access you want. Impress me :)
Console APIs vs PC APIs - an explanation (Score:5, Interesting)
The way things work on consoles is approximately similar to Windows/Linux/Mac, except for these important distinctions:
1. the hardware is a known target, so the shader compilers and other components are carefully optimized only for this hardware; they do not produce intermediate bytecode formats or make baseline assumptions that have to cover all hardware.
2. the APIs allow injecting raw command buffers, which means that you do not have to use the API to deliver geometry in any way shape or form, the overhead goes away but the burden of producing a good command buffer falls on the application when they use these direct-to-hardware API calls.
3. the APIs have much lower overhead as they are not a middle-man on the way to the hardware, but an API implemented (if not designed) specifically for the hardware. For example Microsoft had the legendary Michael Abrash working on their console drivers.
4. the hardware memory layout and access bandwidth are known to the developers, and certain optimization techniques become possible - for example, rendering to a framebuffer in system memory for software processing (on Xbox 360 this is done for certain effects; on PS3 it is heavily utilized for deferred shading, motion blur and other techniques that run faster on the Cell SPE units). In some cases this has other special implications, like storage of sound effects in video memory on PS3, because the Cell SPE units have a separate memory path to video memory and thus can tap into this otherwise "unused" bandwidth for their purposes of sound mixing.
5. 3D stereo rendering is basic functionality on consoles.
The article is making the argument that we should be able to produce command buffers directly and insert them into the rendering stream (akin to OpenGL display-lists but new ones produced every frame instead of statically stored).
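(For reference, the legacy display-list mechanism it's being compared to looked roughly like the sketch below - record once, replay cheaply - the difference being that the article wants such buffers rebuilt every frame and handed straight to the hardware. Assumes a legacy/compatibility GL context; drawCityBlock() is a placeholder for a batch of draw commands.)

// Legacy OpenGL display list: record a batch of commands once, replay later.
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);     // start recording commands into the list
drawCityBlock();                 // placeholder: whatever geometry/state you want batched
glEndList();

// Each frame: replay the pre-recorded commands with a single call.
glCallList(list);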
It is also making the argument that we should have explicit control over where our buffers are stored in memory (for instance rendering to system memory for software analysis techniques, like id Software Megatexture technology, which analyzes each frame which parts of the virtual texture need to be loaded).
There are more subtle aspects, such as knowing the exact hardware capabilities and designing for them, which are less of a "No API!" argument and more of a case of "Please optimize specifically for our cards!", which is a tough sell in the game industry.
AMD has already published much of the information that studios will need to make use of such functionality, for example the Radeon HD 6000 series shader microcode reference manual is public already.
Intel also has a track record of hardware specifications being public.
However NVIDIA is likely to require a non-disclosure agreement with each studio to unlock this kind of functionality, which prevents open discussion of techniques specific to their hardware.
Overall this may give AMD and Intel a substantial edge in the PC hardware market - because open discussion of graphics techniques is the backbone of the game industry.
On the fifth point it is worth noting that NVIDIA GeForce drivers offer stereo rendering in Direct3D but not OpenGL (despite OpenGL having had a stereo rendering API from the beginning); they reserve this feature for their Quadro series cards for purely marketing reasons, and this restriction prevents use of stereo rendering in many OpenGL-based indie games - another case of consoles besting the PC in functionality for ridiculous reasons.
Easy workaround (Score:3)
Those of us who are old enough to remember a time before the GUI was the only show in town surely remember that "big" games almost always came with their own boot disk. Would it be so hard to go back to that, if the benefits were worth it? A DVD, or a flash drive, with a small Linux kernel, a library of drivers for the wide range of hardware out there, and the game files - optimized for speed, with no performance lost to a huge, bloated GUIed OS getting in your way. If the game developer uses an off-beat file system, it'll also prevent piracy!
Granted, it'll also bring back the bad old days of cursing up a storm because the latest game didn't support your Gravis Ultrasound, but only the crappy SoundBlaster... and of course the game would have to include its own TCP/IP stack if you want multiplayer... and a few gigs of drivers for the various motherboards, graphics adapters and so on and so forth that the casual gamer may or may not have - but at least you don't have to worry about a system put in place to simplify all that stuff getting in your way.
Get to the hardware? (Score:2)
By giving you access to the hardware at the very low level, you give games developers a chance to innovate
I am ready!
MOV DX, 03D4h    ; DX = VGA CRTC index port (3D4h)
MOV AX, 06B00h   ; AH = data byte 6Bh, AL = register index 00h (Horizontal Total)
OUT DX, AX       ; word OUT: index byte goes to 3D4h, data byte to 3D5h
Sounds like a good idea (Score:2)
Re: (Score:2)
That's what the Cell was, didn't work (Score:3)
The Cell is a mini vector processor cluster which is not completely unlike graphics cards and was, at the time it was released, more powerful than them.
You had the usual C/C++ toolchain available, and it was a fairly simple architecture to use compared to a GPU (and even compared to an x86 -- SIMD is simpler on the Cell than on x86).
Yet it was a failure, because game developers were completely unable to use it. Game development is a quick and dirty process, and games need to be multi-platform to sell more. There is no time to learn the specifics of a platform and design your game to exploit it.
That's why they prefer having one API to rule them all (DirectX).
Even within the whole of the Ubisoft studios, there are only a couple of people capable of getting near 80% of the Cell processing power.
ummm... (Score:2)
Nothing is optimized fully for DX11/GL4 yet (Score:4, Interesting)
Re: (Score:2)
Console graphics can't really rival PC graphics. Take a look at this comparison of PC vs Xbox vs PS3 in GTA4 [gamespot.com]. Then consider the fact that most modern gaming PCs (i.e. quad core with a midrange GPU or better) easily run the game at 1920x1080 @60 FPS, whereas the consoles use lower resolution and get choppier frame rates.
Re: (Score:3)
Console graphics can't really rival PC graphics. Take a look at this comparison of PC vs Xbox vs PS3 in GTA4 [gamespot.com].
I took a look. Totally underwhelmed at the differences. The problem is we have reached diminishing returns in graphics quality per hardware improvement.
I remember the jumps in each generation of PC and console graphics before 2000, and each one was huge and made the earlier generation look dated. When the PS3 and 360 came out, they were clearly better, but they weren't *that* much better. The same goes for today's PC graphics vs the aging consoles.
I don't see this situation changing until realtime ray traci
Re: (Score:2)
You're not looking. The leaves on the trees, for instance. The PC version is clearly more detailed. When looking at them in full resolution, there's really no comparison.
Re: (Score:2)
You're missing the point. I don't dispute the graphics are noticeably better. I'm saying it's not impressive or worth caring that much about compared to previous generations.
Before 2000, if you bought a modern game one year, and then another one 3-5 years later, the difference would have been huge. Look at the jump between the last two console generations, from PS2 to PS3, compared to the PS1 to the PS2. The diminishing returns are blatantly obvious.
Re: (Score:2)
No, the problem is that Rockstar made a shoddy PC port for an already relatively old game. You can't exactly see the difference if they didn't bother making any! What you're seeing is the inherent higher quality a PC can do, not an optimized game specifically for the PC. Had they actually worked on making a proper PC port with better graphics, you'd probably be rightfully amazed.
Re: (Score:2)
The main difference is that consoles top out at 720p, whereas a decent PC graphics card can handle 1080p without too much trouble. Even my somewhat antiquated nVidia GeForce 9400GT from a few years back, which cost me less than a hundred dollars at the time, can manage 1080p, even if it does get a wee bit choppy at times.
When I play anything on the PS3 the aliasing is probably the most notable distraction in terms of graphics, and that only really goes away with high resolutions. Probably
Re: (Score:2)
Crysis is a better example. The engine used in Crysis 2 (Cryengine3) has been dumbed down so it'll work on consoles.
Re: (Score:2)
And none of those look as good as something like Crysis 2 that's got some decent PC optimizations.
The power comparison isn't even close, and that's if you just look at CPU/GPU speed & features and ignore stuff like RAM. The consoles are so memory starved that it's had a major impact on game design.
Re: (Score:2)
Re: (Score:2)
It is possible for something which was innovative and liberating to become stale and restraining, you know.
Re:Funny, John Carmack thinks just the opposite (Score:4, Insightful)
Re: (Score:2, Informative)
That's not true.
If you're going to pretend to be knowledgeable, then it's a good idea to at least read the article.
Carmack's talking about OpenGL vs DirectX. Arguably... DirectX is now a better API for writing games than OpenGL. I say arguably because I don't think it's a settled question - it is, however, one that is up for discussion - comparing apples with apples.
This article though... that's about the model used by both DX and OpenGL. Which basically means the CPU tells the GPU to draw each polygon (ok...
Re: (Score:2)
And there's a very good reason behind Carmack's exclusive use of OpenGL for the past decade. When he first tried giving Quake a hardware-accelerated video backend, he chose Rendition Verite's proprietary API (google for "vquake"). After VQuake's release, it turned out that Verite cards were horribly broken with no chance of being fixed in the near future, so the entire project was a huge waste of time and money. So Carmack made the right decision to give up on proprietary API crap and stick with vendor-neutra
Re: (Score:2)