Game Devs Only Use PhysX For the Money, Says AMD

arcticstoat writes "AMD has just aimed a shot at Nvidia's PhysX technology, saying that most game developers only implement GPU-accelerated PhysX for the money. AMD's Richard Huddy explained that 'Nvidia creates a marketing deal with a title, and then as part of that marketing deal, they have the right to go in and implement PhysX in the game.' However, he adds that 'the problem with that is obviously that the game developer doesn't actually want it. They're not doing it because they want it; they're doing it because they're paid to do it. So we have a rather artificial situation at the moment where you see PhysX in games, but it isn't because the game developer wants it in there.' AMD is pushing open standards such as OpenCL and DirectCompute as alternatives to PhysX, as these APIs can run on both AMD and Nvidia GPUs. AMD also announced today that it will be giving away free versions of Pixelux's DMM2 physics engine, which now includes Bullet Physics, to some game developers."
  • by EvolutionsPeak ( 913411 ) on Monday March 08, 2010 @02:18PM (#31403034)

    Sounds to me like AMD just wishes they'd thought of it first. There's no reason AMD couldn't offer similar deals.

  • Re:Maybe (Score:5, Informative)

    by hedwards ( 940851 ) on Monday March 08, 2010 @02:25PM (#31403124)
If you noticed in the summary, AMD is advocating a similar technology that works on their hardware as well as on nVidia's; it seems like developers would prefer that for practical reasons.
  • by idontgno ( 624372 ) on Monday March 08, 2010 @02:27PM (#31403154) Journal

    Wow, that's so badly edited it's surreal.

    This is one of those days where even the "Preview" button doesn't help.

    That should read "Says the kid that the dog isn't playing with."

  • by Pojut ( 1027544 ) on Monday March 08, 2010 @02:35PM (#31403296) Homepage

    Ask, and ye shall receive: []

  • by Sycraft-fu ( 314770 ) on Monday March 08, 2010 @02:55PM (#31403540)

Intel owns Havok (since 2007) and licenses it out all over the place. There's a page listing all the titles using it, and it is not a small list. Havok also runs exclusively on the CPU (and will probably continue that way, since Intel wants to sell quad cores), so it works no matter what your graphics card is.

    It's also not just physics anymore, there's Havok animation libraries and so on.

  • by jpmorgan ( 517966 ) on Monday March 08, 2010 @02:57PM (#31403562) Homepage

Suggesting that OpenCL and DirectCompute are alternatives to PhysX is analogous to saying that OpenGL is an alternative to Unreal Engine.

The basic reality here is that four years ago NVIDIA decided to invest a lot of money in making GPUs more general purpose, to apply them to more problems than just 3D rendering. ATI didn't care and just focused on making the fastest 3D card possible. Today there are alternatives to NVIDIA's technology, most notably OpenCL... but it's worth remembering that OpenCL is very strongly derived from CUDA. In fact, most of the OpenCL spec looks like they ripped it out of the CUDA spec and changed the function calls from cudaSomething to clSomething.

So yes, open standards are good. But in this case it does smell strongly of sour grapes. ATI made several bad business decisions and has been left playing catch-up.

  • by Ironhandx ( 1762146 ) on Monday March 08, 2010 @03:11PM (#31403766)
Tell that to AMD, who have sold 2 million DirectX 11 GPUs since release.

IGPs are sufficient for 90% of users... but that hasn't changed since back in the Pentium 1 days. Many PCs were equipped with an IGP, or something that amounted to the same thing but in card form, even then.

Also: GPGPU is NOT meant for graphics processing on the fly at all, so it has absolutely nothing to do with devs having to target the lowest common denominator. You even state that it's useless except for scientific purposes in your own comment. The entire point of the GPGPU push is scientific computing, where vast quantities of repeated calculations have to be done. Something that GPUs excel at.

    At least get SOME of your facts straight before spouting FUD.
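
The "vast quantities of repeated calcs" point above is the essence of data parallelism: the same operation applied independently to every element. A minimal sketch (NumPy standing in for a GPU kernel; the names here are illustrative, not any real physics API):

```python
import numpy as np

def step_particles(positions, velocities, dt):
    """Advance every particle one time step. Each particle's update is
    independent of the others, so the same work maps trivially onto
    thousands of GPU threads instead of one CPU loop."""
    return positions + velocities * dt

# One vectorized operation over a million particles.
positions = np.zeros((1_000_000, 3))
velocities = np.ones((1_000_000, 3))
positions = step_particles(positions, velocities, 0.016)
```

The structure, one identical calculation repeated over a huge array with no dependencies between elements, is what makes this class of problem a good GPU fit, whether the workload is science or physics simulation.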
  • Re:Maybe (Score:5, Informative)

    by ASBands ( 1087159 ) on Monday March 08, 2010 @03:24PM (#31403946) Homepage

I've done some work with both PhysX and the things that AMD is pushing for. I try to stick with the Physics Abstraction Layer [], which lets me plug in whatever physics engine I want as the backend, which gives a pretty damn good apples-to-apples performance metric. Personally, my ultimate choice of physics engine is the one which exhibits the best performance. My experience may differ from others', but I generally get the best performance from PhysX with an nVidia GPU and BulletPhysics with an AMD GPU. Sometimes the software version of PhysX outstrips the competition, but I have never seen anything beat PhysX in performance with GPU acceleration turned on. And with PAL, it is easy to check if there is GPU support on the machine and swap in the physics engine with the best performance (PAL is awesome).

    Here's the thing: GPU-accelerated physics are just plain faster. Why? Because collision detection is a highly parallelizable problem. Guess what hardware we have that can help? The GPU. Another great part of using the GPU is that it frees the CPU to do more random crap (like AI or parsing the horribly slow scripting language).

    AMD is working on both BulletPhysics and Havok so they can do GPU acceleration. But I have a feeling that PhysX performance will remain faster for a while: PhysX was designed to natively run on the GPU (technically, a GPU-like device), while these other libraries are not. Furthermore, nVidia has quite a head start in performance tuning, optimization and simple experience. In five years, that shouldn't matter, but I'm just saying that it will take a while.

    So here is my message to AMD: If you want people to use your stuff, make something that works and let me test it out in my applications. You've released a demo of Havok with GPU acceleration. PhysX has been and continues to work with GPU acceleration on nVidia GPUs and will frequently outperform the software implementation. I'm all for open alternatives, but in this case, the open alternatives aren't good enough.
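
The backend-swapping pattern described above (probe for GPU support, then pick the fastest engine) can be sketched like this. The class names and probe flag are hypothetical stand-ins, not PAL's real API:

```python
class SoftwareBullet:
    """Stand-in for a CPU physics backend (e.g. BulletPhysics)."""
    name = "BulletPhysics (CPU)"

class GpuPhysX:
    """Stand-in for a GPU-accelerated backend (e.g. PhysX on nVidia)."""
    name = "PhysX (GPU)"

def pick_backend(has_nvidia_gpu):
    """Prefer the GPU-accelerated engine when supported hardware is
    present; otherwise fall back to a software engine."""
    return GpuPhysX() if has_nvidia_gpu else SoftwareBullet()

# At startup, probe the machine once and swap in the winner.
engine = pick_backend(has_nvidia_gpu=False)
```

Because every backend sits behind the same interface, the rest of the game code never needs to know which engine it got, which is what makes the apples-to-apples benchmarking described above possible.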
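
The claim above that collision detection is highly parallelizable is easy to see in the broad phase: every box-vs-box overlap test is independent of every other. A hedged sketch (vectorized NumPy standing in for GPU threads; not any engine's actual code):

```python
import numpy as np

def aabb_overlaps(mins, maxs):
    """Given N axis-aligned bounding boxes (N x 3 arrays of min and max
    corners), return an N x N boolean matrix of overlapping pairs.
    All N*N tests happen as one vectorized operation; on a GPU each
    pair test would simply be its own thread."""
    # Boxes i and j overlap iff their ranges overlap on every axis.
    per_axis = (mins[:, None, :] <= maxs[None, :, :]) & \
               (maxs[:, None, :] >= mins[None, :, :])
    return per_axis.all(axis=2)

# Three unit cubes: box 2 overlaps box 0, box 1 is off on its own.
mins = np.array([[0.0, 0, 0], [2.0, 0, 0], [0.5, 0, 0]])
maxs = mins + 1.0
pairs = aabb_overlaps(mins, maxs)
```

Narrow-phase contact resolution has more data dependencies, but the broad phase alone is a big chunk of the per-frame cost, which is why offloading it frees the CPU for AI and scripting as described above.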

  • by Nikker ( 749551 ) on Monday March 08, 2010 @04:21PM (#31404722)
    The developers are getting paid to develop, so how is this a bad thing for umm developers? How do we know it's not a move on the part of the game house to hold out and make Nvidia pay their developers to add a bit to the game? And how is adding more features to a game a bad thing? Won't someone think of the poor developers who are given more money to do their jobs?
  • Re:Maybe (Score:3, Informative)

    by fast turtle ( 1118037 ) on Monday March 08, 2010 @06:12PM (#31406360) Journal

The problem is that Nvidia bought the company that created the PhysX chip. They then incorporated its capabilities into their GPUs and drivers, while refusing to allow a true PhysX card to work with anything except an Nvidia GPU in the system. In this case, Nvidia has done a major disservice to everyone: the original PhysX card ran alongside anything that supported it, but now it's a closed-source, Nvidia-only app/feature.

  • Re:Maybe (Score:3, Informative)

    by BikeHelmet ( 1437881 ) on Monday March 08, 2010 @07:20PM (#31407414) Journal

    Only half the market is going to be able to take advantage of it after all.

    1) PhysX runs on the CPU if no nVidia GPU is present. A $100 quad-core CPU easily handles it for most games.
    2) According to the Steam Survey, nVidia is approximately 66% of the PC gaming market. Two thirds.
