
AMD, NVIDIA, and Developers Weigh In On GameWorks Controversy

Dputiger writes: "Since NVIDIA debuted its GameWorks libraries, there have been allegations that they unfairly disadvantage AMD users or prevent developers from optimizing code. We've taken these questions to developers themselves and asked them to weigh in on how games get optimized, why NVIDIA built this program, and whether it's an attempt to harm AMD customers. 'The first thing to understand about [developer/GPU manufacturer] relations is that the process of game optimization is nuanced and complex. The reason AMD and NVIDIA are taking different positions on this topic isn't because one of them is lying, it’s because AMD genuinely tends to focus more on helping developers optimize their own engines, while NVIDIA puts more effort into performing tasks in-driver. This is a difference of degree — AMD absolutely can perform its own driver-side optimization and NVIDIA's Tony Tamasi acknowledged on the phone that there are some bugs that can only be fixed by looking at the source. ... Some of this difference in approach is cultural but much of it is driven by necessity. In 2012 (the last year before AMD's graphics revenue was rolled into the console business), AMD made about $1.4 billion off the Radeon division. For the same period, NVIDIA made more than $4.2 billion. Some of that was Tegra-related and it's a testament to AMD's hardware engineering that it competes effectively with Nvidia with a much smaller revenue share, but it also means that Team Green has far more money to spend on optimizing every aspect of the driver stack.'"

  • by Anonymous Coward

    This article is very confusing to me

  • Bottom line: if a game runs poorly on a graphics device, AMD and NVIDIA get blamed directly. This program is merely NVIDIA's tack towards improving user perception. They know that if you have a problem running software on one of their cards, you will probably go buy a Radeon. The computing hardware in each card is far beyond the purview of any single developer to understand at this point. You need a glue layer and technical resources to properly expose the interfaces. The problem is when one vendor is specifically excluded from the glue layer. Both of these vendors have been cheating benchmarks by detecting which game is attempting to access which features and then selectively dumbing them down in barely perceptible ways to artificially pump benchmark results. The problem I have with NVIDIA doing this is mostly that they typically have their own black-box code (that is closed) and you have no idea how it interacts. If it interacts poorly with your application you are just screwed; there is nothing to fix, you must patch around it. Ergo, the state of the current NVIDIA drivers on Linux. =)
  • When OpenGL 1.0-1.3 (and DX5/6/7) were king, GPUs were fixed-function rasterizers with a short list of toggleable features. These days the pixel and vertex shader extensions have become the default way to program GPUs, making the rest of the API obsolete. It's time for the principal vendors to rebuild the list of assumptions of what GPUs can and should be doing, design an API around that, and build hardware-specific drivers accordingly.

    The last thing I want is another Glide vs. Speedy3D... err, I mean AMD Mantle.

    • It's time for the principal vendors to rebuild the list of assumptions of what GPUs can and should be doing, design an API around that, and build hardware-specific drivers accordingly.

      For the most part, they've done that. In OpenGL 3.0, all the fixed-function stuff was deprecated. In 3.1, it was removed. That was a long, long time ago.

      In recent times, while AMD has introduced the Mantle API and Microsoft has announced vague plans for DX12, both with the goal of reducing CPU overhead as much as possible, OpenGL already has significant low-overhead support [gdcvault.com].
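      To make that concrete, here is a minimal sketch of what drawing looks like once the fixed-function path is gone: everything goes through buffer objects and shaders. This is an illustration only, not code from the article or the thread; it assumes a 3.2+ core context is already current and that function pointers are loaded (the glad loader used here is my assumption), and error checks are omitted for brevity.

      /* No glBegin/glEnd, no glVertex, no built-in matrices: vertex data lives
       * in a buffer object and all per-vertex/per-fragment work is shader code. */
      #include <glad/glad.h>   /* assumed loader; any GL 3.2+ loader works */

      static const char *vs_src =
          "#version 150\n"
          "in vec2 pos;\n"
          "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";

      static const char *fs_src =
          "#version 150\n"
          "out vec4 color;\n"
          "void main() { color = vec4(1.0, 0.5, 0.0, 1.0); }\n";

      GLuint make_program(void)
      {
          GLuint vs = glCreateShader(GL_VERTEX_SHADER);
          glShaderSource(vs, 1, &vs_src, NULL);
          glCompileShader(vs);

          GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
          glShaderSource(fs, 1, &fs_src, NULL);
          glCompileShader(fs);

          GLuint prog = glCreateProgram();
          glAttachShader(prog, vs);
          glAttachShader(prog, fs);
          glBindAttribLocation(prog, 0, "pos");
          glLinkProgram(prog);
          glDeleteShader(vs);
          glDeleteShader(fs);
          return prog;
      }

      void draw_triangle(GLuint prog)
      {
          static const float verts[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };

          GLuint vao, vbo;
          glGenVertexArrays(1, &vao);               /* core profile requires a VAO */
          glBindVertexArray(vao);
          glGenBuffers(1, &vbo);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
          glEnableVertexAttribArray(0);
          glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

          glUseProgram(prog);
          glDrawArrays(GL_TRIANGLES, 0, 3);
      }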

    • by mikael ( 484 )

      You should look at the latest OpenGL ES specification. This is OpenGL optimized for mobile devices and gets rid of most of the old API bits while still supporting vertex, fragment and compute shaders. Anything else is just implemented using shaders.

      But Mantle gives you access to the hardware registers (those descriptors) while avoiding the overhead of updating the OpenGL state, then determining what has and hasn't changed, then writing those values out to the hardware.
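      To make the "determining what has and hasn't changed" part concrete, here is a toy sketch (my own illustration, not Mantle or driver code) of the redundant-state filtering that a GL driver or engine layer ends up doing around every state change; Mantle's pitch was that the application fills descriptors directly and skips this bookkeeping.

      /* Toy state cache: remember what is currently bound, skip redundant
       * changes, and only issue the GL call (and the driver re-validation
       * behind it) when something actually changed. */
      #include <glad/glad.h>   /* assumed loader, as above */

      typedef struct {
          GLuint bound_texture;
          GLuint bound_program;
      } gl_state_cache;

      static gl_state_cache cache = { 0, 0 };

      void cached_bind_texture(GLuint tex)
      {
          if (cache.bound_texture == tex)
              return;                        /* nothing changed: skip the call */
          cache.bound_texture = tex;
          glBindTexture(GL_TEXTURE_2D, tex); /* driver re-checks state here */
      }

      void cached_use_program(GLuint prog)
      {
          if (cache.bound_program == prog)
              return;
          cache.bound_program = prog;
          glUseProgram(prog);
      }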

  • by DMJC ( 682799 )
    Frankly, it's time to stop blaming NVIDIA and start blaming ATi. Yes, everyone likes the underdog, but in this case, seriously? They had 20 years to get OpenGL right. No one has been blocking them from writing their own drivers for Linux/Mac/Windows. Frankly, I think ATi made a huge engineering mistake by focusing only on Win32 and not supporting Unix from day one as a first-class citizen. They've shot themselves in the foot, and now they expect the industry to clean up the mess by conforming to ATi.
    • Which NVIDIA drivers do you compile?
      GeForce is a binary driver, the old "nv" driver doesn't support 3D and is no longer maintained, and nouveau isn't developed by NVIDIA.

      • by Arkh89 ( 2870391 ) on Wednesday June 04, 2014 @12:25AM (#47162027)

        The part of the driver which is compiled as a kernel module to serve as an adapter against the binary blob?
        You thought that it wanted the linux-headers package just for the fun of reading it on its own time?

      • by DMJC ( 682799 )
        You've always had to compile the interface layer between NVIDIA's blob and XFree86. Don't even get me started on how complicated and messy that process was. The installer script is so much easier than the shit we used to deal with back in the TNT2/GeForce 2 days.
        • by DMJC ( 682799 )
          ftp://download.nvidia.com/XFre... [nvidia.com] This is an example; notice where it says to run make and make install. And you had to do it multiple times, in different folders... and half the time it broke, and you didn't know why. Absolute madness.
      • GeForce is a binary driver

        And how do you go about having it support a kernel with an unstable ABI?

    • AMD supports OpenGL just fine, but they don't fail gracefully on sloppy programming. The Nvidia driver tends to try to make "something you probably sort of meant anyway" out of your illegal OpenGL call, while AMD fails you hard with an error message. That's no reason to blame the manufacturer. The game developers deserve the blame for sloppy coding and sloppy testing.
      • AMD supports OpenGL just fine, but they don't fail gracefully on sloppy programming. The Nvidia driver tends to try to make "something you probably sort of meant anyway" out of your illegal OpenGL call, while AMD fails you hard with an error message. That's no reason to blame the manufacturer.

        Nvidia is hewing to the following:

        Robustness principle [wikipedia.org]

        In computing, the robustness principle is a general design guideline for software:

        Be conservative in what you do, be liberal in what you accept from others (often reworded as "Be conservative in what you send, be liberal in what you accept").

        The principle is also known as Postel's law, after Internet pioneer Jon Postel, who formulated it in an early specification of the Transmission Control Protocol.

        It's generally a good idea when i

        • That's interesting. Coding as a kid, I more or less came up with the same principle for my little programs. I also later figured that it was misguided to leave robustness up to the implementation, instead of the specification (or in my case the function definition).

          API functions that have any reasonable expectations for default values should just define those defaults, not silently default to something seemingly random and completely undocumented.
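          A hypothetical sketch of that idea (names invented for illustration, not from any real API): the defaults are part of the published contract, so "I didn't specify it" has one documented meaning instead of whatever the implementation happens to guess.

          /* Hypothetical API: defaults are spelled out in the header, in one
           * place, rather than buried in the implementation. */
          typedef struct {
              int width;    /* 0 means "use the default", documented as 640 */
              int height;   /* 0 means "use the default", documented as 480 */
              int vsync;    /* -1 means "use the default", documented as on */
          } window_opts;

          static const window_opts WINDOW_OPTS_DEFAULT = { 640, 480, 1 };

          int window_open(const window_opts *opts)
          {
              window_opts w = opts ? *opts : WINDOW_OPTS_DEFAULT;
              if (w.width  == 0) w.width  = WINDOW_OPTS_DEFAULT.width;
              if (w.height == 0) w.height = WINDOW_OPTS_DEFAULT.height;
              if (w.vsync  <  0) w.vsync  = WINDOW_OPTS_DEFAULT.vsync;
              /* ... create the window with the now fully specified options ... */
              return 0;
          }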

        • by Raenex ( 947668 )

          It's a garbage principle that makes a mess of the ecosystem, because then you have each implementation making different decisions on just how much slop you allow, resulting in programs that work differently on different systems. It's better to have hard errors.

          • While I agree that the principle can result in a mess if misapplied, my interpretation has always been that "be liberal in what you accept" only means that you should avoid defining rigid input formats full of arbitrary rules. If there are a number of different ways to say the same thing, such that it's still clear what was meant, accept them all as equivalent. Allow independent modifiers to be written in any order; don't put artificial restrictions on which characters can be used in a name, or the specific

            • by Raenex ( 947668 )

              While I agree that the principle can result in a mess if misapplied, my interpretation has always been that "be liberal in what you accept" only means that you should avoid defining rigid input formats full of arbitrary rules.

              If you read the Wikipedia article [wikipedia.org], you'll see that it came about as advice for implementing the TCP protocol.

              • If you read the Wikipedia article, you'll see that it came about as advice for implementing the TCP protocol.

                Yes, and I did say that it can result in a mess if misapplied. The right time to consider what to accept and how to interpret it would have been when writing the TCP standard, not at the implementation stage, but we can't always have what we want. It's still a good idea to be liberal in what you accept, perhaps especially so when the standard is lacking in detail, since you never know just how the sender might interpret a particular section. You need to make your software work with all other reasonable impl

                • by Raenex ( 947668 )

                  Yes, and I did say that it can result in a mess if misapplied.

                  You can't tell me it's being misapplied when the origin applies it in exactly that manner. You misunderstood, that is all.

      • Re: (Score:3, Insightful)

        by AndOne ( 815855 )
        AMD supports OpenGL just fine? That's gotta be the quote of the day. If AMD ever supports OpenGL just fine I'll throw them a fucking parade. Kernel panicking and bringing the entire system to its knees because AMD doesn't check whether a target is attached to a shader output? Sloppy coding, yes, but not an acceptable response from the driver, as you have no idea where the crash is coming from. No OpenGL error message, just a crashing system. GPU-to-GPU buffer copies don't work unless you do any othe
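        None of that excuses a driver taking the whole system down, but for what it's worth, the application-side way to surface this class of mistake is the debug output from ARB_debug_output/KHR_debug (core in GL 4.3) plus an explicit framebuffer-completeness check. A minimal sketch, my illustration only, assuming a debug-capable context and a loader such as glad:

        /* Ask the driver to report errors/warnings through a callback, and
         * refuse to draw into a framebuffer with missing attachments. */
        #include <glad/glad.h>
        #include <stdio.h>

        static void APIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                         GLenum severity, GLsizei length,
                                         const GLchar *message, const void *user)
        {
            (void)source; (void)type; (void)id;
            (void)severity; (void)length; (void)user;
            fprintf(stderr, "GL debug: %s\n", message);
        }

        void enable_gl_debug(void)
        {
            glEnable(GL_DEBUG_OUTPUT);
            glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  /* report at the offending call */
            glDebugMessageCallback(on_gl_debug, NULL);
        }

        int framebuffer_ready(GLuint fbo)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            if (status != GL_FRAMEBUFFER_COMPLETE) {
                fprintf(stderr, "FBO %u incomplete: 0x%x\n", fbo, status);
                return 0;   /* don't draw with a missing render target */
            }
            return 1;
        }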
    • by stox ( 131684 )

      Just by coincidence, a lot of Nvidia engineers were "inherited" from SGI.

    • by Yunzil ( 181064 )

      Frankly I think that ATi has made a huge engineering mistake by only focusing on Win32 and by not supporting Unix from day one as a first class citizen,

      Yeah, how stupid of them to focus on a platform that has 90%+ of the market. Clearly it would have been a better decision to dump all their resources into a niche platform.

  • "Since AMD debuted its Mantle libraries there's been allegations that they unfairly disadvantaged NVIDIA users or prevented developers from optimizing code."

    Get the idea?

    • Yeah, but mostly no one cares because it's AMD; they get a free pass when it happens, but when it's against NVIDIA it's huge news everywhere.
    • by DrYak ( 748999 ) on Wednesday June 04, 2014 @03:01AM (#47162439) Homepage

      AMD's perspective is that Mantle is less problematic:
      - Mantle's spec is open.
      - It's just a very thin layer above the bare hardware, so actual problems will mostly be confined to the game engine itself.
      - Game engine code is still completely in the hands of the developer, so any bug or shortcoming is fixable.
      Whereas, regarding GameWorks:
      - It's a closed-source black box.
      - It's a huge piece of middleware, i.e. part of the engine itself.
      - The part of the engine that is GameWorks is closed, and if there are problems (like not following the standard and stalling the pipeline) there's no way a developer will notice them and be able to fix them, even with AMD willing to help. Nvidia, on the other hand, could fix this by patching around the problem in the driver (as usual), because they control their own stack.

      So from their point of view and given their philosophies, GameWorks is really destructive, both to them and to the market in general (GameWorks is as much a problem for ATI as it is for Intel [even if Intel is a smaller player] and for the huge, diverse ecosystem of 3D chips in smartphones and tablets).

      Now shift the perspective to Nvidia.
      First, they are the dominant player (AMD is much smaller, even if they are the only other one worth considering).
      So most people are going to heavily optimise their games for Nvidia hardware, and then maybe provide an alternate "also-ran" back-end for Mantle (just like in the old days of Glide / OpenGL / DX backends).
      What does Mantle bring to the table? Better driver performance? Well... Nvidia has been into the driver optimisation business *FOR AGES*, and they are already very good at it. Which is more likely: that when performance problems show up, developers jump en masse to a newer API that is only available from one non-dominant PC player and a few consoles, and is completely missing on every other platform? Or that Nvidia will patch around the problem by hacking their own driver, and developers will continue to use the APIs they already use?
      In Nvidia's view and way of working, Mantle is completely irrelevant, barely registering a "blip" on the marketing radar.

      That's why there's some outcry against GameWorks, whereas the most Mantle has managed to attract is a "meh" (and it will mostly be considered yet another wannabe API that's going to die in the mid to long term).

      • by yenic ( 2679649 )

        Nvidia has been into the driver optimisation business *FOR AGES*, and they are already very good at it.

        So good they've been killing their own cards for years now.

        2010 http://www.zdnet.com/blog/hard... [zdnet.com]
        2011 http://forums.guru3d.com/showt... [guru3d.com]
        2013 http://modcrash.com/nvidia-dis... [modcrash.com]

        This has never happened once to AMD cards, because they're more conservative with their optimizations. NV isn't even the price/performance leader, and rarely has been. So you get to spend more, and they optimize the crap out of your drivers and card until they break it.
        They're averaging almost one card-killing driver a year. No thanks. While both

  • I'd also add that considering the NVIDIA binary blob works on FreeBSD, Solaris, Mac OS X, and Linux as well as Windows, it is well engineered. The AMD/ATi driver doesn't even work correctly on Linux, and Apple had to write their own driver for Mac OS X. There is an officially available (from nvidia.com) driver for Mac OS X for their Quadro cards. It is pretty obvious that AMD/ATi has always favored Windows/Microsoft and has put minimal effort into supporting Unix-based platforms. Now they're reaping what
    • Re: (Score:2, Interesting)

      by drinkypoo ( 153816 )

      It is pretty obvious that AMD/ATi has always favored Windows/Microsoft and has put minimal effort into supporting Unix-based platforms.

      The same is true of nVidia; the definition of "minimal" over there is simply greater than it is at AMD. nVidia is well known to have aimed their cards directly at D3D support and, in the past, filled in the gaps for OpenGL with [slower] software. The difference is either in where they threw up their hands and said fuck it, or simply in the area of competence. They, too, put more of their effort into development for Windows. But they also manage to put together a working Linux driver. As you say, ATI can't even

  • Developers see that NVIDIA has more experience in creating great hardware than AMD, I assume.
  • Avoiding answering (Score:5, Informative)

    by citizenr ( 871508 ) on Wednesday June 04, 2014 @07:12AM (#47163041) Homepage

    Nvidia PAYS for removal of features that work better on AMD

    http://www.bit-tech.net/news/h... [bit-tech.net]

    Nvidia pays for insertion of USELESS features that work faster on their hardware

    http://techreport.com/review/2... [techreport.com]

    Nvidia cripples their own middleware to disadvantage competitors

    http://arstechnica.com/gaming/... [arstechnica.com]

    Intel did the same, but the FTC put a stop to it:
    http://www.osnews.com/story/22... [osnews.com]

    So how exactly is that not Nvidia's doing?

    Nvidia is evil and plays dirty. They don't want your games to be good, they want them to be fast on Nvidia, by any means necessary. They use the "Meant to be Played" program to lure developers in, pay them off, and hijack their games to further Nvidia's goals.

    For example, how come Watch Dogs, a console title built from the ground up with AMD GPU/CPU optimizations to run well on both current-gen consoles, is crippled on PC when played on AMD hardware? How does this shit happen?

    This is something the FTC should weigh in on, just like in Intel's case.

    • by nhat11 ( 1608159 )

      http://www.bit-tech.net/news/h... [bit-tech.net] - What? It's just story speculation here

      http://techreport.com/review/2... [techreport.com] - the article doesn't state that Nvidia pays anyone; that's a statement you made up yourself.

      At this point I decided not to waste any more of my time, after looking up the first two links.

      • http://la.nvidia.com/object/nz... [nvidia.com]

        "The Way It's Meant to be Played"
        Nvidia pays you shitload of money for participating in this program, and can additionally guarantee certain sale goals (by bundling your product with their GPUs).
        In order to participate you only have to do two things, insert nvidia ad clip at the start of the game, and let nvidia rape your codebase.

        On paper Nvidia pays you for a joint marketing campaign, but deep down in the paperwork you are letting them decide what your codebase will look lik

    • Re: (Score:3, Informative)

      by Ash Vince ( 602485 ) *

      Nvidia PAYS for removal of features that work better on AMD

      http://www.bit-tech.net/news/h... [bit-tech.net]

      Reading the link you posted above, it seems like a bit of a non-factual load of waffle. Nvidia denies paying, Ubisoft denies being paid, and the only sources mentioned are anonymous speculators who, for all we know, could just be a few paid ATI shills.

      Nvidia pays for insertion of USELESS features that work faster on their hardware

      http://techreport.com/review/2... [techreport.com]

      Wow, another example of amazing journalism here.

      Some guy moaning about Crysis having loads of detail that is only used in the DirectX 11 version. He gives loads of examples of this, then posts a summary page of wild speculation with no sources quoted other than h

  • AMD made about $1.4 billion off the Radeon division. For the same period, NVIDIA made more than $4.2 billion. Some of that was Tegra-related and it's a testament to AMD's hardware engineering that it competes effectively with Nvidia with a much smaller revenue share, but it also means that Team Green has far more money to spend on optimizing every aspect of the driver stack.

    While that's true for revenue, the two companies' profits are actually very close.
