Topics: Cloud, Graphics, Games

NVIDIA GeForce GRID Cloud Gaming Acceleration

Vigile writes "NVIDIA today announced a new technology partnership with Gaikai, an on-demand gaming company that competes with OnLive, to bring GeForce GRID to the cloud gaming ecosystem. GRID aims to increase both the visual quality and user experience of cloud gaming by decreasing the latencies involved in the process — the biggest hindrance to consumer acceptance. NVIDIA claims to have decreased the time for game stream capture and encode by a factor of three by handling the process completely on the GPU, while also decreasing the 'game time' with the power of the Kepler GPU. NVIDIA hopes to help both gamers and cloud streaming companies by offering 4x the density currently available, at just 75 watts per game stream. The question remains — will mainstream users adopt the on-demand games market as they have the on-demand video market?"
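
For a sense of what the density and power claims amount to, here is a minimal back-of-the-envelope sketch in Python. Only the 75-watts-per-stream figure comes from the announcement; the 300W single-stream baseline and the 10kW rack power budget are assumptions chosen purely for illustration.

    # Back-of-the-envelope check on the claimed stream density and power.
    # Only the 75 W-per-stream figure is from the announcement; the baseline
    # and the rack budget below are illustrative assumptions.
    WATTS_PER_STREAM_GRID = 75    # claimed in the announcement
    ASSUMED_BASELINE_WATTS = 300  # assumed: one stream per discrete GPU today
    RACK_BUDGET_WATTS = 10_000    # assumed rack power budget

    baseline_streams = RACK_BUDGET_WATTS // ASSUMED_BASELINE_WATTS
    grid_streams = RACK_BUDGET_WATTS // WATTS_PER_STREAM_GRID

    print(f"Streams per rack, assumed baseline:  {baseline_streams}")
    print(f"Streams per rack at 75 W per stream: {grid_streams}")
    print(f"Implied density gain:                {grid_streams / baseline_streams:.1f}x")

Under those assumed inputs the arithmetic roughly reproduces the quoted 4x density figure; whether the latency claims hold up is a separate question, and the comments below take them apart.
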
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • No (Score:5, Insightful)

    by Anonymous Coward on Tuesday May 15, 2012 @09:33PM (#40012371)

    Not as long as Comcast controls my bandwidth.

  • by gman003 ( 1693318 ) on Tuesday May 15, 2012 @09:46PM (#40012449)

    I have to say, some of their comparisons seem... unfair.

    Their main chart compares three things: regular "console connected to display", "current cloud systems", and "GRID cloud rendering".

    First off, they cite 66ms of latency just at the display level, which is definitely at the high end of the spectrum. But at least they use the same figure in all three cases.

    Their cloud-to-cloud comparisons are also quite suspicious. Reducing encode by 60%? Yeah, I can buy that. But reducing *decode* - which, I remind you, is done client-side - by the same amount is hard to square with their claim that "this does not require an nVidia client, it will work with any h.264 decoder".

    Then they claim to have reduced network latency, and significantly (75ms to 30ms). Now, I can vaguely see how they *could*: if they reduce bandwidth usage significantly, they might eke out a bit less latency. But I highly doubt they can more than *double* their compression efficiency. Unless they're doing something crazy like putting a network interface directly on the GPU (and one image they show contradicts that theory), I think this claim is also pretty dubious.

    The worst one is the "game pipeline" time. While I can believe that a newer, more powerful graphics card can definitely perform *twice* as well as an older one, I can also state definitively both that "you can put that new card in the home console as well" and "new games will expand to use that power, leaving you back where you started at 100ms render times". The former I can state because they've *already* released Kepler-based cards (to rave reviews, although my own seems to be backordered); the latter because that's how the industry has worked since at least the late '70s.

    Long story short, they seem to be doing some *extremely* unscientific, biased comparisons. Do they probably have something here? Yeah. Is it literally going to be as good as an actual console (or, better yet, a PC) connected directly to the display, as they claim? No.
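
Taking the figures gman003 quotes from NVIDIA's chart at face value, a quick tally shows how the claimed reductions are supposed to add up to a roughly console-like total, which is exactly the conclusion disputed above. This is only a sketch: the display (66ms), network (75ms vs. 30ms) and game-pipeline (100ms vs. roughly half) numbers come from the comment, while the absolute encode and decode baselines are assumptions added purely for illustration.

    # Rough per-stage latency tally, all values in milliseconds.
    # Display, network, and game-pipeline figures are the ones quoted from
    # NVIDIA's chart in the comment above; the encode and decode baselines
    # are assumed values, not numbers from the chart.
    assumed_encode_ms = 30     # assumed server-side capture + encode baseline
    assumed_decode_ms = 15     # assumed client-side h.264 decode baseline
    claimed_cut = 0.60         # "reducing encode by 60%" (decode likewise, per the chart)

    console = {"game": 100, "display": 66}
    current_cloud = {"game": 100, "encode": assumed_encode_ms,
                     "network": 75, "decode": assumed_decode_ms, "display": 66}
    grid_cloud = {"game": 50,  # "twice as well" on a Kepler GPU
                  "encode": assumed_encode_ms * (1 - claimed_cut),
                  "network": 30,  # the claimed, and disputed, figure
                  "decode": assumed_decode_ms * (1 - claimed_cut),
                  "display": 66}

    for name, stages in (("console", console),
                         ("current cloud", current_cloud),
                         ("GRID (claimed)", grid_cloud)):
        print(f"{name:15s} total ~ {sum(stages.values()):.0f} ms")

Under those assumed inputs the claimed GRID total lands near the local-console total, which is the story the chart tells; the comment's point is that several of the inputs, the 30ms network figure in particular, are doubtful.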

  • Re:No (Score:4, Insightful)

    by Zaelath ( 2588189 ) on Tuesday May 15, 2012 @10:14PM (#40012595)

    Cloud assumes bandwidth is free... they assume incorrectly. It might be cheap in US markets, but it's never free.
