OnLive Gaming Service Gets Lukewarm Approval

Vigile writes "When the OnLive cloud-based gaming service was first announced back in March of 2009, it was met with equal parts excitement and controversy. While the idea of playing games on just about any kind of hardware thanks to remote rendering and streaming video was interesting, the larger question remained: how did OnLive plan to solve the latency problem? With the closed beta currently underway, PC Perspective put the OnLive gaming service to the test by comparing the user experience of OnLive-based games against the same locally installed titles. The end result appears to be that while games with less demanding input timing, like Burnout: Paradise, worked pretty well, games that require a fast, twitch-based input scheme, like UT3, did not."
  • by Anonymous Coward on Friday January 22, 2010 @06:20AM (#30857758)

    The guy logged in using credentials 'borrowed' from an authorised beta tester, from more than twice the recommended distance from the server, acknowledged multiple high-latency (due to distance) notifications, and the best he could do was damn the service with faint praise.

  • by Anonymous Coward on Friday January 22, 2010 @06:53AM (#30857880)

    ". After all, playing on the internet isn't as quick as a "LAN frag fest", and yet the vast majority of gamers, even of twitch-heavy games, are playing on the internet, not on LANs."

    With tons of client-side prediction and faking, trying very hard to hide the client-server lag.

    With OnLive, you can't do that - it just sends some inputs and gets some video back.

    I mean, this could work over an optimal, super-fast network connection, but I'm pretty sure ensuring you have such a connection would be so expensive that this is a solution to a problem that doesn't exist - it is always cheaper to spend the money on client-side hardware instead. I'm sure stupid venture capitalists will keep pumping money into this, with idiotic projections of how a bazillion people will pay X dollars per month or per hour or whatever, which will somehow cover those network infrastructure costs.

    I doubt it will, and a few years from now OnLive will go bust, taking a big pile of money with it. But hey, you never know... can't do impossible stuff without trying.

  • If you can't keep up with the upgrade cycle required to play the latest PC games, buy a console or play older games.

    The problem with playing older games is that either the matchmaking servers end up switched off (e.g. DNAS Error -103 on PS2 games) or, if they are still up, the established players tend not to be friendly toward newbies (e.g. "gtfo n00b" on several classic first-person shooters).

  • Re:As expected (Score:3, Informative)

    by mseeger ( 40923 ) on Friday January 22, 2010 @07:33AM (#30858100)

    Latency (for a line that is not overbooked) depends on bandwidth and packet size. With the same packet size and ten times the bandwidth, latency drops by nearly a factor of ten (on a single line).

    Overall latency is the sum of the latencies of every line along the way, plus a small penalty for each router. The router penalty is not the issue. The number of hops can be influenced by a service provider like OnLive through peering agreements. Something OnLive cannot influence is the last mile to the customer. Usually 30-50% of the total latency happens there, so an increase in last-mile bandwidth will help.

    In my case I have a latency of about 25-30ms to the major hosting providers here in Germany (thanks to a fast line [6Mbps + Fastpath]). The time breaks down roughly as follows:

    - 2ms: my home network
    - 12ms: my DSL line
    - 2ms: my provider
    - 10ms: upstream provider
    - 1ms: hosting provider

    Even in my case, nearly 50% of the latency is created on the last mile. The packet travels Kiel -> Hamburg -> Hannover -> Duesseldorf -> Frankfurt, which amounts to perhaps 400 miles. 50% of the latency on 1% of the way seems to me a pretty conclusive argument that more bandwidth to the end user would reduce overall latency significantly (the arithmetic is sketched below).

    CU, Martin

    P.S. This all depends on the Bandwidth not being overbooked.....
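
    A minimal sketch of the arithmetic above, in Python. The per-segment values are the ones given in this post (treating the 12ms DSL segment as the last mile); the code itself is purely illustrative:

        # One-way latency breakdown from the post, in milliseconds.
        segments = {
            "home network": 2,
            "DSL line (last mile)": 12,
            "provider": 2,
            "upstream provider": 10,
            "hosting provider": 1,
        }

        total_ms = sum(segments.values())                    # 27 ms overall
        last_mile_share = segments["DSL line (last mile)"] / total_ms

        print(f"total latency: {total_ms} ms")               # total latency: 27 ms
        print(f"last-mile share: {last_mile_share:.0%}")     # last-mile share: 44%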

  • Re:As expected (Score:4, Informative)

    by mseeger ( 40923 ) on Friday January 22, 2010 @08:54AM (#30858434)
    Hi,

    If, say, the speed limit on a motorway was 70mph and there was no congestion on the road, why would adding extra lanes to the motorway increase how fast I get to my destination?

    You get the car analogy wrong. A packet of 100 bytes is not a single car; it consists of 800 cars (bits). So if you increase the number of lanes, more cars can travel at once. Each car still travels at the same speed (of light), but by allowing more cars onto the road at the same time, the delivery (the packet) distributed over 800 cars arrives faster.

    The time a packet takes to get transmitted is roughly: packet size / bandwidth.

    Say you have a 10Mbps line and a 1000-byte packet. This will take 8,000 bit / 10,000,000 bit/s = 0.0008 s, or 0.8ms (one way), so the round trip through the line will be roughly 1.6ms. If you go to 100Mbps Ethernet or even gigabit Ethernet, the time goes down by a factor of ten at each step (sketched in code below).

    But there are some side effects: sometimes packets are bundled into larger packets to fill the line better, which increases latency. When the speed of the line is high, the time the OS needs to send/receive the packets has more influence on the latency. Latency may also occur in your provider's network if the provider overbooks the service (selling access to more cars than the lanes allow and therefore creating congestion).

    To see whether your line is the chokepoint, use Traceroute [wikipedia.org] to find out where the latency happens. If the latency already occurs close to you, a faster line may improve it. Also look for features from your provider such as "Fastpath".

    CU, Martin

    P.S. This is a very short overview of the topic. Complete coverage would fill a book. BTW, the books have already been written: W. Richard Stevens: TCP/IP Illustrated [kohala.com].
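
    A minimal sketch of the serialization-delay formula above, in Python. The 1000-byte packet and the 10Mbps/100Mbps/1Gbps line rates are the example values from this comment; the helper function is just illustrative:

        def serialization_delay_ms(packet_bytes: int, line_bps: float) -> float:
            """One-way time to clock a packet onto the line: packet size / bandwidth."""
            return packet_bytes * 8 / line_bps * 1000

        # A 1000-byte packet on 10 Mbps, 100 Mbps and 1 Gbps lines.
        for rate_bps in (10e6, 100e6, 1e9):
            one_way = serialization_delay_ms(1000, rate_bps)
            print(f"{rate_bps / 1e6:.0f} Mbps: {one_way:.3f} ms one way, "
                  f"{2 * one_way:.3f} ms round trip")
        # 10 Mbps: 0.800 ms one way, 1.600 ms round trip
        # 100 Mbps: 0.080 ms one way, 0.160 ms round trip
        # 1000 Mbps: 0.008 ms one way, 0.016 ms round trip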

  • Re:As expected (Score:3, Informative)

    by mseeger ( 40923 ) on Friday January 22, 2010 @09:31AM (#30858614)

    Quibbler :-) You wanted it....

    For our example, 0.5c is sufficiently close to c to call it "the speed of light" :-). As you point out, the "speed of light" is not the same as c. I can find materials where the speed of light is below 0.5c. So saying that the electrical signal travels at the speed of light is correct, since I didn't mention any material I would be measuring the speed of light in....

    Point, game and match :-)

    CU, Martin

    P.S. I have references to materials reducing the speed of light to 17m/s (38mph for you imperial bastards) without significant absorption. So even our cars go at the speed of light ....

  • Re:As expected (Score:3, Informative)

    by mseeger ( 40923 ) on Friday January 22, 2010 @09:56AM (#30858774)

    Satellite is a very special case. You cannot (as I did) speak of "the last mile" here.... It's more like "the last 20,000+ miles" ;-). Even then the formula still holds up if you use sufficiently large packets :-). The formula is valid if the packet size is larger than the storage capacity of the line (a 10Mbps satellite link has a storage capacity of ~200KB, which is not a reasonable packet size).

    Without turning this into a lecture: the last mile is usually one of the latency hogs for a lot of users, and by increasing the bandwidth you can reduce that latency. Other parameters include your continent (not easily changed), the quality of your provider (carefully obfuscated by marketing), the technology used (a black box for most users), the peerings of your provider (traceroute is your friend), etc.

    BTW: Most users think of bandwidth in only one direction, but especially if you do serious gaming, the uplink may be responsible for a lot of your latency. Moving the crosshair and pressing a button in a shooter may result in a 400-byte packet; on a 128kbps uplink, that already adds 25ms of latency on the way out (the arithmetic is sketched below). Incoming and outgoing bandwidth are usually tied together in a package by your provider, so increasing the bandwidth may really help gamers.

    CU, Martin
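
    A minimal sketch of the uplink arithmetic above, in Python, assuming the 400-byte input packet and 128kbps uplink given in this comment:

        # Time to push one input update out through a slow uplink: packet size / bandwidth.
        packet_bits = 400 * 8      # a 400-byte input packet, as in the post
        uplink_bps = 128_000       # 128 kbps uplink

        uplink_delay_ms = packet_bits / uplink_bps * 1000
        print(f"uplink serialization delay: {uplink_delay_ms:.0f} ms")   # 25 ms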

  • Re:Duuuuuh (Score:1, Informative)

    by Anonymous Coward on Friday January 22, 2010 @12:11PM (#30860316)

    $250-500k/mo for an OC-48? Do you work for the government? No one pays that... I just placed an order for an east-coast-to-west-coast 10G private-line p2p link with a 2-year commit for $11k a month.

  • by Purity Of Essence ( 1007601 ) on Friday January 22, 2010 @10:43PM (#30866412)

    Doubters should really watch the Columbia University presentation. It's entertaining and very technical, and it will probably address your every concern. Too many otherwise genuine experts here don't know what they're talking about because they're ignorant of the way OnLive actually works. It's more clever than you probably think.

    YouTube Mirror
    http://www.youtube.com/watch?v=2FtJzct8UK0 [youtube.com]

    Original
    http://tv.seas.columbia.edu/videos/545/60/79 [columbia.edu]
