Steam Machine Prototypes Use Intel CPUs, NVIDIA GPUs

An anonymous reader writes "Valve has revealed the first details of their Steam Machine prototypes. The first 300 Steam Machine prototypes to ship will use various high-end Intel CPUs and NVIDIA GPUs while running their custom SteamOS Linux distribution. The Intel Haswell CPU + NVIDIA GPU combination should work well on Linux with the binary drivers. Using a range of CPUs/GPUs in the prototypes will allow Valve to better gauge performance and effectiveness across configurations. Valve also said they will be releasing the CAD design files for their custom living room console enclosure for those who'd like to reproduce them." Valve is careful to point out that these specs aren't intended as a standard: "[T]o be clear, this design is not meant to serve the needs of all of the tens of millions of Steam users. It may, however, be the kind of machine that a significant percentage of Steam users would actually want to purchase — those who want plenty of performance in a high-end living room package. Many others would opt for machines that have been more carefully designed to cost less, or to be tiny, or super quiet, and there will be Steam Machines that fit those descriptions."
  • by Sycraft-fu ( 314770 ) on Friday October 04, 2013 @05:13PM (#45039521)

    If you want Linux 3D graphics that:

    1) Are as fast as what you get on Windows.
    2) Support all the latest OpenGL features.
    3) Have a full implementation of the latest OpenGL spec.
    4) Are solid and stable.

    Then the binary nVidia drivers are it. Nothing else comes close. For games, particularly new games, this matters: they make use of the high-end features modern GPUs have, and they need high-speed rendering.

    If another company wants to step up their Linux game then great, but right now it is go nV or go home. Their binary drivers are just head and shoulders above the rest. That may not matter for typical desktop use, when the card is doing little more than desktop composition and maybe accelerated video playback, but it matters a lot if you are trying to make a game render using the latest OpenGL 4.3/4.4 features and have it be extremely fast and stable.
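
    If you want to check which driver and OpenGL version your box is actually running, parsing glxinfo works. A minimal Python sketch, assuming glxinfo (from the mesa-utils package on most distros) is on the PATH; the function name is my own:

        import subprocess

        def gl_driver_info():
            """Parse glxinfo output for the OpenGL renderer and version strings."""
            out = subprocess.run(["glxinfo"], capture_output=True,
                                 text=True, check=True).stdout
            info = {}
            for line in out.splitlines():
                # glxinfo prints lines like:
                #   OpenGL renderer string: GeForce GTX 660/PCIe/SSE2
                #   OpenGL version string: 4.4.0 NVIDIA 331.xx
                if line.startswith("OpenGL renderer string:"):
                    info["renderer"] = line.split(":", 1)[1].strip()
                elif line.startswith("OpenGL version string:"):
                    info["version"] = line.split(":", 1)[1].strip()
            return info

        print(gl_driver_info())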

  • by jandrese ( 485 ) <kensama@vt.edu> on Friday October 04, 2013 @05:54PM (#45039855) Homepage Journal
    Valve's own statistics show that gamers tend to prefer nVidia hardware. Because this is going to run Linux, there really isn't a good alternative anyway. Intel graphics are still a joke and AMD's drivers are still terrible. As much as free-software guys hate it, the nVidia binary blob driver is the best-supported 3D graphics driver on Linux.
  • by jandrese ( 485 ) <kensama@vt.edu> on Friday October 04, 2013 @05:57PM (#45039877) Homepage Journal
    If you have a gaming PC already, then just run Steam and put it in Big Picture mode if you want the same experience. This is for people who don't have gaming PCs and/or want to play in the living room on their TV.
  • by houstonbofh ( 602064 ) on Friday October 04, 2013 @06:30PM (#45040161)

    Nvidia hardware isn't really clearly superior to AMD's; they rotate on who has the best hardware at various price points.

    But sure, the point is that this hardware should do a specific job for gamers at a specific price point. If Nvidia GPUs are the best bet for that in this product's price segment, there's no reason to be an ideological crusader about it. The point is to be able to play games, not to make the average couch potato start writing driver code on his TV.

    Not on Linux. nVidia consistently outperforms AMD, and is significantly more stable. And they have been actively working with Valve for quite some time to fix some show-stopping driver bugs.

  • by houstonbofh ( 602064 ) on Friday October 04, 2013 @06:32PM (#45040179)

    I wouldn't consider drivers a serious issue. If Valve goes to AMD/ATI and says 'We'll buy a hundred thousand chips for the first production run, with potential sales of fifteen million to follow' I'm sure improved driver support would quickly follow.

    Actually, nVidia has been actively working with them for over a year now, fixing some significant driver bugs. And Valve hasn't bought anything yet.

  • by gman003 ( 1693318 ) on Friday October 04, 2013 @11:57PM (#45042001)

    Midlevel? For non-gaming usage, perhaps. For gaming they're strictly low-end, or unusable.

    There are three Intel GPUs on the desktop side - the HD 4600, the Iris 5100, and the Iris Pro 5200. In raw processing power, the first gets you 430 GFLOPS, and the latter two get 830 GFLOPS. For comparison, the *weakest* GPU in these Steam Machines pumps out 1880 GFLOPS, and the top end maxes out around 4.5 TFLOPS.

    And that's a spec that's biased towards Intel - they're more compute-heavy than bandwidth-heavy, and unfortunately most graphics tasks are bound by memory bandwidth. For Intel, the first two have a mere 25.6 GB/s of bandwidth, with Iris Pro adding an on-chip cache to bring it up to 75 GB/s. But even the GeForce 660 beats that at 144 GB/s, and the Titan doubles that. For those who may not be familiar, the 660 Ti (and the new-gen rebadge-with-enhancements, the 760) was considered a good medium-end card, with the vanilla 660 being for those a bit more budget-minded. The Titan, of course, is their "luxury" card, costing a full $1000, but it's currently the most powerful single-GPU card, period.
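
    Those figures are easy to sanity-check from the published unit counts and clocks. A rough Python sketch (the clocks are the commonly quoted boost figures, so treat the results as approximate):

        # peak GFLOPS = shader units * FLOPs per unit per clock * clock (GHz)
        # bandwidth (GB/s) = effective transfer rate (GT/s) * bus width (bits) / 8

        def gflops(units, flops_per_clock, clock_ghz):
            return units * flops_per_clock * clock_ghz

        def bandwidth_gbs(transfer_gts, bus_bits):
            return transfer_gts * bus_bits / 8

        # Intel HD 4600: 20 EUs, 16 FLOPs/EU/clock (two 4-wide FMA units), ~1.35 GHz
        print(gflops(20, 16, 1.35))       # ~432  -> the "430 GFLOPS" figure
        # Iris 5100 / Iris Pro 5200: 40 EUs at ~1.3 GHz
        print(gflops(40, 16, 1.3))        # ~832  -> the "830 GFLOPS" figure
        # GeForce GTX 660: 960 CUDA cores, 2 FLOPs/core/clock (FMA), ~0.98 GHz
        print(gflops(960, 2, 0.98))       # ~1882 -> the "1880 GFLOPS" figure
        # GeForce GTX Titan: 2688 CUDA cores at ~0.84 GHz
        print(gflops(2688, 2, 0.84))      # ~4516 -> the "4.5 TFLOPS" figure
        # Dual-channel DDR3-1600 feeding the Intel parts: 1.6 GT/s, 128-bit bus
        print(bandwidth_gbs(1.6, 128))    # 25.6 GB/s
        # GTX 660: 6.0 GT/s effective GDDR5 on a 192-bit bus
        print(bandwidth_gbs(6.008, 192))  # ~144 GB/s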

    That's just their theoretical performance - the real test, of course, is actual game benchmarks. Nvidia currently does the best job of turning raw hardware into in-game performance. AMD has more raw power, but their drivers aren't as efficient, so Nvidia beats them more often than not. Intel's far worse than either - while Iris Pro should be able to go head-to-head with a GeForce 650, it actually tends to benchmark closer to the GeForce 640. Go look it up on AnandTech, if you're interested.

    Now, is it impressive how much power Intel managed to get out of an iGPU? Yeah, it is. Honestly, I would be interested in seeing them scale up the design further - go from 40 EUs to 200 EUs, bolt on the memory controller from the Xeon Phi, and sell it as a dedicated card. Might be something they can do with the 22nm fabs once they move to 14nm? But in any case, calling their current offerings "medium-end" is misleading at best, and downright wrong at worst.

  • by TeXMaster ( 593524 ) on Saturday October 05, 2013 @02:31AM (#45042543)

    Nvidia hardware isn't really clearly superior to AMD's; they rotate on who has the best hardware at various price points.

    Actually, if you just look at the specifications, ATI/AMD has almost always had the (theoretically) most competitive hardware (GPU-wise), both in terms of performance/price ratio and often even in terms of raw computing power/memory bandwidth. AMD was even the first to come out with hardware support for compute on the GPU (the first CTM/CAL betas came out before CUDA was ever mentioned anywhere), even if it required assembly programming of the shaders (which you could often avoid by using a layer such as BrookGPU).

    However, their GPUs have been crippled by the most horrible software ecosystem possible. By and large the main culprit is ATI/AMD itself, which has constantly failed to produce high-quality, stable drivers and capable compilers for its shaders. A secondary culprit (which has finally been removed from the equation) is the architecture itself: up until the introduction of GCN, AMD shaders had a VLIW architecture (VLIW5 first, VLIW4 in the last releases before GCN) that was often hard to exploit without heavy-duty restructuring and vectorization of your shader code, so you often found yourself with huge horsepower available while only being able to use some 30-60% of it at best.
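
    To put rough numbers on that last point, here is a toy utilization model in Python (illustrative only, not from any AMD documentation): a VLIW5 core can issue up to five independent scalar ops per cycle, so the fraction of peak you see is roughly the number of independent ops the compiler can pack per cycle divided by five.

        def vliw_utilization(independent_ops_per_cycle, slots=5):
            """Fraction of peak throughput a VLIW core achieves per cycle."""
            return min(independent_ops_per_cycle, slots) / slots

        print(vliw_utilization(1))  # 0.2 -> scalar, dependency-chained code: 20% of peak
        print(vliw_utilization(3))  # 0.6 -> partially vectorized code: 60% of peak
        print(vliw_utilization(5))  # 1.0 -> fully packed vec4+scalar code: 100% of peak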

  • by jones_supa ( 887896 ) on Saturday October 05, 2013 @02:36AM (#45042555)
    The Dolphin Emu guys did a nice review of current OpenGL implementations [dolphin-emu.org].
