Programming

Can 50-core Xeon Phi's x86 Architecture Best Nvidia's Massive GPUs?

Submitted by Anonymous Coward
An anonymous reader writes "Nvidia's massively parallel GPUs are being harnessed by a growing number of supercomputer makers to boost performance, but at the cost of programming to a proprietary instruction set that was not designed for general-purpose computing. Now that Intel is releasing its own x86-based massively parallel processor, the Xeon Phi, the supercomputer community will have a choice to make: use Intel's x86 parallel-processing tools to build their supercomputer applications, or rewrite those applications to exploit Nvidia's GPUs and their proprietary instructions. The verdict on which approach is best won't be in for several years, but I'm hoping to get the programming community debating the pros and cons now, so that by the time Intel starts shipping the 50-core Xeon Phi this fall we have enough data points to make an informed decision. What's your take on Intel's versus Nvidia's approach to supercomputing?"
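
To make the programming-model contrast concrete, here is a minimal sketch (my own illustration, not part of the submission): a dot product parallelized with standard OpenMP, the kind of existing x86 code Intel's tools aim to reuse on the Xeon Phi's many cores. Targeting an Nvidia GPU would instead mean rewriting the loop as a CUDA kernel with explicit host/device memory transfers. The example assumes nothing beyond a stock OpenMP-capable C compiler.

    /* Hypothetical example: a dot product parallelized with plain OpenMP.
     * Compile with, e.g., gcc -fopenmp dot.c -o dot
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 10000000

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double sum = 0.0;

        /* Initialize test data */
        for (long i = 0; i < N; i++) {
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* The same parallel loop runs on any x86 multicore host; the
         * premise of the x86 many-core approach is that loops like this
         * carry over largely unchanged, while the GPU path requires an
         * explicit kernel and data-transfer code. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot product = %f (max threads: %d)\n", sum, omp_get_max_threads());

        free(a);
        free(b);
        return 0;
    }
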
This discussion was created for logged-in users only, but has now been archived. No new comments can be posted.
