
River Trail — Intel's Parallel JavaScript

mikejuk writes "Intel has just announced River Trail, an extension of JavaScript that brings parallel programming into the browser. The code looks like JavaScript and it works with HTML5, including Canvas and WebGL, so 2D and 3D graphics are easy. A demo video shows an in-browser simulation going from 3 to 45 fps and using all eight cores of the processor. This is the sort of performance needed if 3D in-browser games are going to be practical. You can download River Trail as a Firefox add-on and start coding now. Who needs native code?"
  • Oblig (Score:1, Funny)

    by grub ( 11606 )

    What about a beowulf.js cluster of these?
  • by nicholas22 ( 1945330 ) on Friday September 16, 2011 @02:27PM (#37423506)
    CPUs
  • People who don't have 8 cores available and who want acceptable performance? People who want their Windows 8 tablet to have a real world battery life longer than two hours?
    • by 93 Escort Wagon ( 326346 ) on Friday September 16, 2011 @02:44PM (#37423662)

      People who want their Windows 8 tablet to have a real world battery life longer than two hours?

      So, what - 4 or 5 people?

    • by Lennie ( 16154 )

      Actually, I've heard that if you can do your computing either in a short time in parallel or in a long time serially, you should choose parallel: it is usually more power efficient (meaning as many parts of the CPU as possible can be turned off as soon as possible, and frequency scaling can kick in when nothing is running).

      • Unfortunately, that does not make it more efficient to run Javascript, through however many layers of indirection and abstraction it undergoes, than it is to run native code. Doing a remarkably inefficient task in parallel only parallelizes your inefficiency; it does not remove it.

        I am not advocating for native code, but if you want good performance on today's hardware then Javascript is not really the number 1 candidate, regardless of whether it can be executed in parallel or not.

        • Well, that's true. JS introduces a hell of a lot of overhead in pretty much anything you might do. But I still have to say that WebGL has impressed me. I haven't had a chance to seriously hack on it myself, but from what I see being done with it I believe you can do some pretty serious stuff.
          OTOH, WebGL is offloading stuff to the GPU, so why would I want to use a plugin to offload stuff to the CPU? GPUs (for their bang) are more efficient than CPUs anyway.
          BTW, I can't believe no one mentioned WebGL up to

          • Try programming a GPU some time. Also - opening up space for kernel-level exploits for teh shinyz is stupid. Also, JS should be natively compiled and optimized before being received by the browser, with source only as a fallback - who is gonna notice some ARM and x86 snippets in the web server cache?
  • Comment removed based on user account deletion
  • Ha ha, a 15x speedup by going from 1 to 8 cores? No. It's hard to invent a situation in which you would get a genuine 8x speedup, let alone somehow making each core almost twice as fast.
    • Since JS is normally single-threaded, I'm guessing that the one-core scenario is spending more than half its time on things other than the simulation. Additional cores can be dedicated entirely to the simulation. Under those circumstances, 15x speedup isn't the least bit surprising.
      • by atisss ( 1661313 )
        They will probably be pushing this addition to JavaScript, because the only things it was missing so far were multithreading/semaphores/memory management.
        Finally JavaScript will be complete
    • Is it now? If the CPU spends half its time on unparallelizable preparation of its computation, and that preparation can just be copied to the other cores, the max theoretical speedup is just under 16x.

      • You're looking at it the wrong way. Adding more cores allows more work to be done in a given interval -- in this case, computing and displaying more frames.

        The "other stuff" I was referring to is stuff external to the simulation, that doesn't have to be done repeatedly on other cores.

      • Is it now? If the CPU spends half its time on unparallelizable preparation of its computation, and that preparation can just be copied to the other cores, the max theoretical speedup is just under 16x.

        If every process is going to work with exactly the same data then it doesn't matter if you have 1 core or 8. The data only has to be prepared one time. So the multiple cores won't save you time preparing that data with each cycle.

        • I think I miscommunicated something. The idea was to show an exception which disproves the original postulate (can't speed up more than 8x on 8 cores). I wasn't referring to what is possible in the SIMD setting only. If the computation of the desired result on a single core only gets half its time for some very likely reason, say you're playing MP3s too, then adding 7 more cores will appear as if there were 14 more of the usual time slots for your parallel task.

      • by yakovlev ( 210738 ) on Friday September 16, 2011 @04:14PM (#37424454) Homepage
        No.

        If half the work is unparallelizable then the max theoretical speedup is 2x.

        This is a simple application of Amdahl's law:

        speedup = 1 / ( (1-P) + (P/S) )
        where P is the amount of the workload that is parallelizable and S is the number of cores.
        speedup = 1 / ( (1-0.5) + (0.5/S) )
        lim S -> infinity: speedup = 1 / 0.5 = 2x

        The likely reason the speedup appears superlinear here is that there are actually two speedups.

        1.) Speedup from parallelizing on multiple threads. From looking at the usage graphs, this is probably about 4x.
        2.) Speedup from vectorizing to use processor AVX extensions: This could be another 4x.

        Total speedup: 16x.

        A 16x speedup is totally believable for vectorizing and parallelizing a simple scientific simulation like the one shown in the video.
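
        For the curious, here is a minimal JavaScript sketch of that arithmetic (the 4x-thread and 4x-AVX figures are the estimates above, not measurements):

        ```javascript
        // Amdahl's law: speedup = 1 / ((1 - P) + P / S)
        // P = parallelizable fraction of the workload, S = number of cores.
        function amdahl(P, S) {
          return 1 / ((1 - P) + P / S);
        }

        console.log(amdahl(0.5, 8));        // ~1.78x: 8 cores barely help if half the work is serial
        console.log(amdahl(0.5, Infinity)); // 2x: the limit as the core count grows
        console.log(amdahl(0.95, 8));       // ~5.93x: even 95%-parallel code falls short of 8x

        // The apparently superlinear result comes from two independent speedups multiplying:
        console.log(4 * 4);                 // ~4x from threads times ~4x from AVX lanes = 16x
        ```
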
        • 2.) Speedup from vectorizing to use processor AVX extensions: This could be another 4x.

          OK, that makes sense (although I would hope vector instructions could normally be tapped by optimizing libraries instead of exposing the vector API to the programmer).

          You win "IMHO the best answer to my question."

          • I'll go you one better. I would hope that vector instructions are utilized by the compiler even for code that doesn't explicitly go after using them.

            At a minimum, it should be possible to provide a "write your code like this, and it will be easy for the compiler to detect that what you really want is vector instructions."

            However, sometimes interpreted languages can work against this due to things like strict exception checking.
            • by adri ( 173121 )

              .. err, I think you missed the point with vectorised instructions and interpreted languages.

              If your interpreted language has no vector operations, then although your compiled binary interpreter can vectorise, the data operations being given to your interpreter are not vectorised. Think about it for a minute. Your interpreter is just doing while (read VM opcode) (call func[VM opcode]); the data isn't in a parallelisable/vectorised form, nor will the CPU ever see it that way. The CPU just sees the VM opcode str
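
              As a rough illustration of that dispatch loop (opcode names made up), the host CPU only ever sees one scalar handler call at a time, so its SIMD units never see the script's data as a vector:

              ```javascript
              // Hypothetical toy bytecode interpreter: one scalar handler call per opcode.
              var handlers = {
                PUSH: function (vm, arg) { vm.stack.push(arg); },
                ADD:  function (vm)      { vm.stack.push(vm.stack.pop() + vm.stack.pop()); }
              };

              function run(vm, program) {
                for (var pc = 0; pc < program.length; pc++) {
                  var op = program[pc];
                  handlers[op.code](vm, op.arg); // the CPU sees an indirect call, not the data layout
                }
              }

              run({ stack: [] }, [{ code: "PUSH", arg: 2 }, { code: "PUSH", arg: 3 }, { code: "ADD" }]);
              ```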

              • I guess I was thinking JITs, or something like perl that is semi-compiled.

                You're of course right that a simple straight-line interpreter can't do much optimization, but this isn't how modern javascript engines work. Getting a correctly formatted data set is part of the "write your code like this" idea I posted above.

                My real point was, if it's possible to do, I would prefer the javascript engine do the work for me. That way I can write standards-compliant code that will compile and run on a machine without
    • by LWATCDR ( 28044 )

      8 cores x 2 threads per core = 16X
      Since the cores are also running the OS, the browser, and goodness knows what else, 15X could be possible with more cores. Sort of like your 40-megabyte program getting a massive speedup when you go from 512 MB of RAM to 2 GB. PCs today are not like computers back in the day; they are often running a lot of other code besides your application. That is one reason DOS is still popular for some tasks: once your code is running, pretty much that is all that is running

      • 8 cores x 2 threads per core = 16X

        Only if the CPU can run 2 threads each as fast as it could run 1. I've never heard of such a thing. (Hyperthreading certainly doesn't come close.) If there were such a beast, it would be marketed as a 16 core chip.

        Since the cores are also running the OS, the browser, and goodness knows what else, 15X could be possible with more cores.

        If you have a multi-core machine, you don't need multithreaded javascript to run javascript on one core and other things on other cores.

        • by LWATCDR ( 28044 )

          Yes, but it runs on more than one of those other cores. This also supports SSE, which may help a lot for some benchmarks as well. Of course you do not get a linear increase with cores, but this may be within the range of the possible for some specific benchmarks.

    • It's not totally unbelievable, when you consider the fact that it's using WebGL. Doubling the speed at which the CPU prepares data for the GPU to render can more than double the overall throughput.
    • Not necessarily. Some things only need to be done once; say that currently takes 2/3 of the cycle time. Other things will have to be done 15 times to speed up 15 times, and those make up the remaining 1/3 of the original process. Factor in that one-time setup cost and it's not hard to see how you can get 15 times the performance in terms of fps.
    • There are numerous ways to get superlinear speedup. It turns out that it is not that hard. There is more to parallel programming than just 'cores'. Of course, it depends on the algorithm.

      eg
      Matrix Multiply 101 [intel.com]

  • We need some sort of metrics here. The I Programmer article content is using only 314x1213 pixels on my laptop. The whole page takes 1160x2078! Only about 16% of my screen area is the article's content. This is like listening to 50 minutes of commercials every hour on the radio. No one would accept that on the radio, and I say we shouldn't accept it here. Thanks!
  • We already have Web Workers, they just can't access the DOM. Why can't they just try to improve the existing framework instead of inventing their own?
    • Which standard has Microsoft improved in the past?
      • Actually, Microsoft has been doing a 180 since IE 8, due to losing market share and the threat of HTML 5 and the iPad.

        IE 9 is a decent browser. It truly has caught up and supports SVG, the HTML 5 canvas element, hardware acceleration, CSS 3 (even animations), XHTML (about damn time), and so on. You no longer need horrible hacks to get anything done in it, like tricking it into reading XHTML when it does not support it.

        Windows 8 supports Web Workers with IE 10, and according to www.html5test.com its HTML 5 support is on par with Chro

    • Web Workers take care of this and launch things in different processes. The only browser that doesn't support them is IE, but that will change with Windows 8 next spring.

      They can take care of this without Intel's code
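
      For reference, a minimal sketch of the Web Worker pattern being described (the file name and the drawFrame/stepSimulation/initialState names are made up for illustration):

      ```javascript
      // main.js: hand the simulation step to a worker so the UI thread stays responsive.
      var worker = new Worker("sim-worker.js");        // hypothetical worker script
      worker.onmessage = function (e) {
        drawFrame(e.data);                             // assumed page-side rendering helper
      };
      worker.postMessage({ particles: initialState }); // messages are copied, not shared

      // sim-worker.js: runs concurrently, but has no DOM/WebGL access and no SIMD primitives.
      self.onmessage = function (e) {
        var next = stepSimulation(e.data.particles);   // assumed plain-JS simulation step
        self.postMessage(next);
      };
      ```

      Workers give coarse-grained concurrency via message passing; River Trail's pitch is data parallelism (threads plus SSE/AVX lanes) inside one script, which is a different layer.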

    • by Gyrony ( 2463308 )
      "A ParallelArray abstraction for JavaScript and a Firefox add-on to enable parallel programming in JavaScript targeting multi-core CPUs and vector SSE/AVX instructions." Sounds to me as though this is not Multithreaded JavaScript or a replacement for Web Workers, but an implementation of vector processing for JavaScript, using OpenCL
  • ...could use a bit of multicore speedup itself.
  • If your GPU sucks, try to get people to buy 8 CPU cores and do graphics on those. Seriously, doesn't WebGL allow use of the GPU? If not, fix that.
    • WebCL already exists and has a test implementation from Nokia. Also, those 8 cores would be better than nothing for a software renderer, but not even close to what's built in on a nice motherboard. The new chips from Intel and AMD with the GPU on the same die as the CPU are the only real hope for 3D for people who don't know enough to get a motherboard with the right chipset or a computer with a nice discrete GPU.

  • I see no problem at all with a parallel version of JavaScript. But the question I have is who is really going to use this? Granted, some might say "anyone who wants to make money, this is the future of gaming and entertainment". I certainly hope it isn't! Is a browser really the platform of choice for high performance graphics? I think not. Is there anything really wrong with having a native client to produce something so specialized as a graphics intensive platform? Must we really look to a future where ev

    • This is the next best thing for Zynga though :)

    • There is nothing wrong with having a native client for specialized graphics-intensive applications... but why can't the web browser also perform some of these more general tasks? No one is suggesting you implement a graphics API in Javascript, but a web browser is more than capable of hosting some fairly advanced graphical visualizations or games by building on top of lower-level components like Canvas or WebGL.

      Most "game engines" are based on similar architectures....highly performant engine components writ

  • Oh boy!!! (Score:5, Insightful)

    by frank_adrian314159 ( 469671 ) on Friday September 16, 2011 @03:32PM (#37424162) Homepage

    That means the animated ads can now suck up all of my CPU, rather than just one core's worth. I can't wait!

  • by devent ( 1627873 ) on Friday September 16, 2011 @03:34PM (#37424176) Homepage

    Instead of getting a $50 graphics card and playing Doom 3 on it, we now need an 8-core CPU to play JavaScript games in the browser? Is that the bright future we can look forward to with ChromeOS and "the browser is the OS"?

    • could someone with mod points please deal with parent appropriately?
      • by Toonol ( 1057698 )
        I honestly don't know which direction of modding you think is appropriate for that comment.
        • Well, that's just sad. There's no talk of *needing* 8 processors for anything - that was just part of the set-up for the demonstration. Also, the code isn't JavaScript - it 'looks like' JavaScript and would replace it. And no one's going to stop you playing Doom 3, but why not compare with the hardware required for a semi-recent game... Crysis, for example? Oh, because that wouldn't support your trolling...
    • Surely you've noticed that besides the number of polygons getting pushed out and how fast you can decompress a large file, your daily tasks aren't any faster than they were 10 years ago. How's your word processor doing? Still chugging along? Doing anything useful that it didn't before?

  • ...but rather prefer to use them for something meaningful, not the weirdly scrolling ad annoying them in some partially visible background window...

    • by Yvan256 ( 722131 )

      I think it means people with a single-core computer will only get a scrolling banner ad.

      People with a multi-core computer will get a scrolling banner ad with parallax scrolling backgrounds!

  • According to this page [infoq.com], RiverTrail "adds the ParallelArray data type to JavaScript [...] accessible by functions like combine [github.com], filter [github.com], map [github.com], reduce [github.com], etc. which perform work in parallel." Hope that saved you some searching.
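
    A rough sketch of what that API looks like in use, going by the function names in that description (exact elemental-function signatures may differ between add-on versions):

    ```javascript
    // ParallelArray sketch based on the linked description; signatures are approximate.
    var pa = new ParallelArray([1, 2, 3, 4, 5, 6, 7, 8]);

    var doubled = pa.map(function (x) { return x * 2; });            // elementwise, free to run across cores/SIMD lanes
    var sum     = doubled.reduce(function (a, b) { return a + b; }); // order of combination is left unspecified
    ```
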
  • by __aazsst3756 ( 1248694 ) on Friday September 16, 2011 @03:49PM (#37424264)

    Why should an application decide the best way to split a load over multiple cpu cores? How does it know what else is going on in the OS to balance this load? Shouldn't the OS handle this behind the scenes?

    • by CentTW ( 1882968 )

      Short answer: It doesn't work that way. Programs can only be split over multiple cores if they are designed to use those cores. Most programs aren't, because parallel code is harder to write and maintain, and the extra processing power often isn't necessary.

      Long answer: A program is a sequence of instructions that normally have to be run one after another to complete a task. Imagine a program designed to make a peanut butter and jelly sandwich. The highest level of the program might look something like this (the rest

      • Short answer: It doesn't work that way. Programs can only be split over multiple cores if they are designed to use those cores. Most programs aren't, because parallel code is harder to write and maintain, and the extra processing power often isn't necessary.

        Short reply: Most programs call system services and libraries to do specialized work. The internals of those system calls and library calls do not matter to the programmer, only their interface does. If the tasks they perform are sufficiently high level, then t

      • Short answer: It doesn't work that way. Programs can only be split over multiple cores if they are designed to use those cores.

        That's only true for some languages. Programs written in pure functional languages such as Haskell absolutely can be split across multiple cores by the compiler/runtime without being designed to be "multithreaded."

        • by brm ( 100455 )

          That's only true for some languages. Programs written in pure functional languages such as Haskell absolutely can be split across multiple cores by the compiler/runtime without being designed to be "multithreaded."

          On the other hand, pure functional languages such as Haskell often cannot be made to effectively use a bounded set of resources (such as a finite number of cores and memory).

      • You are thinking too imperatively. Use functional programming and automatic parallelization becomes possible.
    • Because only the application knows how to split its own inputs so that multiple worker threads can each work independently on an input chunk. The job of the OS is to figure out how much CPU time each worker gets, based on a variety of factors (such as thread and process priority/nice-ness). If you have multiple CPUs and they're not at 100% usage, this results in parallel processing of those inputs.
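
      As a minimal illustration of that split (the worker script name is hypothetical): the application slices its input into chunks, and the OS scheduler decides where and when each worker runs.

      ```javascript
      // Hypothetical application-side chunking: the app decides how to split its input,
      // the OS decides which core each worker runs on and for how long.
      function splitWork(items, workerCount, workerScript) {
        var chunkSize = Math.ceil(items.length / workerCount);
        for (var i = 0; i < workerCount; i++) {
          var chunk = items.slice(i * chunkSize, (i + 1) * chunkSize);
          var w = new Worker(workerScript); // e.g. "chunk-worker.js" (made-up name)
          w.postMessage(chunk);             // each worker handles its chunk independently
        }
      }
      ```
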
    • by tlhIngan ( 30335 )

      Why should an application decide the best way to split a load over multiple cpu cores? How does it know what else is going on in the OS to balance this load? Shouldn't the OS handle this behind the scenes?

      It depends on a lot of factors.

      Advantages for doing it in application space include the application knowing what it's doing, and if the OS is saying there's a shortage of CPU time, the application can decide what load to shed more effectively than submitting work and hoping it gets done on time.

      Disadvantag

  • Just to make sure I got this straight:

    Intel took one of the slowest interpreted languages, though the most popular one, and added parallel data primitives and functions. Then they used a pointless little particle fountain demo to show off its benefits.

    So rather than try to make Javascript execute faster, they spread its disease to all 8 cores. How is this an improvement? The last thing I want is for a web page to sap my CPU and battery life, doing things web pages should not be doing in the first place.

  • I have a couple of benchmarks I've run over the years, in multiple languages. For CPU-intensive jobs on the same machine:

    C: 1
    Java: 1.1-3
    JavaScript: 117

    Javascript is in IE8 on Win XP.

    Javascript on IE has restrictive time limits for execution, though there are workarounds.
    But if you have 8 cores, you're still 14.6x slower than C.

    IMO, Java, which already runs in the browser, is the better solution. That said, compared to C, Java also has huge memory requirements.
    It would make more sense to allow multi-core exe

    • by BZ ( 40346 )

      > Javascript is in IE8 on Win XP.

      Uh.... This is the same IE8 that doesn't have a JIT, right? Unlike every single browser actually shipping now?

      Here's a relevant graph: http://ie.microsoft.com/testdrive/benchmarks/sunspider/default.html [microsoft.com]

      It's a bit out of date, since all browsers have gotten faster since then, but it shows IE8 being about 18x slower than any modern browser on this particular benchmark. And this is a benchmark that hammers a lot on the VM (dates, regular expressions, etc), not the languag

    • That's interesting ... how long ago did you do this?

      I've spent the better part of 4 hours trying to get RiverTrail to compile for Linux with no luck. I'd love more compute grunt for web graphics (online scientific visualisation) and this seemed pretty good.

      Usually I use Java but really need access to the xmm intrinsics.

      Perhaps I'll forget about parallel javascript if it is really as bad as you say.
