United States

US Sets Plan To Build Two Exascale Supercomputers (computerworld.com) 59

dcblogs quotes a report from Computerworld: The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers -- costing roughly $200 million to $300 million each -- by 2019. The two systems will be built at the same time and be ready for use by 2023, although it's possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials. The U.S. will award the exascale contracts to vendors with two different architectures. But the scientists and vendors developing exascale systems do not yet know whether President-elect Donald Trump's administration will change direction. The incoming administration is a wild card. Supercomputing wasn't a topic during the campaign, and Trump's dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer. At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama's administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that "pointed-head geeks are not going to be well appreciated."
Supercomputing

A British Supercomputer Can Predict Winter Weather a Year In Advance (thestack.com) 177

The Met Office, the national weather service of the U.K., claims it can now predict the weather up to a year in advance. An anonymous reader quotes The Stack: The development has been made possible thanks to supercomputer technology granted by the UK Government in 2014. The £97 million high-performance computing facility has allowed researchers to increase the resolution of climate models and to test the retrospective skill of forecasts over a 35-year period starting from 1980... The forecasters claim that new supercomputer-powered techniques have helped them develop a system to accurately predict the North Atlantic Oscillation -- the climatic phenomenon which heavily impacts winters in the U.K.
The researchers apparently tested their system against 36 years' worth of historical data, and reported proudly that they could predict winter weather a year in advance -- with 62% accuracy.
Australia

Quantum Researchers Achieve 10-Fold Boost In Superposition Stability (thestack.com) 89

An anonymous reader quotes The Stack: A team of Australian researchers has developed a qubit offering ten times the stability of existing technologies. The computer scientists claim that the innovation could significantly increase the reliability of quantum computing calculations... The new technology, developed at the University of New South Wales, has been named a 'dressed' quantum bit, as it combines a single atom with an electromagnetic field. This process allows the qubit to remain in a superposition state ten times longer than has previously been achieved. The researchers argue that this extra time in superposition could boost the performance stability of quantum computing calculations... Superposition states have previously been fragile and short-lived, and retaining them has been one of the major barriers to the development of quantum computing. The ability to remain in two states simultaneously is the key to scaling and strengthening the technology further.
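For intuition, here is the textbook picture of what is being stabilized (illustrative notation, not taken from the UNSW paper): a qubit holds a weighted combination of its two basis states, and the useful lifetime of that superposition is characterized by the coherence time T_2.

    % A qubit in superposition (normalized amplitudes alpha, beta):
    \[
      |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
      \qquad |\alpha|^2 + |\beta|^2 = 1
    \]
    % Phase coherence decays with characteristic time T_2, so a tenfold
    % longer T_2 buys roughly ten times as many coherent operations:
    \[
      C(t) \propto e^{-t/T_2}
    \]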
Do you ever wonder what the world will look like when everyone has their own personal quantum computer?
Hardware

Fujitsu Picks 64-Bit ARM For Post-K Supercomputer (theregister.co.uk) 30

An anonymous reader writes: At the International Supercomputing Conference 2016 in Frankfurt, Germany, Fujitsu revealed its Post-K machine will run on the ARMv8 architecture. The Post-K machine is supposed to have 100 times the application performance of the K supercomputer -- which would make it a 1,000 PFLOPS beast -- and is due to go live in 2020. The K machine is the fifth-fastest known super in the world: it crunches 10.5 PFLOPS, needs 12MW of power, and is built out of 705,000 Sparc64 VIIIfx cores. InfoWorld has more details.
China

China Builds World's Fastest Supercomputer Without U.S. Chips (computerworld.com) 247

Reader dcblogs writes: China on Monday revealed its latest supercomputer, a monolithic system with 10.65 million compute cores built entirely with Chinese microprocessors. This follows a U.S. government decision last year to deny China access to Intel's fastest microprocessors. There is no U.S.-made system that comes close to the performance of China's new system, the Sunway TaihuLight. Its theoretical peak performance is 125.4 petaflops (its Linpack result is 93 petaflops), according to the latest biannual release of the Top500 list of the world's supercomputers. It has long been known that China was developing a 100-plus-petaflop system, and it was believed that China would turn to U.S. chip technology to reach this performance level. But just over a year ago, in a surprising move, the U.S. banned Intel from supplying Xeon chips to four of China's top supercomputing research centers. The U.S. initiated this ban because China, it claimed, was using its Tianhe-2 system for nuclear explosive testing activities. The U.S. stopped live nuclear testing in 1992 and now relies on computer simulations. Critics in China suspected the U.S. was acting to slow that nation's supercomputing development efforts. There has been nothing secretive about China's intentions. Researchers and analysts have been warning all along that U.S. exascale development (one exaflop is 1,000 petaflops), supercomputing's next big milestone, was lagging.
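A quick back-of-the-envelope check on those figures (a sketch using only the numbers quoted above):

    # TaihuLight, by the numbers quoted above
    peak_pflops = 125.4      # theoretical peak, in petaflops
    linpack_pflops = 93.0    # sustained Linpack result
    cores = 10_650_000       # 10.65 million compute cores

    print(f"Linpack efficiency: {linpack_pflops / peak_pflops:.0%}")  # ~74%
    print(f"Peak per core: {peak_pflops * 1e6 / cores:.1f} GFLOPS")   # ~11.8

The modest per-core figure is the point: TaihuLight gets its throughput from breadth rather than from fast individual cores.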
Power

Utility Targets Bitcoin Miners With Power Rate Hike (datacenterfrontier.com) 173

1sockchuck writes: A public utility in Washington state wants to raise rates for high-density power users, citing a flood of requests for electricity to power bitcoin mining operations. Chelan County has some of the cheapest power in the nation, supported by hydroelectric generation from dams along the Columbia River. That got the attention of bitcoin miners, prompting requests to provision 220 megawatts of additional power. After a one-year moratorium, the Chelan utility now wants to raise rates for high-density users (those drawing more than 250 kilowatt-hours per square foot per year) from 3 cents to 5 cents per kilowatt-hour. Bitcoin businesses say the rate hike is discriminatory. But Chelan officials cite the transient nature of the bitcoin business as a risk to recovering their costs for provisioning new power capacity.
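For a sense of scale, here is what the proposed increase would mean for a miner's power bill (a sketch; the 1 MW load is hypothetical, the rates are from the story):

    old_rate, new_rate = 0.03, 0.05   # dollars per kWh, per the proposal
    load_mw = 1.0                     # hypothetical 1 MW mining operation
    hours_per_month = 730             # average hours in a month

    kwh = load_mw * 1000 * hours_per_month
    print(f"At 3 cents: ${kwh * old_rate:,.0f}/month")   # ~$21,900
    print(f"At 5 cents: ${kwh * new_rate:,.0f}/month")   # ~$36,500
    print(f"Increase: {new_rate / old_rate - 1:.0%}")    # 67%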
Classic Games (Games)

Computer Beats Go Champion 149

Koreantoast writes: Go (weiqi), the ancient Chinese board game, has long been held up as one of the more difficult, unconquered challenges facing AI scientists... until now. Google DeepMind researchers, led by David Silver and Demis Hassabis, developed a new algorithm called AlphaGo, enabling the computer to soundly defeat European Go champion Fan Hui five games to zero. Played on a 19x19 board, Go presents players with more than 300 possible moves per turn to consider, creating a huge number of potential scenarios and a tremendous computational challenge. All is not lost for humanity yet: AlphaGo is scheduled to face off in March against Lee Sedol, considered one of the best Go players in recent history, in a match compared to the Kasparov-Deep Blue duels of previous decades.
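To see why that search space defeats brute force, a rough count using standard figures (an illustration, not numbers from the DeepMind paper): with a branching factor of roughly 250 moves and games running about 150 moves, the game tree is astronomically large.

    import math

    branching, depth = 250, 150
    print(f"~10^{depth * math.log10(branching):.0f} possible games")  # ~10^360

    # Upper bound on board configurations: each of the 361 points is
    # empty, black, or white.
    print(f"~10^{361 * math.log10(3):.0f} placements")                # ~10^172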
Math

Finally Calculated: All the Legal Positions In a 19x19 Game of Go (github.io) 117

Reader John Tromp points to an explanation posted at GitHub of a computational challenge he coordinated, one that makes a nice companion to the recent discovery of a 22 million-digit Mersenne prime. A distributed effort, using pooled computers from two centers at Princeton plus more contributed from the HP Helion cloud, calculated the number of legal positions in a 19x19 game of Go after "many hiccups and a few catastrophes." Simple as the Go board's layout is, the permutations allowed by the rules are anything but simple to calculate: "For running an L19 job, a beefy server with 15TB of fast scratch diskspace, 8 to 16 cores, and 192GB of RAM, is recommended. Expect a few months of running time." More: Large numbers have a way of popping up in the game of Go. Few people believe that a tiny 2x2 Go board allows for more than a few hundred games. Yet 2x2 games number not in the hundreds, nor in the thousands, nor even in the millions. They number in the hundreds of billions! 386,356,909,593 to be precise. Things only get crazier as you go up in board size. A lower bound of 10^(10^48) on the number of 19x19 games, as proved in our paper, was recently improved to a googolplex. (For anyone who wants to double-check the work, Tromp has posted the software used as open source.)
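The full L19 computation needed Tromp's far more scalable transfer-matrix method and the server-class hardware described above, but the counting problem itself is easy to brute-force on tiny boards, which makes the definition concrete. A minimal sketch (not Tromp's algorithm): a placement of stones is legal iff every chain of connected same-colored stones touches at least one empty point (a liberty).

    from itertools import product

    def neighbors(i, j, n):
        # Orthogonally adjacent on-board points of (i, j).
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                yield ni, nj

    def is_legal(board, n):
        # Legal iff every chain of same-colored stones has a liberty.
        seen = set()
        for i in range(n):
            for j in range(n):
                if board[i][j] == 0 or (i, j) in seen:
                    continue
                color = board[i][j]
                chain, stack, has_liberty = {(i, j)}, [(i, j)], False
                while stack:                         # flood-fill the chain
                    ci, cj = stack.pop()
                    for ni, nj in neighbors(ci, cj, n):
                        if board[ni][nj] == 0:
                            has_liberty = True
                        elif board[ni][nj] == color and (ni, nj) not in chain:
                            chain.add((ni, nj))
                            stack.append((ni, nj))
                if not has_liberty:
                    return False
                seen |= chain
        return True

    def count_legal(n):
        # Enumerate all 3^(n*n) placements: 0 = empty, 1 = black, 2 = white.
        count = 0
        for cells in product((0, 1, 2), repeat=n * n):
            board = [list(cells[r * n:(r + 1) * n]) for r in range(n)]
            if is_legal(board, n):
                count += 1
        return count

    print(count_legal(2))   # 57 of the 3^4 = 81 placements are legal
    print(count_legal(3))   # 12675

Enumeration dies fast -- the full board has 3^361 placements -- which is why the 19x19 count (about 2.081 x 10^170 legal positions) required the months-long distributed run described above.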
Math

New Mersenne Prime Discovered, Largest Known Prime Number: 2^74,207,281 - 1 (mersenne.org) 132

Dave Knott writes: The Great Internet Mersenne Prime Search (GIMPS) has discovered a new largest known prime number, 2^74,207,281-1, having 22,338,618 digits. The same GIMPS software recently uncovered a flaw in Intel's latest Skylake CPUs, and its global network of CPUs, peaking at 450 trillion calculations per second, remains the longest continuously running "grassroots supercomputing" project in Internet history. The new prime, a member of a special class of extremely rare primes known as Mersenne primes, is almost 5 million digits larger than the previous record prime. It is only the 49th Mersenne prime ever discovered, and each one is increasingly difficult to find.
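The digit count follows directly from the exponent: 2^p has floor(p*log10(2)) + 1 decimal digits, and subtracting 1 never changes the count because a power of two is never a power of ten.

    import math

    p = 74_207_281
    print(math.floor(p * math.log10(2)) + 1)   # 22338618 digits, as quoted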
Businesses

Uber Scaling Up Its Data Center Infrastructure (datacenterfrontier.com) 33

1sockchuck writes: Connected cars generate a lot of data. That's translating into big business for data center providers, as evidenced by a major data center expansion by Uber, which needs more storage and compute power to support its global data platform. Uber drivers' mobile phones send location updates every 4 seconds, which is why the design goal for Uber's geospatial index is to handle a million writes per second. It's a reminder that as our cars become mini data centers, the data isn't staying onboard; it's also being offloaded to the data centers of automakers and software companies.
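Those two numbers line up neatly (a back-of-the-envelope reading, not Uber's published capacity planning): at one update per driver every 4 seconds, a million writes per second covers about four million concurrently active drivers.

    update_interval_s = 4              # each driver phone reports every 4 s
    target_writes_per_s = 1_000_000    # geospatial index design goal

    print(f"{target_writes_per_s * update_interval_s:,} concurrent drivers")
    # 4,000,000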
Supercomputing

Seymour Cray and the Development of Supercomputers (linuxvoice.com) 54

An anonymous reader writes: Linux Voice has a nice retrospective on the development of the Cray supercomputer. Quoting: "Firstly, within the CPU, there were multiple functional units (execution units forming discrete parts of the CPU) which could operate in parallel; so it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next. It also had an instruction cache of sorts to reduce the time the CPU spent waiting for the next instruction fetch result. Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time." They also discuss modern efforts to emulate the old Crays: "...what Chris wanted was real Cray-1 software: specifically, COS. Turns out, no one has it. He managed to track down a couple of disk packs (vast 10lb ones), but then had to get something to read them; in the end he used an impressive home-brew robot solution to map the information, but that still left deciphering it. A Norwegian coder, Yngve Ådlandsvik, managed to play with the data set enough to figure out the data format and other bits and pieces, and wrote a data recovery script."
Security

Quantum Computer Security? NASA Doesn't Want To Talk About It (csoonline.com) 86

itwbennett writes: At a press event at NASA's Advanced Supercomputer Facility in Silicon Valley on Tuesday, the agency was keen to talk about the capabilities of its D-Wave 2X quantum computer. 'Engineers from NASA and Google are using it to research a whole new area of computing — one that's years from commercialization but could revolutionize the way computers solve complex problems,' writes Martyn Williams. But when questions turned to the system's security, a NASA moderator quickly shut things down [VIDEO], saying the topic was 'for later discussion at another time.'
Supercomputing

Google Finds D-Wave Machine To Be 10^8 Times Faster Than Simulated Annealing (blogspot.ca) 157

An anonymous reader sends this report from the Google Research blog on the effectiveness of D-Wave's 2X quantum computer: We found that for problem instances involving nearly 1000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than 10^8 times faster than simulated annealing running on a single core. We also compared the quantum hardware to another algorithm called Quantum Monte Carlo. This is a method designed to emulate the behavior of quantum systems, but it runs on conventional processors. While the scaling with size between these two methods is comparable, they are again separated by a large factor, sometimes as high as 10^8. A more detailed paper is available at the arXiv.
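For readers unfamiliar with the classical baseline in that comparison, here is a minimal simulated-annealing sketch for the same kind of problem -- minimizing an Ising energy over binary spin variables -- using single-spin-flip Metropolis updates under a cooling schedule. (An illustration of the technique, not Google's benchmark code.)

    import math
    import random

    def anneal(h, J, steps=50_000, t_hot=2.0, t_cold=0.05, seed=1):
        # Minimize E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[(i,j)]*s[i]*s[j]
        # over spins s[i] in {-1, +1}.
        rng = random.Random(seed)
        n = len(h)
        nbrs = {i: [] for i in range(n)}   # adjacency, for fast local fields
        for (i, j), c in J.items():
            nbrs[i].append((j, c))
            nbrs[j].append((i, c))
        s = [rng.choice((-1, 1)) for _ in range(n)]
        for step in range(steps):
            t = t_hot * (t_cold / t_hot) ** (step / steps)  # geometric cooling
            i = rng.randrange(n)
            field = h[i] + sum(c * s[j] for j, c in nbrs[i])
            delta = -2 * s[i] * field      # energy change if spin i flips
            # Metropolis rule: accept improvements always, worse moves sometimes
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                s[i] = -s[i]
        return s

    # Toy instance: 4 spins on a ring with antiferromagnetic couplings;
    # the two alternating configurations are the ground states.
    h = [0.0] * 4
    J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
    print(anneal(h, J))   # e.g. [1, -1, 1, -1]

Quantum annealing attacks the same energy landscape, but uses quantum tunneling rather than thermal fluctuations to escape local minima.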
Intel

Intel Launches 72-Core Knight's Landing Xeon Phi Supercomputer Chip (hothardware.com) 179

MojoKid writes: Intel announced a new version of its Xeon Phi line-up today, otherwise known as Knight's Landing. Whatever you want to call it, the pre-production chip is a 72-core coprocessor solution manufactured on a 14nm process with 3D Tri-Gate transistors. The family of coprocessors is built around Intel's MIC (Many Integrated Core) architecture, which is itself part of a larger PCI-E add-in card solution for supercomputing applications. Knight's Landing succeeds the current version of Xeon Phi, codenamed Knight's Corner, which has up to 61 cores. The new Knight's Landing chip ups the ante with double-precision performance exceeding 3 teraflops and over 8 teraflops of single-precision performance. It also has 16GB of on-package MCDRAM memory, which Intel says is five times more power efficient than GDDR5 and three times as dense.
Math

'Shrinking Bull's-eye' Algorithm Speeds Up Complex Modeling From Days To Hours (mit.edu) 48

rtoz sends word of a new algorithm that dramatically reduces the computation time for complex processes. Scientists from MIT say it conceptually resembles a shrinking bull's-eye, incrementally narrowing in on its target. "With this method, the researchers were able to arrive at the same answer as classic computational approaches, but 200 times faster." Their full academic paper is available at the arXiv. "The algorithm can be applied to any complex model to quickly determine the probability distribution, or the most likely values, for an unknown parameter. Like the MCMC analysis, the algorithm runs a given model with various inputs -- though sparingly, as this process can be quite time-consuming. To speed the process up, the algorithm also uses relevant data to help narrow in on approximate values for unknown parameters."
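As a reference point for the "classic" MCMC analysis the MIT team benchmarked against, here is a minimal Metropolis sampler for a single unknown parameter: propose a perturbation, run the model, and accept or reject based on the fit to data. (An illustration of plain MCMC with a toy stand-in model, not the bull's-eye algorithm itself.)

    import math
    import random

    def model(theta, x):
        # Toy stand-in; in practice this is the expensive simulation,
        # which is exactly why MCMC's many model runs hurt.
        return theta * x

    def log_likelihood(theta, xs, ys, noise=0.5):
        # Gaussian measurement noise around the model output, flat prior.
        return sum(-(y - model(theta, x)) ** 2 / (2 * noise ** 2)
                   for x, y in zip(xs, ys))

    def metropolis(xs, ys, steps=10_000, step_size=0.2, seed=1):
        rng = random.Random(seed)
        theta, ll = 0.0, log_likelihood(0.0, xs, ys)
        samples = []
        for _ in range(steps):
            prop = theta + rng.gauss(0.0, step_size)
            ll_prop = log_likelihood(prop, xs, ys)
            # Accept improvements always, worse proposals with prob e^(delta ll)
            if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
                theta, ll = prop, ll_prop
            samples.append(theta)
        return samples

    # Synthetic data from y = 2x plus noise; the posterior should
    # concentrate near theta = 2.0.
    xs = [0.5, 1.0, 1.5, 2.0, 2.5]
    ys = [1.1, 2.0, 2.9, 4.2, 4.8]
    samples = metropolis(xs, ys)
    tail = samples[len(samples) // 2:]   # discard burn-in
    print(sum(tail) / len(tail))         # ~2.0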
Earth

NASA's Hurricane Model Resolution Increases Nearly 10-Fold Since Katrina 89

zdburke writes: Thanks to improvements in satellites and on-the-ground computing power, NASA's ability to model hurricane data has come a long way in the ten years since Katrina devastated New Orleans. Their blog notes, "Today's models have up to ten times the resolution of those used during Hurricane Katrina and allow for a more accurate look inside the hurricane. Imagine going from video game figures made of large chunky blocks to detailed human characters that visibly show beads of sweat on their forehead." Gizmodo covered the post too and added some technical details, noting that "the supercomputer has more than 45,000 processor cores and runs at 1.995 petaflops."
AI

IBM 'TrueNorth' Neuro-Synaptic Chip Promises Huge Changes -- Eventually 97

JakartaDean writes: Each of IBM's "TrueNorth" chips contains 5.4 billion transistors and runs on 70 milliwatts. The chips are designed to behave like neurons -- the basic building blocks of biological brains. Dharmendra Modha, the head of IBM's cognitive computing group, says a system of 24 connected chips simulates 48 million neurons, roughly the same number rodents have.

Whereas conventional chips are wired to execute particular "instructions," the TrueNorth juggles "spikes," much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone's voice as they speak -- or changes in color from pixel to pixel in a photo. "You can think of it as a one-bit message sent from one neuron to another," says one of the chip's chief designers. The chips are designed not for training neural networks, but for executing them. This has significant implications for consumer AI: big companies with lots of resources could focus on the training, while individual TrueNorth chips in people's gadgets handle the execution.
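To make "spikes as one-bit messages" concrete, here is a minimal leaky integrate-and-fire neuron, the textbook abstraction behind spiking chips of this kind (an illustration, not IBM's actual neuron model):

    def lif_neuron(inputs, threshold=1.0, leak=0.9):
        # Accumulate input into a membrane potential that leaks each step;
        # emit a one-bit spike and reset when the potential crosses threshold.
        v, out = 0.0, []
        for x in inputs:
            v = leak * v + x
            if v >= threshold:
                out.append(1)
                v = 0.0
            else:
                out.append(0)
        return out

    print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.9, 0.3]))   # [0, 0, 1, 0, 0, 1]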
Math

How Weather Modeling Gets Better 43

Dr_Ish writes: Bob Henson over at Weather Underground has posted a fascinating discussion of the recent improvements made to the major weather models that are used to forecast hurricanes and the like. The post also includes interesting links that explain more about the models. Quoting: "The latest version of the ECMWF model, introduced in May, has significant changes to model physics and the ways in which observations are brought into and used within the model. The overall improvements include better portrayal of clouds and precipitation, including a more accurate depiction of intense rainfall. The main effect of the model upgrade for tropical cyclones is slightly lower central pressure. During the first 3 days of a forecast, the ECMWF has tended to have a slight weak bias on tropical cyclones; the new version is closer to the mark."
Supercomputing

Obama's New Executive Order Says the US Must Build an Exascale Supercomputer 223

Jason Koebler writes: President Obama has signed an executive order authorizing a new supercomputing research initiative with the goal of creating the fastest supercomputers ever devised. The National Strategic Computing Initiative, or NSCI, will attempt to build the first ever exascale computer, 30 times faster than today's fastest supercomputer. Motherboard reports: "The initiative will primarily be a partnership between the Department of Energy, Department of Defense, and National Science Foundation, which will be designing supercomputers primarily for use by NASA, the FBI, the National Institutes of Health, the Department of Homeland Security, and NOAA. Each of those agencies will be allowed to provide input during the early stages of the development of these new computers."
Australia

Cray To Build Australia's Fastest Supercomputer 54

Bismillah writes: US supercomputer vendor Cray has scored the contract to build the Australian Bureau of Meteorology's new system, said to be capable of 1.6 petaFLOPS and with an upgrade option in three years' time to hit 5 petaFLOPS. From the iTnews story: "The increase in capacity will allow the BoM to deal with growth in the 1TB of data it collects every day, which it expects to increase by 30 percent every 18 months to two years. It will also allow the agency to collect new areas of information it previously lacked the capacity for. 'The new observation platforms that are coming online are bringing quite a lot more data,' supercomputer program director Tim Pugh told iTnews."
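That growth rate compounds quickly, which is what drives the three-year upgrade option (a sketch using the story's figures, taking the faster 18-month end of the quoted range):

    daily_tb = 1.0       # current collection: 1TB per day
    growth = 1.30        # +30 percent per period
    months = 18          # one growth period

    for years in (3, 6, 9):
        periods = years * 12 / months
        print(f"Year {years}: {daily_tb * growth ** periods:.1f} TB/day")
    # Year 3: 1.7 TB/day, Year 6: 2.9 TB/day, Year 9: 4.8 TB/day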
