It’s hard to wrap your brain around something this fast: a supercomputer that can manage 20 petaflops, or more than 20,000 trillion (that’s 20 quadrillion) floating point calculations per second.
By comparison, the U.S. has roughly 300 million people. If every American performed one floating point calculation per second, it would take more than 66 million copies of the U.S., all working in tandem, to equal 20 petaflops. That's breathtaking performance.
The Nvidia GT 650M graphics processor in the bleeding-edge Retina MacBook Pro I'm using to type this story manages a paltry 691 gigaflops. That's enough horsepower to run games like World of Warcraft or Portal 2 at more than respectable frame rates, but it's a fraction of a fraction of the vast, almost incalculable brawn of Titan, a new supercomputer with a theoretical peak of 20 petaflops that gets the lion's share of its oomph (90%, in fact) from over 18,000 Nvidia Tesla K20 GPUs.
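The comparisons above are easy to sanity-check. Here's a quick back-of-the-envelope sketch using only the figures quoted in this story (the performance numbers are the article's, not independent measurements):

```python
# Figures as quoted in the article
titan_flops = 20e15       # Titan's theoretical peak: 20 petaflops
us_population = 300e6     # rough U.S. population used in the analogy
laptop_gpu_flops = 691e9  # Nvidia GT 650M: 691 gigaflops

# Copies of the U.S. needed if every citizen did one calculation per second
copies = titan_flops / us_population
print(f"{copies / 1e6:.1f} million copies of the U.S.")  # 66.7 million copies of the U.S.

# How many times faster Titan's peak is than the laptop GPU
ratio = titan_flops / laptop_gpu_flops
print(f"Titan is ~{ratio:,.0f}x the laptop GPU")  # Titan is ~28,944x the laptop GPU
```

So "a fraction of a fraction" is literal: the laptop GPU delivers roughly 1/29,000th of Titan's theoretical peak.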
Titan just came online at the U.S. Department of Energy’s Oak Ridge National Laboratory. According to ORNL, Titan will be 10 times more powerful than its prior record-breaker, Jaguar, a Cray XT5 capable of just under 2 petaflops that came online in 2009.
Here’s a time-lapse video of Titan being assembled. You’re essentially watching a mammoth product upgrade, as Jaguar morphs into Titan: a Cray XK7 system pairing 16-core AMD Opteron 6274 processors with thousands of Nvidia Tesla K20 GPUs.
How much memory? Try over 700 terabytes. And it’s dramatically more efficient: According to ORNL, Titan occupies the same physical space as Jaguar and only uses “marginally more electricity.” Nvidia adds that if ORNL had simply upgraded Jaguar using CPUs, the supercomputer would be “four times its current size and consume more than 30 megawatts of power.”
That, according to Nvidia chief technology officer Steve Scott, is because Titan uses GPUs to “accelerate” the computing process.
“You simply can’t get these levels of performance, power- and cost-efficiency with conventional CPU-based architectures,” he says. “Accelerated computing is the best and most realistic approach to enable exascale performance levels within the next decade.”
What’ll Titan do? All kinds of stuff — Nvidia calls it an “open-science” supercomputer. That means projects like performing nanoscale analysis of metals and magnets that could help us build better motors and generators; creating more efficient, lower-emission fuels through sophisticated large-molecule hydrocarbon fuel modeling; simulating the behavior of neutrons in a nuclear reactor to extend the operating life of fuel rods; and, of course, simulating global weather.
The jury’s still out on Titan’s 20-petaflop title. That’ll be determined by the folks who maintain the official Top500 list of the world’s most powerful supercomputers. That list — Top500’s 40th to date — should be published shortly before the upcoming SC12 conference, held in Salt Lake City, Utah, Nov. 10-16, at the Calvin L. Rampton Salt Palace Convention Center.