The ego cluster is 1950 blades, each with 16 cores and 64 GB of memory, on 4x DDR InfiniBand and gigabit Ethernet networks. Every blade is built around a TYAN motherboard with four sockets of AMD Opteron 8347 HE (Barcelona B3) CPUs clocked at 1.9 GHz (nope, not the fastest by any means). Dawning, AMD, and Mellanox. Fat boxes, fat pipes, thick glue. Performance tuning. LINPACK benchmark. Fingers crossed. The next Top500 list comes out at SC08 in Austin.
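Quick sanity check on those specs. Here's a back-of-envelope sketch (mine, not from the official submission) of the cluster's theoretical peak, assuming the usual Barcelona-generation figure of 4 double-precision flops per core per cycle:

    # Theoretical peak (Rpeak) from the specs above.
    blades = 1950
    cores_per_blade = 16            # 4 sockets x 4 cores
    clock_ghz = 1.9
    flops_per_cycle = 4             # assumed for Barcelona-class Opterons

    cores = blades * cores_per_blade
    rpeak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
    print(f"{cores} cores, Rpeak ~ {rpeak_tflops:.1f} Tflops")   # ~237.1

The measured LINPACK number is what counts, though; divide it by that peak and you get the efficiency figure the Top500 judges actually look at.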
The Top500 List was created by a wise old group of elders, bent and gnomish, with hooded eyes and long white beards. Ahem, you see, it was these Founding Fathers of the Top500 List who decided that LINPACK would be the best way to rank supercomputers. Or else it was an influential user base who championed the LINPACK test through its early days and convinced everyone else to accept it as a de facto standard. Either way, LINPACK performance numbers remain relevant, and you can get them on most medium to large systems. However, and yet, please-do-keep-in-mind: HPC applications show much more complex behavior than LINPACK, so the benchmark doesn't give a great indication of real-world performance. That's right...
It's like engine torque on a dynamometer. The bench test will almost always score higher than your midnight run down Main Street. Or else it's like the small-print disclaimer for an attention-deficit drug:
*The result of the LINPACK test is not intended to reflect the overall performance of a system; instead, it provides a measure of the performance of a dedicated system for solving a specific set of linear equations. Since the LINPACK problem is very regular, the performance achieved is quite high; the performance numbers give a good indication of peak performance and provide an idea of how much floating-point performance is theoretically possible from a given system.*
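To make the fine print concrete, here's a toy single-node sketch (mine, not the actual HPL code) of what the benchmark essentially does: solve a dense n x n linear system, then credit yourself the standard LINPACK operation count of 2n^3/3 + 2n^2 flops:

    import time
    import numpy as np

    n = 4096                          # toy size; real runs use far larger n
    A = np.random.rand(n, n)          # dense random system, as in the benchmark
    b = np.random.rand(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)         # LU factorization plus triangular solves
    elapsed = time.perf_counter() - t0

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard LINPACK flop count
    print(f"n={n}: {elapsed:.2f} s, {flops / elapsed / 1e9:.1f} Gflops")
    print("residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))

The residual check at the end matters: HPL fails a run whose answer isn't numerically sound, which is part of why the benchmark doubles as a cluster validation tool.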
But LINPACK is solid, LINPACK is reliable, LINPACK is deserving of some serious reverence. The LINPACK benchmark gives you that stable and enduring historical yardstick which has always eluded Major League Baseball. A year ago we did a Top500 run on our internal Rainier cluster and reached 11.75 Tflops. One short year ago. Today the Dawning cluster reached 180.6 Tflops. More than 15x higher. And the judges at Top500 have LINPACK to make sure everyone takes their home run swings on the same playing field. At SC08 we'll see how we've done against history. If you can't make it to Austin, check out the cool video: https://www.yousendit.com/download/Y2o4WGJIcVhRWUtGa1E9PQ
__________________________________________________________________________________
Heard about The Lizard? What makes The Li (npack Tuning Wi) zard so special is that it's going public. Not right away, not immediately, but just as soon as Frank gets it tweaked and polished. The Lizard automates a good many of the procedures needed for a Top500 run, including validating the cluster and optimizing for LINPACK. Pretty soon anyone (we-ell, any slick IT Pro with too many MCSE or MCSA certifications on their Wall of Fame) will be able to benchmark a cluster.
A shameless product plug, sure. But how's it any different than an NBA point guard snapping out his jersey number after lofting up the fast-break oop for a tomahawk dunk? We're in the game, we're playing team ball, we're loving our work.
For now the Lizard still takes a backseat to the traditional methods of manual tuning, but an early test adopter in the US, R-Systems, has been making some bold predictions: "The Lizard is a thing of beauty. It incorporates the undocumented wisdom of Linpack experts to 'dial in' clusters and help validate them. I expect the efficiency ratings on the Top500 list will look very good for the Windows HPC 2008 systems."
HPC is and always will be rocket science. Just ask AI Solutions, a little mission-design outfit in Maryland: "NASA wanted us to analyze the decay rate of debris from a destroyed Chinese weather satellite, and its impact on NASA spacecraft over the next 20 years. Without supercomputers we'd have been waiting for results for a month or more, but with Windows HPC Server 2008 we completed the analysis in three days."
HPC is and always will be pushing boundaries. EVE Online is the world's largest Massively Multiplayer Online Game, hosting 50,000 users in a single environment. Not sure there's any other MMOG out there that can do that, but CCP Games wants to go farther still. They're using Windows HPC to take virtual worlds into the next century.
This year our Many Faces of Go won the 2008 computer Go championship in China, beating the reigning champ, MoGo, even though MoGo had more processing power. Maybe it was all due to Surface. Picture it: those shiny stones on a touch-screen Go board of 19x19 intersections, with 200 or so legal moves per position as opposed to the 35 in chess. Go experts were consulted in the creation of the UI, and hundreds of details were analyzed to ensure it remained true to the game's long tradition, but you really don't need a Surface box. Any standard Go frontend that speaks the Go Text Protocol can be used to play against the HPC cluster. WHPC users can visit Smart-Games' website, download the parallel version of the game, and run it on their cluster for diagnostic purposes or just for pure fun! (Hmmm, sounds familiar, wasn't that how LINPACK got started?)
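If you're wondering what "speaks the Go Text Protocol" means in practice, here's a minimal sketch of a GTP session driven from a script. The engine path and flag are placeholders, not the actual Many Faces of Go binary; the GTP commands themselves (boardsize, clear_board, play, genmove) are standard:

    import subprocess

    # Placeholder: substitute the path to your GTP-speaking Go engine.
    engine = subprocess.Popen(
        ["./go_engine", "--gtp"],     # hypothetical binary and flag
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )

    def gtp(command):
        # GTP is line-oriented: responses start with "=" (ok) or "?" (error)
        # and are terminated by a blank line.
        engine.stdin.write(command + "\n")
        engine.stdin.flush()
        lines = []
        while True:
            line = engine.stdout.readline().rstrip("\n")
            if line == "":
                break
            lines.append(line)
        return "\n".join(lines)

    print(gtp("boardsize 19"))        # the full 19x19 board
    print(gtp("clear_board"))
    print(gtp("play black Q16"))      # your move
    print(gtp("genmove white"))       # let the engine (or the cluster) answer

Any frontend that can hold up its end of that exchange can sit in front of the cluster, Surface or not.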
____________________________________________________________________________________
All right, let's talk business, commercial users, economic news you can use. Let's talk Dell. Drop in the box. Preconfigured. Factory-preinstalled Windows HPC Server 2008 on Dell PowerEdge nodes, just in time for Thanksgiving. Raise a glass. Say no more. Moment of silence.
And how about MathWorks? Those preinstalled Dell clusters come with install instructions for MATLAB. Life just got better for umpteen million HPC users in academia and government.
Ansys optimizes their software for HPC. They've gotten serious performance gains on Windows HPC Server 2008, they're giving their customers more capacity and faster turnaround time, but what they're really eyeballing are new ways to help engineers work with ever-increasing data sets -- which is exactly the same problem facing so many big organizations: the data deluge is a tidal wave already.
Cray's CX1, tell me you've seen it. Like it was designed by Frank Lloyd Wright, or maybe Mies van der Rohe, less is more, form follows function, that is one gorgeous deskside cluster. And Cray's giving away a CX1 in their sweepstakes with us. Pay attention: it ends Jan 21st, eleven days before the Super Bowl.
IBM is offering test drives. They're running Windows HPC Server 2008 on their global network of on-demand supercomputing centers. Log in, buckle up, take a ride in a supercomputer (which reminds me, anyone paying attention to the news out of Ferrari these days?). "IBM's On Demand Centers are an effective way for new users to tap into the power of supercomputers," said Steve Remondi, CEO of Exa Corp., Burlington, Mass. "Many of our customers have never used supercomputers before, but they immediately realize that high-performance computing offers a competitive edge."
ISVs: streamline your HPC development and deployment. HP, Dell, Cray, and Viglen all offer a variety of discounted hardware, as well as Windows HPC Server 2008 certification programs. Test your server apps on optimized clusters, let those big guys do the heavy lifting, broaden your reach and scale.
HPC is and always will be the next big thing. Or so the old joke goes. From the days of vector processing, symmetric multiprocessors, and "MPP" offerings, HPC has been a fascinating technology that never quite translated outside the confines of top-level science, engineering, and research. The environment was complex, parallel programming was difficult, the ecosystem was highly fragmented. But all that's changing fast. If you want a preview of coming attractions, a good sneak peek at the future, take a look at Windows HPC Server 2008.
Tim Carroll
Product Manager