GDT::Grid::Utilitarian::Archive::Year 2008

Grid Utilitarian
Cyberinfrastructure Needs To Be Built Now
Fran Berman, director of the San Diego Supercomputer Center at the University of California, San Diego, says creating a cyberinfrastructure (CI) is essential for future research advancement and discovery.
   "Fundamental to modern research, education, work, and life, 
    CI has the potential to overcome the barriers of geography, 
    time, and individual capability to create new paradigms and 
    approaches, to catalyze invention, innovation, and discovery, 
    and to deepen our understanding of the world around us." 

   "Both research and education initiatives will be critical to 
    ensuring that the academic community can conduct 21st century 
    research and education with 21st century tools and infrastructure."

[06 December 2008, top]

Roadrunner Beats Jaguar by 0.046 Petaflops
The following was copied on 14 November 2008.
   "The 32nd edition of the closely watched list of the world's 
    TOP500 supercomputers has just been issued, with the 1.105 
    petaflop/s IBM supercomputer at Los Alamos National Laboratory 
    holding on to the top spot it first achieved in June 2008."

   "The Los Alamos system, nicknamed Roadrunner, was slightly enhanced 
    since June and narrowly fended off a challenge by the Cray XT5 
    supercomputer at Oak Ridge National Laboratory called Jaguar. 
    The system, only the second to break the petaflop/s barrier, 
    posted a top performance of 1.059 petaflop/s in running the 
    Linpack benchmark application. One petaflop/s represents one 
    quadrillion floating point operations per second."
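The margin in the headline is easy to sanity-check with a couple of lines of arithmetic. Here is an illustrative sketch (the variable names are mine, not from the TOP500 list):

```python
# One petaflop/s is one quadrillion (10**15) floating point operations per second.
PETA = 10**15

roadrunner = 1.105 * PETA  # Linpack result from the November 2008 TOP500 list
jaguar = 1.059 * PETA

# Roadrunner's winning margin, expressed in petaflop/s
margin = (roadrunner - jaguar) / PETA
print(f"{margin:.3f} petaflop/s")  # 0.046, as in the headline
```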

[Extra] It was reported that 290 of the top 500 supercomputers belonged to the United States. The UK had 45, France had 26, and Germany was fourth with 25. Japan and China had 18 and 15, respectively. Russia had 8.

[22 November 2008, top]

Rocks+ For Linux on Cray Deskside Supercomputer
Supercomputing on the desktop is coming.
   "Cray and Clustercorp announced the immediate availability of 
    the Cray CX1 deskside supercomputer preloaded with Rocks+ 5, 
    the commercial version of the Rocks Cluster Distribution for 
    Linux users. The joint solution is fully certified as Intel
    Cluster Ready and ships with Intel Cluster Checker preloaded 
    and pre-tuned."

Prices for the Cray CX1 supercomputer start at $25,000.

   "Intel collaborated closely with Cray and Clustercorp to put 
    together a seamless Intel Cluster Ready solution," said Richard 
    Dracott, Intel's general manager of High Performance Computing. 

Here's another quote from the Cray press release.

   "We are delighted to see Clustercorp and Cray collaborate on 
    the Linux version of the Cray CX1," said Philip Papadopoulos, 
    PhD, head of the Rocks project at the University of California, 
    San Diego. "Cray has long been an innovator in our industry; 
    it is exciting to see one of the original leaders in traditional 
    supercomputing team up with one of the driving forces behind the 
    Linux-based commodity cluster movement."

Rocks® is a registered trademark of the Regents of the University of California and "Rocks+ includes software developed by the Rocks Cluster Group at the San Diego Supercomputer Center at the University of California, San Diego and its contributors."

[Extra] Speaking of Cray Inc.... Cray has been "named one of the 'Top Five Vendors to Watch in 2009' by both the readers and the editors of HPCwire as part of the publication's 2008 Readers' and Editors' Choice Awards." Kudos to Cray!

[22 November 2008, top]

Looming Crisis? The 'Digital Dark Age'
It takes a lot of bits to represent data and data about that data (i.e. information).
   "A 'digital dark age' may be an unintended consequence of our 
    rapidly digitizing world, warns University of Illinois at 
    Urbana-Champaign professor Jerome P. McDonough. McDonough 
    says the issue of a potential digital dark age originates 
    from the massive amount of data created by the rise of the 
    information economy, which at last count contained 369 exabytes 
    of data, including electronic records, tax files, email, music, 
    and photos."

I agree with Professor McDonough when he states: "If we can't keep today's information alive for future generations we will lose a lot of our culture."

'Digital Dark Age' may doom some data

[01 November 2008, top]

TGen's Saguaro 2 at ASU is a Supercomputer
I posted the following to my AzFoo blog.

I've posted a lot about high-performance computing and a couple of the postings have been about ASU's High-Performance Computing Initiative (HPCI).

I wanted to extend a Thank You to the Republic's Ken Alltucker for his 29 October 2008 posting about the Saguaro 2 supercomputer that is being shared by TGen and ASU. Saguaro 2 can do 50 trillion calculations per second (50 teraflops).

I remain convinced that high-performance computing is going to enable the scientists and researchers at TGen and the Biodesign Institute at ASU to make some significant discoveries.

[01 November 2008, top]

Clemson is a World Community Grid Leader
For many years now I've been bothered by how little computing the computers at Scottsdale Community College actually do. At most, the computers compute for about 12 hours a day, five days a week, for 39 weeks, which is about 2,340 hours. There are approximately 8,766 hours in a year; therefore, the computers at SCC are computing at most 27% of the time. In other words, the computers aren't doing anything 73% of the time.
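The 27% figure is simple arithmetic. Here is the calculation spelled out (the schedule numbers are my estimates from above):

```python
# Estimated usage schedule of the SCC computers (from the text above)
hours_per_day = 12
days_per_week = 5
weeks_per_year = 39

busy_hours = hours_per_day * days_per_week * weeks_per_year  # 2,340 hours
hours_in_a_year = 8766  # 365.25 days * 24 hours

utilization = busy_hours / hours_in_a_year
print(f"busy {utilization:.0%}, idle {1 - utilization:.0%}")  # busy 27%, idle 73%
```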

The Maricopa Community Colleges should join the World Community Grid.

   "Clemson University is tops in helping to tackle climate change, 
    muscular dystrophy, cancer and a host of other world problems by 
    contributing idle computer time to the World Community Grid (WCG).

   "According to IBM, Clemson's School of Computing has been 
    contributing more than four years of CPU time per day. This 
    means that approximately 1,500 Clemson computers have been 
    working on World Community Grid problems every day. Depending 
    on the day, Clemson has at times been first in the nation and 
    as high as fourth in the world for contributions among World 
    Community Grid teams."

   "World Community Grid's mission is to create the largest public 
    computing grid benefiting humanity. Its not-for-profit work is 
    built on the belief that technological innovation combined with 
    visionary scientific research and large-scale volunteerism can 
    change the world for the better."

Kudos to Clemson University.

[01 October 2008, top]

CERN Turns On the Large Hadron Collider
10 September 2008 was a huge day for physicists around the world as CERN "turned on" the Large Hadron Collider (LHC).

CERN is the European Organization for Nuclear Research and it is where Sir Tim Berners-Lee "invented" the World Wide Web in 1990. Berners-Lee created the web to enable physicists to use hypertext to share information.

The LHC is going to generate massive quantities of data, but data without processing is nothing more than a sequence of zeros and ones. LHC output is going to be piped into a world-wide network (cluster) of 60,000 computers collectively called the LHC Grid.

Scientist David Colling said, "This is the next step after the Web, except that unlike the Web, you're sharing computing power and not files."

Even home computers (public computing via screen savers) can be part of the LHC effort...

[30 September 2008, top]

Blue Waters Peta-Scale Computing in 2011
Kudos to the National Center for Supercomputing Applications (NCSA) at University of Illinois at Urbana-Champaign for partnering with IBM to "build the world's first sustained petascale computational system." The supercomputer will be named "Blue Waters" and it is scheduled to be online in 2011.

Blue Waters, which is supported by a $208 million NSF grant, will have "more than a petabyte of memory and more than 10 petabytes of disk storage." By the way, one petabyte is a million billion bytes (i.e. 1,000,000,000,000,000 bytes).
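To make those storage figures concrete, here is a quick decimal-unit sketch (the DVD comparison is my own illustration, not from the announcement):

```python
PB = 10**15  # one petabyte: a million billion bytes

disk_bytes = 10 * PB  # "more than 10 petabytes of disk storage"

# Equivalent number of 8.5 GB dual-layer DVDs -- roughly 1.2 million
dvds = disk_bytes / (8.5 * 10**9)
print(f"{dvds:,.0f} DVDs")
```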

   "Blue Waters will be an unrivaled national asset that will 
    have a powerful impact on both science and society," said 
    Thom Dunning NCSA director and a professor of chemistry at 
    Illinois in a release.  "Scientists around the country-simulating 
    new medicines or materials, the weather, disease outbreaks, or 
    complex engineered systems like power plants and aircraft-are 
    poised to make discoveries that we can only begin to imagine." $208 million petascale computer gets green light

[Extra] [side-bar] It was at the NCSA where Andreessen and Bina created the Mosaic web browser (circa 1993). Mosaic's creators went on to found Netscape, and the browser played a key role in increasing the popularity of the World Wide Web. In 2003, NCSA created the PlayStation 2 Cluster to support scientific computing. Today, a PlayStation 3 Cluster is playing a key role in the Folding@home project.

[09 September 2008, top]

Predicting Hurricanes Such as Gustav and Ike
I posted the following to my AzFoo blog.
   Hurricane Gustav provides yet another example as to why 
   our country needs to be investing more money into the 
   advancement of High-Performance Computing (HPC).  

   Although we're getting better at predicting Mother Nature, 
   there is lots of room for improvement.

   HPC environments provide researchers with the ability to ponder 
   every "what if" scenario they can image (and then some).

   [side-bar] These days the 'P' in HPC stands for many things.  
   High Performance Computing is a highly pervasive computing 
   environment that enables highly productive computing via a 
   highly persistent cyber-infrastructure that exploits highly 
   parallel computing to provide highly powerful computation.

Since I posted about Gustav, forecasters have moved on to predicting the behavior of Ike, which hit Cuba as a Category 4 hurricane.

[08 September 2008, top]

China Wants a Petaflop Computer By 2010
Twenty years ago, China stopped investing in microprocessor development. In 2001, it reversed course and started working on the Godson microprocessor. Now the country has set a goal of having a peta-scale computer by 2010.

[02 September 2008, top]

Supercomputing is a Race in Progress
On 18 June 2008, SeekingAlpha had a posting titled: "IBM Wins Supercomputing 'Bakeoff'." How can you win something when the competition has just started? IBM is in the lead, but that doesn't guarantee they'll win.

The posting was nothing more than an analysis of the recently updated TOP500 list.

The posting does mention that IBM's Roadrunner could do 1.02 petaflops, but that figure was from late May or early June. By mid-June it was doing 1.144 petaflops.

[20 August 2008, top]

University of Arizona Into Green Supercomputing
The University of Arizona is into efficient (i.e. green) supercomputing.
   "The UA's SGI Altix ICE system has just been ranked by 
    The Top500 Supercomputer Sites as the 237th most powerful 
    computer in the world. In addition, The Green500 List, run 
    by Top500, rates the UA system as the 50th greenest in the 
    world in electricity consumption." 

   "The system is actually comprised of two machines. The first, 
    a 628-core CPU SGI Altix 4700 'shared memory' system, was 
    installed in March 2007.  The rest of the supercomputing 
    budget was used to acquire an SGI Altix ICE, a lower-cost, 
    high-performance cluster, that went online on April 1 of 
    this year [2008]. This computer has 1,392 core processors, 
    but can also accommodate additional compute nodes provided 
    by researchers themselves."

   "Together the new high-performance system has increased 
    computing power from 0.6 trillion floating point operations 
    per second, or TeraFLOPS, to 19.4 TeraFLOPS, or about 32 times 
    the capacity of the nearly four-year-old system it replaced."

   "Currently there are 97 research groups spread across eight colleges 
    and 30 departments at the UA using the HPC facility. Nearly all 
    the research intensive groups on campus use it, including science, 
    engineering, medicine and optical sciences. Researchers in social 
    and behavioral sciences, especially psychology and linguistics, 
    also are among the users."

[20 August 2008, top]

One Step Closer To Quantum Computing?
They are researching quantum computing at the University of Surrey...
   "Quantum computing has the potential to fix problems that 
    would normally take millions of years to solve, much faster 
    than ordinary computers. For these quantum computers to work, 
    atoms need to be kept fixed in space, allowing them to move 
    in an undisturbed oscillating wave motion. This atomic quantum 
    wave holds much more information than a normal computer bit, 
    meaning the computer logic and programmes needed to crack a 
    code are more powerful, and therefore much faster."

Given what we're already able to do with zeros and ones, the ability to "hold much more information than a normal computer bit" is mind-boggling.

   "We hope that this work will open up a new field of physics, 
    where quantum coherence can be explored in solid crystals, 
    but at the same time we have brought a scalable silicon 
    quantum computer a step nearer." --Professor Ben Murdin

[09 August 2008, top]

Fusion Simulations Needed To Build Reactors
Cray Inc. announced that "researchers from the University of California-Irvine (UCI) have conducted the largest-ever fusion energy simulation on a Cray XT4 supercomputer. Codenamed "Jaguar" and housed at Oak Ridge National Laboratory (ORNL), researchers harnessed the power of the highly scalable Cray system to simulate electron transport for a prototype fusion reactor developed to study the scientific and technological feasibility of fusion energy."

UCI researcher Yong Xiao was quoted as saying:

   "Fusion holds the promise of a revolutionary new energy source 
    for the world, and this important simulation has brought us one 
    step closer to making it a reality. Advances in high performance 
    computing are key to advancing the science associated with identifying 
    and developing alternative energy sources. The Cray XT4 system provided 
    the scale, reliability and sustained performance required to handle 
    the tremendous amount of data produced by complex fusion simulations."

The Cray press release indicated that fusion, the "power source of the stars and sun, could provide a cleaner, more abundant energy source with far fewer harmful emissions than fossil-fuel burning power plants and fewer problems associated with waste than current nuclear power reactors."

[04 August 2008, top]

Predicting Weather With Supercomputers
I posted the following to my blog on 23 July 2008.

With southern Texas saying "hello" to hurricane Dolly today [2008.07.23], this is a good time to highlight the power of supercomputing.

On 22 July 2008, Seattle-based Cray Inc. issued a press release stating that "researchers from the University of Oklahoma's Center for Analysis and Prediction of Storms (CAPS) used a powerful Cray XT3 supercomputer housed at the Pittsburgh Supercomputing Center (PSC) to incorporate real-time radar data into their high-resolution thunderstorm forecasting model for the first time."

In a nutshell, cyberinfrastructure connecting supercomputers with high-performance visualization systems helps researchers "predict storms more accurately and with improved lead time" because they are able to "incorporate a greater and greater number of variables into their models to create increasingly accurate computer simulations and shed light on scientific phenomena that we can't explain today."

Bottom-line: Being able to better predict the actions of Mother Nature saves lives.

[Extra] Today, 4 August 2008, tropical storm Edouard is approaching Texas.

[04 August 2008, top]

The Petabyte Age
Chris Anderson, editor in chief of Wired, has authored an essay on the Petabyte Age: "Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn't just more. More is different."

Anderson quoted Peter Norvig, Google's research director, stating: "All models are wrong, and increasingly you can succeed without them."

Anderson might be correct when he wrote: "But faced with massive data, this approach to science -- hypothesize, model, test -- is becoming obsolete."

Being a Google fan, I enjoyed how Anderson ended his essay: "There's no reason to cling to our old ways. It's time to ask: What can science learn from Google?"

End of Theory: The Data Deluge Makes the Scientific Method Obsolete

[21 July 2008, top]

TGen Gets $1.99M Supercomputing Grant
When it comes to computing power, 21st century biologists will always need more, more, more.

Phoenix-based TGen announced that the NIH has awarded it with a $1.99 million grant to "enhance its supercomputing capabilities."

Ed Suh, TGen's CIO, was quoted in the TGen press release saying:

   "In today's genomic research environment, high-throughput 
    instruments allow scientists to collect increasingly large 
    amounts of data. This scalable computing system will allow 
    TGen and ASU scientists to explore those large volumes of 
    complex data more thoroughly and at an accelerated pace."

Dan Stanzione, Director of ASU's High Performance Computing Initiative, was quoted in the TGen press release saying:

   "The success of TGen and ASU scientists to date has come at 
    the sacrifice of time. However, individuals affected with 
    disease do not have the luxury of time. The parallel 
    cluster-computing system will optimize TGen and ASU 
    researchers' ability to meet their data analyses and 
    systems modeling needs, and hopefully accelerate timely 
    and effective discovery toward improved human health"

Happy supercomputing to TGen and ASU!

[17 July 2008, top]

SUSE Linux is a Popular OS for Supercomputers
Novell issued a press release informing us that SUSE is a popular operating system for supercomputers.
   "Supercomputers around the world are running on SUSE 
    Linux Enterprise Server from Novell®. According 
    to TOP500, a project that tracks and detects trends 
    in high-performance computing, SUSE Linux Enterprise 
    is the Linux of choice on the world's largest HPC 
    supercomputers today. Of the top 50 supercomputers 
    worldwide, 40 percent are running on SUSE Linux Enterprise, 
    including the top three -- IBM eServer Blue Gene at the 
    Lawrence Livermore National Laboratory, IBM eServer 
    BlueGene/P (JUGENE) at the Juelich Research Center 
    and SGI Altix 8200 at the New Mexico Computing 
    Applications Center."

Novell's name dropping continued...

   "Customers such as Audi, MTU Aero Engines, NASA Advanced 
    Supercomputing Division, Porsche Informatik, Seoul National 
    University, Swinburne University of Technology, Tokyo Institute 
    of Technology and Wehmeyer are running supercomputers and 
    computer clusters on SUSE Linux Enterprise Server to handle 
    mission-critical workloads with minimal downtime."

Keep up the good work Novell!

[Extra] The openSUSE® Project announced that "openSUSE 11.0 is available for download. openSUSE 11.0 is the latest release of the community Linux distribution. [...] openSUSE 11.0 includes everything you need to get started with Linux on the desktop and server. The openSUSE distribution provides the foundation for Novell's award-winning SUSE® Linux Enterprise products."

[05 July 2008, top]

Kudos to ASU's High-Performance Computing Initiative
ASU's High-Performance Computing Initiative (HPCI) has an outstanding leader in Dan Stanzione.
   "ASU is using its supercomputing capabilities to aid 
    humanitarian organizations attempting to provide disaster 
    relief to victims of Cyclone Nargis that hit the Southeast 
    Asian country of Myanmar May 2."

Keep up the great effort!

[24 June 2008, top]

Forbes Reports About the TOP500 List
On 18 June 2008, Forbes reported on what's happening in the world of supercomputing.
   "In the Top500 list, a twice-annual ranking of the world's 
    most powerful supercomputers, IBM not only took the top 
    three spots for the fastest computers in the world, but 
    also got credit for five of the top 10 fastest machines. 
    All told, the Armonk, N.Y., company had 210 computers on 
    the list--more than any other company."

There's lots of competition in the supercomputing world, but at least IBM, HP, Sun Microsystems, and Cray are all American companies.

[18 June 2008, top]

Exaflops Computing By 2019?
The 2008 projection for peta-scale computing was accurate. I have been keeping an eye open for when exa-scale computing will happen, and the first projection I've seen is the year 2019.
   "At next week's International Supercomputing Conference in 
    Dresden, Germany, Jack Dongarra, a professor of computer 
    science at University of Tennessee and a distinguished 
    research staff member at Oak Ridge National Laboratory, 
    will be giving a presentation on exaflop systems 
    'in the year 2019.'"

   "'The projection is very clear; in 11 years we will have an 
     exaflop,' said Dongarra, who believes by then every system 
     on the Top 500 computing list will be at least a petaflop."

[16 June 2008, top]

1.026 Quadrillion Calculations Per Second
On 9 June 2008, the New York Times had a story titled "Military Supercomputer Sets Record." The story reported that the Roadrunner supercomputer performed 1.026 quadrillion calculations per second.
   "To put the performance of the machine in perspective, 
    Thomas P. D'Agostino, the administrator of the National 
    Nuclear Security Administration, said that if all six 
    billion people on earth used hand calculators and performed 
    calculations 24 hours a day and seven days a week, it would 
    take them 46 years to do what the Roadrunner can in one day."
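Out of curiosity, I ran the numbers behind D'Agostino's comparison. This sketch (my own arithmetic, not from the article) computes the calculator rate each person would need to sustain for the comparison to hold:

```python
roadrunner_per_sec = 1.026e15  # calculations per second, from the article
seconds_per_day = 86_400
roadrunner_day = roadrunner_per_sec * seconds_per_day  # ~8.9e19 calculations

people = 6e9
seconds_in_46_years = 46 * 365.25 * seconds_per_day

# Implied sustained rate per person, nonstop for 46 years
rate = roadrunner_day / (people * seconds_in_46_years)
print(f"{rate:.0f} calculations per person per second")  # about 10
```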

Jack Dongarra, a computer scientist at the University of Tennessee, was quoted saying: "This is equivalent to the four-minute mile of supercomputing."

[09 June 2008, top]

IBM Testing the Roadrunner
The IEEE posted an article that started with the following.
   "A handful of engineers at a lab in Poughkeepsie, N.Y., 
    have assembled what they expect will become--at least 
    for a while--the world's most powerful computer. The 
    IBM Roadrunner likely will go down in history as the 
    first computer to consistently crank out 1 petaflops
    --a quadrillion floating-point operations per second."

I am frequently asked what supercomputers are good for, and there are probably more than a quadrillion responses; however, to date these supercomputers have been grossly underutilized.

   "Once fully tested by IBM, the system will be packed up 
    and shipped to Los Alamos National Laboratory in New Mexico, 
    where it will be used to run classified physics experiments 
    as part of the U.S. nuclear missile program."

The IEEE article ended with the following about Japan extending its Earth Simulator supercomputer.

   "Japan has announced a follow-on project called the 
    Life Simulator, targeted at achieving 10 Pflops of 
    sustained performance. But it is not expected to 
     be ready until 2011."

[02 June 2008, top]

Yahoo! and CRL Collaborate on Cloud Computing
I'm hearing more and more about "cloud computing;" therefore, I thought it was time to get this posted.

The following was announced on 24 March 2008.

   "Yahoo! and Computational Research Laboratories (CRL), 
    a wholly owned subsidiary of Tata Sons Limited, announced 
    an agreement to jointly support cloud computing research."

Granting supercomputing capabilities to Yahoo!'s talented researchers is a major deal for Yahoo!

   "As part of the agreement, CRL will make available to researchers 
    one of the world's top five supercomputers that has substantially 
    more processors than any supercomputer currently available for 
    cloud computing research."

I had not heard of "Hadoop" before, but I'll be looking into it sooner rather than later.

   "The Yahoo!/CRL effort is intended to leverage CRL's expertise in 
    high performance computing and Yahoo!'s technical leadership in 
    Apache Hadoop, an open source distributed computing project of 
    the Apache Software Foundation, to enable scientists to perform 
    data-intensive computing research on a 14,400 processor supercomputer."

CRL's supercomputer is indeed super.

   "Called the EKA, CRL's supercomputer is ranked the fourth fastest 
    supercomputer in the world -- it has 14,400 processors, 28 terabytes 
    of memory, 140 terabytes of disks, a peak performance of 180 trillion 
    calculations per second (180 teraflops), and sustained computation 
     capacity of 120 teraflops for the LINPACK benchmark."

[03 May 2008, top]

Is an Exaflood Coming Soon?
Cisco says we are entering the Exabyte Era and the Grid Utilitarian agrees.
   "A scholar at the Discovery Institute (yes, that Discovery Institute), 
    Brett Swanson, kicked off the current round of debate about Internet 
    capacity with a piece in the Wall Street Journal. Swanson warned that 
    the rise in online voice and video were threatening the Internet, 
    especially at its 'edges,' those last-mile connections to consumers 
    and businesses where bandwidth is least available. 'Without many tens 
    of billions of dollars worth of new fiber optic networks,' he wrote, 
    'thousands of new business plans in communications, medicine, 
    education, security, remote sensing, computing, the military 
    and every mundane task that could soon move to the Internet 
    will be frustrated. All the innovations on the edge will die.'"

   "What we are facing is nothing less than a 'coming Exaflood.'" The coming exaflood, and why it won't drown the Internet

[16 April 2008, top]

Supercomputing at the University of Tennessee
Kudos to the University of Tennessee for receiving a NSF grant to buy a supercomputer.

I am consistently asked, "What are supercomputers used for?" At UT (and Oak Ridge National Laboratory), it will initially be used for the following:

   + provide a boost to climate scientists in their efforts to predict
     extreme weather such as hurricanes and tornadoes as well as long-term
     climate changes and the effects of pollution,
   + permit astrophysicists to conduct more realistic simulations of
     supernova formation, galaxy evolution, and black hole mergers
   + enable earth scientists to perform high-resolution simulations 
     of the Earth's interior and enhance our understanding of the 
     planet's evolution.

The UT supercomputer is planned to reach "near" petascale computing sometime in 2009.

[03 April 2008, top]

Sun Microsystems Working on a Virtual Supercomputer
Sun Microsystems announced it "received $44.3 million in funding from the Department of Defense to research microchip interconnectivity." The company said the 5 1/2-year DARPA (Defense Advanced Research Projects Agency) project "will look at the possibility of creating a virtual supercomputer through a network of low-cost chips."

Sun Microsystems Awarded $44 Million Department of Defense Contract to Develop Microchip Interconnect System

[24 March 2008, top]

Defining the 'P' in HPC
I suspect this posting is bullfoo, but I'm a bullfooer so here it is...

HPC stands for High Performance Computing; however, these days the 'P' in HPC stands for many things.

High performance computing is a highly pervasive computing environment that enables highly productive computing via a highly persistent cyber-infrastructure that exploits highly parallel computing to provide highly powerful computation.

Or... HPC is a HPC environment that enables HPC via a HPCI that exploits HPC to provide HPC.

Or... HPPPPPPC stands for High Performance, Productivity, Pervasive, Persistent, Parallel, Powerful Computing. In the 20th century HPPPPPPC would be H6PC, but in the exponential growth world of the 21st century we write HP^6C (or HP6C).

[11 March 2008, top]

85.2% of TOP500 Supercomputers Running Linux
As of November 2007, 426 of the "top" 500 supercomputers were running Linux. The 426 systems had a combined total of 970,790 processors.

Operating system Family Share for 11/2007
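The share in the heading, plus a per-system average, falls out of those two numbers (my arithmetic):

```python
linux_systems, total_systems = 426, 500
share = linux_systems / total_systems
print(f"Linux share: {share:.1%}")  # 85.2%, as in the heading

linux_processors = 970_790
avg = linux_processors / linux_systems  # about 2,279 processors per system
print(f"{avg:.0f} processors per Linux system on average")
```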

[04 March 2008, top]

Kudos To Frances Allen
Kudos to Frances Allen.

High-Performance Computing is highly-parallel computing.

   "For pioneering contributions to the theory and practice of 
    optimizing compiler techniques that laid the foundation for 
     modern optimizing compilers and automatic parallel execution."

[Extra] SGI lives... SGI Acquires Assets of Linux Networx, a Leader in Clustered HPC

[25 February 2008, top]

An Update On Supercomputing In New Mexico
The state of New Mexico is excited about getting its new Encanto supercomputer up and running.

New Mexico's governor, Bill Richardson, said he "foresees the system fostering statewide water modeling projects, forest fire simulations, city planning and the development of new products."

State's supercomputer a catalyst for research, education, economy


An exaflop is 1,000 times faster than a petaflop.
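For reference, the flops prefix ladder (a small sketch of my own):

```python
# Decimal SI prefixes for floating point operations per second
flops = {
    "teraflop": 10**12,  # trillion
    "petaflop": 10**15,  # quadrillion
    "exaflop": 10**18,   # quintillion
}

print(flops["exaflop"] // flops["petaflop"])  # 1000: an exaflop is 1,000 petaflops
```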

   "Preparing groundwork for an exascale computer is the mission 
    of the new Institute for Advanced Architectures, launched 
    jointly at Sandia and Oak Ridge national laboratories."

One million trillion flops targeted by new Institute for Advanced Architectures

[22 February 2008, top]

ASU's HPCI Collaborates With UA's BIO5
Headline from the Tucson Citizen: "$50M grant solidifies UA's bioscience position." Kudos to UA's BIO5!

In a nutshell, the UA-led iPlant Collaborative will "develop a centralized database of research information on plant biology and offer researchers the tools needed to solve the major science problems they face."

The NSF grant is for five years, with UA getting 79% of the $50 million and ASU getting 4%. One item that shouldn't be ignored is that 16% of the grant goes to the Cold Spring Harbor Laboratory in New York. This is a great "connection" for the state of Arizona.

Arizona's governor was quoted as saying:

   "Arizona's future lies in innovation in areas like the biosciences, 
    and we are tremendously proud that the National Science Foundation 
    has chosen Arizona to chart a new course in plant science research." 

Again, "Kudos" to BIO5 at the University of the Arizona.

[Extra] Here are a couple quotes from Dan Stanzione, Director of ASU's High Performance Computing Initiative.

   "Our role is to support the plant scientists in implementing 
    their vision for the iPlant cyberinfrastructure. We are 
    providing the large-scale storage, high-end computing power 
    and expertise in applying supercomputing as part of the 

   "Deepening our knowledge in plant science is critically important 
    in confronting many of our global challenges. Food production, 
    energy production, environmental sustainability, the development 
    of biofuels and more effective medicines, dealing with climate 
    change -- all of these hinge on making new discoveries in
    plant biology." Creating cyber tools for one of nation's major scientific endeavors

[03 February 2008, top]

Cray Inc. Says It's An Important Time For HPC
Supercomputers are enabling more and more difficult problems to be analyzed and, in some cases, solved. Let's hope the U.S. government continues to fund HPC initiatives.

Seattle, WA-based Cray Inc. announced the "appointment of Jill Hopper to the position of vice president responsible for government programs." Hopper said: "Cray's highly scalable and innovative systems have been used to tackle some of the most complex problems we face today, including supporting key government missions and improving everything from airplane safety and fuel efficiency to medical treatments and storm prediction."

In the press release, Cray CEO/President Peter Ungaro said: "High performance computing enables the scientific breakthroughs and industry advancements that contribute to our country's continued scientific and technical leadership, economic competitiveness and national security. It's an important time in the HPC industry with an increased need for supercomputers with more computational capability that simultaneously operate with maximum application, power and cost efficiency."

If Jill Hopper is even 10% the computing pioneer Grace Hopper was, then Cray Inc. made a great move.

[17 January 2008, top]

More News About the U.K.'s HECToR
HECToR is the High-End Computing Terascale Resource. HECToR is owned by the "Research Councils of the UK" and will be used by scientists to simulate everything from climate change to atomic structures. It can run at speeds of up to 63 teraflops.
	Professor Jacek Gondzio at the University of Edinburgh plans 
	to use HECToR to model financial markets. He is working on 
	finding the safest and most profitable investment strategies 
	for pension funds, based on uncertain information about the 
	future of the world economy.

	"Uncertainty needs to be modelled by multiple scenarios and 
	 in order to reflect reality this automatically expands 
	 problems to large sizes."

It cost £5.6m less than the £65m estimate to build, but its annual running costs have jumped from the estimated £5.4m to £8.2m. Trew said this was because electricity prices in the UK had nearly doubled since the planning stages: "I think the initial estimates for power costs were unrealistically low, but power does cost an awful lot more today than it did five or six years ago."
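The cost figures above work out as follows (my arithmetic, in millions of pounds):

```python
estimated_build, under_budget = 65.0, 5.6
actual_build = estimated_build - under_budget
print(f"build cost: {actual_build:.1f}m GBP")  # 59.4m

estimated_running, actual_running = 5.4, 8.2
increase = actual_running / estimated_running - 1
print(f"annual running costs up {increase:.0%}")  # up 52%
```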

(At the time, one U.S. dollar was worth about 0.5064 British pounds.)

Inside the UK's fastest machine

[14 January 2008, top]

Lots of Archive Data by 2010
Headline seen on Slashdot: "27 Billion Gigabytes to be Archived by 2010."

Never deleting anything from your computer will end up using lots of bytes of storage.

   "In the private sector alone electronic archives will take 
    up 27,000 petabytes (27 billion gigabytes) by 2010."

The following comment was made to the Slashdot posting.

   "In other words, 27 Exabytes?

    Note to science and tech journalists: please stop stringing 
    together 'millions' and 'billions' in an attempt to make the 
    numbers seem large, impressive, and incomprehensible. Scientific 
    notation and SI exist for a reason."
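The commenter's conversion is easy to confirm (a quick sketch):

```python
GIGA, PETA, EXA = 10**9, 10**15, 10**18

archive_bytes = 27_000 * PETA  # "27,000 petabytes"
print(archive_bytes / EXA)     # 27.0 -- i.e., 27 exabytes
print(archive_bytes / GIGA)    # 27000000000.0 -- "27 billion gigabytes"
```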

[03 January 2008, top]

About the Grid Utilitarian
The Grid Utilitarian is a blog devoted to high-performance computing. This includes grid-based utility computing and 21st century Informatics. This blog was created on 3 October 2004 and it starts 2008 with 156 postings.

Grid Utilitarian Archives: 2007 | 2006 | 2005 | 2004

[01 January 2008, top]

Creator: Gerald Thurman []
Last Modified: Saturday, 05-Jan-2013 11:17:33 MST

Thanks for Visiting