GDT::Grid::Utilitarian::Archive::Year 2007

Grid Utilitarian
SC07 Celebrates Fortran Turning 50
[old news/cleanup posting] I wish I were going to SC07 (Supercomputing Conference 2007), but it doesn't fit my schedule.
   "The computing industry will celebrate the 50th birthday 
    of Fortran during SC07, which takes place Nov. 10-16, 2007, 
    in Reno, Nev. SC07 will offer the Fifty Years of Fortran panel, 
    and Frances Allen, IBM Emerita, will moderate the discussion on 
    the influence of the programming language on system software, 
    applications, and computer architecture.  David Padua from the 
    University of Illinois at Urbana-Champaign, Henry M. Tufo of the 
    National Center for Atmospheric Research, John Levesque of Cray, 
    and Richard Hanson of Visual Numerics will serve as panelists, 
    and will also speculate on the future of Fortran."

SC07.Supercomputing.org:: SC07 Homepage

[22 December 2007, top]

Computational Astrophysics in Japan
At the time of this press release (2007.12.19), Cray's stock was selling at its 52-week low of $5.52 per share.
   "Cray announced the selection of a Cray XT4(TM) system by the 
    National Astronomical Observatory of Japan (NAOJ).  With more 
    than 27 teraflops of computational capability, the Cray XT4 
    system will be housed at the NAOJ's Center for Computational 
    Astrophysics (CfCA) to aid scientists and researchers in the 
    study of the origin of planetary systems and galaxies and the 
    formation of stars and star clusters."

Jun Makino, Director of CfCA, was quoted saying the following.

   "The huge time and spatial scales associated with astronomical 
    phenomena make laboratory experiments impossible. Computer 
    simulations allow us to create virtual universes, which help 
    us to understand the early stages and evolution of the universe 
    and the formation and evolution of galaxies, stars, black holes 
    and planets.  We are dedicated to providing astronomers with the 
    most advanced resources and the Cray XT4 supercomputer will allow 
    us to achieve this goal."

27 teraflops today... over one petaflops next year?

[18 December 2007, top]

Iran Has an 860-Gigaflops Linux-Based Supercomputer
The Iranians have built a supercomputer that they'll use for "weather forecasting and meteorological research."
   "Iranian scientists claim to have used 216 microprocessors 
    made by Advanced Micro Devices to build the country's most 
    powerful supercomputer, despite a ban on the export of 
    U.S. computer equipment to the Middle Eastern nation."

AMD says they had nothing to do with this and the Grid Utilitarian believes AMD.

It appears as though Iranians are FLOSSers (i.e. users of Free/Libre and Open Source Software).

   "Scientists at the Iranian High Performance Computing Research 
    Center at the country's Amirkabir University of Technology said 
    they used a Linux-cluster architecture in building the system 
    of Opteron processors. The supercomputer has a theoretical peak 
    performance of 860 giga-flops, the posting said."

As of the date of this posting, the fastest computer in the world was capable of running at 478.2 teraflops.

[11 December 2007, top]

Petascale Computing 1st, Then Exascale Computation
We haven't reached petascale computing, yet many supercomputing gurus are already thinking about computing at the exascale. David A. Bader, associate professor of computing and executive director of high-performance computing at Georgia Tech, was asked the following question: "How significant will the development of Petascale Computing be to the advancement of science and technology?"
   "Increasingly, cyber-infrastructure is required to address our 
    national and global priorities, such as sustainability of our 
    natural environment by reducing our carbon footprint and by 
    decreasing our dependencies on fossil fuels, improving human 
    health and living conditions, understanding the mechanisms of 
    life from molecules and systems to organisms and populations, 
    preventing the spread of disease, predicting and tracking severe 
    weather, recovering from natural and human-caused disasters, 
    maintaining national security, and mastering nanotechnologies."

   "Several of our most fundamental intellectual questions also require 
    computation, such as the formation of the universe, the evolution of 
    life, and the properties of matter."

ITnews.com.au:: Petascale computers: the next supercomputing wave

[03 December 2007, top]

CMU Becomes a Yahoo! M45 Collaborator
Yahoo! announced it has signed up Carnegie Mellon University for its new M45 project, a day after it introduced the open source supercomputer initiative.
   "Through M45, researchers will have access to a 4,000-processor 
    computing cluster that can perform 27 teraFLOPS, and offer 3 
    TB of memory and 1.5 petabytes of storage.  The supercomputer 
    will use the latest version of Hadoop and run other open-source 
    software, including the Pig parallel programming language."

Research.Yahoo.com:: Yahoo! Launches New Program to Advance Open-Source Software for Internet Computing

[26 November 2007, top]

220 Megapixels (or 100-Times High-Def TV)
High-performance visualization systems are of interest to scientists and engineers working on the "earth sciences, climate prediction, biomedical engineering, genomics, and brain imaging."
   "Engineers at the University of California, San Diego, have 
    built the world's highest-resolution computer display, a 
    55-panel screen capable of zooming in on a live picture 
    of a human brain to give a clear image of a nerve cell."
     [220 megapixels, about 100 times the resolution of a high-def TV]

   "The UCSD system is linked to a 50-panel high-resolution display 
    in UC Irvine through a fiber optic Ethernet cable that can carry 
    data at 2 Gbps."
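
The "100 times high-def" figure is easy to sanity check: a 1080p television picture is 1920 x 1080 pixels, or roughly 2.07 megapixels. Here is the arithmetic as a minimal C sketch.

   /* 220 megapixels measured in HDTV (1920x1080) pictures. */
   #include <stdio.h>

   int main(void)
   {
       double hdtv_megapixels    = 1920.0 * 1080.0 / 1e6;  /* ~2.07 MP  */
       double display_megapixels = 220.0;                  /* UCSD wall */

       printf("220 MP is roughly %.0f HDTV pictures' worth of pixels\n",
              display_megapixels / hdtv_megapixels);       /* ~106      */
       return 0;
   }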

InformationWeek.com:: UCSD Engineers Build World's Highest-Resolution Computer Display

[26 November 2007, top]

478.2 TeraFLOPS and Counting
The BlueGene/L System, jointly developed by IBM and the U.S. Department of Energy, remains the world's fastest. This has been true since November of 2004.
   "The current IBM system has been significantly expanded, 
    achieving a Linpack benchmark performance of 478.2 teraflops. 
    Six months ago, the BlueGene held the top position with 
    280.6 TFlop/s."

InformationWeek.com:: IBM BlueGene Is The World's Fastest Computer Once Again

[24 November 2007, top]

Green500 List (FLOPS per Watt)
The Green500 measures supercomputers by performance per watt, or FLOPS per watt (FLOPS/W).
   "The purpose of this list is to provide a ranking of the 
    most energy-efficient supercomputers in the world and 
    serve as a complementary view to the TOP500 List."
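
The metric itself is simple division: sustained performance over power drawn. Here is a minimal C sketch of the calculation; both input values are made-up placeholders, not figures from the Green500 or TOP500 lists.

   /* Performance-per-watt: the Green500-style figure of merit. */
   #include <stdio.h>

   int main(void)
   {
       double linpack_gflops = 100000.0;  /* hypothetical: 100 teraflops */
       double power_watts    = 1.2e6;     /* hypothetical: 1.2 megawatts */

       double mflops_per_watt = (linpack_gflops * 1000.0) / power_watts;
       printf("efficiency: %.1f MFLOPS per watt\n", mflops_per_watt);
       return 0;
   }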

Green500.org:: The Green500 List

[20 November 2007, top]

Red Hat and Platform Computing Doing HPC
Red Hat has reached an agreement with Platform Computing to "jointly offer a new product, the Red Hat HPC Solution, that fully integrates Platform's Open Cluster Stack1 with Red Hat Enterprise Linux."
   "Businesses are increasingly utilizing HPC clusters to gain a 
    competitive edge; the new Red Hat HPC Solution allows users 
    to deploy their HPC applications in a more cost-effective manner, 
    while providing tools in a single, easy-to-deploy package. The 
    Red Hat solution incorporates the operating system, device drivers, 
    cluster installer, resource and application monitor and job scheduler 
    for every node in the cluster."

RedHat.com:: Red Hat and Platform Computing Collaborate to Deliver Integrated HPC Solution

[13 November 2007, top]

Disk Storage Using Giant Magnetoresistance (GMR)
BBC headline on 10/15/2007: "Drive advance fuels terabyte era." It turns out the advance is enabled by nanotechnology.
   "A nanotechnology breakthrough announced by Hitachi 
    could usher in the 'terabyte era' in computer storage.
    Hitachi researchers have reduced the read-write head of 
    a hard drive to a size that is 2,000 times smaller than 
    the width of a human hair, enabling storage capacities 
    of as much as four terabytes on a single hard drive in 
    the next few years."

The 2007 Nobel prize for physics went to French scientist Albert Fert and Peter Grunberg of Germany for discovering "giant magnetoresistance" (GMR), in which "weak magnetic changes give rise to big differences in electrical resistance."

   "A computer hard-disk reader that uses a GMR sensor is equivalent 
    to a jet flying at a speed of 30,000 kmph, at a height of just 
    one metre above the ground, and yet being able to see and catalogue 
    every single blade of grass it passes over."

Note: 30,000 kmph is approximately 18,641.14 mph, and one metre is about 1.1 yards (i.e., 39.4 inches).
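
Here are those conversions as a couple of lines of C (standard conversion factors; the 30,000 km/h figure comes from the quote above).

   /* Unit conversions used in the note above. */
   #include <stdio.h>

   int main(void)
   {
       double kmph   = 30000.0;
       double mph    = kmph / 1.609344;   /* 1 mile = 1.609344 km */
       double inches = 1.0 * 39.3701;     /* 1 metre ~= 39.37 in  */

       printf("%.0f km/h is about %.2f mph\n", kmph, mph);  /* ~18641.14 */
       printf("1 metre is about %.1f inches\n", inches);    /* ~39.4     */
       return 0;
   }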

The following is the bottom-line for everyday computer users.

   "Existing hard disks can hold about 200 gigabytes of information 
    per square inch, but Hitachi's new technology is expected to store 
    up to 1 TB of data per square inch. By 2011, Hitachi expects to 
    have a hard disk for desktops with 4 TB of storage and a laptop 
    with a 1 TB drive."

BBC.co.uk:: Drive advance fuels terabyte era

[21 October 2007, top]

Bioinformaticians at Yokohama City Univ. Get a Cray
Cray announced that Japan's Yokohama City University is going to be using a Cray at its Division of Structural Bioinformatics. Two researchers are going to use the supercomputer to "expand on the principle of three-dimensional structures and functions of biomolecules, proteins and nucleic acids."
   "The new Cray XT4 system will accelerate analysis of genome 
    function and evolution from protein solid structure information. 
    Yokohama City University has a proud tradition of science and 
    technology excellence and is fast becoming an international 
    institution of higher education and research. This is a key 
    step for the institution and the field of bioinformatics."
    --Akinori Kidera, Professor at Yokohama City University 

From a hardware perspective, the Cray XT4 system for Yokohama City University contains "dual-core AMD Opteron(TM) processors, over 4 teraflops of peak performance and 1.4 terabytes of main memory."

Cray.com:: Japan's Yokohama City University Selects Cray Supercomputer for Bioinformatics Research

[11 October 2007, top]

Inventing the Future at Virginia Tech University
They're doing supercomputing at Virginia Tech University.
   "By combining supercomputing, parallel programming, virtualization, 
    and operating systems aspects, this new grant demonstrates the new 
    possibilities for collaboration that were enabled by bringing together 
    different researchers with different areas of specialty under the roof 
    of the Center for High-End Computing (CHECS)." 
    --Godmar Back, assistant CS professor

The Utilitarian will assume "High-End Computing" is alternative terminology for HPC (High Performance/Productivity/Pervasive Computing).

VT.edu's motto: "Invent the future." "I Invent the Future" was the theme at the 2007 Grace Hopper Celebration of Women in Computing. GDT::DreamTeam member Alan Kay was quoted saying: "The best way to predict the future is to invent it."

VTnews.edu:: Computer Science researchers explore virtualization potential for high-end computing

[06 October 2007, top]

University of Maryland and IBM Cell Computing
Looks like they're doing supercomputing in the state of Maryland.
   "The University of Maryland, Baltimore County (UMBC) is creating 
    a high-performance computational test laboratory based on the 
    Cell Broadband Engine (Cell/B.E.), as a result of a partnership 
    with IBM. Supercomputing research in aerospace and defense, 
    financial services, medical imaging, and weather and climate 
    change prediction will be the focus of the Multicore Computing 
    Center (MC2)."

The following quote is from UMBC CS professor Milt Halem.

   "Cell processors are groups of eight very fast, independent but 
    simple PCs with their own tiny memory all on a single chip each 
    with its own leader. It's like a distributed orchestra with 224 
    musicians and 28 conductors connected with head phones trying to 
    play Beethoven's Fifth Symphony together." 
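
Halem's numbers describe 28 Cell chips, each pairing one general-purpose "conductor" core with eight simpler worker cores (28 x 8 = 224). The minimal C sketch below restates that arithmetic and splits a problem of made-up size into per-worker slices.

   /* The arithmetic behind the orchestra analogy. */
   #include <stdio.h>

   int main(void)
   {
       const int chips            = 28;       /* "conductors"            */
       const int workers_per_chip = 8;        /* "musicians" per chip    */
       const int workers  = chips * workers_per_chip;          /* 224    */
       const int elements = 1000000;          /* hypothetical work size  */

       printf("%d chips x %d workers/chip = %d workers\n",
              chips, workers_per_chip, workers);
       printf("each worker handles about %d of %d elements\n",
              elements / workers, elements);
       return 0;
   }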

IBM is a major player in the world of HPC (High Performance/Productivity Computing).

UMBC.edu:: IBM Award to Help Establish Multicore Supercomputing Center at UMBC

[28 September 2007, top]

IU's Big Red versus UI's Blue Waters
It's IU's Big Red versus UI's Blue Waters (or, supercomputing competition in the Midwest).
   "IU's 'Big Red' is one of the fastest supercomputers in the world, 
    but it will pale in comparison to the new supercomputer being built 
    at the University of Illinois."

The University of Illinois National Center for Supercomputing Applications (NCSA) has received a $208 million NSF grant to build a supercomputer named "Blue Waters." Blue Waters will be able to perform one quadrillion calculations per second.

Indiana University built the "Big Red" supercomputer, and it was initially capable of 20.4 teraflops (20.4 trillion floating-point operations per second).

UI's Blue Waters will be available for "open scientific research, which means that researchers across the country will be able to use this computer for research in fields such as chemistry, biology, cosmology and high-energy physics. Scientists use computer simulations to understand things that are very large, like the universe, or very small, like interactions of molecules and folding of proteins."

Idsnews.com:: U. Illinois to build computer faster than IU's Big Red

[23 September 2007, top]

Calvin College Does Supercomputing
A Calvin College computer science professor received a $205,000 NSF grant to build a supercomputer named OHM to replace a supercomputer that they built six years ago.
   "The new OHM will add at least two feet in overall length 
    and be built from at least 32 computers, each containing 
    two CPUs, each with dual or even quadruple processing elements 
    called 'cores'."

   "This past winter Calvin professor Joel Adams and then Calvin 
    senior Tim Brom built Microwulf, a portable supercomputer measuring 
    a mere 11 inches by 12 inches by 17 inches. Microwulf, at 26.25 
    gigaflops peak performance, is more than twice as fast as the 
    original OHM and cost less than $2,500 to construct."

Professor Adams was quoted saying the following.

   "If the new OHM has 128 cores, it should be about 40 times as fast 
    as the original; if it has 256 cores, it will be about 80 times as 
    fast. If we buy more than 32 computers, it could be even faster. 
    We won't know until we build it."

Calvin College is located in Grand Rapids, Michigan.

Calvin.edu:: Prof Receives NSF Grant for Supercomputer

[16 September 2007, top]

Cray Inc. Likes Austin, Texas
Texas is a leader when it comes to HPC.

Cray Inc. announced it has opened a new engineering development center in Austin, Texas. Cray's press release stated:

   "Austin has a very strong high-tech community in both hardware 
    and software and provides a deep pool of technology and engineering 
    talent. We look forward to becoming part of the community and, at 
    the same time, advancing Cray's leading-edge technological strength 
    and innovation in HPC."

The following was copied from the news report.

   "Austin's status as a technology hub, the close proximity to a 
    number of strategic development partners and the University of 
    Texas, with its strong computer science and computer engineering 
    programs, were key factors in the company's decision to open a 
    facility in the area."

It would be great if Cray established operations in the Phoenix area, but I won't hold my breath.

[12 September 2007, top]

Sun Acquires Majority of Cluster File Systems' IP
Sun Microsystems announced they are acquiring the "majority of Cluster File Systems, Inc.'s intellectual property and business assets, including the Lustre File System." The following was copied from ClusterFS.com.
   "Lustre® is a scalable, secure, robust, highly-available 
    cluster file system. It is designed, developed and maintained 
    by Cluster File Systems, Inc."

   "Public Open Source releases of Lustre are available under the 
    GNU General Public License."

A Sun executive VP said, "This acquisition, coupled with the recent announcement of the Sun Constellation System, the most open petascale capable HPC architecture in the industry, shows our long-term commitment to the open source community and leadership in HPC."

Cluster File Systems Inc. announced that they are "now in the process of installing a 500+ TeraFlop and 1.7 PetaByte cluster at Texas Advanced Computing Center (TACC)." Sun Microsystems is already involved with the TACC.

[12 September 2007, top]

RPI Has Fastest University System
Rensselaer Polytechnic Institute has a supercomputer and they are discussing ways to put the supercomputer to work.
   "The $100 million IBM Blue Gene supercomputer, housed in the 
    Computational Center for Nanotechnology Innovations at Rensselaer 
    Technology Park in nearby North Greenbush, can handle more than 
    100 trillion arithmetic operations a second, making it the 
    seventh-fastest computer in the world, and the fastest on 
    a university campus."

Simulations that used to take weeks can now be done in less than 30 minutes.

   "A team of RPI researchers already is modeling blood flow 
    through the human body, enabling them to pose 'what if' 
    questions on how different approaches, for example, might 
    work in treating a blockage. The simulations, tailored to 
    an individual's situation, previously could take weeks 
    to complete."

IBM told RPI the following: "Like it or not, the transistor as we know it is running out of gas. We need to break through this nanobarrier." RPI's supercomputer can help with this effort.

TimesUnion.com:: Tapping the promise of new RPI supercomputer

[11 September 2007, top]

Ohio Sees the Power of Supercomputing
The following quote comes from Stan Ahalt, executive director of the Ohio Supercomputer Center.
   "People are beginning to awaken to the fact that you can use 
    the computer as if it were a laboratory. You can do experiments 
    on the computer that will help guide your thinking."

Ahalt was also quoted saying the following.

   "We are trying to get curriculum about advanced computing and 
    simulation into high schools, two-year colleges and a certificate 
    program for retraining the workforce."

Speaking of "two-year colleges"... The Maricopa County Community College District isn't doing anything to help students learn about supercomputing.

WashingtonTechnology.com:: Future looks bright for supercomputers

[08 September 2007, top]

Analyzing Individual Storm Cells
There is little doubt that better weather prediction capabilities will save lives.

Headline: "Scientists Use Powerful Cray Supercomputer to Develop Groundbreaking Strategies in Weather Prediction." Sub-title: "Latest Computer Models Zoom Down to Level of Individual Storm Cells." The supercomputer was one at the Pittsburgh Supercomputing Center and the scientists were from the University of Oklahoma's Center for Analysis and Prediction of Storms (CAPS) and the National Oceanic & Atmospheric Administration (NOAA).

[29 August 2007, top]

Two of the World's Fastest Computers are Crays
The following is from a Cray press release issued on 10 July 2007.
   "Cray Inc. announced that two Cray supercomputers are now 
    in an elite group of systems that can perform computations 
    at more than 100 teraflops (100 trillion floating point 
    operations per second), as measured by the industry-standard 
    TOP500 benchmark. The Cray systems, one installed at Oak Ridge 
    National Laboratory (ORNL) and the other at Sandia National 
    Laboratories, are now two of the three fastest computers in 
    the world. They are built on a Cray XT(TM) infrastructure that 
    enables Cray customers to upgrade to increasingly higher 
    performance levels, instead of forcing them to invest 
    in an entirely new system."

Let's see... 100 teraflops needs to be multiplied by 10 to get to one petaflops.

[16 August 2007, top]

Efficient Parallel Programming is 10 Years Away?
There are already hundreds of programming languages, but new languages are being developed to support parallel programming.
   "Multicore processors are driving a historic shift to a new 
    parallel architecture for mainstream computers. But a parallel 
    programming model to serve those machines will not emerge for 
    five to 10 years, according to experts from Microsoft Corp."

With respect to parallel languages, Burton Smith of Microsoft Research was quoted saying: "Building in support for atomic transactions also is in the plan."
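
As a small-scale illustration of what atomic operations buy a parallel program, here is a C sketch in which several threads bump a shared counter without locks; the atomic read-modify-write keeps the count correct. It shows plain atomics only, not the full transactional-memory model Microsoft was describing.

   /* Lock-free counter updates using C11 <stdatomic.h> and POSIX threads. */
   #include <stdatomic.h>
   #include <pthread.h>
   #include <stdio.h>

   static atomic_long counter;

   static void *worker(void *arg)
   {
       (void)arg;
       for (int i = 0; i < 1000000; i++)
           atomic_fetch_add(&counter, 1);   /* indivisible read-modify-write */
       return NULL;
   }

   int main(void)
   {
       pthread_t t[4];

       for (int i = 0; i < 4; i++)
           pthread_create(&t[i], NULL, worker, NULL);
       for (int i = 0; i < 4; i++)
           pthread_join(t[i], NULL);

       printf("counter = %ld\n", (long)counter);   /* always 4000000 */
       return 0;
   }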

EETimes.com:: M'soft: Parallel programming model 10 years off

[03 August 2007, top]

Supercomputing at the NCSA
NCSA was created in 1985 and it is the "largest non-secure, or public, supercomputing facility in the United States."

It was at NCSA where Marc Andreessen and Eric Bina created the Mosaic browser (the forerunner of Netscape) in the early 1990s.

   "The north campus of The University of Illinois in Urbana is 
    home to one of the most powerful and long-running supercomputer 
    facilities in the world. The recent Top 500 list has all five 
    of their primary workhorses listed. Three of them were even in 
    the top 100 with Abe debuting at #8."

The total maximum theoretical computing capacity of NCSA is 146 teraflops distributed across five primary machines.

   "Tom told me it took the NCSA 19 years to reach the petabyte 
    milestone of storage requirement. This happened in 2005. But, 
    it only took another 12 months to reach the second petabyte 
    (2006) milestone. The third came in only eight months (2007) 
    and right now they're estimating six months more to reach 
    number four. To put that into perspective, a petabyte is 
    approximately 1,500,000 CDs worth of data, which is enough 
    to fill a football field five discs high with CDs laid 
    end to end."

TGDaily.com:: NCSA: A look inside one of the world's most capable supercomputer facilities

[01 August 2007, top]

Learning About Linux Programming at ASU
During the summer of 2007, the Fulton High Performance Computing Initiative at ASU offered a collection of short courses intended to provide "hands-on training in the use of Beowulf cluster parallel computers." The instructor was HPCI Director Dr. Dan Stanzione.
   #1 Beowulfs, Mini Grids, and Basic MPI
   #2 Advanced MPI Programming
   #3 Parallel Algorithms
   #4 Parallel I/O, Debugging, and OpenMP

Stanzione's examples were presented in both FORTRAN and C.
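
For readers who have never seen MPI, here is a minimal C example in the spirit of course #1; this sketch is mine, not one of Stanzione's.

   /* A minimal MPI "hello world" in C; compile with mpicc and launch
      with mpirun/mpiexec across the nodes of a Beowulf cluster. */
   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char *argv[])
   {
       int rank, size;

       MPI_Init(&argc, &argv);                /* start the MPI runtime     */
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank (id)  */
       MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

       printf("Hello from rank %d of %d\n", rank, size);

       MPI_Finalize();                        /* shut the runtime down     */
       return 0;
   }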

HPC.ASU.edu:: Linux Programming

[16 July 2007, top]

At Cornell, Advanced Computing is HPC
The Theory Center at Cornell University is now called the Center for Advanced Computing.
   "The 22-year-old Cornell Theory Center has been reorganized 
    and renamed in a move designed to make its high-performance 
    computing resources more efficient and effective for the 
    university's researchers and to take advantage of growing 
    opportunities for research funding."

"Advanced Computing" might be a compact way to say "High Performance/Productivity Computing" or "Center for 21st Century Computing."

Cornell.edu:: Cornell Theory Center is now Cornell Center for Advanced Computing

[12 July 2007, top]

GCN Interviews RCI's Daniel Reed
Daniel Reed is director of the Renaissance Computing Institute. Government Computer News posted an interview with Reed on GCN.com.
   "GCN: What does RCI do?

    Reed: The highest priority is to look at how computer technology 
    affects broad societal problems. It's really about bringing people 
    together across disciplinary boundaries. I think a lot of problems 
    we deal with in this decade lie at the intersection of multiple 
    disciplines. Our role is to be a catalyst for innovation. And that 
    spans everything from traditional computer science to supporting 
    the humanities or performing arts."

Here is more of Reed explaining what RCI does.

   "One of the big problems we're addressing now is rapid population 
    growth in environmentally sensitive areas. As an example, in North 
    Carolina, we have a rapid coastal population growth in areas with 
    fragile ecosystems [that] are susceptible to severe weather. How 
    do we look at predicting the effect of hurricanes and storm surge 
    in those areas? Our goal is to forecast what the impact is likely 
    to be and where it will happen and - in the longer term - use 
    that information to influence zoning and planning."

According to Reed, the biggest issue facing computer science over the next few years is software.

   "The challenge will be how to deal with the explosive growth 
    of multicore processors. We will see a hundred-plus core chips 
    soon, and those will be embedded in ever-larger systems. Petascale 
    systems will have hundreds of thousands of cores. The software is 
    not keeping pace with that."

I've added the following quote from Daniel Reed to the GDT::Quotes collection. "The killer issue is how to exploit large-scale parallelism in a productive way."

GCN.com:: Daniel Reed | It's software at the core

[09 July 2007, top]

Hewlett-Packard is a Supercomputing Leader
The Grid::Utilitarian contains a lot of postings about supercomputers built by IBM, Cray and Sun Microsystems, but HP makes supercomputers also.

Hewlett-Packard has become the "leading manufacturer in the top 500, with 41 percent to IBM's 38 percent; however, none of HP's systems are in the top 50."

Six of the top 10 supercomputer systems come from IBM. IBM has announced it expects to break the petaflop barrier as early as 2008 with BlueGene/P, its next supercomputing family. At its maximum scale, IBM said, BlueGene/P could reach 3 petaflops.

Top500.org:: Supercomputer Sites

[08 July 2007, top]

Rice University's CITI Using Crays
Cray Inc. announced that "research teams using a Cray supercomputer at Rice University's Computer and Information Technology Institute (CITI) have developed computational techniques that will eventually assist medical workers in diagnosing and treating some of the most devastating diseases afflicting humans -- ranging from cerebral aneurysms to illnesses such as bacterial/viral infections and cancer."
   "The large-scale processing provided by our Cray supercomputer 
    enables researchers to develop highly detailed models and 
    sophisticated algorithms that more closely match actual 
    conditions in the body.  One research team is conducting 
    blood-flow simulations at a level of accuracy that was not 
    feasible before we had this degree of computational power. 
    And new computer algorithms developed by another team reduce 
    the time it takes to analyze biomolecular processes from days 
    and months to mere minutes." --Jan Odegard, CITI executive director

Rice University's CITI is a "research-centric institute dedicated to the advancement of applied interdisciplinary research in the areas of computation and information technology by bringing together scholars with complementary expertise to solve complex problems. Research areas include: parallel computation, robotics, telecommunications, data modeling and analysis, bioinformatics, advanced computation, computational neuroscience, sensor networks and computational fluid dynamics."

CITI.Rice.edu:: Computer and Information Technology Institute

[08 July 2007, top]

Sun's Ranger Shooting for 1.7 Petaflops
Sun Microsystems is a player when it comes to supercomputing.

NewsFactor.com reports that "Constellation environments can eventually be configured to provide as much as 1.7 petaflops."

   "When Ranger is up and running at the Texas Advanced Computing 
    Center in Austin, and joins with fellow supercomputers on the 
    TeraGrid national network in late 2007, it is expected to deliver 
    a peak performance of more than 500 teraflops."

NewsFactor.com:: Supercomputer Fight: Sun's Ranger vs. IBM's Blue Gene

[More...] Sun has the Solaris (TM) 10 Operating System-based Sun(TM) Constellation System, one of the world's first open Petascale computing environments. The system is being designed for "complex applications such as climate, weather and ocean modeling, researchers can use the Sun Constellation System to test next-generation weather forecast codes with long-term climate modeling. Researchers can also run earthquake and seismic simulations with higher resolutions and more accurate modeling of wave propagation to gain further insight into earthquake scenarios."

[08 July 2007, top]

One Million Core System in Five Years?
LCI is the Linux Cluster Institute and Peter Ungaro is the CEO and president of Cray, Inc. Ungaro has predicted a "one million core system" within five years.
   "The morning began with the keynote presentation from Peter 
    Ungaro, titled 'From Beowulf to Cray-o-wulf -- Extending the 
    Linux Clustering Paradigm to Supercomputing Scale.' In this 
    presentation, Peter unveiled Cray's view of cluster computing 
    and how they are going to compete in the HPC marketplace with 
    future generations of clusters containing ten thousand to one 
    million cores. He predicted a one million core system within 
    five years. For comparison, today's entire Top 500 list 
    represents less than one million cores!"

HPCWire.com:: Perils and Pitfalls of HPC Spotlighted at LCI Conference

[21 June 2007, top]

Google Acquires Peakstream
Google has acquired a startup called PeakStream that specializes in software programming tools for high performance, multi-core and parallel processors.
   Google said, "We believe the PeakStream team's broad technical 
   expertise can help build products and features that will benefit 
   our users. We look forward to providing them with additional 
   resources as they continue developing high performance applications 
   for modern multi-core systems."

According to the press release, PeakStream advertises that it "makes it easy to program new high performance, multi-core and parallel processors, and convert them into radically powerful computing engines for computationally intense applications."

Radar.OReilly.com:: Google's Acquisition of Peakstream

[21 June 2007, top]

Clemson University Doing HPC with Sun Microsystems
Clemson University has opened its Computational Center for Mobility Systems (CU-CCMS). The CU-CCMS is a "technology anchor of the Clemson University International Center for Automotive Research (CU-ICAR) campus in Greenville, SC."
   "This center will reduce both the time and money that it takes 
    to get an aerodynamically sound vehicle or an optimized engine 
    into the marketplace. Manufacturers can simulate multiple design 
    options simultaneously by running computations overnight and build 
    the final product only once, instead of the more traditional 
    build-and-test cycles, which drive up cost and time." 

CU-CCMS is doing supercomputing using systems from Sun Microsystems. The HPC system comprises "grid computing, servers, storage, archive sub-systems and a dedicated high-speed InfiniBand fabric from Voltaire."

The CU-CCMS start-up is funded through a $17 million alliance between Clemson University, the state of South Carolina and Sun Microsystems, Inc.

CUICAR.com:: Clemson University Intl. Center for Automotive Research

[21 June 2007, top]

Norway's Bergen Center for Computational Science Getting a Cray
A 50 teraflops Cray XT4(TM) supercomputer system is being installed at the Bergen Center for Computational Science (BCCS). The supercomputer will be used for "advanced research in fields including marine molecular biology, large scale simulation of ocean processes, climate research, computational chemistry, computational physics, computational biology, the geosciences, and applied mathematics."

The BCCS is part of the University of Bergen located in Norway. BCCS conducts research in "computational biology, computational mathematics and scientific computing. A main objective of BCCS is to exploit the synergies between its groups and boost cross-disciplinary activity."

BCCS.UIB.no:: Bergen Center for Computational Science

[21 June 2007, top]

RPI + IBM + NYC = CCNI
Rensselaer Polytechnic Institute, IBM and the state of New York have created the Computational Center for Nanotechnology Innovations (CCNI). The CCNI has an IBM Blue Gene supercomputer that is doing 80 teraflops of computing now and will eventually do 100 teraflops. The CCNI wants to "design and manufacture smaller and faster semiconductors."
   "The $100 million university-based supercomputing center is 
    designed to continue work on nanoscale semiconductor technology 
    and develop nanotechnology innovations in energy, biotechnology, 
    arts, and medicine."

   [...]

   "Currently, circuit components are about 90 nanometers wide, 
    and according to the International Technology Roadmap for 
    Semiconductors, the components need to shrink to 45 nm by 
    2010, 32 nm by 2013, and 22 nm by 2016."

News.RPI.edu:: Rensselaer, IBM, and New York State Unveil New Supercomputing Center

[02 June 2007, top]

CITRIS Excited About Peta-Scale Computing
CITRIS is the Center for Information Technology Research in the Interest of Society. CITRIS creates information technology solutions for many of our most pressing social, environmental and healthcare problems.
   "Petascale computing is coming of age, opening powerful 
    new modeling opportunities for CITRIS applications. From 
    the exploration of protein folding at the atomic level to 
    long-range climate predictions and turbulence studies, the 
    new computers will give a broad range of users processing 
    power heretofore reserved for weapons research."

   "The computer industry will hit a wall unless it figures out 
    how to deal with large-scale parallelism." -- James Demmel, 
    Professor of Mathematics and Computer Science at UC Berkeley 
    and founding Chief Scientist at CITRIS

CITRIS-UC.org:: Peta Computing's Parallel Universe

[07 May 2007, top]

PS3 Machines Major Folding@Home Workers
Sony's PlayStation 3 (PS3) is part of Stanford University's Folding@Home project. The project's leader, Vijay Pande, stated:
   "The PS3 turnout has been amazing, greatly exceeding our 
    expectations and allowing us to push our work dramatically 
    forward. Thanks to PS3, we have performed simulations in 
    the first few weeks that would normally take us more than 
    a year to calculate. We are now gearing up for new simulations 
    that will continue our current studies of Alzheimer's and 
    other diseases."

Folding.Stanford.edu:: Folding@Home Distributed Computing

[28 April 2007, top]

Switzerland's CSCS Upgrading Their Cray
The Swiss National Supercomputing Centre (CSCS) is upgrading its HPC environment so that next year it can "run next-generation weather forecasts with two-kilometer resolution for MeteoSwiss. This will make Switzerland one of the first countries in Europe to move from the current standard forecast resolution of seven kilometers to the more accurate two-kilometer resolution."

CSCS's Cray supercomputer will also be used to "support Switzerland's scientific research community in disciplines including chemistry, engineering sciences, environmental science, life science, materials science, physics and other fields."

The peak performance of CSCS's supercomputer is being increased from its current "8.5 teraflops to an aggregate 22.8 teraflops."

[21 April 2007, top]

HPC Enabling 21st Century Informatics
High-Performance Computing (HPC) (or High-Productivity Computing) is enabling 21st century Informatics.

ASU's last Discovery Tour for Spring 2007 will be conducted by Dr. Sethuraman Panchanathan (Panch), Director of the new School of Computing and Informatics at ASU. The following was copied from the press release for the tour.

   "Panch will describe his new school and how the new school will 
    pursue informatics education and research in partnership with 
    the Arts, Media and Engineering program, the School of Human 
    Evolution and Social Change, the School of Life Sciences, the 
    Department of Mathematics and Statistics, the Department of 
    Psychology, the Biodesign Institute, the Global Institute for 
    Sustainability, W.P. Carey School of Business, the College of 
    Nursing and Healthcare Innovation, the School of Earth and Space 
    Exploration, the Center for Law, Science and Technology and the 
    College of Liberal Arts and Sciences."

The Discovery Tour is at 4:00pm on Tuesday, 8 May 2007, at the ASU Brickyard located on Mill Avenue in Tempe, Arizona.

At the spring 2007 Computer Science ATF meeting held April 6th, ASU representatives said in informal discussion that they were working on an "Informatics certificate."

[16 April 2007, top]

ORNL Heading Toward Peta-Scale Computing
Cray announced that the Department of Energy's Oak Ridge National Laboratory (ORNL) has completed a more-than-doubling of the capacity of its Cray supercomputer. The ORNL system was operating at 54 teraflops; it now has a performance capacity of 119 teraflops. Cray says, "The upgrade is an important milestone in ORNL's previously announced plan to provide its users with a petaflops-speed supercomputer in 2008."
   "Scientists and industry partners such as Boeing, Corning, 
    DreamWorks Animation and General Atomics will be able to 
    employ the enhanced Cray supercomputer configuration to 
    conduct high-impact projects as part of the Department 
    of Energy's INCITE program. In addition, ORNL staff and 
    guest researchers will use the Cray supercomputer to advance 
    the frontiers of neutron science, biological systems, energy 
    production and advanced materials."

Cray.com:: ORNL More Than Doubles Performance of Cray Supercomputer to 119 Teraflops

[14 April 2007, top]

PSC's BigBen Doing Big Things
Cray's BigBen supercomputer at the Pittsburgh Supercomputing Center is enabling biophysicists to make significant discoveries.
   "The National Health Council estimates that as many as 14,000 
    hospitalized Americans die annually because available antibiotics 
    no longer work effectively. Since they reproduce and mutate quickly, 
    bacteria are constantly evolving countermeasures to existing treatments. 
    Molecular biologists use high performance computers such as the 
    Cray XT3 supercomputer to help them win the 'arms race' against 
    harmful bacteria's chemical defenses."

BigBen allows researchers to "create complex simulations that help explain how certain enzymes produced by bacteria prevent antibiotic medicines such as penicillin and its derivatives from curing infections."

[04 April 2007, top]

Optimism is High for the Passage of H.R. 1068
On 12 March 02007, the High Performance Computing R&D Act (H.R. 1068) passed the House. A similar bill failed to pass the Senate last year, but optimism is high that the HPC R&D Act will be sent to President Bush before the end of 02007.

An argument could be made that U.S. politicians must pass this legislation solely because of Homeland Security.

HPCWire.com:: Congress Finally Getting Its HPC Act Together

[31 March 2007, top]

Supercomputing Programming Languages Coming Soon
A new collection of programming languages is being developed to help programmers program supercomputers. Sun Microsystems has Fortress, Cray has Chapel and IBM is working on X10.

John Mellor-Crummey, a computer science professor at Rice University, justified the need for new programming languages by saying: "Programming of parallel systems is much too hard today."
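
For a sense of what parallel programming looks like with today's mainstream tools, here is a small C/OpenMP sketch; the new languages aim to make this kind of parallelism a natural part of the language rather than a directive bolted onto a sequential loop.

   /* Summing a million numbers in parallel with OpenMP directives in C.
      Compile with: gcc -fopenmp sum.c */
   #include <omp.h>
   #include <stdio.h>

   int main(void)
   {
       const int n = 1000000;
       double sum = 0.0;

       /* the iterations are split across threads; the reduction clause
          combines the per-thread partial sums safely */
       #pragma omp parallel for reduction(+:sum)
       for (int i = 0; i < n; i++)
           sum += (double)i;

       printf("sum = %.0f using up to %d threads\n",
              sum, omp_get_max_threads());
       return 0;
   }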

IBM's X10 is a "parallel, distributed, object-oriented language developed as an extension of Java."

ComputerWorld.com:: Languages for Supercomputing Get 'Suped' Up

[16 March 2007, top]

IBM Cluster Solutions HPC Initiative
IBM is a leader in supercomputing and now they want to "make it easier for organizations to use clusters of servers in processing compute-intensive workloads. Often used in science and academic research, IBM is looking to widen the use of HPC in smaller businesses and departments of larger enterprises." In other words, supercomputing for everybody.

InformationWeek.com:: IBM Launches High-Performance Computing Initiative

[28 February 2007, top]

UC-Berkeley on Parallel Computing
UC-Berkeley posted a report about parallel computing. The last paragraph was noteworthy.
   "Since real world applications are naturally parallel and 
    hardware is naturally parallel, what we need is a programming 
    model, system software, and a supporting architecture that are 
    naturally parallel. Researchers have the rare opportunity to 
    re-invent these cornerstones of computing, provided they simplify 
    the efficient programming of highly parallel systems."

EECS.Berkeley.edu:: The Landscape of Parallel Computing Research: A View from Berkeley

[17 February 2007, top]

Servers Take Lots of Power To Run
Supercomputers require a super amount of power.

I read a blog posting that started with the following: "Want to stop global warming? Kill your web TV."

The poster wrote that, with respect to servers, "electricity use doubled between 2000 and 2005 and could spike another 75 percent by 2010. To put it another way, in 2005 it took the equivalent of 14 1,000-megawatt power plants to keep online the world's data centers owned by Internet giants like Google (GOOG), Microsoft (MSFT) and Yahoo (YHOO)." The poster added that by 2005 server farms "consumed 1.2 percent of the electricity generated in the U.S." There are reasons Google is establishing server farms in The Dalles, Oregon, and North Carolina.

Blogs.Business2.com:: Power Hogs: Server Farm Electricity Use Soars

[17 February 2007, top]

University of Edinburgh in Scotland Picks Cray
Cray Inc. has won an "$85 million contract to build a supercomputer for the academic community in the United Kingdom. UoE HPCX Ltd., a wholly owned subsidiary of the University of Edinburgh in Scotland, and the Engineering and Physical Sciences Research Council, which is Britain's main funding agency for engineering and physical sciences, awarded the contract."

According to the Cray press release, the supercomputer is a major component of a project called "High End Computing Terascale Resources" and it will be installed in the university's Parallel Computing Center.

On the same day as the U.K. deal, Cray announced 2006 4th-quarter results. Total quarterly revenue was $101.4 million. The company made money with net income of $8.7 million.

[16 February 2007, top]

IBM Supercomputers Up and Running at NOAA
IBM has completed the "construction of two new supercomputers as part of its nine-year CRM contract with the National Oceanic and Atmospheric Administration. In 2002, when the federal government awarded the $224.4 million contract, it ranked as IBM's largest contract ever."
   "Once the two new computers go online, the primary and backup 
    systems will rank as 36th and 37th on the top 500 list of world's 
    fastest supercomputers, IBM announced. The top spot on the list 
    belongs to IBM's Blue Gene/L system, which the company installed 
    at the Department of Energy's Lawrence Livermore National Laboratory, 
    in Livermore, Calif."

The NOAA's two new supercomputers will have "disk storage capacity of 160 terabytes. Each supercomputer will be able to process 14 teraflops when performing at maximum capacity. The machines will also be able to sort through 240 million global weather observations each day."

[04 February 2007, top]

Supercomputing Enabling Physics Approximations
Cray Inc. announced that "General Atomics researchers using a Cray X1E(TM) supercomputer have made a significant breakthrough in their ability to predict what happens inside an experimental fusion reactor, a milestone on the way to developing a stable and efficient new power source. Fusion is the nuclear reaction that fuels stars like the sun and has the potential to produce clean, almost limitless power here on Earth."

Cray's supercomputer is being used to "simulate the complex behavior of the super-heated gaseous fuel called plasma as it roils within a reactor. A fusion reactor spins plasma at a high rate of speed, building up pressure and immense heat that can reach 200 million degrees Fahrenheit."

Jeff Candy, principal scientist in the Energy Group at General Atomics, is quoted saying the following:

   "Approximation is the name of the game in physics. GYRO performs 
    a faithful gyrokinetic approximation of the fundamental physics 
    that occurs when the nuclei of deuterium and tritium atoms fuse 
    within a reactor's magnetic containment field."

[04 February 2007, top]

The State of Mississippi Into Supercomputing
The state of Mississippi is into supercomputing.
   "Cray Inc. announced it has won an order to provide two of the 
    world's most powerful supercomputers to the DoD supercomputing 
    center hosted by the U.S. Army Engineer Research and Development 
    Center (ERDC) in Vicksburg, Mississippi in 2007. Together, the 
    supercomputers will give a six-fold boost to the current computing 
    capabilities of the ERDC, which supports military and civil engineering 
    projects in the United States and around the world on behalf of the 
    Department of Defense (DoD) High Performance Computing Modernization 
    Program (HPCMP). The announcement also ensures that Mississippi will 
    continue to be among the leaders in the nation in total installed 
    supercomputing capability. Financial terms of the award from the 
    HPCMP were not disclosed."

Mississippi's Cray supercomputer is being boosted to over 40 teraflops (trillion floating point operations per second) by moving to dual-core AMD Opteron(TM) processors and doubling the memory.

[04 February 2007, top]

Fortress--Fortran For Supercomputers?
Supercomputing has given Fortran new life. Sun Microsystems has a new programming language named Fortress that they have released as open source. Sun says Fortress is a "replacement for the 50-year-old Fortran language" and that it was designed through a Defense Department supercomputing project. Sun hopes that "Fortress will be able to solve the problem of programs that do not scale very well, allowing them to utilize parallelism." Fortress allows programmers to use "ordinary mathematical expressions instead of having to translate formulas into the intricate syntax of computer languages."

[04 February 2007, top]

Utility Computing Continues to Grow
Old stuff from late-2006...

The Mercury News had an article on 'Utility' computing. Here are a couple of quotes from the article.

   "Upstart companies are challenging the industry giants 
    in a hot new market: on-demand software. It's offered 
    as a subscription over the Internet rather than as 
    installed software."

On-demand software -- pay for only what you use.

   "It's New Software vs. Old Software, with the new going 
    mainstream. And it's part of the "utility" computing 
    movement in which technology -- processing power, data 
    storage or software -- is provided in the way power or 
    water is delivered."

Programs can get as fat as they like and users don't need to keep buying more and more hardware to run them.

[28 January 2007, top]

INCITE Awards 95 Million Hours of Supercomputing Time
Supercomputers are expensive; therefore, it makes sense for those who have underutilized computer power to share it with others. This is especially true with the supercomputers belonging to the U.S. government.

InformationWeek.com reports that the "U.S. Department of Energy's Office of Science awarded 95 million hours of computing time on supercomputers to 45 projects." The awards were given under a program called INCITE: "Innovative and Novel Computational Impact on Theory and Experiment."

The supercomputing time will be used to "design quieter cars, improve commercial aircraft design, advance fusion energy, understand nanomaterials, and further studies of supernova and global climate change and the causes of Parkinson's disease."

One final item from the InformationWeek.com posting that I found interesting: "It would take more than 114 years to run 1 million processing hours on a single-processor desktop computer. A project receiving 1 million supercomputing hours from DOE would run on 2,000 processors for 500 hours, or about 21 days."
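
The arithmetic is worth spelling out: an allocation is measured in processor-hours, so wall-clock time depends on how many processors run at once. Here is a minimal C sketch using the 1-million-hour and 2,000-processor figures from the quote.

   /* Converting an allocation of processor-hours into wall-clock time. */
   #include <stdio.h>

   int main(void)
   {
       const double cpu_hours = 1000000.0;   /* a 1-million-hour award */

       double one_cpu = cpu_hours / 1.0;     /* one desktop processor  */
       printf("1 processor:      %.0f hours (about %.0f years)\n",
              one_cpu, one_cpu / (24.0 * 365.25));

       double on_2000 = cpu_hours / 2000.0;  /* 2,000 processors       */
       printf("2,000 processors: %.0f hours (about %.0f days)\n",
              on_2000, on_2000 / 24.0);
       return 0;
   }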

InformationWeek.com:: DOE Awards Private, Public Groups 95 Million Hours Of Supercomputing Time

[09 January 2007, top]

About the Grid Utilitarian
The Grid Utilitarian is a blog devoted to high-performance computing. This includes grid-based utility computing and 21st century Informatics. This blog was created on 3 October 2004 and it starts 2007 with 103 postings.

Grid Utilitarian Archives: 2006 | 2005 | 2004

[01 January 2007, top]


Author: Gerald D. Thurman [deru@deru.com]
Last Modified: Saturday, 05-Jan-2013 11:17:33 MST

Thanks for Visiting