GDT::Grid::Utilitarian::Archive::Year 2006

Grid Utilitarian
Intel Helping Africa Do Supercomputing
The South African Council for Scientific and Industrial Research (CSIR) in Pretoria has a supercomputer for researchers to use. The supercomputer has a peak output of one teraflops. In the world of supercomputing, one teraflops is not much, but for South Africa it is a lot of computing power. South African scientists are actively researching "epidemics such as AIDS, malaria, and tuberculosis." The system was donated to the CSIR by U.S.-based Intel Corporation.
   "Winston Hide, director of the South African National Bioinformatics 
    Institute at the University of the Western Cape, explained the benefits 
    of the new system by saying, 'It's like using the brightest possible 
    search light in a cave as opposed to a torch.'"

Kudos to Intel and best of good fortune to African researchers.

CNET Supercomputer dedicated to disease research in Africa

[20 December 2006, top]

Cray Floats More Stock
Cray Inc. sold 8,625,000 shares of common stock at a price of $10.00 per share, which included an over-allotment of 1,125,000 shares. The company will receive net proceeds of about $81.3 million. Kudos to Cray!

[19 December 2006, top]

Australia Getting Into Supercomputing
Australia knows the importance of supercomputing in the 21st century.
   "Monash University's Professor David Abramson has been building 
    high-performance supercomputers since doing his PhD in computer 
    science at Monash in the 1980s and then again at CSIRO in the 
    mid-1980s. For the past decade he has been working on the Nimrod 
    project, which enables a simulation program to be run in different 
    places using computers that are run in parallel - a virtual 
    supercomputer that gives a simulation program the strength 
    to do what-if scenarios."

Supercomputers allow all kinds of "what-if scenarios" to be tested. For example, in 2006, Nimrod software was used to "explore whether burning the savanna grasslands in Australia could affect the arrival of the summer monsoons." Supercomputers: strength in numbers

[11 December 2006, top]

Pittsburgh's BigBen Supercomputer Getting Bigger
The Pittsburgh Supercomputing Center (PSC) is increasing the capacity of its "BigBen" Cray XT3 supercomputer from a 10-teraflops system to over 21 teraflops. BigBen is part of the NSF's TeraGrid computing infrastructure that provides "capability" computing. The PSC is replacing BigBen's single-core AMD Opteron(TM) processors with dual-core Opteron processors to boost peak performance. In addition, they are increasing system memory from two to four terabytes. A PSC director is quoted saying: "In the first year since becoming a production resource, BigBen has made possible a number of remarkable achievements. In fact, the system has been in such demand among scientists that it is now the most oversubscribed computing resource on the TeraGrid. We look forward to more new insights into important scientific problems as a result of this upgrade."

[11 December 2006, top]

Keep an Eye On Dan Reed at RENCI
Dan Reed is a supercomputing guru and these days he is Director of the Renaissance Computing Institute (RENCI) located in Chapel Hill, North Carolina. RENCI is a collaborative effort between the University of North Carolina, Duke University, North Carolina State University and the state of North Carolina. For 20 years, Reed was at the University of Illinois at Urbana-Champaign, where he "led the National Center for Supercomputing Applications and the university's Computer Science Department." Reed is convinced that supercomputing is going to lead to great things. Super-computer boss has a to-do list for a better future

[01 December 2006, top]

MammoGrid Helping To Fight Breast Cancer
MammoGrid is a project that will use grid technology to "develop a European-wide database of mammograms that will be used to investigate a set of important healthcare applications as well as the potential of this Grid to support effective co-working between healthcare professionals throughout the EU."
   "Breast cancer is the most common cancer in women. In the EU 
    and the US, one in eight will develop it at some point in their 
    lives, and it will kill one in 28. But harnessing the power of 
    the grid could help increase the accuracy of diagnoses." Harnessing grid computing to save women's lives

[01 December 2006, top]

China's Integral Utility Supercomputing Grid
Steve Chen is a name to keep track of when it comes to the world of grid supercomputing.
   "He is working on a project of putting many HPC systems 
    in the different Chinese states and linking them together 
    as a large and highly efficient Integral Utility Supercomputing 
    Grid (IUGS) or Integral Grid. The Grid Supercomputers will be 
    made available as a commercial service on the Internet, allowing 
    highly productive, real-time, interactive and collaborative 
    enterprise or personal applications in science, engineering, 
    commerce, telecommunication, healthcare, education, media, 
    financial and logistics."

These days Chen is working on a project called "Third-Brain," which is an "architecture far beyond the Petaflop/s system. Emulating and augmenting the real Human Brain, the design draws from a number of disciplines and incorporates this in an innovative approach." Interview with Steve Chen

[26 November 2006, top]

DARPA Allocates $500 Million For Supercomputing
Cray Inc. announced it had been awarded a $250 million agreement from the U.S. Defense Advanced Research Projects Agency (DARPA) to develop a new supercomputer that will use "hybrid computing that integrates a range of processing technologies into a single scalable platform." The Cray CEO and president, Peter Ungaro, was quoted saying: "This is a great day for Cray and the worldwide supercomputing community."

DARPA selected two phase III performers for its High Productivity Computing Systems (HPCS) program. IBM was the other company selected and will receive $244 million.

Bottom-line: DARPA has allocated almost $500 million for the development of High-Performance Computing Systems and this is good news for our country and the world.

[22 November 2006, top]

30 Times Faster Than The Fastest Computer
As of 16 November 2006, BlueGene is the fastest computer in the world at 280.6 teraflops. BlueGene is installed at the Lawrence Livermore National Laboratory in Livermore, California. IBM and the DOE have initiated a five-year project to build a supercomputer that will be 30 times faster than BlueGene.
   "Researchers are hoping to use the new supercomputer to 
    monitor the aging process of the U.S. nuclear stockpile, 
    speeding genome sequencing and modeling climate changes." 

The NNSA (National Nuclear Security Administration) and the Office of Science will each contribute $17.5 million, and IBM will contribute $23 million. IBM Teams With DOE To Build Supercomputer 30X Faster Than BlueGene

[16 November 2006, top]

The Cray XT4 Supercomputer is Available
Cray Inc. announced the "availability of its next-generation massively parallel processing (MPP) system, the Cray XT4(TM) supercomputer. The powerful new supercomputer, previously code-named 'Hood,' is designed to easily and efficiently scale to a peak performance of more than one petaflops (1,000 trillion floating-point operations per second). The Cray XT4 supercomputer debuts with several large system orders announced earlier this year from leading organizations, including the Oak Ridge National Laboratory (ORNL), the National Energy Research Scientific Computing Center (NERSC) and the Finnish IT Center for Science (CSC)."

The Cray XT4(TM) is equipped with AMD Opteron dual-core processors that can be upgraded to AMD's quad-core processing technology when the technology advances.

According to the National Energy Research Scientific Computing Center at Berkeley Lab, their real-world performance test indicates the new Cray XT4 system will "deliver over 16 teraflops on a sustained basis."

[13 November 2006, top]

Harvard Getting an IBM Supercomputer
Harvard University is getting a supercomputer named the "CrimsonGridBGL." The supercomputer will be able to do 11 teraflops. Harvard researchers are going to use the computing power to study "cells, the circulatory system and the heart, computer systems, integrated circuits and the formation of galaxies. It is also likely to help university departments analyze financial risk and epidemiology." Harvard, IBM To Deploy Academia's Largest Blue Gene Supercomputer

[22 October 2006, top]

CSC Finland Getting a 70 Teraflops Cray
CSC Finland, the Finnish IT center for science, will "acquire a Cray massively parallel processing (MPP) system delivering over 70 teraflops of compute power to CSC's high performance computing users." The new Cray system will replace a "four-year-old cluster system that can no longer keep pace with the performance needs of the Finnish research community, which is currently doubling its computer usage every 16 months." The Finnish researchers are doing research in "areas such as physics, chemistry, nanotechnology, linguistics, bioscience, applied mathematics and engineering."

[09 October 2006, top]

ASU To Help UT-Austin Build a Supercomputer
On 29 September 2006, the Grid Utilitarian submitted the following "Letter to the Editor" of the Arizona Republic.
   Texas has been a leader in high-performance computing
   and that continues to be true.  I read where the University
   of Texas has received a $59 million grant to build a supercomputer
   that will run at 400 trillion calculations per second.

   Why do we care?

   Arizona State University will be collaborating with the University
   of Texas along with Cornell University and Sun Microsystems on
   building this supercomputer.

   Kudos to ASU for this accomplishment.

According to an ASU Fulton School of Engineering press release, the supercomputer system is to "achieve a peak performance in excess of 400 trillion floating point operations per second, providing more than 100 trillion bytes of memory and 1.7 quadrillion bytes of disk storage." The $59 Million TACC NSF Grant

[07 October 2006, top]

IBM To Build a Petaflop Computer By 2008
The BBC posted a supercomputing article on 7 September 2006 that started as follows.
   "Computer giant IBM will build the world's most powerful 
    supercomputer at a US government laboratory."
   "The machine, codenamed Roadrunner, could be four times 
    more potent than the current fastest machine, BlueGene/L, 
    also built by IBM."

The BBC posting included the following.

   "The new machine will be able to achieve 'petaflop speeds,' 
    said IBM. One petaflop is the equivalent of 1,000 trillion 
    calculations per second."

   "Running at peak speed, it will be able to crunch through 
    1.6 thousand trillion calculations per second."

It takes lots of space to house a supercomputer.

   "When Roadrunner is finished in 2008 it will cover 
    12,000 square feet (1,100 square metres) of floor 
    space at Los Alamos National Laboratory in New Mexico."

Supercomputers are getting more and more and more super.

[16 September 2006, top]

America Must Supercompute To Out-Compete
Because of global competition and our ever-flattening world, U.S. companies must take action to remain competitive.
   "Companies that embrace supercomputing can help put 
    U.S industry back on competitive footing in the 
    modern world economy."

According to one viewpoint, supercomputing can enable U.S. companies to "take an evolutionary step and start looking to technologies that can enable them to build better products, cut costs of production, quickly analyze and solve assembly line problems and streamline overall efficiency."

It is not hard to believe what the Washington-based Council on Competitiveness believes: "the country that out-computes will be the one that out-competes." Viewpoint -- Supercomputing: The Next Industrial Revolution

[13 September 2006, top]

Swim Suit Design; Weather Forecasting; Black Holes
I had a handful of items sitting around waiting to be posted to the Grid Utilitarian (i.e., they were getting old). I decided to group them into this single posting, which shows supercomputers being used to design swimsuits, predict global weather patterns, and aid the study of black holes.

1) Using a supercomputer to design swimsuits?

   "At Speedo's Aqualab research and development facility, 
    data of Olympic swimmers that wear body suits are processed 
    by the performance-intensive Fluent Computational Fluid Dynamics 
    (CFD) program on an SGI Altix high-performance computing system 
    offered by Silicone Technology." Swimsuit Design Uses Supercomputing

2) Given how inaccurate weather prediction is locally here in the Valley of the Sun, maybe the weather forecasters need more computing power. Supercomputers Cast Light On Cloudy Puzzle Of Global Weather

3) I don't think it would be fun to be captured by a "black hole" from which there is no escape. Astronomers Use Supercomputers To Study Atoms Linked To Black Holes

[09 September 2006, top]

DOE Wants a Petaflops Supercomputer
I don't know why I have been sitting on this HPC news, which was reported in late June 2006.
   "The U.S. Department of Energy (DOE) has ordered the first 
    petaflop supercomputing system and an upgrade of its Blue 
    Gene system from Cray.  The new system reportedly will attain 
    1,000 trillion floating-point operations per second (teraflops), 
    or one petaflop. Oak Ridge scientists plan to use the system to 
    tackle problems in energy, biology, and nanotechnology." DOE increases supercomputer stockpile

[31 August 2006, top]

Supercomputers Must Compute Faster
The headline says it all: "Computationally intensive research creates insatiable demand for faster supercomputers." We need superdupercomputers or super^2computers (later to be replaced by super^3computers and so on). It appears as though if we create computing cycles they will be used. DOE raises the bar on supercomputing

[09 August 2006, top]

Senate Subcommittee Learning About HPC
U.S. politicians are in serious need of solid computing advice -- especially when it comes to our next era of computing. Hopefully, our senators will pay close attention to the free education they are getting. Senate Subcommittee Hears Testimony on HPC

[03 August 2006, top]

Supercomputing Helping Bird Flu Researchers
There is a never ending stream of applications that will benefit from high-performance computing. The following was obtained from the Edupage mailing-list, which is a service of EDUCAUSE.

   "Researchers looking into how to avoid widespread outbreaks 
    of the avian flu will take advantage of upgrades to a 
    supercomputer at Swansea University in Wales to perform 
    complex modeling calculations. The processing power of 
    the computer, known as Blue C, has been upgraded to more 
    than two teraflops. The improvements also lowered the energy 
    usage of the machine, cutting its electricity bill by 50,000 
     pounds per year."

How cool... more computing power requiring less electrical power. University Supercomputer Enlisted in Bird Flu Research

[22 July 2006, top]

SGI Might Go Belly-Up
I'm still majorly disappointed that SGI wiped out existing shareholders as part of its restructuring. I will be perfectly content if SGI stops making computing history. Jason Stamper's Blog: Will SGI Become the Next Data General?

[19 July 2006, top]

U.K. Picks Cray for HECToR Hardware
The 4 April 2006 posting to the Grid Utilitarian was about the U.K. allocating funds to build HECToR. It turns out that Cray Inc. has become a "preferred bidder" to provide HECToR's hardware.
   "Cray Inc. announced that the Engineering and Physical Sciences 
    Research Council (EPSRC), the main funding agency in the United 
    Kingdom for research in engineering and the physical sciences 
    and the managing agent on behalf of the other Research Councils 
    for High Performance Computing, has selected Cray as the preferred 
    bidder to provide the computing hardware for the Councils' next 
    generation national high performance computing service for the 
    UK academic community. This project, commonly referred to as 
    HECToR, which stands for High End Computing Terascale Resources, 
    is expected to operate for up to six years and have an initial 
    theoretical peak capability of over 50 teraflops. The contract 
    is expected to provide for customer options for additional 
    capability in the future."

Kudos to Cray Inc. On 3 July 2006, Cray's stock hit a new 52-week high of $10.63 before closing at $10.50. UK's EPSRC Selects Cray Inc. to Negotiate Multi-Year Contract for HECToR Procurement

[04 July 2006, top]

Stanford Center for Computational Earth and Environmental Studies
Congratulations to Stanford University on the opening of the Stanford Center for Computational Earth and Environmental Studies (CEES). The CEES is a "partnership between the School of Earth Sciences, the Computer Systems Laboratory, private industry and government." Stanford Center for Computational Earth and Environmental Studies

[04 July 2006, top]

More Super Supercomputers Needed
The following was obtained from ACM TechNews: "Henry Tufo, a computer scientist at the University of Colorado, Boulder, says 'Petascale systems will open up new vistas (for) scientists.'"

And he is right.

   "The federal government is pushing computer scientists 
    and engineers to greatly step up the speed and capacity 
    of America's supercomputers.

    Officials say much faster performance is needed to 
    handle a looming tidal wave of scientific, technical 
    and military data."

In the metric system, peta implies quadrillion just like tera implies trillion, giga implies billion, and mega implies million. Peta-scale computing is coming (maybe by 2010?). I believe Kurzweil when he says "the singularity is near." Supercomputers are about to get a lot more super
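For readers keeping the prefixes straight, here is a small sketch of the arithmetic (the Python is my illustration, not anything from the article; the 280.6-teraflops figure is Blue Gene/L's, quoted elsewhere in this archive):

```python
# SI prefixes from the paragraph above, written out as powers of ten.
MEGA = 10**6    # million
GIGA = 10**9    # billion
TERA = 10**12   # trillion
PETA = 10**15   # quadrillion

# A petaflops machine is a thousand times faster than a teraflops machine.
print(PETA // TERA)  # 1000

# Blue Gene/L's 280.6 teraflops, expressed as a fraction of a petaflops:
print(round(280.6 * TERA / PETA, 4))  # 0.2806
```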

[22 June 2006, top]

Turning Data Into Knowledge at U. of Bristol
The University of Bristol is getting into supercomputing thanks to a consortium led by ClusterVision, IBM and ClearSpeed.
   "New insights into the structure of space and time, 
    climate modeling, and the design of novel drugs, 
    are but a few of the many research areas that will 
    be transformed by the installation of three supercomputers 
     at the University of Bristol."

As stated in the ScienceDaily report: supercomputing is about turning "data into knowledge." Supercomputers To Transform Science

[18 June 2006, top]

Pittsburgh Supercomputing Center Turns 20
Happy 20th birthday to the Pittsburgh Supercomputing Center.
   "The story goes like this: Carnegie Mellon physics professor 
    Michael Levine proposed that he and his research colleague, 
    University of Pittsburgh physics professor Ralph Roskies, 
    submit a proposal in response to a National Science Foundation 
    (NSF) solicitation to fund supercomputing centers. Roskies 
    was skeptical. "Why would they give a supercomputer to us?" 
     he asked."

Roskies' concerns were moot because Pittsburgh got its supercomputer, and on 16 June 1986 the Pittsburgh Supercomputing Center was born. Pittsburgh Supercomputing Center Celebrates Its 20th Birthday

[17 June 2006, top]

Cray To Supply ORNL with a Petaflop Supercomputer
Cray Inc. announced they are going to supply the Oak Ridge National Laboratory (ORNL) with the "world's first petaflops-speed (1,000 trillion floating-point operations per second) supercomputer." Lucky ORNL and kudos to Cray. Cray Signs $200 Million Contract to Deliver World's Largest Supercomputer to Oak Ridge

[16 June 2006, top]

Microsoft Getting Into Cluster Computing
I must admit the name of Microsoft's computing cluster OS caused me to giggle: this is 2006, yet the product carries "2003" in its name.
   "Windows Compute Cluster Server 2003 is designed for 
    cluster-based supercomputing systems that typically 
    use collections of off-the-shelf computer components." Microsoft Launches Supercomputing OS

[12 June 2006, top]

We're Programming the Wrong Computers
Dr. Thomas Sterling is a guru in cluster computing thanks to his work on the Beowulf Project. He is currently a professor at Louisiana State University where he is "developing the ParalleX Model for future generation parallel computing and is co-investigator on DoE, NSF, and NASA sponsored research projects."
   "We are programming the wrong computers. Almost all our 
    computers are designed to be sequential with very little 
    scalability. They are not designed to be large scale highly 
    parallel HPC. They do not address the problems of parallel 
    overheads, latency hiding, resource contention, starvation 
    - a combination of parallelism and load balancing. Therefore, 
    programmers are forced to explicitly overcome these barriers 
    with their codes, through painstaking manipulation of details, 
    some of which is not even directly accessible (e.g. caches)." Thomas Sterling Speaks to the Future of HPC

[11 June 2006, top]

SGI Re-organizes, Common Stock Goes to Zero
This is old news from 8 May 2006.

SGI filed chapter 11 and its common stock was left worthless. For some reason (wishful thinking), I thought SGI would not completely wipe out existing shareholders, but they did. SGI Reorganization

[07 June 2006, top]

Cray Does a 1-for-4 Reverse Stock Split
Cray Inc., maker of supercomputers, announced that shareholders approved a 1-for-4 reverse stock split. It appears as though the company will stay in business for the short term. Press Release

[07 June 2006, top]

China Earthquake Observation Network Picks Novell Linux
Novell announced that the China Earthquake Administration (CEA) selected SUSE® Linux Enterprise Server as its "operating platform for CEA's Digital Earthquake Observation Network." A CEA spokesperson is quoted saying: "Novell and SUSE Linux are well-established names in the Linux environment in China."

Typically, earthquake forecasting is the kind of work that calls for HPC, but there was no mention of supercomputing in Novell's press release. Press Release

[07 June 2006, top]

RPI Building a Supercomputer for Nano Research
Rensselaer Polytechnic Institute (RPI) is developing a supercomputing center that will be "the largest at a university and one of the 10 largest worldwide." The Computational Center for Nanotechnology Innovations will be used to "study nanotechnology and its application in semiconductors. Researchers will try to shrink the size of some components from 65 nanometers today to 22 nanometers by 2015." Companies participating in development of the new center include "IBM, Advanced Micro Devices (AMD), and Cadence." [source:] New Supercomputing Center To Advance the Science of Nanotechnology

[17 May 2006, top]

Researchers Using NCSA Build a Virus
Nature reports on how researchers using the NCSA (National Center for Supercomputing Applications) were able to model a virus (not a computer virus, but a Mother Nature virus). The problem required the use of highly parallel processing. In addition, a nanosecond is long compared to a femtosecond.
    "Klaus Schulten at the University of Illinois, Urbana, 
     and his colleagues built a computer model of the satellite 
     tobacco mosaic virus, a tiny spherical package of RNA."

    "Their success depended on the latest version of a computer 
     program called NAMD, which Schulten and his colleagues have 
     built over the past decade to simulate biological molecules. 
     The program allows the several hundred different processors 
     within a supercomputer to work in parallel on the same problem."

    "Running on a machine at the National Center for Supercomputing 
     Applications, Urbana, the program calculated how each of the 
     million or so atoms in the virus and a surrounding drop of salt 
     water was interacting with almost every other atom every 
     femtosecond, or millionth of a billionth of a second."

    "The team managed to model the entire virus in action for 
     50 billionths of a second. Such a task would take a desktop 
     computer around 35 years." Supercomputer builds a virus

[06 May 2006, top]

Cray Supporting Adaptive Supercomputing
It appears as though Cray Inc. may be a survivor. A couple of months ago Cray announced a "redesign of its supercomputers that will integrate blades into a conventional chassis, with software serving as a hub, allocating the tasks to the various blades. Cray is best known for its vector processing technology, which has garnered federal funding for its applications in nuclear weapons design and code cracking, though clustered systems have eroded the demand for specialized technologies such as Cray's."

Cray's press release included the following.

   "Intel and AMD are investing heavily in multicore technologies, 
    though they are encountering limitations with passing data over 
    numerous chips. Others in the industry are using FPGAs to perform 
    high-speed calculations. The vertical organization of circuit boards 
    in servers is now catching on with supercomputers; SGI announced a 
    blade design in November for its Altix systems. Cray's 
    adaptive-supercomputing technology will transfer chores 
     among blades powered by AMD Opteron chips, FPGAs, or 
     Cray's own vector processors." 

As of 6 May 2006, Cray's stock was selling for $1.92 per share.

[06 May 2006, top]

Indiana University Getting an IBM e1350 Cluster
Indiana University is getting a $9 million IBM supercomputer--one of the 20 fastest in the world--that will be able to perform more than 20 trillion calculations a second. IU's supercomputer will enable research in the "formation of planets, weather patterns, and molecular-level biology." A high-speed fiber-optic network will enable researchers at other Indiana campuses to utilize the computing power of the new system. IU is hopeful that their new supercomputer will enable them to get $800 million in research grants. Purdue University is happy IU is getting a supercomputer because it helps make the state of Indiana a supercomputer leader. Indiana University -- Fastest University Supercomputer

[06 May 2006, top]

DARPA High Productivity Computing Systems
DARPA understands the importance of High Performance (Productivity) Computing and is working with industry to "develop the ability to manufacture and deliver a petaflop-class computer that is substantially easier to program and use than the computers the industry is evolving toward today." High Productivity Computing relies on powerful floating point and integer arithmetic, large memories, and high bandwidth. [Note: "peta" is 10^15, which is 1,000 trillion.] HPCS: The Big Picture

[06 May 2006, top]

Two Future Resources: and
During the summer of 2006 I am planning on spending time at the and websites.

[Extra] The Global Grid Forum (GGF) and the Enterprise Grid Alliance (EGA) are merging. Both organizations had the same goal: "accelerate the pervasive adoption of grids worldwide." { Competing grid bodies to merge }

[23 April 2006, top]

U.K. Allocates Funds For HECToR
The United Kingdom has allocated £52m to build "HECToR." HECToR is the "High-End Computing Terascale Resource" and it will be owned by the Research Councils of the UK.

According to the BBC story, the new supercomputer could run at speeds of up to 100 teraflops.

The BBC reported that HECToR will not come close to matching Blue Gene/L's 280.6 teraflops and that Blue Gene/L has "not reached its maximum performance, thought to be in excess of 367 teraflops." Boost for UK's superfast computer

[04 April 2006, top]

Western Australian Supercomputer Program Chooses a Cray
Cray, Inc. announced that the "University of Western Australia will install a Cray XT3(TM) system as part of the Western Australia Supercomputing Program (WASP). Among the research activities planned for the new supercomputer are major large-scale computational studies and simulations in the areas of geophysics, chemistry, astrophysics, biology, rock mechanics, genetic epidemiology, physics and quantum mechanics, and water research." News Release

[24 March 2006, top]

Sun Grid Hit By DOS Attack
Build it and they will attempt to attack/crack it.
   "Sun Microsystems' Grid, a publicly available computing 
    service, was hit by a denial-of-service network attack 
    on its inaugural day."

The service hit by the DoS (denial-of-service) attack was one that converts "blog entries into podcasts." And according to the CNET report, Sun handled the attack quickly and without major service disruption.

CNET Sun Grid hit by network attack

[24 March 2006, top]

Sun Grid--Powered by the Grid
Sun Microsystems now offers "pay-as-you-go access to its Sun Grid Compute Utility to U.S. customers." In a nutshell, customers buy processing power from Sun and "only pay for processing cycles that they use. With its new offering, Sun will bill users $1 per hour per CPU through online payment service PayPal." The Sun Grid is effectively a realization of their long time motto: "The Network is the Computer." Powered by the Grid
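As a quick illustration of the pay-as-you-go model ($1 per CPU per hour is from the announcement; the job size below is a made-up example):

```python
# Sun Grid pricing from the announcement: $1 per CPU per hour,
# billed only for the processing cycles actually used.
RATE_USD_PER_CPU_HOUR = 1.00

def job_cost(cpus, hours):
    """Estimated bill for running `cpus` processors for `hours` hours."""
    return cpus * hours * RATE_USD_PER_CPU_HOUR

# A hypothetical 100-CPU job that runs for 8 hours:
print(job_cost(100, 8))  # 800.0 (dollars)
```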

[24 March 2006, top]

HP Helping Develop ChinaGrid
Many politicians have laced into the "gang of four" (Google, Yahoo, Microsoft, and Cisco Systems) for how they have been doing business with the Chinese government. Is Hewlett-Packard going to make it a gang of five?
   "Hewlett-Packard has joined forces with the Chinese 
    government to develop the ChinaGrid, which, when 
    finished, will be one of the world's largest grid 
    computers with a full 15 teraflops of computing power. 
    It will consist mostly of HP ProLiant and HP Integrity 

Lots of computing power. It only seems reasonable that one of the world's largest grids be in the world's most populated country.

   "HP officials said that the grid initiative by the Chinese 
    government will extend I.T. resources and services to 
    thousands of researchers and the more than 290 million 
    students in the country's university system."

290 million Chinese students is approximately 8.3 million less than the entire population of the United States.

   "The ChinaGrid facility, which opened its doors during 
    the last week of February, is running under the direction 
    of the China Ministry of Education."

   "The grid will serve several purposes, including powering 
    a Web-based language-instruction application at a Hong 
    Kong university, a suite of bioinformatics applications, 
    and a videoconferencing system."

These days, bioinformatics is a popular form of informatics, and informatics requires high-performance visualization systems.

[09 March 2006, top]

Hollywood Production; Swedish Physicists; Korean Weather Prediction
Although SGI is a penny stock, that hasn't stopped major supercomputer users from selecting SGI solutions. In addition, it appears as though Cray is experiencing a corporate turnaround.

(1) Ascent Media Group (AMG), an industry leader in "content creation, post-production and distribution of film and television," announced it had selected SGI server, storage and networking technology as the "heart of a new state-of-the-art facility in Burbank, Calif." The SGI hardware and software is part of AMG's "data-centric production network solution, known as ProdNet, which offers studio clients ultra-secure methods for accommodating a large variety of deliverables." The SGI equipment will be housed in one of the "most modern, most secure, all-digital facilities in the world." The SGI systems will be "dedicated to manufacturing, repurposing, and distributing large media assets in huge volumes, with no concession to bandwidth limitations."

(2) The Sweden-based National Supercomputing Center (NSC) at Linkoping University (LiU) has deployed a new SGI supercomputer. The system will "allow physicists and other researchers from throughout Sweden to break through computational barriers created by complex computations." The researchers selected a 64-processor SGI® Altix® system equipped with half a terabyte of memory. Their first project involves researching the "manufacture of 'organic electronics' that could serve as a low-cost, easily manufactured alternative to silicon. The more LiU researchers can learn about the crystal structure of such systems, the better they can assess the ability of organic materials to reliably carry electrical charges."

(3) Speaking of Cray Inc., the Korea Meteorological Administration (KMA) announced that it has put into production the "fastest operational numerical weather prediction system in the world. KMA takes advantage of the processing speed of its new Cray X1E(TM) supercomputer to facilitate development and operational services for long-range weather prediction and climate study, resulting in more accurate and timely weather, seasonal climate and ocean wave forecasts." KMA's Cray X1E system has 1,024 processors that deliver a peak performance of 18.5 teraflops.
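
For scale, a quick back-of-the-envelope check (our own arithmetic, using only the figures quoted above) shows what each of KMA's 1,024 processors contributes to that 18.5-teraflops system peak:

```python
# Figures from the article: KMA's Cray X1E peaks at 18.5 teraflops
# across 1,024 processors. The per-processor number below is simple
# arithmetic, not a vendor specification.
peak_tflops = 18.5   # system-wide peak, in teraflops
processors = 1024

per_proc_gflops = peak_tflops * 1000 / processors  # teraflops -> gigaflops
print(f"~{per_proc_gflops:.1f} gigaflops peak per processor")  # ~18.1
```

Roughly 18 gigaflops of peak per processor, which is in line with the X1E's vector-processor design emphasis.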

[28 February 2006, top]

SGI Says Bankruptcy May Be Its Future
Silicon Graphics announced that the company may have to file for bankruptcy if some form of turnaround cannot be successfully implemented. SGI produces highly regarded high-performance computing systems, yet the company may not be able to make it into the next era of high-performance computing.

SGI warns that bankruptcy might be year-end option

[13 February 2006, top]

U.K.'s AWE Selects Cray Supercomputer
It continues to look like Cray Inc. can stay competitive in the next era of high-performance computing. Its stock is no longer a penny stock, and research organizations continue to select Cray for their supercomputing needs. Cray announced that it will "provide the United Kingdom's AWE plc. with one of the world's most powerful supercomputers, a Cray XT3(TM) system with peak performance of over 40 teraflops (trillions of calculations per second)." AWE is the Atomic Weapons Establishment and it is "responsible for providing warheads for the United Kingdom's nuclear deterrent." The following quote comes from Dr. Brian Bowsher, AWE's Director for Research and Applied Science.
   "This investment will enable us to make advances on a range 
    of scientific fronts -- including weapon physics, materials 
    science and engineering -- which will underpin our continued 
    ability to underwrite the safety and effectiveness of the 
    Trident warhead in the Comprehensive Test Ban era."

Cray Selected by UK's AWE to Provide One of World's Largest Supercomputers

[28 January 2006, top]

Grid Technology Helping With Drug Discoveries
Grid computing is working well these days, and it is only going to get better as we slowly transition into the next era of computing.
   "Currently drug discovery seeks compounds that can 
    inhibit or kill invading parasites and infections, 
    but there are potentially millions of candidate compounds. 
    It can take 10 years to discover a drug and another 10 to 
    get it approved."

   "Grid technology, where the resources of many computers in 
    a network are applied to a single problem at the same time, 
    however, can reduce candidate compounds from millions to 
    thousands or even hundreds, isolating the most promising 
    candidates and speeding up the discovery process."

Networking Computers To Help Combat Disease
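
The screening idea in the quote above can be sketched in miniature: spread a pool of candidate compounds across many workers, score each one, and keep only the most promising few. Everything below is a toy illustration — the `score` function is a made-up stand-in for a real docking or binding-affinity computation, and a real grid distributes the work across many networked machines rather than local processes:

```python
# Toy sketch of grid-style drug screening: fan candidate "compounds"
# out across worker processes, score each, keep the best few.
from concurrent.futures import ProcessPoolExecutor

def score(candidate: int) -> float:
    """Hypothetical affinity score in [0, 1); higher is more promising."""
    return (candidate * 37) % 101 / 101.0

def screen(candidates, keep=10):
    """Score all candidates in parallel; return the top `keep` pairs."""
    with ProcessPoolExecutor() as pool:
        scored = zip(candidates, pool.map(score, candidates, chunksize=256))
        return sorted(scored, key=lambda cs: cs[1], reverse=True)[:keep]

if __name__ == "__main__":
    top = screen(range(10_000))
    print(top[0])  # the single most promising candidate and its score
```

The pattern is embarrassingly parallel, which is exactly why grid computing suits it: each candidate is scored independently, so adding machines shrinks wall-clock time almost linearly.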

[25 January 2006, top]

IBM a Leading Supercomputing Company
IBM is a supercomputer company.

[12 January 2006, top]

About the Grid Utilitarian Blog
The Grid Utilitarian blog contains postings about HPC (High-Performance Computing). HPC topics include supercomputing, grid computing, and utility computing. The Grid Utilitarian was started in October 2004 and as of 01 January 2006 it contained 54 postings.

[01 January 2006, top]

Author: Gerald D. Thurman []
Last Modified: Saturday, 05-Jan-2013 11:17:33 MST

Thanks for Visiting