GDT::Grid::Utilitarian::Archive::Year 2012

Grid Utilitarian
India Shooting For Exascale Computing in 2017
@compufoo received the following tweet from @HPC_guru on 2012.09.27.
   2017 was someone's hallucination! MT @insideHPC #India's #Exascale 
   Plans Called a "Pipe Dream"

It's possible exascale computing will be reached by 2017, but for India the energy demands of supercomputers are a bottleneck.

[27 September 2012, top]

TACC: Of Mice and Melodies
When it comes to supercomputing, I'm sure glad the state of Texas is one of the United States of America.

TACC is the Texas Advanced Computing Center, and its motto is: Powering Discoveries That Change The World.

   "TACC's Ranger and Lonestar supercomputers are used to crunch 
    this data, with Ranger running programs in 2 hours that used 
    to take 3 days to run on a desktop. Learn more."

TACC.UTexas.edu: Of Mice and Melodies

[09 August 2012, top]

#FastForward to Exascale Computing
Only "some" researchers?
   "Some researchers have a seemingly unquenchable thirst 
    for computing speed." -- Don Clark #FastForward Funds Plant Seeds for 'Exascale' Computers

[25 July 2012, top]

TACC: "Chasing Science as a Service"

When it comes to HPC, I'm sure glad Texas is one of the states in the United States of America.

My Google+ posting on 25 July 2012.

@compufoo tweet tweeted on 25 July 2012.

"Chasing Science as a Service" Texas Advanced Computing Center 

[25 July 2012, top]

Why HPC? Weather Prediction is One of the Many Whys
Inevitably, when I speak about HPC (supercomputing [petaflops and exaflops], visualization systems, Infinite Computing, etc.), I am asked the following question: Why? (i.e., why do we need so many flops?) My response always starts with "weather forecasting..." with an emphasis on forecasting such things as hurricanes and tornadoes. Accurate storm predictions can save lives.

The following is a headline from the Friday, 13 April 02012, Arizona Republic: Saturday storms 'life threatening'.

"We're quite sure tomorrow will be a very busy and dangerous day in terms of large swathes of central and southern plains." -- National Weather Service ( via the Arizona Republic

Various news sources reported the following.

National Weather Service's Storm Prediction Center in Norman, Okla., which specializes in tornado forecasting, took the unusual step of warning people more than 24 hours in advance of a possible "high-end, life-threatening event."

The predictions ended up being extremely accurate: tornadoes hit the Midwest part of the United States hard on Saturday and Sunday.

The accuracy of weather forecasting is important because it can save lives. But right now the accuracy is critically important because of the need to establish trust among the populace.

[16 April 2012, top]

Texas is a Supercomputing Leader
The state of Texas is a leader when it comes to supercomputing. From the energy front: University, IBM partner to bring first Blue Gene supercomputer to Texas

[07 April 2012, top]

20 Petaflops by 02012
[1 April 02012] Three years ago, on 1 April 02009, I gave a talk titled 20 Petaflops by 02012. [Yes, I used a 5-digit year.] First quarter of 02012 has ended and as far as I know our computing world has not hit 20 petaflops, but the next TOP500 list doesn't come out until June. Regardless, I am 99.999% confident that 20 petaflops in 02012 is going to happen primarily because of what IBM announced four months ago.

[25 November 2011] IBM issued a press release titled IBM Announces Supercomputer to Propel Sciences Forward having the sub-title Blue Gene/Q tackles major challenges of the day, delivers up to 100 petaflops at peak performance.

   "When it is fully deployed in 2012 at Lawrence Livermore National 
    Laboratory (LLNL), the system, named 'Sequoia', is expected to 
    achieve 20 petaflops at peak performance, marking it as one of 
     the fastest supercomputers in the world." -- IBM Announces Supercomputer to Propel Sciences Forward

[11 November 2011 (at 11:11)] The TOP500® issued the following press release: Japan's K Computer Tops 10 Petaflop/s to Stay Atop TOP500 List. Japan's K computer was benchmarked at 10.51 petaflops.

[1 April 2009] AzGrid::Talk::20 Petaflops in 02012

Two years earlier I gave a talk sub-titled Computing in the 21st Century. During that talk I stated the following: "The next era of computing is the era of High-Performance (Productivity) Computing (HPC)." In addition, during the talk I indicated that peta-scale computing was scheduled to occur in 02008. TOP500.org posted the following on 18 June 02008: "With the publication of the latest edition of the TOP500 list of the world's most powerful supercomputers today (Wednesday, June 18), the global high performance computing community has officially entered a new realm--a supercomputer with a peak performance of more than 1 petaflop/s." [1.026 petaflops]

[4 April 2007] AzGrid::Talk::The Next Era of Computing

[31 March 2012, top]

Supercomputers Can Save U.S. Manufacturing?
@nanofoo received the following tweet from @SciAm on 2012.03.09.
   Supercomputers Can Save U.S. Manufacturing

How? I had to click the hyperlink to find out.

[09 March 2012, top]

TACC is a Leader In Supercomputing
@nanofoo received the following tweet from @CSNewsUpdate on 2012.03.06.
   Broadway Technology Adds University of Texas to Its College 
   Recruiting Roster - IT News Online

Broadway Technology, LLC, provides "high-performance trading solutions for top-tier global banks and hedge funds." The company also recruits from MIT, Cornell, Stanford, and the University of Waterloo in Canada.

Prior to receiving this tweet, @compufoo had scheduled the following tweet to be tweeted.

   If you want to learn about #supercomputing, #HPC, #BigData &
   #Informatics, #VisualizationSystems, etc., then follow @TACC_Hedda

[06 March 2012, top]

Here Comes Quantum Computing
@nanofoo received the following tweet from @SmarterPlanet on 2012.03.02. New Era of Computing: Quantum Computing Shift From Theory to Practice

[02 March 2012, top]

From IOPS To Cores To FlOPS
While working on the HPC portion of my "Learning About the Future" talk, I was interrupted with the following two news items:
   (1) Fusion-io announced they reached the one billion IOPS milestone. 
   (2) Tilera will be releasing a 100 core processor later this year.

Fusion-io employs Steve Wozniak as its Chief Scientist.

Tilera is a spinoff founded by MIT professor Anant Agarwal, who is the director of MIT's CSAIL (Computer Science and Artificial Intelligence Laboratory).

And speaking of the HPC portion of my "Learning About the Future" talk...

HPC/21st Century Informatics

Don't ask me about the 'P' in HPC. Let 'P' equal Performance such that HPC stands for High Performance Computing. [supercomputing]

Data processing, information technology, Informatics... 21st century Informatics is HPC-based Informatics.

   Moore's Law: Number of transistors on a chip doubles every two years.

   Moore's Law (popular variant): Processors get twice as fast every 18 months.

Computing roadmap: Exaflops by 02018-02020

Late last year (02011) Japan's K Computer was #1 on the TOP500 list rated at 10.51 petaflops.

   peta-: metric prefix for 10^15
   FlOPS: Floating-point Operations Per Second
          (floating-point implies real numbers)

   10.51 petaflops = 10,510,000,000,000,000 flops
   (ten quadrillion five hundred ten trillion flops)

   10.51 petaflops Nov 02011    (10.2x in 3.5 years; 924% increase)
    1.026 petaflops Jun 02008   ( 7.5x in 3 years; 650% increase)
    0.1368 petaflops Jun 02005  (i.e. 136.8 teraflops)
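The growth multiples quoted above can be checked with a few lines of arithmetic (a minimal sketch using the petaflops values as quoted):

```python
# Sanity-check the TOP500 #1 growth figures quoted above (petaflops units).
nov_2011 = 10.51    # K Computer, Nov 02011
jun_2008 = 1.026    # Jun 02008
jun_2005 = 0.1368   # Jun 02005 (136.8 teraflops)

growth_a = nov_2011 / jun_2008   # multiple over ~3.5 years
growth_b = jun_2008 / jun_2005   # multiple over ~3 years

print(f"{growth_a:.1f}x ({(growth_a - 1) * 100:.0f}% increase)")  # 10.2x (924% increase)
print(f"{growth_b:.1f}x ({(growth_b - 1) * 100:.0f}% increase)")  # 7.5x (650% increase)
```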

My own calculations...

   petaflops     when
     10        2011.00 <-- 10 petaflops (right now)
     20        2012.25
     40        2013.50
     80        2014.75
    160        2016.00
    320        2017.25
    640        2018.50
   1280        2019.75  <-- 1.28 exaflops

Moore's Law growth factors range from 18 to 24 months; however, my calculations show petaflops doubling every 15 months and getting us to exaflops during 4th-quarter of 02019. [Computing roadmap: Exaflops by 02018-02020]
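The table above can be reproduced with a short loop (a sketch assuming the same starting point of 10 petaflops at 2011.00 and a 15-month doubling period):

```python
# Project the #1 TOP500 performance assuming it doubles every 15 months,
# starting from 10 petaflops at the beginning of 2011 (as in the table above).
petaflops, year = 10.0, 2011.00
while petaflops < 1000:          # stop once we pass 1 exaflops (1000 petaflops)
    petaflops *= 2
    year += 15 / 12              # 15 months = 1.25 years

print(f"{petaflops:.0f} petaflops ({petaflops / 1000:.2f} exaflops) around {year:.2f}")
# 1280 petaflops (1.28 exaflops) around 2019.75
```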

Huge quantities (think infinite) of bits are collected by sensors/devices (cameras, scanners, medical hardware, RFIDs, nanosensors, et al.) and are piped into supercomputers having 99.999% up-time, high-speed Internet connections (bandwidth), and huge amounts of storage (think infinite).

"We're all aware of the approximately 2 billion people now on the Internet - in every part of the plant, thanks to the explosion of mobile technology," IBM's chairman, Samuel Palmisano, said in a speech last September (02011). "But there are also upward of a trillion interconnected and intelligent objects and organisms - what some call the Internet of Things. All of this is generating vast stores of information. It is estimated that there will be 44 times much data and content coming over the next decade... reaching 35 zettabytes in 02020. A zettabyte is a 1 followed by 21 zeros. And thanks to advanced computation and analytics, we can now make sense of that data in something like real time."

It is possible that a computing cloud is a cluster of supercomputers and that the Internet morphs into a network of networked clouds.

When "stuff" can be converted into 0s and 1s, then that enables that "stuff" to be processed by HPC systems. Example: DNA converted into letters (ACAAGATGCCATTGTCC...), letters get converted into numbers (A=65, C=67, etc.) and numbers are converted into bits (binary digits).

Again, assume the 'P' in HPC is for "Performance".

   input  function  output
   data -> HPC -> nothing
   data -> HPC -> No or Yes 
   data -> HPC -> a number
   data -> HPC -> set of numbers
   data -> HPC -> paragraph of information
   data -> HPC -> 1 page report, 2 page report, ..., 100 page report, ...
   data -> HPC -> high-performance visualization system

   data -> noise filter -> 99.999% signal 

Quoting self... What does a scientist say when you give them a petaflops supercomputer? More flops, please.

   STEAM (Science, Technology, Engineering, Arts, Mathematics)
   + more data (inputs)
   + more variables & the variables have larger domains
   + simpler algorithms because brute force becomes an option
   + processing can produce larger ranges (outputs)
   + "see" what happens when systems approach zero and infinity
   + no limit on the number of "what if" scenarios
   + data sets (inputs/outputs) are archivable because of infinite storage

[23 January 2012, top]

Top 500 List For November 2011
10.51 petaflops is #1; values below are Rmax in teraflops.
   #1   Japan     10510
   #2   China      2566
   #3   U.S.       1759
   #4   China      1271
   #5   Japan      1192
   #6   U.S.       1110
   #7   U.S.       1088
   #8   U.S.       1054
   #9   France     1050
   #10  U.S.       1042

[16 January 2012, top]

About the Grid Utilitarian
The Grid Utilitarian is a blog devoted to high-performance computing. This includes grid-based utility computing and 21st century Informatics. This blog was created on 3 October 2004 and it started 2012 with 243 postings.

Grid Utilitarian Archives: 2011 | 2010 | 2009 | 2008 | 2007 | 2006 | 2005 | 2004

[01 January 2012, top]

Creator: Gerald Thurman
Last Modified: Monday, 21-Jan-2013 07:19:56 MST

Thanks for Visiting