Top 10 Supercomputers 2014

Waleed Omar (BC123009), Mir Hassan Qadier (BC123111)

2015

Table of Contents

Introduction
1. Some Common Uses of Supercomputers
   1.1 Building brains
   1.2 Predicting climate change
   1.3 Forecasting hurricanes
   1.4 Testing nuclear weapons
   1.5 Mapping the blood stream
   1.6 Understanding earthquakes
   1.7 Recreating the Big Bang
2. Supercomputer challenges
3. Operating Systems
4. Processing Speeds
5. Energy usage and heat management
6. History
7. Supercomputing in Pakistan
8. Introducing the world's first personal supercomputer
9. Manufacturers of Supercomputers
10. Architecture and operating systems
Top 10 Supercomputers
11. Tianhe-2 (National Supercomputing Center in Guangzhou)
   11.1 Tianhe-2 Specification
12. Titan (Oak Ridge National Laboratory)
   12.1 Titan Specification
13. Sequoia (Lawrence Livermore National Laboratory)
   13.1 Sequoia Specification
14. K computer (RIKEN)
   14.1 K Computer Specification
15. Mira (Blue Gene/Q) (Argonne National Laboratory)
   15.1 Mira (Blue Gene/Q) Specification
16. Piz Daint (Cray XC30) (Swiss National Supercomputing Centre)
   16.1 Piz Daint (Cray XC30) Specification
17. Stampede (Texas Advanced Computing Center)
   17.1 Stampede Specification
18. JUQUEEN (Forschungszentrum Jülich)
   18.1 JUQUEEN Specification
19. Vulcan (Blue Gene/Q) (Lawrence Livermore National Laboratory)
   19.1 Vulcan Specification
20. Cray CS Storm
   20.1 Cray CS Storm Specification
21. Supercomputers based on Xeon processors
22. Supercomputers based on IBM Power BQC processors
23. Supercomputers based on Fujitsu SPARC64 VIIIfx processors
24. Supercomputers based on NVIDIA Tesla / Intel Phi processors
25. Efficiency and Performance chart
References


Introduction

A supercomputer is the fastest type of computer. Supercomputers are very expensive and are employed for specialized applications that require large amounts of mathematical calculations. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently.

1. Some Common Uses of Supercomputers

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and many others. Major universities, military agencies and scientific research laboratories depend on and make heavy use of supercomputers.

1.1 Building brains

So how do supercomputers stack up to human brains? Well, they're really good at computation: it would take 120 billion people with 120 billion calculators 50 years to do what the Sequoia supercomputer will be able to do in a day. But when it comes to the brain's ability to process information in parallel by doing many calculations simultaneously, even supercomputers lag behind. Dawn, a supercomputer at Lawrence Livermore National Laboratory, can simulate the brain power of a cat, though 100 to 1,000 times slower than a real cat brain.

Nonetheless, supercomputers are useful for modeling the nervous system. In 2006, researchers at the École Polytechnique Fédérale de Lausanne in Switzerland successfully simulated a 10,000-neuron chunk of a rat brain called a neocortical unit. With enough of these units, the scientists on this so-called "Blue Brain" project hope to eventually build a complete model of the human brain.

The brain would not be an artificial intelligence system, but rather a working neural circuit that researchers could use to understand brain function and test virtual psychiatric treatments. But Blue Brain could be even better than artificial intelligence, lead researcher Henry Markram told The Guardian newspaper in 2007: "If we build it right, it should speak."


1.2 Predicting climate change

The challenge of predicting global climate is immense. There are hundreds of variables, from the reflectivity of the earth's surface (high for icy spots, low for dark forests) to the vagaries of ocean currents. Dealing with these variables requires supercomputing capabilities. Computer power is so coveted by climate scientists that the U.S. Department of Energy gives out access to its most powerful machines as a prize.

The resulting simulations both map out the past and look into the future. Models of the ancient past can be matched with fossil data to check for reliability, making future predictions stronger. New variables, such as the effect of cloud cover on climate, can be explored. One model, created in 2008 at Brookhaven National Laboratory in New York, mapped the aerosol particles and turbulence of clouds to a resolution of 30 square feet. These maps will have to become much more detailed before researchers truly understand how clouds affect climate over time.

1.3 Forecasting hurricanes

With Hurricane Ike bearing down on the Gulf Coast in 2008, forecasters turned to Ranger for clues about the storm's path. This supercomputer, with its cowboy moniker and 579 trillion calculations per second of processing power, resides at the Texas Advanced Computing Center (TACC) in Austin, Texas. Using data directly from National Oceanic and Atmospheric Administration (NOAA) airplanes, Ranger calculated likely paths for the storm. According to a TACC report, Ranger improved the five-day hurricane forecast by 15 percent.

Simulations are also useful after a storm. When Hurricane Rita hit Texas in 2005, Los Alamos National Laboratory in New Mexico lent manpower and computer power to model vulnerable electrical lines and power stations, helping officials make decisions about evacuation, power shutoff and repairs.

1.4 Testing nuclear weapons

Since 1992, the United States has banned the testing of nuclear weapons. But that doesn't mean the nuclear arsenal is out of date.

The Stockpile Stewardship program uses non-nuclear lab tests and, yes, computer simulations to ensure that the country's cache of nuclear weapons is functional and safe. In 2012, IBM unveiled a new supercomputer, Sequoia, at Lawrence Livermore National Laboratory in California. According to IBM, Sequoia is a 20-petaflop machine, meaning it is capable of performing twenty thousand trillion calculations each second. Sequoia's prime directive is to create better simulations of nuclear explosions and to do away with real-world nuke testing for good.

1.5 Mapping the blood stream

Think you have a pretty good idea of how your blood flows? Think again. The total length of all of the veins, arteries and capillaries in the human body is between 60,000 and 100,000 miles. To map blood flow through this complex system in real time, Brown University professor of applied mathematics George Karniadakis works with multiple laboratories and multiple computer clusters.

In a 2009 paper in the journal Philosophical Transactions of the Royal Society, Karniadakis and his team describe the flow of blood through the brain of a typical person compared with blood flow in the brain of a person with hydrocephalus, a condition in which cranial fluid builds up inside the skull. The results could help researchers better understand strokes, traumatic brain injury and other vascular brain diseases, the authors write.

1.6 Understanding earthquakes

Other supercomputer simulations hit closer to home. By modeling the three-dimensional structure of the Earth, researchers can predict how earthquake waves will travel both locally and globally. It's a problem that seemed intractable two decades ago, says Princeton geophysicist Jeroen Tromp. But by using supercomputers, scientists can solve very complex equations that mirror real life.

"We can basically say, if this is your best model of what the earth looks like in a 3-D sense, this is what the waves look like," Tromp said.

By comparing any remaining differences between simulations and real data, Tromp and his team are perfecting their images of the earth's interior. The resulting techniques can be used to map the subsurface for oil exploration or carbon sequestration, and can help researchers understand the processes occurring deep in the Earth's mantle and core.

1.7 Recreating the Big Bang

It takes big computers to look into the biggest question of all: What is the origin of the universe?


The "Big Bang," or the initial expansion of all energy and matter in the universe, happened more than 13 billion years ago in trillion-degree Celsius temperatures, but supercomputer simulations make it possible to observe what went on during the universe's birth. Researchers at the Texas Advanced Computing Center (TACC) at the University of Texas in Austin have also used supercomputers to simulate the formation of the first galaxy, while scientists at NASA’s Ames Research Center in Mountain View, Calif., have simulated the creation of stars from cosmic dust and gas.

Supercomputer simulations also make it possible for physicists to answer questions about the unseen universe of today. Invisible dark matter makes up about 25 percent of the universe, and dark energy makes up more than 70 percent, but physicists know little about either. Using powerful supercomputers like IBM's Roadrunner at Los Alamos National Laboratory, researchers can run models that require upward of a thousand trillion calculations per second.

2. Supercomputer challenges

A supercomputer generates large amounts of heat and therefore must be cooled with complex cooling systems to ensure that no part of the computer fails. Many of these cooling systems take advantage of liquefied gases, which can get extremely cold.

Another issue is the speed at which information can be transferred or written to a storage device, as the speed of data transfer will limit the supercomputer's performance. Information cannot move faster than the speed of light between two parts of a supercomputer.

Supercomputers consume and produce massive amounts of data in a very short period of time. Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

3. Operating Systems

Most supercomputers run on a Linux or Unix operating system, as these operating systems are extremely flexible, stable, and efficient. Supercomputers typically have multiple processors and a variety of other technological tricks to ensure that they run smoothly.

4. Processing Speeds

Supercomputer computational power is rated in FLOPS (floating-point operations per second). The first commercially available supercomputers reached speeds of 10 to 100 million FLOPS. The next generation of supercomputers is predicted to break the petaflop level, representing computing power more than 1,000 times faster than a teraflop machine. A relatively old supercomputer such as the Cray C90 (built in the 1990s) has a processing speed of only 8 gigaflops, yet it can solve in 0.002 seconds a problem that takes a personal computer a few hours. From this, we can see the vast development in supercomputer processing speed.

The site http://www.top500.org/ is dedicated to providing information about the current 500 sites with the fastest supercomputers. Both the list and the content at this site are updated regularly, providing those interested with a wealth of information about developments in supercomputing technology.

5. Energy usage and heat management

A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity. The cost to power and cool the system can be significant: e.g., 4 MW at $0.10/kWh is $400 an hour, or about $3.5 million per year.
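That estimate can be checked with a short calculation; the 4 MW draw and $0.10/kWh rate are the figures quoted above:

```python
# Electricity cost of a ~4 MW supercomputer at $0.10/kWh,
# using the figures quoted in the text above.
power_mw = 4.0               # sustained electrical draw, megawatts
rate_per_kwh = 0.10          # dollars per kilowatt-hour

power_kw = power_mw * 1000.0
cost_per_hour = power_kw * rate_per_kwh      # 4,000 kWh/h * $0.10 = $400/hour
cost_per_year = cost_per_hour * 24 * 365     # about $3.5 million/year

print(f"${cost_per_hour:,.0f} per hour, ${cost_per_year / 1e6:.1f}M per year")
```

The same arithmetic scales linearly, so Tianhe-2's 17.6 MW draw (listed later in this document) costs roughly 4.4 times as much to power.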

Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways. The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.

Packing thousands of processors together inevitably generates significant heat density that must be dealt with. The Cray-2 was liquid-cooled, using a Fluorinert "cooling waterfall" forced through the modules under pressure. However, the submerged liquid-cooling approach was not practical for multi-cabinet systems based on off-the-shelf processors, so for System X a special cooling system combining air conditioning with liquid cooling was developed in conjunction with the Liebert company.

In the Blue Gene system, IBM deliberately used low-power processors to deal with heat density. The IBM Power 775, released in 2011, has closely packed elements that require water cooling, while the IBM Aquasar system uses hot-water cooling to achieve energy efficiency, with the water also being used to heat buildings.

The energy efficiency of computer systems is generally measured in terms of "FLOPS per Watt". In 2008 IBM's Roadrunner operated at 376 MFLOPS/Watt. In November 2010, the Blue Gene/Q reached 1684 MFLOPS/Watt. In June 2011 the top 2 spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W) with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.
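These FLOPS-per-watt figures follow directly from a machine's performance and power draw. As a sketch, the same metric can be computed for Sequoia from numbers quoted elsewhere in this document (roughly 20 petaflops peak at 7.9 MW):

```python
# FLOPS per watt, the efficiency metric used by the Green 500 list.
def mflops_per_watt(flops: float, watts: float) -> float:
    """Convert raw FLOPS and power draw into MFLOPS/W."""
    return flops / 1e6 / watts

# Sequoia: ~20 petaflops peak at 7.9 MW (figures from this document).
sequoia = mflops_per_watt(20e15, 7.9e6)
print(f"{sequoia:.0f} MFLOPS/W")
```

Note this uses peak performance; Green 500 rankings are based on measured LINPACK performance, so published figures will differ somewhat.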

6. History

The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer.

An IBM HS20 blade

Cray left CDC in 1972 to form his own company. Four years later, in 1976, Cray delivered the 80 MHz Cray-1, which became one of the most successful supercomputers in history. The Cray-2, released in 1985, was an 8-processor liquid-cooled computer through which Fluorinert was pumped as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear in both the United States and Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaflops per processor. The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2,048 processors connected via a fast three-dimensional crossbar network. The Intel Paragon could have 1,000 to 4,000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine that connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.

1954: The IBM 704, designed for engineering and scientific calculations and taking up a whole room, was considered the world's first supercomputer.

7. Supercomputing in Pakistan

Supercomputing is a recent area of technology in which Pakistan has made progress, driven in part by the growth of the information technology age in the country. The fastest supercomputer currently in use in Pakistan is developed and hosted by the National University of Sciences and Technology at its modeling and simulation research center. As of November 2012, there are no supercomputers from Pakistan on the TOP500 list.

Pakistan's initial interest in the research and development of supercomputing began during the early 1980s at several high-powered institutions of the country. During this time, senior scientists at the Pakistan Atomic Energy Commission (PAEC) were the first to engage in research on high-performance computing, calculating and determining exact values involving fast-neutron calculations. According to one scientist involved, a team of leading scientists at PAEC developed powerful computerized electronic codes and acquired high-performance computers to produce the first design to be manufactured as part of the atomic bomb project. However, the most productive and pioneering research was carried out by physicist M.S. Zubairy at the Institute of Physics of Quaid-e-Azam University, who over his career published two important books on quantum computers and high-performance computing that are presently taught worldwide.

A Cray-1 preserved at the Deutsches Museum (German Museum)

In the 1980s and 1990s, scientific research and mathematical work on supercomputers was also carried out by mathematician Dr. Tasneem Shah at the Kahuta Research Laboratories, while solving additive problems in computational mathematics and statistical physics using the Monte Carlo method. During most of the 1990s, supercomputer imports were denied to Pakistan, as well as India, under an arms embargo, as foreign powers feared that supercomputing technology was dual-use and could be applied to developing nuclear weapons. During the Bush administration, in an effort to help US-based companies gain competitive ground in developing information-technology markets, the U.S. government eased regulations on exporting high-performance computers to Pakistan and four other technologically developing countries. The new regulations allowed these countries to import supercomputer systems capable of 190,000 million theoretical operations per second (MTOPS); the previous limit had been 85,000 MTOPS.

The COMSATS Institute of Information Technology (CIIT) has been actively involved in research in parallel computing and computer cluster systems. In 2004, CIIT built a cluster-based supercomputer for research purposes, funded by the Higher Education Commission of Pakistan. The Linux-based computing cluster, tested and configured for optimization, achieved a performance of 158 GFLOPS. The packaging of the cluster was locally designed.

The National University of Sciences and Technology (NUST) in Islamabad has developed the fastest supercomputing facility in Pakistan to date. The supercomputer, which operates at the university's Research Centre for Modeling and Simulation (RCMS), was inaugurated in September 2012. It has parallel computation abilities and a performance of 132 teraflops (i.e., 132 trillion floating-point operations per second), making it the fastest graphics processing unit (GPU) parallel computing system currently in operation in Pakistan. It has multi-core processors and graphics co-processors, with an inter-process communication speed of 40 gigabits per second. According to the available specifications, the cluster consists of a "66 NODE supercomputer with 30,992 processor cores, 2 head nodes (16 processor cores), 32 dual quad core computer nodes (256 processor cores) and 32 Nvidia computing processors. Each processor has 960 processor cores (30,720 processor cores), QDR InfiniBand interconnection and 21.6 TB SAN storage."

8. Introducing the world's first personal supercomputer

The world's first personal supercomputer, claimed to be 250 times faster than the average PC, has been unveiled. Although at £4,000 it is beyond the reach of most consumers, the high-performance processor could become invaluable to universities and medical institutions. The revolutionary Tesla supercomputer was launched in London. NVIDIA's Tesla computer could prove invaluable to medical researchers and accelerate the discovery of cancer treatments.

The desktop workstations are built with innovative NVIDIA graphics processing units (GPUs), which are capable of handling simultaneous calculations usually relegated to £70,000 supercomputing 'clusters' that take up entire rooms. 'The technology represents a great leap forward in the history of computing,' NVIDIA spokesman Benjamin Berraondo said. 'It is a change equivalent to the invention of the microchip.'

PhD students at Cambridge and Oxford Universities and MIT in America are already using GPU-based personal supercomputers for research. Scientists believe the new systems could help find cures for diseases, letting them run hundreds of thousands of science codes to create a shortlist of drugs most likely to offer potential cures. 'This exceptional speedup has the ability to accelerate the discovery of potentially life-saving anti-cancer drugs,' said Jack Collins from the Advanced Biomedical Computing Centre in Maryland.

The new computers make innovative use of graphics processing units, which NVIDIA claims could bring lightning speeds to the next generation of home computers. 'A traditional processor handles one task at a time in a linear style, but GPUs work on tasks simultaneously to do things such as get color pixels together on screens to present moving images,' Mr Berraondo said. 'So while downloading a film onto an iPod would take up to six hours on a traditional system, a graphics card could bring this down to 20 minutes.'

The supercomputers, made by a number of UK-based companies including Viglen, Armari and Dell, are currently on sale to universities and the science and research community. PC maker Dell said they would soon be mass-producing them for the general consumer market.

9. Manufacturers of Supercomputers


IBM

Aspen Systems

SGI

Cray Research

Compaq

Hewlett-Packard

Thinking Machines

Cray Computer Corporation

Control Data Corporation

10. Architecture and operating systems

As of November 2014, TOP500 supercomputers are mostly based on x86-64 CPUs (Intel EM64T and AMD AMD64 instruction set architectures), with a few exceptions, all RISC-based: 39 supercomputers based on the Power Architecture used by IBM POWER microprocessors, three SPARC-based systems (including two Fujitsu SPARC machines, one of which took the top spot in 2011 without a GPU and is currently ranked fourth), and one ShenWei-based system (ranked 11th in 2011, 65th in November 2014) making up the remainder. Prior to the ascendance of 32-bit x86 and later 64-bit x86-64 in the early 2000s, a variety of RISC processor families made up the majority of TOP500 supercomputers, including SPARC, MIPS, PA-RISC and Alpha.

In recent years heterogeneous computing, mostly using NVIDIA graphics processing units (GPUs) as coprocessors, has become a popular way to reach better performance per watt and higher absolute performance; it is almost required for good performance and for making the top (or top 10), with some exceptions, such as the SPARC machine mentioned above, which uses no coprocessors. An x86-based coprocessor, the Xeon Phi, has also been used.


Share of processor architecture families in TOP500 supercomputers by time trend.

All of the fastest supercomputers in the decade since the Earth Simulator have used a Linux-based operating system. As of November 2014, 485, or 97%, of the world's fastest supercomputers use the Linux kernel. The remaining 3% run some Unix variant (AIX for all but one of them), with one supercomputer running Windows and one with a "mixed" operating system. The 97% includes the most powerful supercomputers, including the entire top ten.

Since November 2014, the Windows Azure cloud computer is no longer on the list of the fastest supercomputers (its best rank was 165th in 2012), leaving the Shanghai Supercomputer Center's "Magic Cube" as the only Windows-based supercomputer, running Windows HPC 2008 and ranked 360th (its best rank was 11th in 2008).


Top 10 Supercomputers


11. Tianhe-2 (National Supercomputing Center in Guangzhou, China, 2013)

Manufacturer: NUDT

Cores: 3,120,000 cores

Power: 17.6 megawatts

Interconnect: Custom

Operating System: Kylin Linux

11.1 Tianhe-2 Specification

The system comprises 125 cabinets housing 16,000 compute nodes, each of which contains two Intel Xeon (Ivy Bridge) CPUs and three 57-core Intel Xeon Phi accelerator cards.

Each compute node has a total of 88 GB of RAM.


There are a total of 3,120,000 Intel cores and 1.404 petabytes of RAM, making Tianhe-2 by far the largest installation of Intel CPUs in the world.
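Those headline totals can be cross-checked against the per-node configuration given above. The sketch below assumes 12-core Ivy Bridge Xeons, a detail not stated in the text:

```python
# Cross-check Tianhe-2's totals from its per-node configuration.
# Assumption (not stated above): each Ivy Bridge Xeon has 12 cores.
nodes = 16_000
cores_per_node = 2 * 12 + 3 * 57      # 2 Xeons + 3 57-core Xeon Phi cards = 195
total_cores = nodes * cores_per_node  # 3,120,000 cores, matching the text
total_ram_pb = nodes * 88 / 1e6       # 88 GB per node -> roughly 1.4 PB of RAM

print(total_cores, f"{total_ram_pb:.3f} PB")
```

The core count matches exactly; the RAM total lands within half a percent of the quoted 1.404 PB, the small difference plausibly coming from decimal-vs-binary unit conventions.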

The system has a theoretical peak performance of 54.9 petaflops (floating-point operations per second; one petaflop is 10^15 FLOPS).

Most microprocessors can carry out 4 FLOPs per clock cycle, so a single core running at 2.5 GHz has a theoretical peak performance of 10 billion FLOPS, i.e. 10 GFLOPS.
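The arithmetic generalizes: theoretical peak FLOPS is clock rate times floating-point operations per cycle times core count. A minimal sketch:

```python
# Theoretical peak FLOPS = clock (Hz) x FLOPs per cycle x cores.
def peak_flops(clock_hz: float, flops_per_cycle: int, cores: int = 1) -> float:
    return clock_hz * flops_per_cycle * cores

# The single-core example from the text: 2.5 GHz at 4 FLOPs per cycle.
single_core = peak_flops(2.5e9, 4)
print(f"{single_core / 1e9:.0f} GFLOPS")   # 10 GFLOPS
```

Real machines sustain only a fraction of this theoretical peak, which is why TOP500 rankings use measured LINPACK performance rather than the raw product.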

Storage System:


256 I/O nodes and 64 storage servers, with a total capacity of 12.4 PB.

I/O nodes:

Burst I/O bandwidth: 5 GB/s

Cooling System:

Close-coupled chilled-water cooling with a high cooling capacity, using 80 kW of power. The city cooling system supplies the chilled water.

Programming languages supported:

OpenCL, OpenMP, CUDA, or OpenACC; the choice depends on the programmer, but most work is done in the languages listed.


12. Titan (Oak Ridge National Laboratory, United States, 2012)

Manufacturer: Cray

Cores: 299,008 CPU cores

Power: Total power consumption for Titan should be around 9 megawatts under full load and around 7 megawatts during typical use. The building that Titan is housed in has over 25 megawatts of power delivered to it.

Interconnect: Gemini

Operating System: Cray Linux Environment

12.1 Titan Specification:

Titan is built from 200 cabinets. Inside each cabinet are Cray XK7 boards, each of which has four AMD G34 sockets and four PCI slots.
Peak performance of 20+ petaflops, i.e. more than 20,000 trillion calculations per second.
299,008 CPU cores in total.
Total CPU memory of 710 terabytes.
18,688 compute nodes.


Item Configuration

Processor 18,688 AMD Opteron 6274 16-core CPUs18,688 NVidia Tesla K20X GPUs

Interconnect Gemini

Memory 693.5 TiB (584 TiB CPU and 109.5 TiB GPU)

Storage 40 PB, 1.4 TB/s IO Lustre filesystem

Cabinet Titan is built from 200 cabinets. Inside each cabinets are Cray XK7 boards, each of which has four AMD G34 sockets and four PCI slots

Power 9 MW under full usage

Cooling Liquid cooling with 6,600 tons of cooling capacity to keep the recirculated air entering the cabinets cool


Cooling System of Titan:

Titan's cooling system is liquid-based, with approximately 6,600 tons of cooling capacity; a sample is shown below.


Storage system:

Titan has the fastest storage, at 1.4 TB/s.

Interior of one compute node:


Programming languages supported:

OpenCL, OpenMP, CUDA, or OpenACC; the choice depends on the programmer, but most work is done with the models listed here.


13. Sequoia, Lawrence Livermore National Laboratory, United States, 2013

Manufacturer: IBM

Cores: 1,572,864 processor cores

Power: 7.9 MW

Interconnect:

5-dimensional torus topology

Operating System: Red Hat Enterprise Linux

13.1 Sequoia Specification: 96 racks contain 98,304 compute nodes. Each compute node is a 16-core PowerPC A2 processor chip with 16 GB of DDR3 memory. Sequoia is a Blue Gene/Q design, building on previous Blue Gene designs. It is used primarily for nuclear weapons simulation, replacing the Blue Gene/L and ASC Purple supercomputers at Lawrence Livermore National Laboratory, and is also available for scientific purposes such as astronomy, energy, lattice QCD, study of the human genome, and climate change.


Item Configuration

Processor 16-core PowerPC A2 processor chips with 16 GB of DDR3 memory each

Interconnect 5-dimensional torus topology

Memory 1.5 PiB

Storage

Cabinet 96 racks containing 98,304 compute nodes

Power 7.9 MW

Cooling


14. K computer, RIKEN, Japan, 2011

Manufacturer: Fujitsu

Cores: 640,000 cores

Power: 12.6 MW

Interconnect: six-dimensional torus interconnect

Operating System: Linux Kernel

14.1 K Computer Specification: The K computer comprises over 80,000 2.0 GHz 8-core SPARC64 VIIIfx processors contained in 864 cabinets, for a total of over 640,000 cores. Each cabinet contains 96 computing nodes in addition to 6 I/O nodes; each computing node contains a single processor and 16 GB of memory. The computer's water cooling system, the most complex in the world, is designed to minimize failure rate and power consumption.

K set a record with a performance of 8.162 petaflops, making it the fastest supercomputer at the time. Although the K computer reported the highest total power consumption of any 2011 TOP500 supercomputer (9.89 MW, the equivalent of almost 10,000 suburban homes), it is relatively efficient, achieving 824.6 GFLOPS/kW. This is 29.8% more efficient than China's NUDT TH MPP (ranked #2 in 2011) and 225.8% more efficient than Oak Ridge's Jaguar Cray XT5-HE (ranked #3 in 2011).

Target performance was 10 petaflops, but 8.162 petaflops was recorded.


Item Configuration

Processor 80,000+ 2.0 GHz 8-core SPARC64 VIIIfx processors contained in 864 cabinets

Interconnect 6-dimensional torus topology

Memory 16 GB per node (2 GB/core)

Storage 1 PB

Cabinet 864 cabinets, each containing 96 computing nodes and 6 I/O nodes

Power 12.6 MW

Cooling The world's most complex water cooling system

Cooling System of the K computer: the K computer has the world's most complex cooling system, divided into two parts as follows:

Diagrammatic view of the K computer's cooling system


Applications of the K computer

15. Mira (Blue Gene/Q), Argonne National Laboratory, United States, 2013

Manufacturer: IBM

Cores: 786,432 cores

Power: 3.9 MW


Interconnect: five-dimensional torus interconnect

Operating System: CNK (Compute Node Kernel) is the node-level operating system for the IBM Blue Gene supercomputer. A CNK instance runs on each compute node; it is a lightweight kernel that supports a single application running for a single user on that node. For the sake of efficient operation, the design of CNK was kept simple and minimal, and it was implemented in about 5,000 lines of C++ code.

15.1 Mira (Blue Gene/Q) Specification: The Blue Gene/Q Compute chip is an 18-core chip; its 64-bit PowerPC A2 processor cores run at 1.6 GHz. 16 processor cores are used for computing, a 17th core handles operating-system assist functions such as interrupts, asynchronous I/O, MPI pacing, and RAS, and the 18th core is a redundant spare used to increase manufacturing yield.

Each rack has a total of 1,024 compute nodes, 16,384 user cores, and 16 TB of RAM. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip measures 19×19 mm (359.5 mm²), comprises 1.47 billion transistors, and is mounted on a compute card along with 16 GB of DDR3 DRAM.


Item Configuration

Processor 18-core Blue Gene/Q Compute chip with 64-bit PowerPC A2 cores running at 1.6 GHz

Interconnect 5-dimensional torus topology

Memory 16 TB RAM

Storage 768 TiB

Cabinet 16 compute drawers hold a total of 512 compute nodes, electrically interconnected in a 5D torus configuration. Racks have two midplanes, thus 32 compute drawers, for a total of 1,024 compute nodes per rack.

Power 3.9 MW

Cooling 91% of the cooling is provided by water and 9% of the cooling is accomplished by air

Applications:

Cooling System of Mira (Blue Gene/Q):

For the Blue Gene/Q compute rack, approximately 91% of the cooling is provided by water and 9% of the cooling is accomplished by air. For the air cooling portion, the air is drawn into the rack from both the front and back. Hot air is exhausted out the top of the rack. For compute racks, a hot/cold aisle is not required.


Diagram of the cooling system:

16. Piz Daint (Cray XC30), Swiss National Supercomputing Centre, Switzerland, 2013

Manufacturer: Cray

Cores: 115,984 cores

Power:90KW

Interconnect: Aries interconnect


Operating System: Cray Linux Environment.

16.1 Piz Daint (Cray XC30) Specification: 64-bit Intel Xeon E5-family processors, up to 384 per cabinet. Peak performance: initially up to 99 TFLOPS per system cabinet. For added performance, Piz Daint features NVIDIA graphics processing units; although it has about 116,000 cores, it is capable of 6.3 petaflops of performance.

Cray continues to advance its cooling efficiency advantages, integrating a combination of vertical liquid coil units per compute cabinet and transverse air flow reused through the system. Fans in blower cabinets can be hot swapped and the system yields “room neutral” air exhaust.

Storage: a full line of FC, SAS, and IB based disk arrays with support for FC and SATA disk drives.

The Cray XC30 series architecture implements two processor engines per compute node, and has four compute nodes per blade. Compute blades stack in eight pairs (16 to a chassis) and each cabinet can be populated with up to three chassis, culminating in 384 sockets per cabinet.

Item Configuration

Processor 64-bit Intel Xeon processor E5 family; up to 384 per cabinet

Interconnect Aries interconnect

Storage FC, SAS, and IB based disk arrays with support for FC and SATA disk drives

Cabinet Up to three chassis per cabinet, 16 blades per chassis, 384 sockets per cabinet

Cooling Vertical liquid coil units per compute cabinet with transverse air flow


Cooling System

17. Stampede, Texas Advanced Computing Center, United States, 2013

Manufacturer: Dell

Cores: 102,400 CPU cores

Power: 4.5 Megawatts

Interconnect: All components are integrated with an InfiniBand FDR network of Mellanox switches to deliver extreme scalability and high-speed networking.


Operating System: Linux (CentOS).

17.1 Stampede Specification: Stampede has 6,400 Dell C8220 compute nodes housed in 160 racks; each node has two Intel E5 8-core (Sandy Bridge) processors and an Intel Xeon Phi 61-core (Knights Corner) coprocessor.

Stampede is a multi-use, cyber infrastructure resource offering large memory, large data transfer, and graphic processor unit (GPU) capabilities for data-intensive, accelerated or visualization computing.

Stampede can complete 9.6 quadrillion floating-point operations per second. Here is a Dell Zeus node, with two Intel Sandy Bridge processors (for a total of 16 cores) and an Intel Xeon Phi coprocessor.


Successive research carried out on Stampede:


18. JUQUEEN, Forschungszentrum Jülich, Germany, 2013

Manufacturer: IBM

Cores: 458,752

Power: 2,301.00 kW

Interconnect: Torus interconnect

Operating System: SUSE Linux Enterprise Server

18.1 JUQUEEN Specification: 294,912 processor cores, 144 terabytes of memory, and 6 petabytes of storage in 72 racks, with a peak performance of about one petaflops. The initial system consisted of a single rack (1,024 compute nodes) and 180 TB of storage. Cooling is 90% water (18-25°C, demineralized, closed circuit) and 10% air.

Water temperature: 18°C in, 27°C out. The cores are simple, designed for excellent power efficiency and a small footprint: embedded 64-bit PowerPC-compliant cores with an integrated health monitoring system. Two links (4 GB/s in, 4 GB/s out) feed an I/O PCIe port.


Performance comparison

Hot-water and cold-water cooling: water flows through the machine and is gradually replaced by cooled water.


19. Vulcan (Blue Gene/Q), Lawrence Livermore National Laboratory, United States, 2013

Manufacturer: IBM

Cores: 393,216

Power: 1,972.00 kW

Interconnect: Torus interconnect

Operating System: CNK (for Compute Node Kernel) is the node level operating system for the IBM Blue Gene supercomputer

19.1 Vulcan Specification: Vulcan is equipped with Power BQC 16-core 1.6 GHz processors. It has roughly 400,000 cores that perform at 4.3 petaflops.

This supercomputer is used by the U.S. Department of Energy's National Nuclear Security Administration at Lawrence Livermore National Laboratory.

Vulcan uses a massively parallel architecture and PowerPC processors – more than 393,000 cores, in all.

 Vulcan is a smaller version of Sequoia, the 20 petaflop/s system that was ranked the world’s fastest supercomputer in June 2012. The Vulcan supercomputer at LLNL is now available for collaborative work with industry and research universities to advance science and accelerate technological innovation through the High Performance Computing Innovation Center.


Item Configuration

Processor Power BQC 16-core 1.6 GHz processors; roughly 400,000 cores performing at 4.3 petaflops

Interconnect Torus interconnect

Memory 393,216 GB

Storage 14 PB

Cabinet 24 racks

Power 1,972 kW

Cooling Similar to Sequoia, with strict water-quality requirements

Cooling System:

The Vulcan installation mirrored the original Sequoia design, including the cooling system and the piping design. Akima Construction Services (ACS) applied similar processes using up to 12-in. Aquatherm Blue Pipe® (formerly Climatherm), and because the team was under a three-month turnaround time constraint, Aquatherm's reliability and installation time and labor savings were key to the project. As with the Sequoia installation, the computer manufacturer had established strict water treatment and water quality requirements for the cooling system, and Aquatherm's chemical inertness played a key role in meeting that requirement.


20. Cray CS-Storm, United States, 2014

Manufacturer: Cray Inc.

Cores: 72,800

Power: 1,498.90 kW

Interconnect: Infiniband FDR

Operating System: Linux

20.1 Cray CS-Storm Specification: The Cray CS series of cluster supercomputers offers a scalable architecture of high-performance servers, network, and software tools that can be fully integrated and managed as stand-alone systems.

The CS-Storm cluster, an accelerator-optimized system that consists of multiple high-density multi-GPU server nodes, is designed for massively parallel computing workloads.

Each of these servers integrates eight accelerators and two Intel Xeon processors, delivering 246 GPU teraflops of compute performance in one 48U rack.

The system can support both single- and double-precision floating-point applications.

Up to eight NVIDIA Tesla K40 GPU accelerators per node. Optional passive or active chilled-cooling rear-door heat exchangers. A four-cabinet Cray CS-Storm system is capable of delivering more than one petaflop of peak performance.


21. Supercomputers based on Intel Xeon processors:


22. Supercomputers based on IBM Power BQC processors


23. Supercomputers based on the Fujitsu SPARC64 VIIIfx processor


24. Supercomputers based on NVIDIA Tesla / Intel Xeon Phi processors


25. Efficiency and Performance chart:

References:

1. http://www.top500.org/lists/2014/11/
2. http://www.technewsdaily.com/408-9-super-cool-uses-for-supercomputers.html