Real Parallel Computers
Background Information
Recent trends in the marketplace of high performance computing
Strohmaier, Dongarra, Meuer, Simon
Parallel Computing 2005
Short history of parallel machines
• 1970s: vector computers
• 1990s: Massively Parallel Processors (MPPs)
  – Standard microprocessors, special network and I/O
• 2000s:
  – Cluster computers (using standard PCs)
  – Advanced architectures (BlueGene)
  – Comeback of vector computers (Japanese Earth Simulator)
  – GPUs, IBM Cell/BE
Performance development and predictions
Clusters
• Cluster computing
  – Standard PCs/workstations connected by a fast network
  – Good price/performance ratio
  – Exploit existing (idle) machines or use (new) dedicated machines
• Cluster computers vs. supercomputers (MPPs)
  – Processing power similar: both based on microprocessors
  – Communication performance was the key difference
  – Modern networks (Myrinet, Infiniband, 10G Ethernet) have bridged this gap
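The claim that communication performance was the key difference can be made concrete with the classic linear cost model, time(n) = latency + n / bandwidth. A minimal sketch follows; the latency and bandwidth figures in it are illustrative placeholders, not measurements from these slides.

```python
# Linear cost model for message passing: time(n) = latency + n / bandwidth.
# The network parameters below are invented for illustration only.

def transfer_time(msg_bytes, latency_s, bandwidth_bytes_per_s):
    """Estimated one-way time to send a message of msg_bytes."""
    return latency_s + msg_bytes / bandwidth_bytes_per_s

# Compare a slow commodity network with a fast cluster interconnect
# for a small 1 KB message (hypothetical figures).
slow = transfer_time(1024, latency_s=50e-6, bandwidth_bytes_per_s=12.5e6)
fast = transfer_time(1024, latency_s=5e-6, bandwidth_bytes_per_s=250e6)
print(f"slow net: {slow*1e6:.1f} us, fast net: {fast*1e6:.1f} us")
```

For small messages the latency term dominates, which is why low-latency interconnects such as Myrinet mattered so much for clusters.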
Overview
• Cluster computers at our department
  – DAS-1: 128-node Pentium Pro / Myrinet cluster (gone)
  – DAS-2: 72-node dual-Pentium-III / Myrinet-2000 cluster
  – DAS-3: 85-node dual-core dual-Opteron / Myrinet-10G cluster
  – DAS-4 (2010): cluster with accelerators (GPUs etc.)
• Part of a wide-area system:
  – Distributed ASCI Supercomputer
Distributed ASCI Supercomputer (1997-2001)
DAS-1 node configuration
• 200 MHz Pentium Pro
• 128 MB memory
• 2.5 GB disk
• 100 Mbit/s Ethernet
• Myrinet 1.28 Gbit/s (full duplex)
• Operating system: Red Hat Linux
DAS-2 Cluster (2002-2006)
• 72 nodes, each with 2 CPUs (144 CPUs in total)
• 1 GHz Pentium-III
• 1 GB memory per node
• 20 GB disk
• Fast Ethernet 100 Mbit/s
• Myrinet-2000 2 Gbit/s (crossbar)
• Operating system: Red Hat Linux
• Part of wide-area DAS-2 system (5 clusters with 200 nodes in total)

[Photo: Myrinet switch and Ethernet switch]
DAS-3 Cluster (Sept. 2006)
• 85 nodes, each with 2 dual-core CPUs (340 cores in total)
• 2.4 GHz AMD Opterons (64-bit)
• 4 GB memory per node
• 250 GB disk
• Gigabit Ethernet
• Myrinet-10G 10 Gb/s (crossbar)
• Operating system: Scientific Linux
• Part of wide-area DAS-3 system (5 clusters; 263 nodes), using SURFnet-6 optical network with 40-80 Gb/s wide-area links
DAS-3 Networks

[Diagram: DAS-3 network topology. The 85 compute nodes connect via 85 * 1 Gb/s Ethernet to a Nortel 5530 + 3 * 5510 Ethernet switch, and via 85 * 10 Gb/s Myrinet to a Myri-10G switch. A 10 Gb/s Ethernet blade in the Myrinet switch links the two sides with 8 * 10 Gb/s Ethernet (fiber). A Nortel OME 6500 with DWDM blade provides 80 Gb/s DWDM to SURFnet6, plus a 1 or 10 Gb/s campus uplink. The headnode (10 TB mass storage) attaches via 10 Gb/s Myrinet and 10 Gb/s Ethernet.]
DAS-1 Myrinet
Components:
• 8-port switches
• Network interface card for each node (on PCI bus)
• Electrical cables: reliable links

Myrinet switches:
• 8 x 8 crossbar switch
• Each port connects to a node (network interface) or another switch
• Source-based, cut-through routing
• Less than 1 microsecond switching delay
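The idea behind source-based routing can be sketched in a few lines: the sending host computes the whole path and prepends it to the packet as a list of output-port numbers, so each switch just consumes the first entry and forwards, with no routing tables in the switches. The topology and port numbers below are invented for illustration, not taken from DAS-1.

```python
# Hedged sketch of source-based routing, Myrinet-style.
# Toy topology: topology[switch][port] -> next switch or destination host.
topology = {
    "s0": {1: "s1"},
    "s1": {3: "s2"},
    "s2": {0: "hostB"},
}

def deliver(source_route, start):
    """Follow a source route: each switch consumes one port number."""
    node = start
    for port in source_route:
        node = topology[node][port]  # forward the packet out of this port
    return node

print(deliver([1, 3, 0], "s0"))  # -> hostB
```

Because the route is fixed by the sender and each switch only looks at one header byte, a cut-through switch can start forwarding almost immediately, which is consistent with the sub-microsecond switching delay quoted above.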
24-node DAS-1 cluster
128-node DAS-1 cluster
• Ring topology would have:
  – 22 switches
  – Poor diameter: 11
  – Poor bisection width: 2
Topology 128-node cluster
• 4 x 8 grid with wrap-around
• Each switch connected to 4 switches & 4 PCs
• 32 switches (128/4)
• Diameter: 6; Bisection width: 8
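The diameter figures on these slides can be checked with a short breadth-first search over each switch topology; this sketch compares a 22-switch ring with the 4 x 8 torus (grid with wrap-around links) and reproduces the quoted diameters of 11 and 6.

```python
# BFS-based diameter of a switch topology: the longest shortest path
# between any two switches.
from collections import deque

def diameter(nodes, neighbors):
    def eccentricity(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in neighbors(u):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(eccentricity(n) for n in nodes)

# 22-switch ring (the rejected design).
ring = list(range(22))
ring_diam = diameter(ring, lambda u: [(u - 1) % 22, (u + 1) % 22])

# 4 x 8 grid with wrap-around (torus), 32 switches.
torus = [(r, c) for r in range(4) for c in range(8)]
torus_diam = diameter(torus, lambda rc: [((rc[0] - 1) % 4, rc[1]),
                                         ((rc[0] + 1) % 4, rc[1]),
                                         (rc[0], (rc[1] - 1) % 8),
                                         (rc[0], (rc[1] + 1) % 8)])

print(ring_diam, torus_diam)  # -> 11 6, matching the slide
```

Bisection width follows analytically: cutting a ring severs only 2 links, while halving the 4 x 8 torus across its long dimension severs 2 links per row times 4 rows = 8.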
Performance
• DAS-2:
  – 9.6 μsec one-way null-latency
  – 168 MB/sec throughput
• DAS-3:
  – 2.6 μsec one-way null-latency
  – 950 MB/sec throughput
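A common way to combine a latency and a throughput figure like these into one number is the half-performance message size, n_1/2 = latency * bandwidth: under the linear model time(n) = latency + n / bandwidth, messages shorter than n_1/2 are latency-dominated. This sketch applies that formula to the DAS-2 and DAS-3 numbers above (the formula itself is standard, not something stated on the slide).

```python
# Half-performance message size under the linear cost model:
# messages smaller than n_1/2 achieve less than half the peak throughput.
def n_half(latency_s, bandwidth_bytes_per_s):
    return latency_s * bandwidth_bytes_per_s

das2 = n_half(9.6e-6, 168e6)   # DAS-2: Myrinet-2000 figures from the slide
das3 = n_half(2.6e-6, 950e6)   # DAS-3: Myrinet-10G figures from the slide
print(f"DAS-2 n_1/2 ~ {das2:.0f} bytes, DAS-3 n_1/2 ~ {das3:.0f} bytes")
```

Interestingly, DAS-3's n_1/2 is larger than DAS-2's: its bandwidth grew faster than its latency shrank, so relatively larger messages are needed to exploit the faster network.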
MareNostrum: large Myrinet cluster
• IBM system at Barcelona Supercomputer Center
• 4812 PowerPC 970 processors, 9.6 TB memory (2006)