
Page 1:

Interconnects

Ken Raffenetti

Principal Software Development Specialist
Programming Models and Runtime Systems Group
Mathematics and Computer Science Division
Argonne National Laboratory

Page 2:

U.S. DOE System Architecture Targets

ATPESC (07/27/2020)

System attributes          | 2010 (past production) | 2018-2019 (current generation, e.g., CORAL) | 2021-2022 (exascale goals)
System peak                | 2 Petaflop/sec         | 150-200 Petaflop/sec                        | 1 Exaflop/sec
System memory              | 0.3 PB                 | 5 PB                                        | 32-64 PB
Node performance           | 125 GF                 | 3 TF or 30 TF                               | 10 TF or 100 TF
Node memory BW             | 25 GB/sec              | 0.1 TB/sec or 1 TB/sec                      | 0.4 TB/sec or 4 TB/sec
Node concurrency           | 12                     | O(100) or O(1,000)                          | O(1,000) or O(10,000)
System size (nodes)        | 18,700                 | 50,000 or 5,000                             | 100,000 or 10,000
Total node interconnect BW | 1.5 GB/sec             | 20 GB/sec                                   | 200 GB/sec
MTTI                       | days                   | O(1 day)                                    | O(1 day)

[Includes modifications to the DOE Exascale report]

Page 3:

General Trends in System Architecture

§ Number of nodes is increasing, but at a moderate pace

§ Number of cores/threads on a node is increasing rapidly

§ Each core is not increasing in speed (clock frequency)

§ What does this mean for networks?
– More sharing of the network infrastructure
– The aggregate amount of communication from each node will increase moderately, but will be divided into many smaller messages
– A single CPU core may not be able to fully saturate the NIC


Page 4:

Agenda


Network Adapters

Network Topologies

Network/Processor/Memory Interactions

Page 5:

A Simplified Network Architecture

§ Hardware components
– Processing cores and memory subsystem
– I/O bus or links
– Network adapters/switches

§ Software components
– Communication stack

§ Balanced approach required to maximize user-perceived network performance

[Diagram: two processors (P0, P1), each with four cores and local memory, connected over an I/O bus to a network adapter and a network switch. Labels mark where processing bottlenecks, I/O interface bottlenecks, network end-host bottlenecks, and network fabric bottlenecks arise.]

Page 6:

Bottlenecks on Traditional Network Adapters

§ Network speeds plateaued at around 1 Gbps
– Features provided were limited
– Commodity networks were not considered scalable enough for large-scale systems

[Diagram: the same node architecture, with the bottleneck highlighted at the network adapter and switch.]

Ethernet (1979 - )          10 Mbit/sec
Fast Ethernet (1993 - )     100 Mbit/sec
Gigabit Ethernet (1995 - )  1000 Mbit/sec
ATM (1995 - )               155/622/1024 Mbit/sec
Myrinet (1993 - )           1 Gbit/sec
Fibre Channel (1994 - )     1 Gbit/sec

Page 7:

End-host Network Interface Speeds

§ HPC network technologies provide high-bandwidth links
– InfiniBand EDR gives 100 Gbps per network link
• Will continue to increase (HDR 200 Gbps, etc.)
– Multiple network links are becoming commonplace
• ORNL Summit and LLNL Sierra machines
• Torus-style or other multi-dimensional networks

§ End-host peak network bandwidth is "mostly" no longer considered a major limitation

§ Network latency is still an issue
– That's a harder problem to solve – limited by physics, not technology
• There is some room to improve it in current technology (trimming the fat)
• Significant effort in making systems denser so as to reduce network latency

§ Other important metrics: message rate, congestion, …


Page 8:

Simple Network Architecture (past systems)

§ Processor, memory, network are all decoupled

[Diagram: P0 and P1 with their cores, separate memory modules, an I/O bus, a network adapter, and a network switch as discrete, decoupled components.]

Page 9:

Integrated Memory Controllers (current systems)

§ Memory controllers have been integrated onto the processor

§ Primary purpose was scalable memory bandwidth (NUMA)

§ Also helps network communication
– Data transfer to/from the network requires coordination with caches

§ Several network I/O technologies exist
– PCIe, HTX, NVLink
– Expected to provide higher bandwidth than what network links will have

[Diagram: P0 and P1, each with four cores and an on-chip memory controller attached to local memory, connected over an I/O bus to the network adapter and switch.]

Page 10:

Integrated Network?

[Diagram: each processor integrates its cores, memory controller, and network interface on one chip, with in-package memory and off-chip memory, connecting directly to the network switch.]

§ May improve network bandwidth
– Unclear if the I/O bus would be a bottleneck

§ Improves network latencies
– Control messages between the processor, network, and memory are now on-chip

§ Improved network functionality
– Communication is a first-class citizen and better integrated with processor features
– E.g., network atomic operations can be atomic with respect to processor atomics

§ Seems unlikely in the near term


Page 11:

Processing Bottlenecks in Traditional Protocols

§ Ex: TCP/IP, UDP/IP

§ Generic architecture for all networks

§ Host processor handles almost all aspects of communication
– Data buffering (copies on sender and receiver)
– Data integrity (checksum)
– Routing aspects (IP routing)

§ Signaling between different layers
– Hardware interrupt on packet arrival or transmission
– Software signals between different layers to handle protocol processing in different priority levels

(See the sockets sketch below for what this kernel-mediated path looks like in code.)

[Diagram: the node architecture with the processing bottleneck highlighted on the host CPUs rather than on the network adapter or switch.]
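To make the kernel-mediated path concrete, here is a minimal sketch (not from the slides) of sending data over a plain TCP socket. The address and port are arbitrary examples; error handling is abbreviated.

```c
/* Traditional kernel-mediated path: every write() traps into the kernel,
 * which copies the buffer into socket buffers, computes checksums, segments
 * the data, and fields interrupts on the receive side. The host CPU pays
 * for all of this protocol processing. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* kernel allocates socket state */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7777);                /* arbitrary example port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *) &addr, sizeof addr) != 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "hello over TCP";
    /* write() copies msg into kernel buffers; TCP/IP processing (checksums,
     * segmentation, retransmission) runs on the host CPU. */
    ssize_t n = write(fd, msg, sizeof msg);
    printf("wrote %zd bytes\n", n);
    close(fd);
    return 0;
}
```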

Page 12:

Network Protocol Stacks: The Offload Era

§ Modern networks spend more and more network real estate on offloading various communication features to hardware

§ Network and transport layers are hardware-offloaded for most modern HPC networks
– Reliability (retransmissions, CRC checks), packetization
– OS-based memory registration, and user-level data transmission

Page 13:

Comparing Offloaded Network Stacks with Traditional Network Stacks

Offloaded HPC network stack:
– Application Layer: MPI, PGAS, File Systems
– Transport Layer: low-level interface, reliable/unreliable protocols
– Network Layer: routing
– Link Layer: flow control, error detection
– Physical Layer: copper or optical
(transport, network, and link layers are offloaded to hardware)

Traditional Ethernet stack:
– Application Layer: HTTP, FTP, MPI, File Systems
– Sockets Interface
– Transport Layer: TCP, UDP
– Network Layer: routing, DNS, management tools
– Link Layer: flow control and error detection
– Physical Layer: copper, optical, or wireless

Page 14:

Current State for Network APIs

§ A large number of network-vendor-specific APIs
– InfiniBand Verbs, Intel PSM2, IBM PAMI, Cray Gemini/DMAPP, …

§ Recent efforts to standardize these low-level communication APIs (a discovery sketch using OFI follows below)
– OpenFabrics Interfaces (OFI)
• Effort from Intel, Cisco, etc., to provide a unified low-level communication layer that exposes features provided by each network
– Unified Communication X (UCX)
• Effort from Mellanox, IBM, ORNL, etc., to provide a unified low-level communication layer that allows for efficient MPI and PGAS communication
– Portals 4
• Effort from Sandia National Laboratories to provide a network-hardware-capability-centric API
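As a concrete illustration of one of these standardized APIs, below is a minimal, hedged sketch of provider discovery with OFI (libfabric) using fi_getinfo(). The hints shown (FI_EP_RDM endpoints, FI_MSG | FI_RMA capabilities) are just one plausible configuration, not something prescribed in the talk; endpoint creation and data transfer are omitted.

```c
/* Query which fabrics/providers are available via libfabric.
 * Compile with -lfabric. Discovery only; no communication is performed. */
#include <rdma/fabric.h>
#include <stdio.h>

int main(void)
{
    struct fi_info *hints = fi_allocinfo();
    hints->ep_attr->type = FI_EP_RDM;          /* reliable datagram endpoint */
    hints->caps = FI_MSG | FI_RMA;             /* ask for send/recv and PUT/GET */

    struct fi_info *info = NULL;
    int ret = fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, hints, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %d\n", ret);
        return 1;
    }
    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider: %s, fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}
```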

Page 15:

Current State of Network APIs


Page 16:

User-level Communication: Memory Registration

Before we do any communication: all memory used for communication must be registered.

1. Registration request
• Send virtual address and length
2. Kernel handles the virtual-to-physical mapping and pins the region into physical memory
• A process cannot map memory that it does not own (security!)
3. Network adapter caches the virtual-to-physical mapping and issues a handle
4. Handle is returned to the application

[Diagram: the four steps flowing between the process, the kernel, and the network adapter.]

(A registration sketch using InfiniBand Verbs follows below.)
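To ground the steps above, here is a hedged sketch of the same flow using the InfiniBand Verbs call ibv_reg_mr(): the kernel pins the pages and the adapter returns a handle (lkey/rkey) used in later operations. Device and protection-domain setup are shown in minimal form; error handling is abbreviated.

```c
/* Register a buffer so the NIC can access it directly.
 * Compile with -libverbs. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);          /* protection domain */

    size_t len = 1 << 20;
    void *buf = malloc(len);

    /* Registration: kernel pins the pages, NIC caches the VA->PA mapping. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```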

Page 17:

User-level Communication: OS Bypass

User-level APIs allow direct interaction with network adapters
§ Contrast with traditional network APIs that trap down to the kernel
§ Eliminates heavyweight context switch
§ Memory registration caches allow for fast buffer re-use, further reducing dependence on the kernel

[Diagram: the process interacts with the network adapter directly, bypassing the kernel.]

Page 18:

Point-to-point (2-sided) Communication

[Diagram: two nodes, each with a processor, memory segments, and a network interface, exchanging a message through send and receive queues.]

– The send entry contains information about the send buffer (possibly multiple non-contiguous segments)
– The receive entry contains information about the receive buffer (possibly multiple non-contiguous segments); incoming messages have to be matched to a receive entry to know where to place the data
– A hardware ACK confirms delivery

(An MPI-level send/receive sketch follows below.)
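At the application level, this two-sided pattern is what MPI_Send/MPI_Recv express. The sketch below (not from the slides) shows the matching on (source, tag, communicator) that determines where incoming data lands; run with two processes, e.g. mpiexec -n 2 ./a.out.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[4] = { 0 };
    if (rank == 0) {
        for (int i = 0; i < 4; i++) buf[i] = i + 1.0;
        /* Post a send buffer with tag 42. */
        MPI_Send(buf, 4, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* The (source, tag, communicator) triple is what incoming messages
         * are matched against to find the right receive entry. */
        MPI_Recv(buf, 4, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %.1f %.1f %.1f %.1f\n",
               buf[0], buf[1], buf[2], buf[3]);
    }
    MPI_Finalize();
    return 0;
}
```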

Page 19:

PUT/GET (1-sided) Communication

[Diagram: the same two nodes; the origin's network interface writes directly into a memory segment on the target without the target posting a receive.]

– The send entry contains information about the send buffer (multiple segments) and the receive buffer (a single segment)
– A hardware ACK confirms completion

(An MPI-level PUT sketch follows below.)
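The MPI-level counterpart of a network PUT is MPI_Put on an exposed window. The sketch below (not from the slides) uses fence synchronization for brevity; run with two ranks.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local[4] = { 0 }, src[4] = { 1.0, 2.0, 3.0, 4.0 };
    MPI_Win win;
    /* Every rank exposes its 'local' array as remotely accessible memory. */
    MPI_Win_create(local, 4 * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)
        /* Write src into rank 1's window starting at displacement 0. */
        MPI_Put(src, 4, MPI_DOUBLE, 1, 0, 4, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("rank 1 window now holds %.1f %.1f %.1f %.1f\n",
               local[0], local[1], local[2], local[3]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```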

Page 20:

Atomic (1-sided) Operations

[Diagram: the origin posts an operation (OP) that the target's network interface applies, reading a source memory segment and updating a destination memory segment.]

– The send entry contains information about the send buffer and the receive buffer

(An MPI-level fetch-and-add sketch follows below.)
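The MPI-level counterpart of a network atomic is MPI_Fetch_and_op. The sketch below (not from the slides) performs a remote fetch-and-add, which on capable networks can be executed by the target NIC; run with two ranks.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long counter = 0;
    MPI_Win win;
    MPI_Win_create(&counter, sizeof(long), sizeof(long),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0) {
        long add = 1, old = -1;
        /* Atomically: old = counter on rank 1; counter += add. */
        MPI_Fetch_and_op(&add, &old, MPI_LONG, 1, 0, MPI_SUM, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("counter on rank 1 is now %ld\n", counter);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```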

Page 21:

Network Protocol Stacks: Specialization

§ Increasing specialization is the focus today
– Networks plan to have further support for noncontiguous data movement, and multiple contexts for multithreaded architectures

§ Networks such as Tofu, Aries, and InfiniBand are already offloading MPI and PGAS features onto hardware
– E.g., PUT/GET communication has hardware support
– Increasing number of atomic operations being offloaded to hardware
• Compare-and-swap, fetch-and-add, swap
– Collective operations (NIC and switch support)
• SHARP collectives
– Tag matching for MPI send/recv
• Bull BXI, Mellanox InfiniBand (ConnectX-5 and later)

Page 22:

Agenda


Network Adapters

Network Topologies

Network/Processor/Memory Interactions

Page 23:

Traditional Network Topologies: Crossbar

§ A network topology describes how different network adapters and switches are interconnected with each other

§ The ideal network topology (for performance) is a crossbar
– All-to-all connection
– Typically done on a single network ASIC
– Current network crossbar ASICs go up to ~64 ports
– All communication is nonblocking

Page 24:

Traditional Network Topologies: Fat-tree

§ Common topology for small- and medium-scale systems
– Nonblocking fat-tree switches available in abundance
• Allows for pseudo-nonblocking communication
• Between all pairs of processes there exists a completely nonblocking path, but not all paths are nonblocking
– More scalable than crossbars, but the number of network links still increases super-linearly with node count (see the back-of-the-envelope sketch below)
• Can get expensive at scale
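As a rough back-of-the-envelope illustration (my own numbers, not the speaker's), assume a full-bisection folded-Clos "fat-tree" built from k-port switches: an L-level fabric supports roughly N = 2*(k/2)^L hosts, and maintaining full bisection needs about N links at every level boundary, so total links are about L*N. Since L grows like log(N) for a fixed radix, links grow like N log N, i.e. super-linearly in N.

```c
/* Back-of-the-envelope fat-tree link count under the assumptions above.
 * Link with -lm. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    int k = 36;                                  /* example switch radix */
    for (int L = 2; L <= 5; L++) {
        double hosts = 2.0 * pow(k / 2.0, L);    /* N = 2*(k/2)^L */
        double links = L * hosts;                /* ~N links per level boundary */
        printf("levels=%d  hosts=%.0f  links=%.0f  links/host=%d\n",
               L, hosts, links, L);
    }
    return 0;
}
```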

Page 25:

Scalable Network Topologies

§ Large-scale topologies must account for hardware cost

§ The Blue Gene, K, and Fugaku supercomputers use a torus network; Cray systems use dragonfly
– Linear increase in the number of links/routers with system size (cost savings)
– Increased network diameter causes increased latency
– Any communication that is more than one hop away has a possibility of interference: congestion is not just possible, but common
– Adaptive routing and traffic classes aim to help minimize congestion

§ Take-away: data locality is critical (see the hop-count sketch below)
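A small illustration (not from the slides) of why locality matters on a torus: the shortest-path hop count between two nodes is the sum, per dimension, of the distance with wrap-around. The dimensions below are arbitrary examples.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hops along one dimension of size n, with wrap-around links. */
static int ring_hops(int a, int b, int n)
{
    int d = abs(a - b);
    return d < n - d ? d : n - d;
}

static int torus_hops(const int a[3], const int b[3], const int dims[3])
{
    int h = 0;
    for (int i = 0; i < 3; i++)
        h += ring_hops(a[i], b[i], dims[i]);
    return h;
}

int main(void)
{
    int dims[3] = { 16, 16, 16 };               /* 4096-node torus (example) */
    int src[3]  = { 0, 0, 0 };
    int near[3] = { 1, 0, 0 };                  /* nearest neighbor: 1 hop */
    int far[3]  = { 8, 8, 8 };                  /* worst case: 8+8+8 = 24 hops */

    printf("neighbor: %d hops, far corner: %d hops\n",
           torus_hops(src, near, dims), torus_hops(src, far, dims));
    return 0;
}
```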

Page 26:

Network Congestion Behavior: Torus

[Figure: bandwidth (Mbps) versus message size (1 byte to 1 MB) on a torus for the process pairs P2-P5, P3-P4 (overlap), and P3-P4 (no overlap), with processes P0 through P7 arranged along a line.]

Page 27:

2D Nearest Neighbor: Process Mapping (XYZ)

[Diagram: a 2D nearest-neighbor process grid mapped onto the X, Y, and Z axes of the torus.]
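To make the mapping question concrete, here is a sketch (not from the slides) of how the same rank lands on different torus coordinates under an XYZT ordering (X varies fastest, the per-node index T slowest) versus TXYZ (T varies fastest, packing consecutive ranks onto the same node). These are two of the mappings compared in the measurements on the next page; which one keeps halo-exchange neighbors close depends on the decomposition. The dimensions are arbitrary examples.

```c
#include <stdio.h>

struct coord { int x, y, z, t; };

static struct coord map_xyzt(int rank, int X, int Y, int Z, int T)
{
    struct coord c;
    c.x = rank % X;  rank /= X;
    c.y = rank % Y;  rank /= Y;
    c.z = rank % Z;  rank /= Z;
    c.t = rank % T;
    (void) T;
    return c;
}

static struct coord map_txyz(int rank, int X, int Y, int Z, int T)
{
    struct coord c;
    c.t = rank % T;  rank /= T;
    c.x = rank % X;  rank /= X;
    c.y = rank % Y;  rank /= Y;
    c.z = rank % Z;
    return c;
}

int main(void)
{
    int X = 8, Y = 8, Z = 8, T = 4;             /* 8x8x8 torus, 4 ranks per node */
    for (int rank = 0; rank < 6; rank++) {
        struct coord a = map_xyzt(rank, X, Y, Z, T);
        struct coord b = map_txyz(rank, X, Y, Z, T);
        printf("rank %d: XYZT -> (%d,%d,%d,%d)   TXYZ -> (%d,%d,%d,%d)\n",
               rank, a.x, a.y, a.z, a.t, b.x, b.y, b.z, b.t);
    }
    return 0;
}
```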

Page 28:

Nearest Neighbor Performance: Torus

[Figure: 2D halo-exchange execution time (us) versus grid partition size (2 bytes to 1 KB) for the process mappings XYZT, TXYZ, ZYXT, and TZYX, at system sizes of 16K cores (left) and 128K cores (right).]

Page 29:

Agenda


Network Adapters

Network Topologies

Network/Processor/Memory Interactions

Page 30:

Network Interactions with Memory/Cache

§ Most network interfaces understand and work with the cache coherence protocols available on modern systems
– Users do not have to ensure that data is flushed from cache before communication
– Network and memory controller hardware understand what state the data is in and communicate appropriately

Page 31:

Send-side Network Communication

[Diagram: on the send side, the application buffer travels from memory (or the L3 cache) across the memory bus and north bridge, then over the I/O bus to the NIC.]

Page 32:

Receive-side Network Communication

[Diagram: on the receive side, the NIC delivers the incoming data over the I/O bus and through the north bridge into the application buffer in memory (and/or the L3 cache).]

Page 33:

Network/Processor Interoperation Issues

§ Direct cache injection
– Most current networks inject data into memory
• If data is in cache, they flush the cache and then inject into memory
– Some networks are investigating direct cache injection
• Data can be injected directly into the last-level cache
• Can be tricky, since it can cause cache pollution if the incoming data is not used immediately

§ Atomic operations
– Current network atomic operations are only atomic with respect to other network operations, not with respect to processor atomics
• E.g., a network fetch-and-add and a processor fetch-and-add might corrupt each other's data (see the sketch below)
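One common way to sidestep this hazard, sketched below as a suggestion rather than anything prescribed in the talk, is to route every update to the shared location through the same atomicity domain: even the rank that owns the counter updates it with MPI_Fetch_and_op on itself instead of a CPU atomic, so NIC-side and CPU-side updates cannot race.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long counter = 0;
    MPI_Win win;
    MPI_Win_create(&counter, sizeof(long), sizeof(long),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    long one = 1, old;
    /* Every rank, including rank 0 itself, increments rank 0's counter
     * through the window, so all updates share one atomicity domain. */
    MPI_Fetch_and_op(&one, &old, MPI_LONG, 0, 0, MPI_SUM, win);
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("counter = %ld (expected %d)\n", counter, size);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```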

Page 34:

Network Interactions with Accelerators

§ PCI Express peer-to-peer capabilities enable network adapters to directly access third-party devices
– Coordination between the network adapter and accelerators (GPUs, FPGAs, …)
– Data does not need to be copied to/from CPU buffers when going over the network
– E.g., NVIDIA GPUDirect RDMA, AMD ROCmRDMA
– Network operations to/from GPU buffers are initiated by the CPU (see the sketch below)
– GPU-triggered operations are under investigation

[Diagram: two nodes, each with a GPU and its memory, a CPU and its memory, and a network card; with peer-to-peer access the network card reads GPU memory directly instead of staging through CPU memory.]
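A hedged sketch of what this looks like from the application, assuming a CUDA-aware MPI build: the device pointer is passed straight to MPI, and with GPUDirect RDMA the NIC can move the data over PCIe peer-to-peer without a staging copy through host memory. The CPU still initiates the operation, as noted above. Run with two ranks.

```c
#include <cuda_runtime.h>
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *dbuf;
    cudaMalloc((void **) &dbuf, n * sizeof(float));   /* device memory */
    cudaMemset(dbuf, 0, n * sizeof(float));

    if (rank == 0)
        MPI_Send(dbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);   /* device pointer */
    else if (rank == 1) {
        MPI_Recv(dbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", n);
    }

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```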

Page 35:

Summary

§ These are interesting times for all components in the overall system architecture: compute, memory, interconnect
– And interesting times for computational science on these systems

§ Interconnect technology continues to advance
– Integration and interoperability are the key to removing bottlenecks and improving functionality
• Compute/memory/network integration will continue to advance for the foreseeable future
– Offload technologies continue to evolve as we move more functionality to the network hardware
– Network topologies are becoming more "shared" (cost saving)

Page 36:

Thank You!

Email: [email protected]