
Page 1:

Designing High-Performance, Resilient and Heterogeneity-Aware Key-Value Storage for Modern HPC Clusters

Dipti Shankar

Network-Based Computing Laboratory

The Ohio State University

http://hibd.cse.ohio-state.edu/

Page 2:

Overview of Web 2.0 Architecture and Memcached

• Three-layer architecture of Web 2.0

– Web Servers, Memcached Servers, Database Servers

• Memcached is a core component of the Web 2.0 architecture

!"#$%"$#&

Web Frontend Servers (Memcached Clients)

(Memcached Servers)

(Database Servers)

High

Performance

Networks

High

Performance

NetworksInternet

Page 3:

Memcached Architecture

• Distributed Caching Layer

– Allows aggregating spare memory from multiple nodes

– General purpose

• Typically used to cache database queries and results of API calls

• Scalable model, but typical usage is very network-intensive

[Figure: Memcached server nodes, each with CPUs, main memory, SSD, and HDD, connected by high-performance networks]

Page 4:

Interconnects and Protocols in OpenFabrics Stack

[Figure: OpenFabrics stack options. Applications/middleware access the network through either the Sockets or the Verbs interface; the protocol, adapter, and switch combinations shown are:

– 1/10/40/100 GigE: kernel-space TCP/IP (Ethernet driver), Ethernet adapter, Ethernet switch

– 10/40 GigE-TOE: hardware-offloaded TCP/IP, Ethernet adapter, Ethernet switch

– IPoIB: kernel-space TCP/IP over IPoIB, InfiniBand adapter, InfiniBand switch

– RSockets: user-space RSockets, InfiniBand adapter, InfiniBand switch

– SDP: Sockets Direct Protocol, InfiniBand adapter, InfiniBand switch

– iWARP: user-space TCP/IP (iWARP), iWARP adapter, Ethernet switch

– RoCE: user-space RDMA, RoCE adapter, Ethernet switch

– IB Native: user-space RDMA, InfiniBand adapter, InfiniBand switch]

Page 5:

Open Standard InfiniBand Networking Technology

• Introduced in Oct 2000

• High-performance data transfer

– Interprocessor communication and I/O

– Low latency (< 1.0 microsec), high bandwidth (up to 12.5 GBytes/sec, i.e., 100 Gbps), and low CPU utilization (5–10%)

• Flexibility for LAN and WAN communication

• Multiple transport services

– Reliable Connection (RC), Unreliable Connection (UC), Reliable Datagram (RD), Unreliable Datagram (UD), and Raw Datagram

– Provides flexibility to develop upper layers

• Multiple operations (see the verbs sketch below)

– Send/Recv

– RDMA Read/Write

– Atomic operations (very unique)

• High-performance and scalable implementations of distributed locks, semaphores, and collective communication operations

• Leading to big changes in designing HPC clusters, file systems, cloud computing systems, grid computing systems, …
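As an illustration of the verbs-level operations listed above, the following minimal sketch posts a one-sided RDMA Write with the OpenFabrics verbs API. It assumes the queue pair, memory region, and the peer's buffer address/rkey were set up during connection establishment; the function and variable names (post_rdma_write, qp, mr, remote_addr, remote_rkey) are illustrative placeholders, not code from this talk.

#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Minimal sketch: post a one-sided RDMA Write on an already-connected QP.
 * qp, mr, and the remote address/rkey are assumed to have been exchanged
 * and registered earlier. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *local_buf, uint32_t len,
                           uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local registered buffer */
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;        /* advertised by the peer */
    wr.wr.rdma.rkey        = remote_rkey;

    return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
}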

Page 6:

Memcached-RDMA Design

• Server and client perform a negotiation protocol

– The master thread assigns clients to an appropriate worker thread (sketched below)

• Once a client is assigned a verbs worker thread, it can communicate with it directly and is "bound" to that thread

• All other Memcached data structures are shared among RDMA and sockets worker threads

• The Memcached server can serve both sockets and verbs clients simultaneously

• Memcached applications need not be modified; the verbs interface is used if available

[Figure: sockets and RDMA clients first contact the master thread (1), which binds each to a sockets or verbs worker thread (2); all worker threads share the Memcached data structures (memory, slabs, items)]
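A minimal sketch of the dispatch pattern described above, assuming a hypothetical accept loop: the master thread accepts an incoming connection, checks which transport the client negotiated, and binds the connection to a matching worker thread. The types and helpers (conn_t, accept_connection, negotiated_verbs, pick_worker, enqueue_conn) are illustrative placeholders and not part of the RDMA-Memcached code.

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative sketch of the master/worker dispatch on this slide.
 * All types and helpers below are hypothetical placeholders. */
typedef struct conn conn_t;

extern conn_t *accept_connection(void);                 /* wait for a new client      */
extern bool    negotiated_verbs(const conn_t *c);       /* outcome of the negotiation */
extern int     pick_worker(bool verbs);                 /* e.g., round-robin choice   */
extern void    enqueue_conn(int worker_id, conn_t *c);  /* bind client to the worker  */

static void *master_thread(void *arg)
{
    (void)arg;
    for (;;) {
        conn_t *c  = accept_connection();
        bool verbs = negotiated_verbs(c);
        /* Once bound, the client talks only to this worker thread; shared
         * data (slabs, items) is protected by Memcached's existing locks. */
        enqueue_conn(pick_worker(verbs), c);
    }
    return NULL;
}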

Page 7:

Performance on SDSC-Comet – OHB Latency & YCSB-B Benchmarks

• OHB latency: end-to-end point-to-point Set/Get latency; Read:Write 90:10 workload

– Improves performance by about 71% over Memcached-IPoIB

• YCSB: three-node Memcached cluster, 64 GB memory/node, 32 compute nodes for clients

– Improves the overall throughput of the read-heavy YCSB-B workload by about 5.7X compared to default Memcached running over IPoIB

Page 8:

Micro-benchmark Evaluation for OLDP workloads

[Charts: query latency (sec) and throughput (Kq/s) vs. number of clients (64–400) for Memcached-IPoIB (32Gbps) and Memcached-RDMA (32Gbps)]

• Illustration with a Read-Cache-Read access pattern using a modified mysqlslap load-testing tool

• RDMA-Memcached can

– improve query latency by up to 66% over IPoIB (32Gbps)

– improve throughput by up to 69% over IPoIB (32Gbps)

D. Shankar, X. Lu, J. Jose, M. W. Rahman, N. Islam, and D. K. Panda, Can RDMA Benefit On-Line Data Processing Workloads with Memcached and MySQL, ISPASS’15

Page 9:

Evaluation with Transactional and Web-oriented Workloads

[Charts: throughput (Kq/s) vs. number of clients for Memcached-IPoIB (32Gbps) and Memcached-RDMA (32Gbps) under transactional (4–32 clients) and web-oriented (4–128 clients) workloads]

Transactional workloads (example: TATP)

• Up to 29% improvement in overall throughput compared to default Memcached running over IPoIB

Web-oriented workloads (example: Twitter workload)

• Up to 42% improvement in overall throughput compared to default Memcached running over IPoIB

Page 10:

Performance Benefits on SDSC-Gordon – OHB Latency & Throughput Micro-Benchmarks

• ohb_memlat & ohb_memthr latency & throughput micro-benchmarks

• RDMA-Memcached can improve query latency by up to 70% over IPoIB and throughput by up to 2X over IPoIB

– No overhead in using hybrid mode when all data can fit in memory

[Charts: throughput (million trans/sec) vs. number of clients (64–1024), showing a ~2X gain for RDMA-Mem (32Gbps) over IPoIB (32Gbps), and average latency (us) vs. message size (bytes)]

Page 11:

Performance on OSU-RI-SSD – Hybrid Memcached

ohb_memhybrid – uniform access pattern, single client and single server with 64 MB

• Success rate of in-memory vs. hybrid SSD-memory for different spill factors

– 100% success rate for the hybrid design, while that of the pure in-memory design degrades

• Average latency with penalty for in-memory vs. hybrid SSD-assisted mode at a spill factor of 1.5

– Up to 53% improvement over in-memory, with a server miss penalty as low as 1.5 ms

[Charts: latency with penalty (us) vs. message size (512B–128K) for RDMA-Mem vs. RDMA-Hybrid (32Gbps), showing up to 53% improvement, and success rate vs. spill factor (1–1.45) for RDMA-Mem-Uniform vs. RDMA-Hybrid]

Page 12:

Overview of SSD-Assisted Hybrid RDMA-Memcached

• Hybrid slab allocation and management for higher data retention

• Log-structured sequence of blocks flushed to SSD

• SSD fast random reads to achieve low-latency object access

• Uses LRU to evict data to SSD (see the sketch below)
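A minimal sketch of the 'DRAM + SSD' hybrid idea, not the actual RDMA-Memcached implementation: values evicted by LRU are appended to a log-structured file on the SSD, and a GET that misses in DRAM fetches the value back with a single positioned random read. All names here (ssd_location_t, evict_to_ssd, read_from_ssd, ssd_log_fd) are hypothetical.

#include <unistd.h>
#include <stdint.h>
#include <sys/types.h>

/* Hypothetical sketch of SSD-assisted hybrid storage: evicted objects are
 * appended to a log-structured SSD file; misses issue one random read. */
typedef struct {
    off_t    ssd_offset;   /* where the value lives in the SSD log */
    uint32_t value_len;    /* length of the serialized value       */
} ssd_location_t;

/* Append an LRU-evicted value to the SSD log and return its location. */
static ssd_location_t evict_to_ssd(int ssd_log_fd, const void *value,
                                   uint32_t len)
{
    ssd_location_t loc;
    loc.ssd_offset = lseek(ssd_log_fd, 0, SEEK_END);  /* log-structured append */
    loc.value_len  = len;
    (void)write(ssd_log_fd, value, len);              /* flush block to SSD    */
    return loc;
}

/* Serve a GET whose value was spilled: one positioned random read. */
static ssize_t read_from_ssd(int ssd_log_fd, ssd_location_t loc, void *buf)
{
    return pread(ssd_log_fd, buf, loc.value_len, loc.ssd_offset);
}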

Page 13:

Key-Value Storage in HPC and Data Centers

• General-purpose distributed memory-centric storage

– Allows aggregating spare memory from multiple nodes (e.g., Memcached)

• Accelerating online and offline analytics in High-Performance Computing (HPC) environments

• Our basis: current high-performance and hybrid key-value stores for modern HPC clusters

❖ High-performance network interconnects (e.g., InfiniBand)

❖ Low end-to-end latencies with IP-over-InfiniBand (IPoIB) and Remote Direct Memory Access (RDMA)

❖ ‘DRAM+SSD’ hybrid memory designs

❖ Extend storage capacity beyond DRAM using high-speed SSDs

!"#$%"$#&

Web Frontend Servers (Memcached Clients)

(Memcached Servers)

(Database Servers)

High

Performance

Networks

High

Performance

Networks

(Offline Analytical Workloads: Software-Assisted Burst-Buffer )

(Online Analytical Workloads: OLTP/NoSQL Query Cache)

Page 14:

Drivers of Modern HPC Cluster Architectures

• Multi-core/many-core technologies

• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)

• Solid State Drives (e.g., PCIe/NVMe-SSDs), NVRAM (e.g., PCM, 3D XPoint), and parallel file systems (e.g., Lustre)

• Accelerators (e.g., NVIDIA GPGPUs)

• Production-scale HPC clusters: SDSC Comet, TACC Stampede, OSC Owens, etc.

[Figure: building blocks of modern HPC clusters: high-performance interconnects; multi-core processors with vectorization support plus accelerators (GPUs); large-memory nodes (DRAM); SSD, NVMe-SSD, and NVRAM]

Page 15:

Designing a High-Performance, Resilient and Heterogeneity-Aware KV Storage

• Current and emerging HPC systems

• Goals:

– Maximize end-to-end performance

– Exploit all HPC resources (compute/storage/network)

– Enable HPC Big Data applications to leverage memory-centric KV storage

[Figure: holistic approach to HPC-centric KV store design, combining an RDMA-aware communication engine, hybrid ‘DRAM+NVM’ storage, SIMD-aware KV access, and emerging DRAM/NVRAM designs]

Page 16:

High-Performance Non-Blocking API Semantics

❖ Heterogeneous storage-aware key-value stores (e.g., ‘DRAM + PCIe/NVMe-SSD’)

• Higher data retention at the cost of SSD I/O; suitable for out-of-memory scenarios

• Performance limited by blocking API semantics

❖ Goal: achieve near in-memory speeds while being able to exploit hybrid memory

❖ Approach: novel non-blocking API semantics extending the RDMA-Libmemcached library

• memcached_(iset/iget/bset/bget) APIs for SET/GET

• memcached_(test/wait) APIs for progressing communication (see the usage sketch below)

D. Shankar, X. Lu, N. Islam, M. W. Rahman, and D. K. Panda, “High-Performance Hybrid Key-Value Store on Modern Clusters with RDMA Interconnects and SSDs: Non-blocking Extensions, Designs, and Benefits”, IPDPS 2016

[Figure: the client-side Libmemcached library issues blocking set/get and non-blocking iset/iget/bset/bget requests, progressed with test/wait, through an RDMA-enhanced communication engine; the server pairs its own RDMA-enhanced communication engine with a hybrid slab manager (RAM+SSD)]
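A minimal usage sketch of the non-blocking extensions named above. The talk does not show the exact prototypes in RDMA-Libmemcached, so the request-handle type and the iset/test/wait signatures below are assumptions for illustration only; memcached_st and memcached_return_t are the standard libmemcached types.

#include <libmemcached/memcached.h>
#include <string.h>
#include <stdbool.h>

/* Assumed prototypes for the non-blocking extensions mentioned on this slide
 * (iset + test/wait). The argument order and the request-handle type are
 * illustrative guesses, NOT the documented RDMA-Libmemcached signatures. */
typedef struct memcached_req_st memcached_req_st;
memcached_return_t memcached_iset(memcached_st *mc, const char *key, size_t klen,
                                  const char *val, size_t vlen, time_t exp,
                                  uint32_t flags, memcached_req_st **req);
memcached_return_t memcached_test(memcached_st *mc, memcached_req_st *req,
                                  bool *done);
memcached_return_t memcached_wait(memcached_st *mc, memcached_req_st *req);

extern void do_other_work(void);   /* placeholder for overlapped computation */

static void store_async(memcached_st *mc, const char *key,
                        const char *val, size_t vlen)
{
    memcached_req_st *req = NULL;
    bool done = false;

    /* Issue the SET without waiting for the server's (possibly SSD-bound) reply. */
    memcached_iset(mc, key, strlen(key), val, vlen, 0, 0, &req);

    /* Overlap useful work with the outstanding request ... */
    do_other_work();

    /* ... then poll for completion, or block until the response arrives. */
    if (memcached_test(mc, req, &done) != MEMCACHED_SUCCESS || !done)
        memcached_wait(mc, req);
}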

Page 17:

High-Performance Non-Blocking API Semantics

• Set/Get latency with the non-blocking API: up to 8x gain in overall latency vs. blocking API semantics over the RDMA+SSD hybrid design

• Up to 2.5x gain in throughput observed at the client; ability to overlap request and response phases to hide SSD I/O overheads

[Chart: average Set/Get latency (us), broken down into miss penalty (backend DB access overhead), client wait, server response, cache update, cache check+load (memory and/or SSD read), and slab allocation (with SSD write on out-of-memory), for IPoIB-Mem, RDMA-Mem, H-RDMA-Def, H-RDMA-Opt-Block, H-RDMA-Opt-NonB-i, and H-RDMA-Opt-NonB-b; the non-blocking variants overlap SSD I/O access to reach near in-memory latency]

Page 18:

Fast Online Erasure Coding with RDMA

❖ Erasure Coding (EC): a low storage-overhead alternative to replication

❖ Bottlenecks for online EC:

❖ Compute overhead: encoding/decoding

❖ New communication overhead: scatter/gather of distributed data/parity chunks per KV request

❖ Goal: making online EC viable for key-value stores

❖ Approach: non-blocking RDMA-aware semantics to enable compute/communication overlap

❖ Encode/decode offload capabilities integrated into the Memcached client (CE/CD) and server (SE/SD)

[Figure: EC performance vs. memory overhead, positioning replication (memory-hungry but fast), EC (memory-efficient), no fault tolerance, and the ideal scenario (memory-efficient and high-performance); e.g., storage overhead is 66% for Reed-Solomon EC vs. 200% for Rep=3, worked out below]
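A quick worked comparison of the storage overheads quoted above, assuming a Reed-Solomon code with k = 3 data chunks and m = 2 parity chunks versus 3-way replication (overhead = extra bytes stored / original bytes):

\[
\text{RS}(k{=}3,\ m{=}2):\quad \frac{m}{k} \;=\; \frac{2}{3} \;\approx\; 66\%
\qquad\qquad
\text{Replication (Rep=3)}:\quad \frac{3-1}{1} \;=\; 200\%
\]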

Page 19:

Fast Online Erasure Coding with RDMA

D. Shankar, X. Lu, and D. K. Panda, “High-Performance and Resilient Key-Value Store with Online Erasure Coding for Big Data Workloads”, 37th International Conference on Distributed Computing Systems (ICDCS 2017)

[Charts: total Set latency (us) vs. value size (512 B–1 M) for Sync-Rep=3, Async-Rep=3, Era(3,2)-CE-CD, Era(3,2)-SE-SD, and Era(3,2)-SE-CD on an Intel Westmere + QDR-IB cluster (Rep=3 vs. RS(3,2)), showing ~1.6x–2.8x gains; aggregate YCSB-A/YCSB-B throughput (Kops/sec) for 4K–32K values with Memc-RDMA-NoRep, Async-Rep=3, Era(3,2)-CE-CD, and Era(3,2)-SE-CD, showing ~1.34x–1.5x gains]

• Experiments with YCSB for online EC vs. asynchronous replication:

– 150 clients on 10 nodes of the SDSC Comet cluster (IB FDR + 24-core Intel Haswell) over a 5-node RDMA-Memcached cluster

– (1) CE-CD gains ~1.34x for update-heavy workloads, with SE-CD on par; (2) CE-CD and SE-CD are on par for read-heavy workloads


Page 20:

Co-Designing Key-Value Store-based Burst Buffer over PFS

• Offline data analytics use-case: a hybrid and resilient key-value store-based burst-buffer system over Lustre (Boldio)

• Overcomes local storage limitations on HPC nodes while retaining the performance of ‘data locality’

• Light-weight, transparent interface to Hadoop/Spark applications

• Accelerating I/O-intensive Big Data workloads

– Non-blocking RDMA-Libmemcached APIs to maximize overlap

– Client-based replication or online erasure coding with RDMA for resilience

– Asynchronous persistence to the Lustre parallel file system at the RDMA-Memcached servers

D. Shankar, X. Lu, and D. K. Panda, “Boldio: A Hybrid and Resilient Burst-Buffer over Lustre for Accelerating Big Data I/O”, IEEE International Conference on Big Data 2016 (Short Paper)

Page 21:

Co-Designing Key-Value Store-based Burst Buffer over PFS

• TestDFSIO on SDSC Gordon Cluster (16-core Intel Sandy Bridge and IB QDR) with 16-node MapReduce Cluster + 4-node Boldio Cluster

• Boldio can sustain 3x and 6.7x gains in read and write throughputs over stand-alone Lustre

[Chart: aggregate TestDFSIO throughput (MBps) for 60 GB and 100 GB write/read workloads, Lustre-Direct vs. Boldio, showing 6.7x (write) and 3x (read) gains]

[Chart: DFSIO aggregate throughput (MBps) for write/read with Lustre-Direct, Alluxio-Remote, Boldio_Async-Rep=3, and Boldio_Online-EC, showing no EC overhead (Rep=3 vs. RS(3,2))]

• TestDFSIO on Intel Westmere Cluster (8-core Intel Sandy Bridge and IB QDR); 8-node MapReduce Cluster + 5-node Boldio Cluster over Lustre

• Performance gains over designs like Alluxio (formerly Tachyon) in HPC environments with no local storage

Page 22:

The High-Performance Big Data (HiBD) Project

• RDMA for Apache Spark (RDMA-Spark), RDMA for Apache Hadoop 2.x (RDMA-Hadoop-2.x), and RDMA for Apache HBase

• RDMA for Memcached (RDMA-Memcached)

– RDMA-aware ‘DRAM+SSD’ hybrid Memcached server design

– Non-blocking RDMA-based client API designs (RDMA-Libmemcached)

– Based on Memcached 1.5.3 and Libmemcached client 1.0.18

– Available for InfiniBand and RoCE

• OSU HiBD-Benchmarks (OHB)

– Memcached Set/Get micro-benchmarks for blocking and non-blocking APIs, and for hybrid Memcached designs

– YCSB plugin for RDMA-Memcached

– Also includes HDFS, HBase, and Spark micro-benchmarks

• http://hibd.cse.ohio-state.edu

• User base: 290 organizations, 34 countries, 28,250 downloads

Page 23:

Concluding Remarks

• Presented an overview of the Web 2.0 Memcached architecture

• Provided an overview of modern cluster networking technologies and key-value store use-cases

• Presented the RDMA for Memcached software under the HiBD Project: (1) RDMA-based communication engine for InfiniBand clusters; (2) hybrid design with high-speed SSDs; (3) non-blocking API extensions to RDMA-Libmemcached

• Presented (1) fast online erasure-coding resilience with RDMA; (2) a key-value store-based burst-buffer use-case for offline Hadoop-based analytics

• Enabling the Big Data processing community to take advantage of modern HPC technologies to carry out their analytics in a fast and scalable manner

Page 24:

Conclusion & Future Avenues❖ Holistic approach to designing key-value storage by exploiting the capabilities of HPC

clusters for (1) performance, (2) scalability, and, (3) data resilience/availability

❖ RDMA-capable Networks: (1) Proposed Non-blocking RDMA-based Libmemcached APIs

(2) Fast Online EC-based RDMA-Memcached designs

❖ Heterogeneous Storage-Awareness: (1) Leverage `RDMA+SSD’ hybrid designs, (2)

`RDMA+NVRAM’ Persistent Key-Value Storage

❖ Application Co-Design: Memory-centric data-intensive applications on HPC Clusters

❖ Online (e.g., SQL query cache, YCSB) and Offline Data Analytics (e.g., Boldio Burst-Buffer for

Hadoop I/O)

❖ Future Work: Ongoing work in this thesis direction

❖ Heterogeneous compute capabilities of CPU/GPU: End-to-end SIMD-aware KVS designs

❖ Exploring co-design of (1) Read-intensive Graph-based workloads (E.g., LinkBench, RedisGraph)

(2) Key-value storage engine for ML Parameter Server frameworks