
Slide 1: Assumptions, Hypothesis, Hopes: RAMCloud

Mendel Rosenblum
Stanford University
April 1, 2010

Slide 2: Talk Agenda

● Present the assumptions we are making in RAMCloud
  - Infrastructure for data centers 5 years in the future

● Hardware and workload assumptions

● Elicit feedback
  - Unconscious assumptions
  - Hopelessly wrong assumptions

● Disclaimer: Goal #1 of RAMCloud is research
  - Research agenda is pushing latency and scale
  - Significant risk is OK
  - Version 1.0 design decisions (OK to leave stuff out, oversimplify)

Slide 3: Network Assumptions

● Assumption: Network latency is going down
  - Microsecond latencies require low-latency switches & NICs
  - Encouraging: Arista, 0.6 μs/switch
  - Lots of discouraging signs: chicken-and-egg problem

● Assumption: No significant oversubscription
  - Scale-based solutions require high bisection bandwidth
  - Example: recovery (see the sketch below)

● Actively engaging networking researchers and industry partners

Vote: Are we crazy?
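
As a rough illustration of why recovery pushes on bisection bandwidth, here is a back-of-the-envelope sketch in Python; the 64 GB server size and 10 Gb/s links are assumed for illustration rather than taken from the slides.

    # Back-of-the-envelope: rebuilding one crashed server's DRAM in ~2 s.
    # Assumed numbers (not from the slides): 64 GB of DRAM per server,
    # 10 Gb/s per link, data scattered across many backups and read in parallel.
    dram_bytes = 64 * 2**30              # data to re-read after a crash
    target_seconds = 2.0                 # recovery-time assumption (Slide 4)
    link_gbps = 10                       # assumed per-machine link speed

    needed_gbps = dram_bytes * 8 / 1e9 / target_seconds
    machines_in_parallel = needed_gbps / link_gbps
    print(f"aggregate bandwidth needed: {needed_gbps:.0f} Gb/s")
    print(f"machines transferring in parallel: {machines_in_parallel:.0f}")
    # ~275 Gb/s in aggregate, i.e. tens of machines streaming at full link
    # speed at once, which only works if the core is not oversubscribed.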

Slide 4: Workload Assumptions

● Less than 2-second recovery time is fast enough

● Applications will tolerate 2-second hiccups

Vote: Will enough important applications tolerate 2-second pauses when failures happen?

Slide 5: Server Assumptions

● Assumption: Simultaneous failure of all servers is unlikely (i.e. all DRAMs can't lose their contents at once)
  - Requires an uninterruptible power supply
  - Example: Google's battery backup per machine

● Need only enough power to transfer write buffers to non-volatile storage

● Want to avoid non-volatile storage latency in the write request path (see the sketch below)

Vote: What do you think?
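
A minimal sketch, assuming battery-backed DRAM buffers on backup machines, of how writes could be acknowledged without waiting on non-volatile storage; the names below are hypothetical illustrations, not the RAMCloud implementation.

    # Hypothetical write path: acknowledge once the record is buffered in
    # battery-backed DRAM on a few backups; flush to disk/flash later,
    # off the latency-critical path.

    def persist_to_disk(records):
        # Placeholder for the asynchronous flush to non-volatile storage.
        pass

    class Backup:
        def __init__(self):
            self.buffer = []                 # battery-backed DRAM buffer

        def buffer_write(self, record):
            self.buffer.append(record)       # DRAM append, microseconds

        def flush(self):
            # Runs in the background or on power failure, never on the
            # client-visible write path.
            persist_to_disk(self.buffer)
            self.buffer.clear()

    def write(master_memory, backups, key, value):
        master_memory[key] = value           # update the primary DRAM copy
        for b in backups:                    # e.g. 3 backups (assumed)
            b.buffer_write((key, value))     # DRAM-to-DRAM over the network
        return "OK"                          # ack before any disk/flash I/O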

Slide 6: Assumption: Latency is goodness

● High latency is a major application inhibitor
  - Increases complexity, decreases innovation
  - RAMCloud will enable new applications

● Low latency is underappreciated
  - Users won't ask for it but will love it

Vote: Are we right?

Slide 7: Low Latency: Stronger Consistency?

● Might be able to help with scaling challenges
  - Example: durability, update multiple in-memory replicas

● Cost of consistency rises with transaction overlap: O ~ R * D
  - O = number of overlapping transactions
  - R = arrival rate of new transactions
  - D = duration of each transaction

● R increases with system scale
  - Eventually, scale makes consistency unaffordable

● But D decreases with lower latency
  - Stronger consistency affordable at larger scale
  - Is this phenomenon strong enough to matter? (see the worked example below)
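
A quick worked example of the O ~ R * D relationship; the arrival rate and durations below are chosen purely for illustration.

    # Overlap = arrival rate * transaction duration (O ~ R * D).
    # Illustrative numbers, not from the slides.
    R = 100_000              # transactions/second arriving across the system

    D_disk_backed = 10e-3    # 10 ms per transaction with disk-based storage
    D_low_latency = 10e-6    # 10 us per transaction at RAMCloud-like latency

    print("overlap, disk-backed:", R * D_disk_backed)   # ~1000 concurrent txns
    print("overlap, low-latency:", R * D_low_latency)   # ~1 concurrent txn
    # Same arrival rate, but cutting D by 1000x cuts the expected overlap
    # (and hence conflict cost) by the same factor, so stronger consistency
    # stays affordable at a much larger scale.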

Slide 8: Locality Assumptions

● Locality is getting harder to find and exploit
  - Example: Facebook usage

● Flat model rather than tiered storage

Vote: Right?

Slide 9: Read/Write Ratio Assumption

● Read requests dominate writes

● Postulate: Fast reads will cause applications to use more information, leading to more reads than writes

Vote: What do you think?

Slide 10: Replication Assumptions

● Replication isn’t needed for performance

● High throughput and load balancing eliminate the need for object replication

● Few applications need more than 1M ops/second to a single object

Vote: Right?

Slide 11: WAN Replication Assumption

● There are interesting applications that don't require cross-data-center replication
  - Speed of light is not our friend (see the arithmetic below)
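
To put a number on the speed-of-light point, here is a rough propagation-delay calculation; the distance and fiber speed are assumed for illustration.

    # Rough propagation delay for cross-data-center replication.
    fiber_speed_km_per_s = 200_000   # ~2/3 the speed of light in fiber
    distance_km = 4_000              # assumed, roughly coast-to-coast

    one_way_ms = distance_km / fiber_speed_km_per_s * 1e3
    print(f"one-way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
    # ~20 ms one way / ~40 ms round trip, versus the few microseconds
    # targeted inside a data center: synchronous WAN replication would
    # erase the latency advantage, so such applications are out of scope.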

Slide 12: Transaction Assumption

● Multi-object update support is required
  - Distributed applications need support for concurrent accesses (a sketch of one possible form follows)
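
One possible shape for multi-object updates, sketched only to make the requirement concrete; the version-checked multi_write below is a hypothetical illustration, not an actual RAMCloud interface.

    # Hypothetical multi-object update: apply several writes atomically
    # only if none of the objects changed since the caller read them
    # (optimistic, version-based). Single-threaded sketch for clarity.
    def multi_write(store, updates):
        """updates: list of (key, expected_version, new_value)."""
        # Validate: every object must still be at the version the caller saw.
        for key, expected_version, _ in updates:
            if store[key]["version"] != expected_version:
                return False             # conflict: caller re-reads and retries
        # Apply: all writes become visible together.
        for key, _, new_value in updates:
            store[key] = {"value": new_value,
                          "version": store[key]["version"] + 1}
        return True

    store = {"a": {"value": 1, "version": 7}, "b": {"value": 2, "version": 3}}
    print(multi_write(store, [("a", 7, 10), ("b", 3, 20)]))   # True
    print(store)   # both objects updated, versions bumped to 8 and 4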

Slide 13: Security assumptions

● Threat model: RAMCloud servers and network are physically secure

● Don’t need encryption on RPCs, DoS defense, etc.

Slide 14: Data Model Assumption

● Simple table/object model with indexing is sufficient
  - Dictated by speed
  - SQL/relational model can be built on top (see the sketch below)
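
A minimal sketch of what a simple table/object model with indexing might look like from the client side; every name and signature here is a hypothetical illustration of the assumption, not the actual RAMCloud data model.

    # Hypothetical table/object model: tables map keys to opaque values,
    # with an optional secondary index from an indexed field to keys.
    class Table:
        def __init__(self):
            self.objects = {}    # key -> value (opaque to the storage system)
            self.index = {}      # indexed field -> set of keys

        def write(self, key, value, indexed_field=None):
            self.objects[key] = value
            if indexed_field is not None:
                self.index.setdefault(indexed_field, set()).add(key)

        def read(self, key):
            return self.objects[key]

        def lookup(self, indexed_field):
            # Secondary-index lookup: which keys carry this field value?
            return self.index.get(indexed_field, set())

    users = Table()
    users.write("user:42", {"name": "alice"}, indexed_field="alice")
    print(users.read("user:42"))         # {'name': 'alice'}
    print(users.lookup("alice"))         # {'user:42'}
    # Richer SQL/relational semantics would be layered on top of
    # primitives like these, per the assumption above.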

Slide 15: Thank you