Introduction to Databus


DESCRIPTION

This talk was given by Subbu Subramanian (Staff Software Engineer @ LinkedIn) at Netflix in 2012.

TRANSCRIPT

Page 1: Introduction to Databus

Databus

1/29/2013

Page 2: Introduction to Databus

INTRODUCTION

Page 3: Introduction to Databus

LinkedIn by Numbers

• World’s largest professional network
• 187M+ members world-wide as of Q3 2012, growing at a rate of two per second
• 85 of the Fortune 100 companies use Talent Solutions to hire
• 2.6M+ company pages
• 4B+ search queries
• 75K+ developers leveraging our APIs
• 1.3M unique publishers

Page 4: Introduction to Databus

The Consequence of Specialization in Data Systems

• Data consistency is critical!!!
• Data flow is essential

Page 5: Introduction to Databus

Solution: Databus

[Diagram: application updates go to the primary DB; Databus captures the data change events and fans them out to standardization consumers, the search index, the graph index, and read replicas.]

Page 6: Introduction to Databus

Two Ways

• Extract changes from the database commit log
  – Tough but possible
  – Consistent!!!
• Application code dual-writes to the database and a pub-sub system
  – Easy on the surface
  – Consistent? (see the sketch below)
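A minimal sketch of why the dual-write path is hard to keep consistent; the class and method names below are illustrative stand-ins, not Databus or LinkedIn APIs. Because no transaction spans both systems, a crash or a failed publish between the two writes leaves the database and the pub-sub stream permanently out of sync.

// Illustrative only: Database and PubSub are hypothetical interfaces.
interface Database { void execute(String sql, Object... args); }
interface PubSub { void publish(String topic, Object key, Object value); }

class DualWriteProfileService {
  private final Database db;
  private final PubSub pubSub;

  DualWriteProfileService(Database db, PubSub pubSub) {
    this.db = db;
    this.pubSub = pubSub;
  }

  void updateTitle(long memberId, String newTitle) {
    // Write 1: the database commit succeeds or fails on its own.
    db.execute("UPDATE profile SET title = ? WHERE member_id = ?", newTitle, memberId);

    // Write 2: if the process dies or this call throws, the event is lost and
    // downstream systems (search index, graph index, caches) silently diverge.
    pubSub.publish("profile-updates", memberId, newTitle);
  }
}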

Page 7: Introduction to Databus

Key Design Decisions: Semantics

• Logical clocks attached to the source
  – Physical offsets could be used for internal transport
  – Simplifies data portability
• Pull model
  – Restarts are simple
  – Derived State = f(Source state, Clock)
  – + Idempotence = Timeline Consistent! (see the sketch below)
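A minimal sketch of the "derived state = f(source state, clock) + idempotence" argument, with illustrative names (this is not Databus code): if every change event carries the source's logical clock (an SCN) and the consumer applies events idempotently, replaying a window after a restart cannot change the result, which is what makes pull-based restarts simple and the derived state timeline consistent.

// Illustrative only: an idempotent consumer whose state depends solely on the
// source state and the highest SCN applied per key.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class DerivedIndex {
  private final Map<Long, String> state = new ConcurrentHashMap<>();     // key -> latest value
  private final Map<Long, Long> appliedScn = new ConcurrentHashMap<>();  // key -> SCN of that value

  // Re-applying an already-seen event (e.g. replay after a restart) is a no-op,
  // so starting from any earlier clock value converges to the same state.
  void apply(long scn, long key, String value) {
    Long last = appliedScn.get(key);
    if (last != null && last >= scn) {
      return; // already applied: idempotent
    }
    state.put(key, value);
    appliedScn.put(key, scn);
  }
}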

Page 8: Introduction to Databus

Key Design Decisions: Systems

• Isolate fast consumers from slow consumers
  – Workload separation between online, catch-up, bootstrap
• Isolate sources from consumers
  – Schema changes
  – Physical layout changes
  – Speed mismatch
• Schema-aware
  – Filtering, Projections (a projection sketch follows below)
  – Typically network-bound, so it can afford to burn more CPU
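A minimal sketch of a schema-aware projection, assuming Avro generic records as on the later slides; it illustrates the idea of trading relay-side CPU for network bandwidth and is not the relay's actual implementation.

// Illustrative only: copy just the consumer-requested fields into a narrower
// record before shipping it, spending CPU on the relay to save network.
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

final class Projection {
  static GenericRecord project(GenericRecord full, Schema projectedSchema) {
    GenericRecord slim = new GenericData.Record(projectedSchema);
    for (Schema.Field f : projectedSchema.getFields()) {
      slim.put(f.name(), full.get(f.name()));
    }
    return slim;
  }
}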

Page 9: Introduction to Databus

Requirements

• Timeline consistency
• Guaranteed, at-least-once delivery
• Low latency
• Schema evolution
• Source independence
• Scalable consumers
• Handle slow/new consumers without affecting happy ones (look-back requirements)

Page 10: Introduction to Databus

ARCHITECTURE

Page 11: Introduction to Databus

Initial Design (2007)

[Diagram: the relay keeps an in-memory buffer covering roughly the last 3 hours of change events pulled from the DB (e.g. SCNs 70000-100000 while the DB's source clock is at 102400); happy consumers use a direct pull against the relay, while a slow consumer whose position has aged out of the buffer falls back to a proxied pull against the DB.]

Pros:
1. Consumer scaling
2. Some isolation

Cons: Slow consumers overwhelming the DB

Page 12: Introduction to Databus

Software Architecture

Four Logical Components

• Fetcher
  – Fetch from db, relay…
• Log Store
  – Store log snippet
• Snapshot Store
  – Store moving data snapshot
• Subscription Client
  – Orchestrate pull across these

Page 13: Introduction to Databus

The Databus System

[Diagram: the relay's in-memory buffer covers roughly the last 3 hours of changes (e.g. SCNs 70000-100000 while the DB's source clock is at 102400); behind it, the bootstrap service maintains a log store with about 10 days of look-back and a snapshot store with effectively infinite look-back, fed through a log server; happy consumers read from the relay, and the slow consumer is served by the bootstrap service.]

Page 14: Introduction to Databus

The Relay

• Change event buffering (~2-7 days)
• Low latency (10-15 ms)
• Filtering, Projection
• Hundreds of consumers per relay
• Scale-out, high availability through redundancy

Page 15: Introduction to Databus

Deployment Options

• Option 1: Peered Deployment
• Option 2: Clustered Deployment

Page 16: Introduction to Databus

The Bootstrap Service

• Catch-all for slow / new consumers
• Isolates the source OLTP instance from large scans
• Log Store + Snapshot Store
• Optimizations
  – Periodic merge
  – Predicate push-down
  – Catch-up versus full bootstrap (see the sketch below)
• Guaranteed progress for consumers via chunking
• Implementations
  – Database (MySQL)
  – Raw Files
• Bridges the continuum between stream and batch systems
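A rough sketch of the catch-up-versus-full-bootstrap decision, with illustrative names and retention values (not the actual Databus bootstrap code): if the consumer's last applied SCN is still covered by the log store, it only needs to replay the log tail; if it has fallen off the log's retention, or is brand new, it reads a full snapshot first and then catches up from the log.

// Illustrative only: routes a consumer to catch-up or full bootstrap
// based on its last applied SCN versus the log store's retention.
enum BootstrapMode { CATCH_UP_FROM_LOG, FULL_SNAPSHOT_THEN_CATCH_UP }

final class BootstrapPlanner {
  private final long oldestScnInLogStore;  // e.g. ~10 days of look-back

  BootstrapPlanner(long oldestScnInLogStore) {
    this.oldestScnInLogStore = oldestScnInLogStore;
  }

  BootstrapMode planFor(Long consumerLastScn) {
    // Brand-new consumer, or one older than the log retention: take a
    // consistent snapshot, then replay the log from the snapshot's SCN forward.
    if (consumerLastScn == null || consumerLastScn < oldestScnInLogStore) {
      return BootstrapMode.FULL_SNAPSHOT_THEN_CATCH_UP;
    }
    // Otherwise the log store still covers the gap: just replay the tail.
    return BootstrapMode.CATCH_UP_FROM_LOG;
  }
}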

Page 17: Introduction to Databus

The Consumer Client Library

• Glue between Databus infra and business logic in the consumer
• Isolates the consumer from changes in the Databus layer
• Switches between relay and bootstrap as needed
• API
  – Callbacks with transactions
  – Iterators over windows

Page 18: Introduction to Databus

Fetcher Implementations

• Oracle
  – Trigger-based
• MySQL
  – Custom-storage-engine based
• In Labs
  – Alternative implementations for Oracle
  – OpenReplicator integration for MySQL

Page 19: Introduction to Databus

Meta-data Management

• Event definition, serialization and transport
  – Avro
• Oracle, MySQL
  – Avro definition generated from the table schema
• Schema evolution
  – Only backwards-compatible changes allowed (see the sketch below)
• Isolation between upgrades on producer and consumer
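A short sketch of what "only backwards-compatible changes" means in Avro, using generic records; the schemas and field names here are invented for illustration and are not LinkedIn's actual event definitions. Adding a field with a default is backwards compatible: a consumer upgraded to the new (reader) schema can still decode events written with the old (writer) schema.

// Illustrative only: Avro schema resolution across a backwards-compatible change.
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class SchemaEvolutionSketch {
  static final Schema V1 = new Schema.Parser().parse(
      "{\"type\":\"record\",\"name\":\"MemberProfile\",\"fields\":["
      + "{\"name\":\"memberId\",\"type\":\"long\"},"
      + "{\"name\":\"title\",\"type\":\"string\"}]}");

  // v2 adds 'headline' with a default: a backwards-compatible change.
  static final Schema V2 = new Schema.Parser().parse(
      "{\"type\":\"record\",\"name\":\"MemberProfile\",\"fields\":["
      + "{\"name\":\"memberId\",\"type\":\"long\"},"
      + "{\"name\":\"title\",\"type\":\"string\"},"
      + "{\"name\":\"headline\",\"type\":\"string\",\"default\":\"\"}]}");

  public static void main(String[] args) throws Exception {
    // The producer still writes with v1...
    GenericRecord old = new GenericData.Record(V1);
    old.put("memberId", 42L);
    old.put("title", "Staff Software Engineer");
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
    new GenericDatumWriter<GenericRecord>(V1).write(old, enc);
    enc.flush();

    // ...while an upgraded consumer decodes with v1 as the writer schema and v2
    // as the reader schema; the missing 'headline' field takes its default.
    BinaryDecoder dec = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
    GenericRecord upgraded = new GenericDatumReader<GenericRecord>(V1, V2).read(null, dec);
    System.out.println(upgraded); // {"memberId": 42, "title": "...", "headline": ""}
  }
}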

Page 20: Introduction to Databus

Scaling the Consumers (Partitioning)

• Server-side filtering
  – Range, mod, hash
  – Allows the client to control the partitioning function (a mod-based sketch follows below)
• Consumer groups
  – Distribute partitions evenly across a group
  – Move partitions to available consumers on failure
  – Minimize re-processing
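A rough sketch of a mod-based server-side filter, with illustrative names (this is not the Databus filter API): the client chooses the partitioning function, and the relay evaluates it so that events outside the consumer's buckets never cross the network.

// Illustrative only: key-mod-N bucketing evaluated on the relay side.
import java.util.Set;

final class ModPartitionFilter {
  private final int numBuckets;             // total partitions, chosen by the client
  private final Set<Integer> ownedBuckets;  // buckets assigned to this consumer

  ModPartitionFilter(int numBuckets, Set<Integer> ownedBuckets) {
    this.numBuckets = numBuckets;
    this.ownedBuckets = ownedBuckets;
  }

  // Applied before events are shipped, so filtered-out events save bandwidth.
  boolean accepts(long eventKey) {
    int bucket = (int) Math.floorMod(eventKey, (long) numBuckets);
    return ownedBuckets.contains(bucket);
  }
}

// e.g. new ModPartitionFilter(8, Set.of(0, 4)) accepts keys 0, 4, 8, 12, ...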

Page 21: Introduction to Databus

A NEW CONSUMER

Page 22: Introduction to Databus

Development with Databus – Client Library

[Diagram: application consumers implement the Stream Event Callback API and the Bootstrap Event Callback API (e.g. onDataEvent(DbusEvent, Decoder)), and the Databus client library drives those callbacks; the application talks to the library through its Client API: register(consumers, sources, filter), start(), shutdown().]

Page 23: Introduction to Databus

Databus Consumer Implementation

class MyConsumer extends AbstractDatabusStreamConsumer
{
  public ConsumerCallbackResult onDataEvent(DbusEvent e, DbusEventDecoder d)
  {
    // use the map-like Avro GenericRecord
    GenericRecord g = d.getGenericRecord(e, null);

    // or use the auto-generated Java class
    MyEvent typedEvent = d.getTypedValue(e, null, MyEvent.class);

    return ConsumerCallbackResult.SUCCESS;
  }
}

Page 24: Introduction to Databus

Starting the client

public static void main(String[] args) throws Exception
{
  // configure
  DatabusHttpClientImpl.Config clientConfig = new DatabusHttpClientImpl.Config();
  clientConfig.loadFromFile("mydbus", "mdbus.props");
  DatabusHttpClientImpl client = new DatabusHttpClientImpl(clientConfig);

  // register callback
  MyConsumer callback = new MyConsumer();
  client.registerDatabusStreamListener(callback, null,
      "com.linkedin.events.member2.MemberProfile");

  // start client library
  client.startAndBlock();
}

Page 25: Introduction to Databus

Event Callback APIs

Page 26: Introduction to Databus

PERFORMANCE

Page 27: Introduction to Databus

Relay Throughput

Page 28: Introduction to Databus

Consumer Throughput

Page 29: Introduction to Databus

End-to-End Latency

Page 30: Introduction to Databus

Snapshot vs Catchup
