(SDD424) Simplifying Scalable Distributed Applications Using DynamoDB Streams | AWS re:Invent 2014
DESCRIPTION
DynamoDB Streams provides a stream of all the updates made to your DynamoDB table. It is a simple but extremely powerful primitive that enables developers to easily build solutions like cross-region replication, and to host additional materialized views, for instance an Elasticsearch index, on top of DynamoDB tables. In this session we dive deep into the details of DynamoDB Streams and how customers can leverage it to build custom solutions and to extend the functionality of DynamoDB. We also demo an example application built on top of DynamoDB Streams to demonstrate its power and simplicity.
TRANSCRIPT
SDD424
Simplifying Scalable Distributed
Applications Using DynamoDB Streams
• Why DynamoDB Streams
• What is DynamoDB Streams
• How
• Demo
@akshatvig
Use cases
• Online gaming
• Ad tech
• Live voting
• Social media
• Mobile messaging
• Backup & restore
Replication
Backups
Views
Reactors – the new triggers
Logs
Op: PUT    → Name = John, Destination = Tokyo
Op: UPDATE → Name = John, Destination = Pluto
Op: UPDATE → Name = John, Destination = Mars
Logs and databases
Name = John, Destination = Tokyo → Mars (the table holds only the latest value; the log holds every update)
The log is truth; replay it to construct state.
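The "log is truth" idea can be sketched in a few lines of stdlib-only Java: fold each logged operation into a map, and the final map is the reconstructed table state (the LogEntry shape is illustrative, not the actual stream record format):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LogReplay {
    // Illustrative change-log entry: an operation, the item key, and its new value.
    record LogEntry(String op, String key, String destination) {}

    // Replay the log in order; the resulting map is the current table state.
    static Map<String, String> replay(List<LogEntry> log) {
        Map<String, String> table = new LinkedHashMap<>();
        for (LogEntry e : log) {
            switch (e.op()) {
                case "PUT", "UPDATE" -> table.put(e.key(), e.destination());
                case "DELETE" -> table.remove(e.key());
            }
        }
        return table;
    }

    public static void main(String[] args) {
        // The example from the slides: Tokyo, then Pluto, then Mars.
        List<LogEntry> log = List.of(
                new LogEntry("PUT", "John", "Tokyo"),
                new LogEntry("UPDATE", "John", "Pluto"),
                new LogEntry("UPDATE", "John", "Mars"));
        // The table keeps only the latest value; the log keeps the full history.
        System.out.println(replay(log));
    }
}
```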
Logs are powerful.
• Atomicity, consistency, and durability
• Replication
• Point-in-time restores
• Materialized views
• Auditing
• And much more...
What is DynamoDB Streams?
• A stream of updates to your table
• Scales with your table
[Diagram: DynamoDB table → DynamoDB Streams]
Social network application
Comments and notifications
Take 1: post → Comments table
Take 2: post → Comments table → Processor
Take 3: post → Comments table → Streams → Processor
Global users
Cross-region library
Groups
Cross-region replication
[Diagram: a post replicated to tables in multiple regions]
Cross-region library
• Horizontal scaling: workers
• Load balancing
• Fault tolerance
• Checkpointing
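Fault tolerance via checkpointing can be sketched without any AWS dependency: the worker remembers the last sequence number it finished, so records redelivered after a restart are skipped (a minimal sketch; the real library persists checkpoints durably rather than in memory):

```java
import java.util.ArrayList;
import java.util.List;

public class CheckpointingWorker {
    // Illustrative stream record: a monotonically increasing sequence number plus a payload.
    record Record(long sequenceNumber, String payload) {}

    private long checkpoint = -1;                    // last sequence number processed
    final List<String> applied = new ArrayList<>();  // side effects, for demonstration

    // Process a batch; anything at or before the checkpoint was already handled,
    // so a restarted worker can safely re-read old records.
    void processBatch(List<Record> batch) {
        for (Record r : batch) {
            if (r.sequenceNumber() <= checkpoint) continue; // skip redelivered records
            applied.add(r.payload());
            checkpoint = r.sequenceNumber(); // the real library persists this durably
        }
    }

    public static void main(String[] args) {
        CheckpointingWorker w = new CheckpointingWorker();
        w.processBatch(List.of(new Record(1, "a"), new Record(2, "b")));
        // After a simulated restart, record 2 is redelivered but applied only once.
        w.processBatch(List.of(new Record(2, "b"), new Record(3, "c")));
        System.out.println(w.applied);
    }
}
```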
Cross-region replication
[Diagram: Partition 1 feeds Shards 1 and 2, Partition 2 feeds Shard 3; each shard is consumed by its own KCL worker]
Cross-region replication
[Diagram: Partitions 1–5 map onto Shards 1–4, consumed by four KCL workers]
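The diagrams above distribute shards across KCL workers. As a stand-in for the lease-based balancing the Kinesis Client Library actually performs, a round-robin assignment captures the idea (names here are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShardAssignment {
    // Spread shards evenly across workers, round-robin.
    static Map<String, List<String>> assign(List<String> shards, List<String> workers) {
        Map<String, List<String>> assignment = new LinkedHashMap<>();
        for (String w : workers) assignment.put(w, new ArrayList<>());
        for (int i = 0; i < shards.size(); i++) {
            assignment.get(workers.get(i % workers.size())).add(shards.get(i));
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Four shards over three workers: one worker ends up with two shards.
        System.out.println(assign(
                List.of("shard-1", "shard-2", "shard-3", "shard-4"),
                List.of("worker-1", "worker-2", "worker-3")));
    }
}
```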
Cross-region replication library
DynamoDB Streams and DynamoDB Connectors simplify cross-region replication!
Materialized views
Extending DynamoDB capabilities
[Diagram: DynamoDB Streams in each region driving cross-region replicas and materialized views]
Data tier: use the right data store for each purpose
• Amazon DynamoDB: read replicas
• Amazon RDS: complex queries & transactions
• Amazon ElastiCache: hot reads
• Amazon S3: archive
• Amazon CloudSearch: rich search
• Amazon Redshift: analytics
Features of DynamoDB Streams
View types
View Type                 | Stream record contents
Old Image (before update) | Name = John, Destination = Mars
New Image (after update)  | Name = John, Destination = Pluto
Old and New Images        | Name = John, Destination = Mars / Name = John, Destination = Pluto
Keys Only                 | Name = John
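The view types above can be modeled directly; this stdlib-only sketch builds the stream record contents for one update (the enum values mirror the actual stream view types, but the record shapes are simplified stand-ins for the SDK classes):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ViewTypes {
    enum ViewType { KEYS_ONLY, OLD_IMAGE, NEW_IMAGE, NEW_AND_OLD_IMAGES }

    // Given the key and the before/after images of an update, return what the
    // stream record would carry under each view type.
    static Map<String, Map<String, String>> recordContents(
            ViewType type, Map<String, String> keys,
            Map<String, String> oldImage, Map<String, String> newImage) {
        Map<String, Map<String, String>> contents = new LinkedHashMap<>();
        switch (type) {
            case KEYS_ONLY -> contents.put("Keys", keys);
            case OLD_IMAGE -> contents.put("OldImage", oldImage);
            case NEW_IMAGE -> contents.put("NewImage", newImage);
            case NEW_AND_OLD_IMAGES -> {
                contents.put("OldImage", oldImage);
                contents.put("NewImage", newImage);
            }
        }
        return contents;
    }

    public static void main(String[] args) {
        // The update from the table above: Destination changes from Mars to Pluto.
        Map<String, String> keys = Map.of("Name", "John");
        Map<String, String> oldImage = Map.of("Name", "John", "Destination", "Mars");
        Map<String, String> newImage = Map.of("Name", "John", "Destination", "Pluto");
        for (ViewType t : ViewType.values()) {
            System.out.println(t + " -> " + recordContents(t, keys, oldImage, newImage));
        }
    }
}
```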
Asynchronous
[Diagram: the write i=A is acknowledged to the client; the corresponding stream record is delivered asynchronously]
Exactly once
[Diagram: each update (i=A, i=B) appears in the stream exactly once, with no duplicates]
Strictly ordered records
[Diagram: updates i=A, i=B, i=C appear in the stream in the order they were applied]
Durability & high availability
• High-throughput consensus protocol
• Replicated across multiple AZs
Managed
Elasticity: adjusts with your table
Performance: sub-second latency
Retention period: records available for 24 hours
DynamoDB Local
• Desktop installable
• Development & testing
• Publicly available: http://bit.ly/1yt1r9q
• Now supports DynamoDB Streams
Consuming

AWSDynamoDBStreamsAdapterClient adapterClient =
    new AWSDynamoDBStreamsAdapterClient(streamsCredentials, ..);
..
AmazonDynamoDBClient dynamoDBClient =
    new AmazonDynamoDBClient(dynamoDBCredentials, ..);
..
KinesisClientLibConfiguration workerConfig =
    new KinesisClientLibConfiguration(.., streamId, streamsCredentials, ..)
        .withMaxRecords(100)
        .withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON);

Worker worker = new Worker(recordProcessorFactory, workerConfig,
    adapterClient, dynamoDBClient, ..);

Thread t = new Thread(worker);
t.start();
Processing

public class StreamsRecordProcessor implements IRecordProcessor {
    ..
    @Override
    public void processRecords(List<Record> records, ..) {
        for (Record record : records) {
            if (record instanceof RecordAdapter) {
                Record ddbStreamRecord = ((RecordAdapter) record).getInternalObject();
                switch (ddbStreamRecord.getEventName()) {
                    case "INSERT":
                    case "MODIFY":
                        DemoHelper.putItem(dynamoDBClient, tableName,
                            ddbStreamRecord.getDynamodb().getNewImage());
                        break;
                    ...
Cross-region replication
DynamoDB Reactors
• Trigger Lambda functions
• Example: validate address, send notifications
• Console support
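A reactor is just a function invoked once per stream record. This stdlib-only sketch mirrors the slide's example, validating an address and emitting a notification (the record type and the validation/notification logic are hypothetical stand-ins for the Lambda wiring):

```java
import java.util.ArrayList;
import java.util.List;

public class AddressReactor {
    // Illustrative stream record: event name plus the changed item's fields.
    record StreamRecord(String eventName, String user, String address) {}

    final List<String> notifications = new ArrayList<>();

    // Hypothetical validation rule: an address must be non-blank.
    static boolean isValidAddress(String address) {
        return address != null && !address.isBlank();
    }

    // Invoked once per record, the way a Lambda function is triggered by the stream.
    void react(StreamRecord r) {
        if (!"INSERT".equals(r.eventName()) && !"MODIFY".equals(r.eventName())) return;
        if (isValidAddress(r.address())) {
            notifications.add("Notify " + r.user() + ": address set to " + r.address());
        } else {
            notifications.add("Alert " + r.user() + ": invalid address");
        }
    }

    public static void main(String[] args) {
        AddressReactor reactor = new AddressReactor();
        reactor.react(new StreamRecord("MODIFY", "John", "Pluto"));
        reactor.react(new StreamRecord("MODIFY", "John", " "));
        reactor.notifications.forEach(System.out::println);
    }
}
```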
Preview available
• Temporary endpoints: N. Virginia & Ireland
• Available until global launch
• Register for preview at http://amzn.to/11dh9M0
Preview console
TBD: Console snapshot
Please give us your feedback on this session.
Complete session evaluations and earn re:Invent swag.
http://bit.ly/awsevals