Optimizing Cassandra in AWS


Posted on 14-Jul-2015







Optimizing Cassandra for AWS
Ruslan Meshenberg, Gregg Ulrich - Netflix

Agenda
- Netflix
- AWS
- Cassandra

Netflix Inc.
With more than 30 million streaming members in the United States, Canada, Latin America, the United Kingdom, Ireland, Sweden, Norway, Denmark and Finland, Netflix, Inc. is the world's leading internet subscription service for enjoying movies and TV series.

Why Cloud?
- Netflix.com is now ~100% cloud
- Some small back-end data sources are still in progress
- USA-specific logistics remains in the datacenter
- Working on SOX and PCI as their scope starts to include AWS
- All international product is cloud based

What is Cassandra?
- Persistent data store
- NoSQL
- Distributed key/value store
- Tunable eventual consistency

Why did we choose Cassandra?
- Open sourced and written in Java
- Multi-region replication
- Data model supports a wide range of use cases
- Runs on commodity hardware
- Enhanced to understand AWS topology
- Durable

Durability
- No single point of failure or specialized instances
- Multiple copies of data across availability zones
- Bootstrapping and hints restore data quickly
- All writes are appended to a commit log
- Asynchronous cross-region replication

How we configure Cassandra in AWS
[Diagram: clusters replicated across us-east-1, us-west-2 and eu-west-1, each backed up to S3; durability (quorum) shown at three scopes: one instance, availability zone, replica set]
- Mostly m2.4xlarge, but migrating to SSDs
- Ephemeral storage for better performance
- Multiple ASGs per cluster, each with one AZ
- Single-tenant clusters
- Overprovisioned clusters

Optimizations
- Cassandra enhancements
- Client libraries
- Operations
- Schema and data management

Cassandra enhancements
- Bug fixes
- New features: performance, security, AWS environment

Making a better Java client
- Multi-region and zone aware
- Latency-aware load balancer
- Fluent API on top of Thrift
- Best-practice recipes

Filling the operational void
- Tomcat webapp for Cassandra administration
- AWS-style instance provisioning
- Full and incremental backups
- JMX metrics collection
- Consistent configuration across clusters
- REST API for most administrative operations
- Security group configuration

Managing your data and schema
- The missing UI for Cassandra client users
- View and edit schema
- Point queries and data updates
- High-level cluster status and metrics
- Manages multiple Cassandra clusters
- Integrated access control
- Schema auditing

[Slides: high-level cluster status, data query tool, schema management tool]
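The durability setup above places copies of each row in multiple availability zones and reads and writes at quorum. Assuming the common configuration of replication factor 3 with one replica per zone (the deck does not state the exact RF), the arithmetic works out so that a whole zone can be lost without losing quorum. A small illustrative sketch (function names are ours, not Cassandra's):

```python
# Illustrative sketch (not Cassandra code): why one replica per AZ with
# RF=3 and QUORUM reads/writes survives the loss of a full availability zone.

def quorum(replication_factor: int) -> int:
    """Cassandra's QUORUM consistency level: a majority of replicas."""
    return replication_factor // 2 + 1

def tolerated_zone_failures(replication_factor: int) -> int:
    """With one replica per zone, zones that can fail while QUORUM still succeeds."""
    return replication_factor - quorum(replication_factor)

rf = 3  # one replica in each of three availability zones
print(quorum(rf))                   # 2 replicas must acknowledge each read/write
print(tolerated_zone_failures(rf))  # 1 zone can be down
```

Because W + R > RF (2 + 2 > 3), any read quorum overlaps any write quorum, which is what allowed the cluster described later to keep serving through the loss of an entire zone.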
Operations
- June 29th AWS partial outage
- Observations
- Monitoring
- Maintenance

June 29th AWS partial outage
From the Netflix tech blog: "Cassandra, our distributed cloud persistence store which is distributed across all zones and regions, dealt with the loss of one third of its regional nodes without any loss of data or availability."

During the outage:
- All Cassandra instances in us-east-1a were inaccessible
- nodetool ring showed all nodes as DOWN
- Monitored the other AZs to ensure availability
- Waited for AWS to resolve the issue

Recovery (power restored to us-east-1a):
- The majority of instances rejoined the cluster without issue
- Most of the remainder required a reboot to fix
- The others needed to be replaced, one at a time

Observations: AWS
- Ephemeral drive performance is better than EBS
- Instances seldom die on their own
- Use as many availability zones as possible
- Understand how AWS launches instances
- I/O is constrained in most AWS instance types
- Repairs are very I/O intensive
- Large size-tiered compactions can impact latency
- SSDs are game changers

Speaker notes:
- Developer in house: quickly find problems by looking into the code; documentation and tools for troubleshooting are scarce
- Repairs: affect the entire replication set and cause very high latency in an I/O-constrained environment
- Multi-tenant: hard to track changes being made; shared resources mean one service can affect another; individual usage only grows; moving a live service to a new cluster is non-trivial
- Smaller per-node data: instance-level operations (bootstrap, compact, etc.) are faster

Observations: Cassandra
- A slow node is worse than a down node
- A cold cache increases load and kills latency
- Use whatever dials you can find in an emergency: remove the node from the coordinator list, throttle compaction, adjust min/max compaction thresholds, enable/disable gossip
- Leveled compaction performance is very promising
- 1.1.x and 1.2.x should address some big issues

Monitoring
- Actionable: hardware and network issues; cluster consistency
- Cumulative: Cassandra trends; throughput and latency; key Cassandra metrics (queues, dropped ops, table reads)
- Informational: schema changes; log file errors/exceptions; recent restarts

Maintenance
- Repair clusters regularly
- Run off-line major compactions to avoid latency (SSDs will make this unnecessary)
- Always replace nodes when they fail
- Periodically replace all nodes in the cluster
- Upgrade to new versions: binary (rpm) for major upgrades or emergencies; rolling AMI push over time

Scaling Cassandra
http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
- 800K writes per second in production

Disk vs. SSD benchmark
- Same throughput, lower latency, half the cost

Netflix is all in with Cassandra
- Production clusters: 50
- Multi-region clusters: 15
- Max regions for one cluster: 4
- Total TB of data across all clusters: 101
- Cassandra nodes: 780
- Largest Cassandra cluster (nodes / data in TB): 72 / 32
- Max reads/writes per second on a single cluster: 250k / 800k

Future optimizations
- Cassandra as a Service
- Fewer clusters, more data
- Autoscaling Cassandra
- Priam on PEDs
- Self-maintaining Cassandra clusters

All optimizations are open sourced
- Enhancements committed to the open source project
- Netflix@github: Astyanax, Priam, Cassandra Explorers (coming soon)
- Motivations: give back to the Apache-licensed OSS community; help define best practices

Netflix Open Source Center

Conclusion
- Cassandra is high performing and durable in AWS
- Cassandra is flexible enough to handle most use cases
- AWS offerings help provide a complete solution
- Cassandra performs well in AWS, especially on SSDs
- Just because Netflix does it doesn't make it right for you

Follow us
- http://techblog.netflix.com
- http://netflix.github.com
- Twitter: @Netflix, @NetflixJobs, @rusmeshenberg (Ruslan), @eatupmartha (Gregg)

We are sincerely eager to hear your feedback on this presentation and on re:Invent. Please fill out an evaluation form when you have a chance.
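The observation earlier that "a slow node is worse than a down node" is the motivation behind the Java client's latency-aware load balancer and the emergency dial of removing a node from the coordinator list: a dead node drops out of the pool on its own, while a slow one keeps absorbing requests. A toy sketch of the idea follows; the class, method, and parameter names are made up for illustration, and this is not the Astyanax implementation.

```python
# Illustrative sketch of latency-aware coordinator selection. A node whose
# recent average latency is far above the pool average is skipped, mimicking
# "remove the node from the coordinator list". Hypothetical names throughout.
from statistics import mean

class LatencyAwarePool:
    def __init__(self, nodes, badness_ratio=2.0, window=100):
        # badness_ratio: a node averaging more than badness_ratio times the
        # pool-wide average latency is treated as "slow" and avoided.
        self.samples = {n: [] for n in nodes}
        self.badness_ratio = badness_ratio
        self.window = window

    def record(self, node, latency_ms):
        s = self.samples[node]
        s.append(latency_ms)
        del s[:-self.window]  # keep only a sliding window of recent samples

    def choose(self):
        # Average recent latency per node (0.0 until we have data).
        avg = {n: (mean(s) if s else 0.0) for n, s in self.samples.items()}
        pool_avg = mean(avg.values())
        healthy = [n for n, a in avg.items()
                   if pool_avg == 0 or a <= self.badness_ratio * pool_avg]
        candidates = healthy or list(avg)  # fall back if everything looks slow
        return min(candidates, key=lambda n: avg[n])

pool = LatencyAwarePool(["cass-1", "cass-2", "cass-3"])
for _ in range(10):
    pool.record("cass-1", 2.0)
    pool.record("cass-2", 3.0)
    pool.record("cass-3", 250.0)  # a "slow node": GC pause, cold cache, bad disk
print(pool.choose())  # -> cass-1 (fastest node; the slow node is excluded)
```

The same sliding-window idea also explains the cold-cache observation: a freshly restarted node with an empty cache shows up as high latency and is naturally deprioritized until it warms up.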