GumGum: Multi-Region Cassandra in AWS


Posted on 26-Jan-2017






CASSANDRA SUMMIT 2015
MULTI-REGION CASSANDRA IN AWS
Mario Lazaro, September 24th 2015, #CassandraSummit2015

WHOAMI
- Mario Cerdan Lazaro
- Big Data Engineer
- Born and raised in Spain
- Joined GumGum 18 months ago; about a year and a half of experience with Cassandra
- GumGum: #5 ad platform in the U.S.
- 10B impressions / month
- 2,000 brand-safe premium publisher partners
- 1B+ global unique visitors
- Daily inventory impressions processed: 213M
- Monthly image impressions processed: 2.6B
- 123 employees in seven offices

AGENDA
- Old cluster
- International expansion
- Challenges
- Testing
- Modus operandi
- Tips
- Questions & answers

OLD C* CLUSTER - MARCH 2015
- 25-node cluster of Classic EC2 instances, 1 region / 1 rack, hosted in AWS EC2 US East
- Version 2.0.8
- DataStax CQL driver
- Stores GumGum's metadata: visitors, images, pages, and ad performance
- Usage: realtime data access and analytics (MR jobs)

OLD C* CLUSTER - REALTIME USE CASE
- Billions of rows
- Heavy read workload (60/40 reads/writes)
- TTLs everywhere, and therefore tombstones
- Heavy and critical use of counters
- RTB read latency constraints (total execution time ~50 ms)

OLD C* CLUSTER - ANALYTICS USE CASE
- Daily ETL jobs to extract and join data from C*
- Hadoop MR jobs
- Ad hoc queries with Presto

INTERNATIONAL EXPANSION

FIRST STEPS
- Start C* test datacenters in US East and EU West and test how C* multi-region works in AWS
- Run capacity and performance tests; we expect 3x more traffic in 2015 Q4

FIRST THOUGHTS
- Use AWS Virtual Private Cloud (VPC); Cassandra and VPC present some connectivity challenges
- Replicate the entire data set with the same number of replicas

TOO GOOD TO BE TRUE ...

CHALLENGES
Problems between Cassandra in EC2 Classic / VPC and the DataStax Java driver: EC2MultiRegionSnitch uses public IPs.
EC2 Classic instances do not have a network interface with a public IP address, so instances in the same region cannot connect to each other using public IPs. The driver therefore needs an address translator that maps each node's public IP back to its private one:

```java
/**
 * Implementation of {@link AddressTranslater} used by the driver that
 * translates external IPs to internal IPs.
 * @author Mario
 */
public class Ec2ClassicTranslater implements AddressTranslater {
    private static final Logger LOGGER = LoggerFactory.getLogger(Ec2ClassicTranslater.class);

    private ClusterService clusterService;
    private Cluster cluster;
    private List<Instance> publicDnss;

    @PostConstruct
    public void build() {
        publicDnss = clusterService.getInstances(cluster);
    }

    /**
     * Translates a Cassandra {@code rpc_address} to another address if necessary.
     *
     * @param address the address of a node as returned by Cassandra.
     * @return {@code address} translated IP address of the source.
     */
    public InetSocketAddress translate(InetSocketAddress address) {
        for (final Instance server : publicDnss) {
            if (server.getPublicIpAddress().equals(address.getHostString())) {
                LOGGER.debug("IP address: {} translated to {}", address.getHostString(), server.getPrivateIpAddress());
                return new InetSocketAddress(server.getPrivateIpAddress(), address.getPort());
            }
        }
        return null;
    }

    public void setClusterService(ClusterService clusterService) {
        this.clusterService = clusterService;
    }

    public void setCluster(Cluster cluster) {
        this.cluster = cluster;
    }
}
```

Region-to-region connectivity will use public IPs: either trust those IPs in your security groups or use a software/hardware VPN.
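The translator above pulls its public-to-private mapping from the AWS EC2 API through GumGum's internal ClusterService. Stripped of that wiring, the core lookup is just a table from public endpoints to private ones. The sketch below is illustrative only; the class name and `register` method are assumptions, not part of the driver or GumGum's code:

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of public-to-private address translation, assuming a
 * pre-built map of public IP -> private IP (in the real translator this
 * mapping comes from the AWS EC2 API).
 */
public class AddressTranslationSketch {
    private final Map<String, String> publicToPrivate = new HashMap<>();

    /** Records one node's public/private IP pair. */
    public void register(String publicIp, String privateIp) {
        publicToPrivate.put(publicIp, privateIp);
    }

    /** Returns the private endpoint for a known public one, else the input unchanged. */
    public InetSocketAddress translate(InetSocketAddress address) {
        String privateIp = publicToPrivate.get(address.getHostString());
        if (privateIp == null) {
            return address; // unknown node: leave the address untouched
        }
        return new InetSocketAddress(privateIp, address.getPort());
    }
}
```

One design difference from the slide's code: returning the untranslated address for unknown nodes, rather than null, is usually the safer fallback, since the driver can still attempt the public route.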
Your application needs to connect to C* using private IPs, which requires a custom EC2 translator (shown above).

DataStax Java driver load balancing: multiple choices. DCAware + TokenAware? DCAware + TokenAware + something more?

Clients in one AZ attempt to always communicate with C* nodes in the same AZ. We call this zone-aware connections. This feature is built into Astyanax, Netflix's C* Java client library. Our setup:
- Webapps in 3 different AZs: 1A, 1B, and 1C
- C* datacenter spanning 3 AZs with 3 replicas

So we added it: rack/AZ awareness on top of the TokenAware policy.

Third datacenter: Analytics
- Do not impact realtime data access
- Spark on top of Cassandra, via the Spark-Cassandra DataStax connector
- Replicate specific keyspaces only
- Fewer nodes with larger disk space
- Different settings, e.g. bloom filter false-positive chance
- Cassandra-only DC for realtime; Cassandra + Spark DC for analytics

Upgrade from 2.0.8 to 2.1.5: the counters implementation is buggy in pre-2.1 versions. "My code never has bugs. It just develops random unexpected features."

"To choose, or not to choose VNodes. That is the question." (M.
Lazaro, 1990 - 2500)

- The previous DC uses classic (single-token) nodes: works with MR jobs, but adding/removing nodes is complex and token ranges must be managed manually
- The new DCs will use VNodes: Apache Spark + the Spark-Cassandra DataStax connector, and nodes are easy to add/remove as traffic increases

TESTING
- Testing requires creating and modifying many C* nodes
- Creating and configuring a C* cluster is a time-consuming, repetitive task
- So: a fully automated process for creating/modifying/destroying Cassandra clusters with Ansible

```yaml
# Ansible settings for provisioning the EC2 instances
---
ec2_instance_type: r3.2xlarge
ec2_count:
  - 0   # How many in us-east-1a?
  - 7   # How many in us-east-1b?
ec2_vpc_subnet:
  - undefined
  - subnet-c51241b2
  - undefined
  - subnet-80f085d9
  - subnet-f9138cd2
ec2_sg:
  - va-ops
  - va-cassandra-realtime-private
```

TESTING - PERFORMANCE
Performance tests using the new Cassandra 2.1 stress tool:
- Recreate GumGum metadata / schemas
- Recreate the workload and make it 3 times bigger
- Try to find the limits / saturate clients

```yaml
# Keyspace name
keyspace: stresscql
#keyspace_definition: |
#  CREATE KEYSPACE stresscql WITH replication = {'class': 'NetworkTopologyStrategy', ...}

### Column Distribution Specifications ###
columnspec:
  - name: visitor_id
    size: gaussian(32..32)
    population: uniform(1..999M)
  - name: bidder_code
    cluster: fixed(5)
  - name: bluekai_category_id
  - name: bidder_custom
    size: fixed(32)
  - name: bidder_id
    size: fixed(32)
  - name: bluekai_id
    size: fixed(32)
  - name: dt_pd
  - name: rt_exp_dt
  - name: rt_opt_out

### Batch Ratio Distribution Specifications ###
insert:
  partitions: fixed(1)   # Our partition key is the visitor_id
  select: fixed(1)/5     # We have 5 bidder_code per partition
  batchtype: UNLOGGED    # Unlogged batches

## A list of queries you wish to run against the schema
# ...
```

Main worry: latency and replication overseas.
- Use LOCAL_X consistency levels in your client
- Only one C* node will contact only one C* node
in a different DC for sending replicas/mutations.

TESTING - INSTANCE TYPE
Test all kinds of instance types. We decided to go with r3.2xlarge machines for our cluster:
- 60 GB RAM
- 8 cores
- 160 GB ephemeral SSD storage for commit logs and saved caches
- RAID 0 over 4 SSD EBS volumes for data

Performance/cost and GumGum's use case make r3.2xlarge the best option. Disclosure: the I2 instance family is the best if you can afford it.

TESTING - UPGRADE
Upgrade the C* datacenter from 2.0.8 to 2.1.5:
- Both versions can cohabit in the same DC
- New settings and features tried: DateTieredCompactionStrategy (compaction for time-series data), incremental repairs, and the new counters architecture

MODUS OPERANDI
To sum up: from one cluster / one DC in US East, to one cluster / two DCs in US East and one DC in EU West.

First step:
- Upgrade the old cluster's snitch from EC2Snitch to EC2MultiRegionSnitch
- Upgrade clients to handle it (aka translators)
- Make sure your clients do not lose connection to upgraded C* nodes (DataStax JIRA JAVA-809)

Second step:
- Upgrade the old datacenter from 2.0.8 to 2.1.5
- nodetool upgradesstables (multiple nodes at a time)
- It is not possible to rebuild a 2.1.X C* node from a 2.0.X C* datacenter; nodetool rebuild fails with:

```
WARN [Thread-12683] 2015-06-17 10:17:22,845 - UnknownColumnFamilyException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find cfId=XXX
```

Third step:
- Start the EU West and new US East DCs within the same cluster
- Replication factor in the new DCs: 0
- Use dc_suffix to differentiate the new Virginia DC from the old one
- Clients do not talk to the new DCs.
Only C* knows they exist.
- Raise the replication factor to 3 in each new DC (1 for analytics) and start receiving new data
- nodetool rebuild streams the old data into each new DC

Replication factors during the migration: RF 3 (old US East only), then 3:0:0:0 when the new DCs join, then 3:3:3:1 across the old US East DC, the new US East Realtime, EU West Realtime, and US East Analytics DCs while each new DC rebuilds.

```
From 39d8f76d9cae11b4db405f5a002e2a4f6f764b1d Mon Sep 17 00:00:00 2001
From: mario
Date: Wed, 17 Jun 2015 14:21:32 -0700
Subject: [PATCH] AT-3576 Start using new Cassandra realtime cluster
---
 13 files changed, 39 insertions(+), 58 deletions(-)
```

Start using the new Cassandra DCs (RF 3:3:3:1). Then stop clients from talking to the old US East DC (RF 0:3:3:1), decommission it, and end with RF 3:3:1.

TIPS

TIPS - AUTOMATED MAINTENANCE
Maintenance in a multi-region C* cluster: Ansible + a Cassandra maintenance keyspace + an email report = zero human intervention!

```sql
CREATE TABLE maintenance.history (
    dc text,
    op text,
    ts timestamp,
    ip text,
    PRIMARY KEY ((dc, op), ts)
) WITH CLUSTERING ORDER BY (ts ASC)
    AND bloom_filter_fp_chance = 0.010000
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND dclocal_read_repair_chance = 0.100000
    AND gc_grace_seconds = 864000
    AND read_repair_chance = 0.000000
    AND compaction = {'class': 'SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'LZ4Compressor'};

CREATE INDEX history_kscf_idx ON maintenance.history (kscf);
```

233-133-65:/opt/scripts/production/groovy$ groovy CassandraMaintenanceCheck.groovy -dc
us-east-va-realtime -op compaction -e mario

TIPS - SPARK
- Number of workers above the number of total C* nodes in analytics
- Each worker uses 1/4 of the cores and 1/3 of the total available RAM of each instance
- Cassandra-Spark connector: spanBy, joinWithCassandraTable(:x, :y), spark.cassandra.output.batch.size.bytes, spark.cassandra.output.concurrent.writes

```scala
val conf = new SparkConf()
  .set("", cassandraNodes)
  .set("spark.cassandra.connection.local_dc", "us-east-va-analytics")
  .set("spark.cassandra.connection.factory", "com.gumgum.spark.bluekai.DirectLinkConnectionFactory")
  .set("spark.driver.memory", "4g")
  .setAppName("Cassandra presidential candidates app")
```

- Create a "translator" if using EC2MultiRegionSnitch, wired in via spark.cassandra.connection.factory

SINCE C* IN EU WEST ...
A US West datacenter! EU West DC, US East DC, Analytics DC, and now a US West DC.

Q&A
GumGum is hiring!
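Under CHALLENGES, the deck mentions adding rack/AZ awareness to the TokenAware policy so that clients prefer replicas in their own availability zone. The core ordering idea can be sketched standalone; the `Host` record and all names below are illustrative stand-ins, not the DataStax driver's API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Sketch of AZ-aware host ordering: replicas in the client's own
 * availability zone are tried first, other replicas afterwards.
 * Host is a stand-in for the driver's host metadata.
 */
public class ZoneAwareOrdering {
    public record Host(String address, String az) {}

    /** Returns the replicas reordered so hosts matching clientAz come first. */
    public static List<Host> order(List<Host> replicas, String clientAz) {
        List<Host> ordered = new ArrayList<>(replicas);
        // Stable sort: same-AZ hosts float to the front, relative order is kept.
        ordered.sort(Comparator.comparingInt((Host h) -> h.az().equals(clientAz) ? 0 : 1));
        return ordered;
    }
}
```

In a real policy this ordering would be applied to the replica set the token-aware layer computes for each query, keeping cross-AZ hops (and their transfer costs) as a fallback rather than the default.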

