Everybody Lies


Posted on 21-Jul-2015


TRANSCRIPT

<ul><li><p>E V E R Y B O D Y L I E S</p><p>T O M A S Z K O W A L C Z E W S K I</p></li>
<li><p>C A R G O C U L T</p><p>During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas, which was to try one to see if it worked, and if it didn't work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact, that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked, or very little of it did. </p><p>Richard Feynman </p><p>From a Caltech commencement address given in 1974 </p></li>
<li><p>W H Y B O T H E R ?</p><p> You get what you measure </p><p>- Ineffective optimisations that complicate code </p><p>+ Numbers to convince management to approve refactoring or migration to Java 8!</p></li>
<li><p>W H Y B O T H E R ?</p><p> Predictable is better than fast </p><p> Displaying one page requires multiple calls (static and dynamic resources) </p><p> Multiple microservices are called to generate a response </p><p> During a session a user may display hundreds of your web pages</p></li>
<li><p>W H Y D O T H I S ?</p><p> Every 100 ms increase in load time of Amazon.com decreased sales by 1%1 </p><p> Increasing web search latency from 100 to 400 ms reduces the number of daily searches per user by 0.2% to 0.6%. Furthermore, users do fewer searches the longer they are exposed. For longer delays, the loss of searches persists for a time even after latency returns to previous levels.2</p><p>1 Kohavi and Longbotham 2007; 2 Brutlag 2009</p></li>
<li><p>S U R V E Y</p><p> Do you</p><p> Use graphite? 
</p><p> Feed it with Coda Hale/Dropwizard metrics? </p><p> Modify their source? Use nonstandard options? </p><p> Graph average? Median? </p><p> Percentiles?</p></li>
<li><p>(c) xkcd.com</p></li>
<li><p>W H A T M E T R I C S C A N W E U S E ?</p><p>graphite.send(prefix(name, "max"), ...); graphite.send(prefix(name, "mean"), ...); graphite.send(prefix(name, "min"), ...); graphite.send(prefix(name, "stddev"), ...); graphite.send(prefix(name, "p50"), ...); graphite.send(prefix(name, "p75"), ...); graphite.send(prefix(name, "p95"), ...); graphite.send(prefix(name, "p98"), ...); graphite.send(prefix(name, "p99"), ...); graphite.send(prefix(name, "p999"), ...); </p></li>
<li><p>D O N ' T L O O K A T M E A N</p><p> 1000 queries - 0ms latency, 100 queries - 5s latency </p><p> Average is 454,5ms </p><p> 1000 queries - 1ms latency, 100 queries - 5s latency </p><p> Average is 455ms </p><p> The average does not help to quantify the lags users will experience</p></li>
<li><p> A N S C O M B E ' S Q U A R T E T B Y F R A N C I S A N S C O M B E</p><p>These four data sets all have nearly identical means and variances (and the same correlation and linear regression line), yet look completely different when plotted</p></li>
<li><p>P L O T T I N G M E A N I S F O R S H O W I N G O F F T O M A N A G E M E N T</p></li>
<li><p>M A Y B E M E D I A N T H E N ?</p><p> What is the probability of an end user encountering latency worse than the median? </p><p> Remember: usually multiple requests are needed to respond to an API call (e.g. 
N microservices, N resource requests per page)</p><p>(1/2)^N · 100%</p></li>
<li><p>P R O B A B I L I T Y O F E X P E R I E N C I N G L A T E N C Y B E T T E R T H A N M E D I A N</p><p>A S A F U N C T I O N O F M I C R O S E R V I C E S I N V O L V E D</p><p>[chart: (1/2)^N · 100% for N = 0..10]</p></li>
<li><p>W H I C H P E R C E N T I L E I S R E L E V A N T T O Y O U ?</p><p> Is the 99th percentile a demanding constraint? </p><p> In an application serving 1000 qps, latency worse than that happens ten times per second. </p><p> A user who navigates through several web pages will most probably experience it </p><p> What is the probability of encountering latency better than the 99th percentile?</p><p>(99/100)^N · 100%</p></li>
<li><p>P R O B A B I L I T Y O F E X P E R I E N C I N G L A T E N C Y B E T T E R T H A N 9 9 T H P E R C E N T I L E</p><p>A S A F U N C T I O N O F M I C R O S E R V I C E S I N V O L V E D</p><p>[chart: (99/100)^N · 100% for N = 0..100]</p></li>
<li><p>D O N O T A V E R A G E P E R C E N T I L E S</p><p>Example scenario: </p><p>1. Load balancer splits traffic unevenly (ELB anyone?) </p><p>2. Server S1 has 1 qps over measured time with 95%ile == 1ms </p><p>3. Server S2 has 100 qps over measured time with 95%ile == 10s </p><p>4. Average is ~5s. </p><p>5. What does that tell us? </p><p>6. Did we satisfy the SLA if it says the 95%ile must be below 8s? </p><p>7. 
Actual 95%ile is ~10s</p></li>
<li><p> A L I C E ' S A D V E N T U R E S I N W O N D E R L A N D</p><p>'If there's no meaning in it,' said the King, 'that saves a world of trouble, you know, as we needn't try to find any' </p></li>
<li><p>Every time you average max values someone in the world starts a new JavaScript framework</p></li>
<li><p>Demo time</p></li>
<li><p> metricRegistry.timer("2015.standardTimer");</p><p>The standard timer will over- or under-report actual percentiles at will. </p><p>The green line represents actual MAX values.</p></li>
<li><p>T I M E R S H I S T O G R A M R E S E R V O I R </p><p> Backing storage for Timers data </p><p> Contains a statistically representative reservoir of a data stream </p><p> Default is ExponentiallyDecayingReservoir, which has many drawbacks and is the source of most inaccuracies observed throughout this presentation </p><p> Others include </p><p> UniformReservoir, SlidingTimeWindowReservoir, SlidingWindowReservoir</p></li>
<li><p>E X P O N E N T I A L L Y D E C A Y I N G R E S E R V O I R</p><p> Stores 1028 random samples by default </p><p> Assumes normal distribution of recorded values </p><p> Many statistical tools applied in computer systems monitoring will assume normal distribution </p><p> Be suspicious of such tools </p><p> Why is that a bad idea?</p></li>
<li><p>N O R M A L D I S T R I B U T I O N - W H Y S O U S E F U L ?</p><p> Central limit theorem </p><p> Chebyshev's inequality</p><p>f(x, μ, σ) = 1/(σ√(2π)) · e^(-(x-μ)²/(2σ²))</p></li>
<li><p>C A L C U L A T E 9 5 % I L E B A S E D O N M E A N A N D S T D. 
D E V.</p><p> IFF latency values were distributed normally, then we could calculate any percentile based on mean and standard deviation</p><p> μ = 10ms, σ = 1ms</p><p> Look up the standard normal (Z) table </p><p> The 95%ile is located 1.65 std. dev. from the mean </p><p> Result is 11,65ms</p></li>
<li><p>Latency profile resembling normal distribution</p></li>
<li><p>Add spikes due to young gen GC pauses</p></li>
<li><p>Add spikes due to old gen GC pauses</p></li>
<li><p>Add spikes due to calling other services (like DB)</p></li>
<li><p>Add spikes due to: lost TCP packet retransmission, disk swapping, kernel bookkeeping etc.</p></li>
<li><p>N O R M A L D I S T R I B U T I O N - W H Y N O T A P P L I C A B L E ?</p><p> The value of the normal distribution is practically zero when the value x lies more than a few standard deviations away from the mean. </p><p> It may not be an appropriate model when one expects a significant fraction of outliers </p><p> [...] other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data.</p><p>f(x, μ, σ) = 1/(σ√(2π)) · e^(-(x-μ)²/(2σ²))</p><p>1 All quotes on this slide are from Wikipedia</p></li>
<li><p>The blue line represents the metric reported from the Timer class; the green line represents the request rate</p></li>
<li><p>T I M E R , T I M E R N E V E R C H A N G E S </p><p> Timer values decay exponentially </p><p> giving artificial smoothing of values for server behaviour that may be long gone </p><p> A Timer that is not updated does not decay </p><p> If a Timer is not updated (e.g. 
a subprocess failed and we stopped sending requests to it), its values will remain constant </p><p> Check this post for potential solutions: taint.org/2014/01/16/145944a.html</p></li>
<li><p>H D R H I S T O G R A M</p><p> Supports recording and analysis of sampled data across a configurable range with configurable accuracy </p><p> Provides a compact representation of data while retaining high resolution </p><p> Allows configurable tradeoffs between space and accuracy </p><p> Very fast, allocation free, not thread safe for maximum speed (thread-safe versions available) </p><p> Created by Gil Tene of Azul Systems</p></li>
<li><p>R E C O R D E R</p><p> Uses HdrHistogram to store values </p><p> Supports concurrent recording of values </p><p> Recording is lock free, and also wait free on most architectures (those that support lock xadd) </p><p> Reading is not lock free but does not stall writers (writer-reader phaser) </p><p> Check out Marshall Pierce's library for using it as a Reservoir implementation</p></li>
<li><p>S O L U T I O N S</p><p> Always instantiate Timer with a custom reservoir </p><p> new ExponentiallyDecayingReservoir(LARGE_NUMBER)</p><p> new SlidingTimeWindowReservoir(1, MINUTES)</p><p> new HdrHistogramResetOnSnapshotReservoir()</p><p> Only the last one is safe and accurate and will not report stale values if no updates were made</p></li>
<li><p>JMH benchmarks (from my laptop, caveat emptor!)</p></li>
<li><p>S M O K I N G B E N C H M A R K I N G I S T H E L E A D I N G C A U S E O F S T A T I S T I C S I N T H E W O R L D</p></li>
<li><p>C O O R D I N A T E D O M I S S I O N</p><p> As formulated by Gil Tene of Azul Systems </p><p> When the load driver is plotting with the system under test to deceive you </p><p> Most tools do this </p><p> Most benchmarks do this </p><p> Yahoo Cloud Serving Benchmark had that problem1</p><p>1 Recently fixed by Nitsan Wakart, see 
psy-lob-saw.blogspot.com/2015/03/fixing-ycsb-coordinated-omission.html</p></li>
<li><p>[chart: request arrival time vs. latency, with an application pause]</p><p>Requests according to the test plan. Only the red one will be sent. The others will be missing from the test.</p></li>
<li><p> C R E A T E D W I T H G I L T E N E ' S H D R H I S T O G R A M P L O T T I N G S C R I P T </p><p>Effects on benchmarks at high percentiles are spectacular</p></li>
<li><p>C O O R D I N A T E D O M I S S I O N S O L U T I O N S</p><p>1. Ignore the problem! </p><p>perfectly fine for a non-interactive system where only throughput matters</p></li>
<li><p>C O O R D I N A T E D O M I S S I O N S O L U T I O N S</p><p>2. Correct it mathematically in the sampling mechanism </p><p>HdrHistogram can correct CO with these methods (choose one!):</p><p>histogram.recordValueWithExpectedInterval( value, expectedIntervalBetweenSamples );</p><p>histogram.copyCorrectedForCoordinatedOmission( expectedIntervalBetweenSamples );</p></li>
<li><p>C O O R D I N A T E D O M I S S I O N S O L U T I O N S</p><p>3. Correct it on the load driver side </p><p>by noticing pauses between sent requests: </p><p>a newly issued request will have a timer that starts counting from the time it should have been sent but wasn't</p></li>
<li><p>C O O R D I N A T E D O M I S S I O N S O L U T I O N S</p><p>4. Fail the test </p><p>for hard real-time systems where a pause causes human casualties (brakes, pacemakers, the Phalanx system)</p></li>
<li><p>C O O R D I N A T E D O M I S S I O N</p><p> Mathematical solutions can overcorrect when the load driver itself has pauses (e.g. GC). 
</p><p> They do not account for the fact that after a pause the server has no work to do, whereas in reality N more requests would be waiting to be executed </p><p> In the real world it might never have recovered </p><p> Most tools ignore the problem </p><p> Notable exception: Twitter Iago</p></li>
<li><p> L O A D D R I V E R M O T T O</p><p>Do not bend to the tyranny of reality </p></li>
<li><p>S U M M A R Y</p><p> Measure what is meaningful, not just what is measurable </p><p> Set an SLA before testing and creating dashboards </p><p> Do not trust the Timer class: use custom reservoirs, HdrHistogram, Recorder; never trust EWMA for request rate </p><p> Do not average percentiles unless you need a random number generator </p><p> Do not plot averages unless you just want to look good on dashboards </p><p> When load testing, be aware of coordinated omission</p></li>
<li><p>S O U R C E S , T H A N K Y O U S A N D R E C O M M E N D E D F O L L O W U P S</p><p> Coda Hale for the great metrics library </p><p> Gil Tene </p><p> latencytipoftheday.blogspot.de </p><p> www.infoq.com/presentations/latency-pitfalls </p><p> github.com/HdrHistogram/HdrHistogram </p><p> Nitsan Wakart </p><p> psy-lob-saw.blogspot.de/2015/03/fixing-ycsb-coordinated-omission.html </p><p> and the whole blog </p><p> Martin Thompson et al. </p><p> groups.google.com/forum/#!forum/mechanical-sympathy</p></li>
<li><p>R E C O M M E N D E D</p><p>A great introduction to statistics and queueing theory: </p><p>Performance Modeling and Design of Computer Systems: Queueing Theory in Action </p><p>Prof. Mor Harchol-Balter</p></li>
<li><p>F E E D B A C K K I N D L Y R E Q U E S T E D</p><p>https://www.surveymonkey.com/s/B5KGWWN</p></li></ul>
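The "do not average percentiles" scenario from the slides (S1 at 1 qps with 95%ile == 1ms, S2 at 100 qps with 95%ile == 10s) can be checked numerically. A minimal sketch in plain Java, where the class name, the nearest-rank percentile helper, and the per-minute sample counts (60 and 6000 requests) are illustrative assumptions, not code from the talk:

```java
import java.util.Arrays;

public class PercentileTrap {
    // Nearest-rank percentile: smallest sample such that at least p% of
    // the sorted data is at or below it.
    static double percentile(double[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // S1: 1 qps over a minute = 60 requests, all around 1 ms
        double[] s1 = new double[60];
        Arrays.fill(s1, 1.0);
        // S2: 100 qps over the same minute = 6000 requests, all around 10 s
        double[] s2 = new double[6000];
        Arrays.fill(s2, 10_000.0);

        double p95s1 = percentile(s1, 95);      // 1 ms
        double p95s2 = percentile(s2, 95);      // 10 000 ms
        double averaged = (p95s1 + p95s2) / 2;  // ~5 s: a meaningless number

        // The honest way: merge the raw samples, then take the percentile.
        double[] all = new double[s1.length + s2.length];
        System.arraycopy(s1, 0, all, 0, s1.length);
        System.arraycopy(s2, 0, all, s1.length, s2.length);
        Arrays.sort(all);
        double actual = percentile(all, 95);    // ~10 000 ms

        System.out.println("averaged p95 = " + averaged + " ms");
        System.out.println("actual   p95 = " + actual + " ms");
    }
}
```

The averaged value (~5 s) would appear to satisfy an 8 s SLA, while the actual 95th percentile of the combined traffic (~10 s) violates it, which is exactly the trap the slide describes.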