
1

ELK: Moose-ively scaling your log system

Lessons From Etsy’s 3-year Journey with Elasticsearch, Logstash and Kibana

Agenda

ACT 1: Sizing Up Your Elasticsearch Cluster

ACT 2: Monitoring And Scaling Logstash

ACT 3: Beyond Web-Scale: Moose-Scale Elasticsearch

3

REQUIREMENTS FROM YOU: 1. ASK QUESTIONS

2. DISCUSS THE TOPICS

4

5

HANG IN THERE, SOME OF THIS IS A LITTLE DRY!

6

PROLOGUE

Etsy’s ELK clusters

7

GUESS THE CLUSTER SIZE:

STORAGE CAPACITY

8


Etsy’s ELK clusters

Number of clusters: Six

Combined cluster size: 300 ES instances, 141 physical servers, 4200 CPU cores, 38Tb RAM, 1.5Pb storage

Log lines indexed: 10 billion/day, up to 400k/sec

9

PROLOGUE

Healthy advice

10

Healthy Advice

• Rename your cluster from “elasticsearch” to something else. When you end up with two Elasticsearch clusters on your network, you’ll be glad you did.

• Oops, deleted all the indices again! Set action.destructive_requires_name=true (see the example after this list)

• Always use SSDs. This is not optional.

• If you’re seeing this talk, you probably need 10G networking too.

• Use curator. We developed our own version before it was available.
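A minimal elasticsearch.yml sketch of the rename and delete-protection advice above (the cluster name is a hypothetical example):

cluster.name: etsy-logging-prod           # anything but the default "elasticsearch"
action.destructive_requires_name: true    # refuse wildcard / _all index deletions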

11

ACT 1, SCENE 1

Sizing up your Elasticsearch Cluster

12

What resources influence cluster make-up?

• CPU

- Cores > clock speed

• Memory

- Number of documents

- Number of shards

• Disk I/O

- SSD sustained write rates

• Network bandwidth

- 10G mandatory on large installations for fast recovery / relocation

13

What resources influence cluster memory?

• Memory

- Segment memory: ~4b RAM per document = ~4Gb per billion log lines

- Field data memory: Approximately the same as segment memory (less for older, less-accessed data)

- Filter cache: ~1/4 to 1/2 of segment memory, depending on searches

- All the rest (at least 50% of system memory) for OS file cache

- You can't have enough memory!

14

What resources influence cluster I/O?

• Disk I/O

- SSD sustained write rates

- Calculate shard recovery speed if one node fails:

- Shard size = (Daily storage / number of shards)

- (Shards per node * shard size) / (disk write speed / shards per node)

• Eg: 30Gb shards, 2 shards per node, 250MB/s write speed:

- (2 * 30Gb) / (250MB/s / 2) = 60Gb / 125MB/s = 8 minutes

• How long are you comfortable losing resilience?

• How many nodes are you comfortable losing?

• Multiple nodes per server increase recovery time

15

What resources influence cluster networking?

• Network bandwidth

- 10G mandatory on large installations for fast recovery / relocation

- 10 minute recovery vs 50+ minute recovery:

• 1G Bottleneck: Network uplink

• 10G Bottleneck: Disk speed

16

ACT 1, SCENE 2

Sizing up your Logstash Cluster

17

Sizing Up Your Logstash Cluster: Resources

CPU

18

Sizing Up Logstash: CPU

• Rule 1: Buy as many of the fastest CPU cores as you can afford

• Rule 2: See rule 1

• More filtering == more CPU

19

WE'LL RETURN TO LOGSTASH CPU SHORTLY!

BUT FIRST…

20

ACT 2

Monitoring

21

Marvel:

• Easy to use
• Data saved to ES
• So many metrics!
• No integration
• Costs $$$

Roll your own:

• Time to develop
• Integrates with your systems
• Re-inventing the wheel
• Free (libre, not gratis)

22

Monitoring: Elasticsearch

• Metrics are exposed in several places:

- _cat APICovers most metrics, human readable

- _stats API, _nodes APICovers everything, JSON, easy to parse

• Send to Graphite

• Create dashboards
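For example, a few standard endpoints make a reasonable starting point (host and the fields you graph are up to you):

curl -s 'http://localhost:9200/_cat/health?v'        # cluster status and shard counts, human readable
curl -s 'http://localhost:9200/_cat/indices?v'       # per-index doc counts and store size
curl -s 'http://localhost:9200/_nodes/stats?pretty'  # everything as JSON - parse and ship to Graphite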

23

Monitoring: Systems

• SSD endurance

• Monitor how often Logstash says the pipeline is blocked. If it happens frequently, find out why (the possible causes are covered later in this talk)

24

Monitoring: Systems

• Dynamic disk space thresholds

• ((num_servers - failure_capacity) / num_servers) - 15%

- 100 servers

- Allow up to 6 to fail

- Disk space alert threshold = ((100 - 6) / 100) - 15% = 79%

• Let your configuration management system tune this up and down for you, as you add and remove nodes from your cluster.

• The additional 15% is to give you some extra time to order or build more nodes.
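A small Ruby sketch of that rule (not from the talk; the numbers are the example above), the kind of calculation your configuration management can render into a monitoring check:

num_servers      = 100   # nodes in the cluster
failure_capacity = 6     # how many nodes you allow to fail at once
headroom_pct     = 15    # extra margin to buy time for ordering/building nodes

threshold_pct = ((num_servers - failure_capacity) * 100.0 / num_servers) - headroom_pct
puts threshold_pct       # => 79.0 - alert when data disks pass 79% full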

25

ACT 3, SCENE 1

Scaling Logstash

26

Scaling Logstash: What impacts performance?

• Line length

• Grok pattern complexity - regex is slow

• Plugins used

• Garbage collection

- Increase heap size

• Hyperthreading

- Measure, then turn it off

27

Scaling Logstash: Measure Twice

• Writing your logs as JSON has little benefit, unless you do away with grok, kv, etc. Logstash still has to convert the incoming string to a ruby hash anyway.

28

HOW MUCH DOES RUBY LOVE

CREATING OBJECTS?

29

Scaling Logstash: Garbage Collection

• Defaults are usually OK

• Make sure you’re graphing GC

• Ruby LOVES to generate objects: monitor your GC as you scale

• Write plugins thoughtfully with GC in mind:

- Bad:
  1_000_000.times { "This is a string" }
        user     system      total        real
    0.130000   0.000000   0.130000 (  0.132482)

- Good:
  foo = 'This is a string'; 1_000_000.times { foo }
        user     system      total        real
    0.060000   0.000000   0.060000 (  0.055005)
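The timings above look like output from Ruby's standard Benchmark library; a sketch of how to reproduce that kind of comparison (an assumption about the measurement method, not taken from the slides):

require 'benchmark'

Benchmark.bm(22) do |x|
  # allocates a brand-new String object on every iteration
  x.report('new string each time') { 1_000_000.times { "This is a string" } }

  # reuses a single object, so there is nothing new for the GC to collect
  foo = 'This is a string'
  x.report('reuse one string')     { 1_000_000.times { foo } }
end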

30

Scaling Logstash

Plugin performance

31

Scaling Logstash: Plugin Performance: Baseline

• How to establish a baseline

• Measure again with some filters

• Measure again with more filters

• Establish the costs of each filter

• Community filters are for the general case

- You should write your own for your specific case

- Easy to do

• Run all benchmarks for at least 5 mins, with a large data set

32

Scaling Logstash: Plugin Performance: Baseline

• Establish baseline throughput: Python, StatsD, Graphite

• Simple logstash config, 10m apache log lines, no filtering:

input {
  file {
    path => "/var/log/httpd/access.log"
    start_position => "beginning"
  }
}

output {
  stdout { codec => "dots" }
}

33

Scaling Logstash: Plugin Performance: Baseline

• Establish baseline throughput: Python, StatsD, Graphite

• Python script to send logstash throughput to statsd:

- sudo pip install statsd

#!/usr/bin/env python
import statsd, sys

c = statsd.StatsClient('localhost', 8125)
while True:
    sys.stdin.read(1)
    c.incr('logstash.testing.throughput', rate=0.001)

• Why don't we use the statsd output plugin? It slows down output!

34

Scaling Logstash: Plugin Performance: Baseline

• Establish baseline throughput

• Tie it all together:

- logstash -f logstash.conf | pv -W | python throughput.py

[Throughput graph: the periodic dips are garbage collection!]

35

HOW MUCH DID GROK SLOW DOWN

PROCESSING IN 1.5?

36

Scaling Logstash: Plugin Performance: Grok

• Add a simple grok filter

• grok { match => [ "message", "%{ETSY_APACHE_ACCESS}" ] }

• 80% slow down with only 1 worker

Oops! Only one filter worker!

37

Scaling Logstash: Plugin Performance: Grok

• Add a simple grok filter

• grok { match => [ "message", "%{APACHE_ACCESS}" ] }

• Add: -w <num_cpu_cores>, throughput still drops 33%: 65k/s -> 42k/s

[Throughput graph: No Grok / 1 worker vs 1 Grok / 1 worker vs 1 Grok / 32 workers]

38

YOUR BASELINE IS THE MINIMUM AMOUNT OF

WORK YOU NEED TO DO

39

Scaling Logstash: Plugin Performance: kv

• Add a kv filter, too:
  kv { field_split => "&" source => "qs" target => "foo" }

• Throughput similar, 10% drop (40k/s)

• Throughput more variable due to heavier GC

40

DON’T BE AFRAID TO REWRITE

PLUGINS!

41

Scaling Logstash: Plugin Performance

• kv is slow, so we wrote a `splitkv` plugin for query strings, etc:

kvarray = text.split(@field_split).map { |afield|
  pairs = afield.split(@value_split)
  if pairs[0].nil? || !(pairs[0] =~ /^[0-9]/).nil? || pairs[1].nil? ||
     (pairs[0].length < @min_key_length && !@preserve_keys.include?(pairs[0]))
    next
  end
  if !@trimkey.nil?
    # 2 if's are faster (0.26s) than gsub (0.33s)
    #pairs[0] = pairs[0].slice(1..-1) if pairs[0].start_with?(@trimkey)
    #pairs[0].chop! if pairs[0].end_with?(@trimkey)
    # BUT! in-place tr is 6% faster than 2 if's (0.52s vs 0.55s)
    pairs[0].tr!(@trimkey, '') if pairs[0].start_with?(@trimkey)
  end
  if !@trimval.nil?
    pairs[1].tr!(@trimval, '') if pairs[1].start_with?(@trimval)
  end
  pairs
}
kvarray.delete_if { |x| x == nil }
return Hash[kvarray]

42

SPLITKV LOGSTASH CPU: BEFORE: 100% BUSY

AFTER: 33% BUSY

43

Scaling Logstash: Elasticsearch Output

• Logstash output settings directly impact CPU on Logstash machines

- Increase flush_size from 500 to 5000, or more.

- Increase idle_flush_time from 1s to 5s

- Increase output workers

- Results vary by log lines - test for yourself:

• Make a change, wait 15 minutes, evaluate

• With the default 500 from logstash, we peaked at 50% CPU on the logstash cluster, and ~40k log lines/sec. Bumping this to 10k, and increasing the idle_flush_time from 1s to 5s got us over 150k log lines/sec at 25% CPU.
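A sketch of the kind of output block described above, using the option names from the 2.x-era elasticsearch output plugin (hosts are placeholders; tune the numbers against your own measurements):

output {
  elasticsearch {
    hosts           => ["es-node-1:9200", "es-node-2:9200"]   # placeholder hosts
    flush_size      => 10000   # bulk request size (default 500)
    idle_flush_time => 5       # seconds before flushing a partial bulk (default 1)
    workers         => 4       # parallel output workers
  }
}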

44

Scaling Logstash: Elasticsearch Output

45

Scaling Logstash

Pipeline performance

46

Before Logstash 2.3:
  Edit …/vendor/…/lib/logstash/pipeline.rb
  Change SizedQueue.new(20) to SizedQueue.new(500)

After Logstash 2.3:
  --pipeline-batch-size=500

This is best changed at the end of tuning. Impacted by output plugin performance.

47

Scaling Logstash

Testing configuration changes

48

Scaling Logstash: Adding Context

• Discovering pipeline latency

- mutate { add_field => [ "index_time", "%{+YYYY-MM-dd HH:mm:ss Z}" ] }

• Which logstash server processed a log line?

- mutate { add_field => [ "logstash_host", "<%= node[:fqdn] %>" ] }

• Hash your log lines to enable replaying logs

- Check out the hashid plugin to avoid duplicate lines
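Putting the two mutate examples together, a sketch of a context-adding filter block (the Chef-style <%= node[:fqdn] %> template comes from the slide; adapt it to your own configuration management):

filter {
  mutate {
    add_field => [
      "index_time",    "%{+YYYY-MM-dd HH:mm:ss Z}",   # when Logstash processed the line
      "logstash_host", "<%= node[:fqdn] %>"           # which Logstash server handled it
    ]
  }
}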

49

Scaling Logstash: Etsy Plugins

http://github.com/etsy/logstash-plugins

50

Scaling Logstash: Adding Context

• ~10% hit from adding context

51

SERVERSPEC

52

Scaling Logstash: Testing Configuration Changes

describe package('logstash'), :if => os[:family] == 'redhat' do
  it { should be_installed }
end

describe command('chef-client') do
  its(:exit_status) { should eq 0 }
end

describe command('logstash -t -f ls.conf.test') do
  its(:exit_status) { should eq 0 }
end

describe command('logstash -f ls.conf.test') do
  its(:stdout) { should_not match(/parse_fail/) }
end

describe command('restart logstash') do
  its(:exit_status) { should eq 0 }
end

describe command('sleep 15') do
  its(:exit_status) { should eq 0 }
end

describe service('logstash'), :if => os[:family] == 'redhat' do
  it { should be_enabled }
  it { should be_running }
end

describe port(5555) do
  it { should be_listening }
end

53

Scaling Logstash: Testing Configuration Changes

input {
  generator {
    lines => [ '<Apache access log>' ]
    count => 1
    type  => "access_log"
  }
  generator {
    lines => [ '<Application log>' ]
    count => 1
    type  => "app_log"
  }
}

54

Scaling Logstash: Testing Configuration Changes

filter {
  if [type] == "access_log" {
    grok {
      match          => [ "message", "%{APACHE_ACCESS}" ]
      tag_on_failure => [ "parse_fail_access_log" ]
    }
  }
  if [type] == "app_log" {
    grok {
      match          => [ "message", "%{APACHE_INFO}" ]
      tag_on_failure => [ "parse_fail_app_log" ]
    }
  }
}

55

Scaling Logstash: Testing Configuration Changes

output { stdout { codec => json_lines } }

56

Scaling Logstash: Summary

• Faster CPUs matter

- CPU cores > CPU clock speed

• Increase pipeline size

• Lots of memory

- 18Gb+ to prevent frequent garbage collection

• Scale horizontally

• Add context to your log lines

• Write your own plugins, share with the world

• Benchmark everything

57

ACT 3, SCENE 2

Scaling Elasticsearch

58

Scaling Elasticsearch

Let's establish our baseline

59

Scaling Elasticsearch: Baseline with Defaults

• Logstash output: Default options + 4 workers

• Elasticsearch: Default options + 1 shard, no replicas

• We can do better!

60

Scaling Elasticsearch

What Impacts Indexing Performance?

61

Scaling Elasticsearch: What impacts indexing performance?

• Line length and analysis, default mapping

• doc_values - required, not a magic fix:

- Uses more CPU time

- Uses more disk space, disk I/O at indexing

- Helps avoid blowing out memory

- If you start using too much memory for fielddata, look at the biggest memory hogs and move them to doc_values

• Available network bandwidth for recovery

62

Scaling Elasticsearch: What impacts indexing performance?

• CPU:

- Analysis

- Mapping

• Default mapping creates tons of .raw fields

- doc_values

- Merging

- Recovery

63

Scaling Elasticsearch: What impacts indexing performance?

• Memory:

- Indexing buffers

- Garbage collection

- Number of segments and unoptimized indices

• Network:

- Recovery speed

• Translog portion of recovery stalls indexing. Faster network == shorter stall

64

Scaling Elasticsearch

Memory

65

Scaling Elasticsearch: Where does memory go?

• Example memory distribution with 32Gb heap:

- Field data: 10%
- Filter cache: 10%
- Index buffer: 500Mb
- Segment cache (~4 bytes per doc): how many docs can you store per node?

• 32Gb - (32Gb / 10) - (32Gb / 10) - 500Mb = ~25Gb for segment cache

• 25Gb / 4b = 6.7bn docs across all shards

• 10bn docs/day, 200 shards = 50m docs/shard
  - 1 daily shard per node: 6.7bn / 50m / 1 = 134 days
  - 5 daily shards per node: 6.7bn / 50m / 5 = 26 days

66

Scaling Elasticsearch: Doc Values

• Doc values help reduce memory

• Doc values cost CPU and storage

- Some fields with doc_values:
  1.7G  Aug 11 18:42  logstash-2015.08.07/7/index/_1i4v_Lucene410_0.dvd

- All fields with doc_values:
  106G  Aug 13 20:33  logstash-2015.08.12/38/index/_2a9p_Lucene410_0.dvd

• Don't blindly enable Doc Values for every field

- Find your most frequently used fields, and convert them to Doc Values

- curl -s 'http://localhost:9200/_cat/fielddata?v' | less -S

67

Scaling Elasticsearch: Doc Values

• Example field data usage per node:

  total     request_uri   _size    owner    ip_address
  117.1mb   11.2mb        28.4mb   8.6mb    4.3mb
  96.3mb    7.7mb         19.7mb   9.1mb    4.4mb
  93.7mb    7mb           18.4mb   8.8mb    4.1mb
  139.1mb   11.2mb        27.7mb   13.5mb   6.6mb
  96.8mb    7.8mb         19.1mb   8.8mb    4.4mb
  145.9mb   11.5mb        28.6mb   13.4mb   6.7mb
  95mb      7mb           18.9mb   8.7mb    5.3mb
  122mb     11.8mb        28.4mb   8.9mb    5.7mb
  97.7mb    6.8mb         19.2mb   8.9mb    4.8mb
  88.9mb    7.6mb         18.2mb   8.4mb    4.6mb
  96.5mb    7.7mb         18.3mb   8.8mb    4.7mb
  147.4mb   11.6mb        27.9mb   13.2mb   8.8mb
  146.7mb   10mb          28.7mb   13.6mb   7.2mb

68

Scaling Elasticsearch: Memory

• Run instances with 128Gb or 256Gb RAM

• Configure RAM for optimal hardware configuration

- Haswell/Skylake Xeon CPUs have 4 memory channels

• Multiple instances of Elasticsearch

- Do you name your instances by hostname? Give each instance its own node.name!

69

Scaling Elasticsearch

CPUs

70

Scaling Elasticsearch: CPUs

• CPU intensive activities

- Indexing: analysis, merging, compression

- Searching: computations, decompression

• For write-heavy workloads

- Number of CPU cores impacts number of concurrent index operations

- Choose more cores, over higher clock speed

71

Scaling Elasticsearch: That Baseline Again…

• Remember our baseline?

• Why was it so slow?

72

Scaling Elasticsearch: That Baseline Again…

[logstash-2016.06.15][0] stop throttling indexing: numMergesInFlight=4, maxNumMerges=5

MERGING SUCKS

73

Scaling Elasticsearch: Merging

• Step 1: Increase shard count from 1 to 5

• Step 2: Disable merge throttling, on ES < 2.0: index.store.throttle.type: none

Much better!
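On pre-2.0 clusters the same merge-throttling switch can also be flipped at runtime through the cluster settings API; a hedged sketch:

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "indices.store.throttle.type": "none" }
}'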

74

Scaling Elasticsearch: Split Hosts

• Oops, we maxed out CPU! Time to add more nodes

75

Scaling Elasticsearch: Split Hosts

• Running Logstash and Elasticsearch on separate hosts

76

Scaling Elasticsearch: Split Hosts

• Running Logstash and Elasticsearch on separate hosts: 50% throughput improvement: 13k/s -> 19k/s

77

CPU IS REALLY IMPORTANT

78

DOES HYPERTHREADING

HELP?

79

Scaling Elasticsearch: Hyperthreading

• YES! About 20% of our performance! Leave it on.

80

WHAT ELSE HELPS?

81

CPU SCALING GOVERNORS!

BUT HOW MUCH?

82

Scaling Elasticsearch: CPU Governor

• # echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

• ~15-30% performance improvement. Remember to apply at boot!

83

Scaling Elasticsearch

Storage

84

A STORY OF SSDS

85

Scaling Elasticsearch: Disk I/O

86

Scaling Elasticsearch: Disk I/O

• Common advice

- Use SSD

- RAID 0

- Software RAID is sufficient

87

Scaling Elasticsearch: Disk I/O

• Uncommon advice

- Good SSDs are important. Cheap SSDs will make you very, very sad

- Don’t use multiple data paths, use RAID 0 instead. Heavy translog writes to one disk will bottleneck

- If you have heavy merging, but CPU and disk I/O to spare, extreme case: increase index.merge.scheduler.max_thread_count (but try not to…)

88

Scaling Elasticsearch: Disk I/O

• Uncommon advice

- Reduced durability: index.translog.durability: async
  Translog fsync() every 5s, may be sufficient with replication

- Cluster recovery eats disk I/O. Be prepared to tune it up and down during recovery, eg:
  indices.recovery.max_bytes_per_sec: 300mb
  cluster.routing.allocation.cluster_concurrent_rebalance: 24
  cluster.routing.allocation.node_concurrent_recoveries: 2

- Any amount of consistent I/O wait indicates a suboptimal state
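Those recovery settings are dynamic, so a sketch of applying them at runtime via the cluster settings API (values copied from the slide; turn them back down once recovery finishes):

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "300mb",
    "cluster.routing.allocation.cluster_concurrent_rebalance": 24,
    "cluster.routing.allocation.node_concurrent_recoveries": 2
  }
}'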

89

CHOOSE YOUR SSDS WISELY

90

Scaling Elasticsearch: Choosing SSDs

• Consumer grade drives

- Slower writes

- Cheap

- Lower endurance, fewer disk writes per day

• Enterprise grade drives

- Fast

- Expensive

- Higher endurance, higher disk writes per day

91

Scaling Elasticsearch: Choosing SSDs

• Read intensive

- Lower endurance, 1-3 DWPD

- Lower write speeds, least expensive

• Mixed use

- Moderate endurance, 10 DWPD

- Balanced read/write performance, pricing middle ground

• Write intensive

- High endurance, 25 DWPD

- High write speeds, most expensive

92

YOU MENTIONED AN FSYNC() TUNABLE?

93

Scaling Elasticsearch: That Baseline Again…

• Remember this graph? Let's make it better!

94

Scaling Elasticsearch: Reduced Durability

• Benchmark: Reduced durability. Old baseline: ~20k-25k. New baseline: Similar, smoother:

95

WHY WAS THE IMPROVEMENT

SMALLER?

96

Scaling Elasticsearch: Thanks, Merges

• MERRRRRRGGGGGGGGGGGGGGGIIIIIIIINNNNGGGGGG!!

• $ curl -s 'http://localhost:9200/_nodes/hot_threads?threads=10' | grep %
   73.6% (367.8ms out of 500ms) 'elasticsearch[es][bulk][T#25]'
   66.8% (334.1ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #139]'
   66.3% (331.6ms out of 500ms) 'elasticsearch[es][[logstash][3]: Lucene Merge Thread #183]'
   66.1% (330.7ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #140]'
   66.1% (330.4ms out of 500ms) 'elasticsearch[es][[logstash][4]: Lucene Merge Thread #158]'
   62.9% (314.7ms out of 500ms) 'elasticsearch[es][[logstash][3]: Lucene Merge Thread #189]'
   62.4% (312.2ms out of 500ms) 'elasticsearch[es][[logstash][2]: Lucene Merge Thread #160]'
   61.8% (309.2ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #115]'
   57.6% (287.7ms out of 500ms) 'elasticsearch[es][[logstash][0]: Lucene Merge Thread #155]'
   55.6% (277.9ms out of 500ms) 'elasticsearch[es][[logstash][2]: Lucene Merge Thread #161]'

97

LET'S FIX THIS MERGING…

98

…AFTER SOME LAST WORDS ON

DISK I/O

99

Scaling Elasticsearch: Multi-tiered Storage

• Put your most accessed indices across more servers, with more memory, and faster CPUs.

• Spec out “cold” storage

- SSDs still necessary! Don't even think about spinning platters

- Cram bigger SSDs per server

• Set index.codec: best_compression

• Move indices, re-optimize

• elasticsearch-curator makes this easy

100

Scaling Elasticsearch

Merging

101

WHY DOES THE DEFAULT CONFIGURATION

MERGE SO MUCH?

102

Scaling Elasticsearch: Default Mapping

• $ curl 'http://localhost:9200/_template/logstash?pretty'

• "string_fields" : { "mapping" : { "index" : "analyzed", "omit_norms" : true, "type" : "string", "fields" : { "raw" : { "ignore_above" : 256, "index" : "not_analyzed", "type" : "string" } } }, "match_mapping_type" : "string", "match" : "*" }

Do you see it?

103


Scaling Elasticsearch: Custom Mapping

• $ curl 'http://localhost:9200/_template/logstash?pretty'

• "string_fields" : { "mapping" : { "index" : "not_analyzed", "omit_norms" : true, "type" : "string" }, "match_mapping_type" : "string", "match" : "*" }

105

Scaling Elasticsearch: Custom Mapping

• A small help. Unfortunately the server is maxed out now! Expect this to normally have a bigger impact :-)

106

Scaling Elasticsearch

Indexing performance

107

Scaling Elasticsearch: Indexing Performance

• Increasing bulk thread pool queue can help under bursty indexing

- Be aware of the consequences, you're hiding a performance problem

• Increase index buffer

• Increase refresh time, from 1s to 5s

• Spread indexing requests to multiple hosts

• Increase output workers until you stop seeing improvements. We use num_cpu/2 with the transport protocol

• Increase flush_size until you stop seeing improvements. We use 10,000

• Disk I/O performance

108

Scaling Elasticsearch: Indexing Performance

• Indexing protocols

- HTTP

- Node

- Transport

• Transport still slightly more performant, but HTTP has closed the gap.

• Node is generally not worth it. Longer start up, more resources, more fragile, more work for the cluster.

109

Scaling Elasticsearch: Indexing Performance

• Custom mapping template

- Default template creates an additional not_analyzed .raw field for every field.

- Every field is analyzed, which eats CPU

- Extra field eats more disk

- Dynamic fields and Hungarian notation

• Use a custom template which has dynamic fields enabled, but has them not_analyzed. Ditch .raw fields, unless you really need them

• This change dropped Elasticsearch cluster CPU usage from 28% to 15%

110

Scaling Elasticsearch: Indexing Performance

• Message complexity matters. Adding new log lines around 20k in size, compared to the average of 1.5k, tanked the indexing rate for all log lines:

111

Scaling Elasticsearch: Indexing Performance

• ruby { code => "if event['message'].length > 10240 then event['message'] = event['message'].slice!(0,10240) end" }

112

Scaling Elasticsearch: Indexing Performance

• Speeding up Elasticsearch lets Logstash do more work!

113

Scaling Elasticsearch

Index Size

114

Scaling Elasticsearch: Indices

• Tune shards per index

- num_shards = (num_nodes - failed_node_limit) / (number_of_replicas + 1)

- With 50 nodes, allowing 4 to fail at any time, and 1x replication: num_shards = (50 - 4) / (1 + 1) = 23

• If your shards are larger than 25Gb, increase shard count accordingly.

• Tune indices.memory.index_buffer_size

- index_buffer_size = num_active_shards * 500Mb

- “Active shards”: any shard updated in the last 5 minutes

115

Scaling Elasticsearch: Indices

• Tune refresh_interval

- Defaults to 1s - way too frequent!

- Increase to 5s (a settings-API sketch follows below)

- Tuning higher may cause more disk thrashing

- Goal: flush as much as your disk’s buffer can take

• Example: Samsung SM863 SSDs:

- DRAM buffer: 1Gb

- Flush speed: 500Mb/sec
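A sketch of raising refresh_interval on existing Logstash indices via the settings API (the index pattern is an example; the same value can also live in your index template):

curl -XPUT 'http://localhost:9200/logstash-*/_settings' -d '{
  "index": { "refresh_interval": "5s" }
}'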

Thank you!

Q&A

@avleen

http://github.com/etsy/logstash-plugins

117

118

SECRET ACT 4

Filesystem Comparison

119

Scaling Elasticsearch: Optimize Indices

Unoptimized: 5230 segments, 29Gb memory, 10.5Tb disk space

Optimized: 124 segments, 23Gb memory, 10.1Tb disk space

120

The Easy Way:
  ruby { code => "event['message'] = event['message'].slice!(0,10240)" }

The Thoughtful Way:
  ruby { code => "if event['message'].length > 10240; then event['message'] = event['message'].slice!(0,10240) end" }
