Cassandra EU 2012 - Storage Internals by Nicolas Favre-Felix
DESCRIPTION
Nicolas' talk from Cassandra Europe on March 28th, 2012.
TRANSCRIPT
Cassandra storage internals
Nicolas Favre-Felix, Cassandra Europe 2012
What this talk covers
• What happens within a Cassandra node
• How Cassandra reads and writes data
• What compaction is and why we need it
• How counters are stored, modified, and read
Concepts
• Memtables
• SSTables
• Commit Log
• Key cache
• Row cache
• On heap, off-heap
• Compaction
• Bloom filters
• SSTable index
• Counters
Why is this important?
• Understand what goes on under the hood
• Understand the reasons for these choices
• Diagnose issues
• Tune Cassandra for performance
• Make your data model efficient
A word about hard drives
• Main driver behind Cassandra’s storage choices
• The last moving part
• Fast sequential I/O (150 MB/s)
• Slow random I/O (120-200 IOPS)
What SSDs bring
• Fast sequential I/O
• Fast random I/O
• Higher cost
• Limited lifetime
• Performance degradation
Disk usage with B-trees
• Important data structure in relational databases
• In-place overwrites (random I/O)
• LogB(N) random accesses for reads and writes
Disk usage with Cassandra
• Made for spinning disks
• Sequential writes, much less than 1 I/O per insert
• Several layers of cache
• Random reads, approximately 1 I/O per read
• Generally “write-optimised”
Writing to Cassandra
[Diagram: a new row (row key with a few columns) is written both to the Memtable in the JVM and to the commit log on disk]
The Cassandra write path
The Commit Log
• Each write is added to a log file
• Guarantees durability after a crash
• 1-second window during which data is still in RAM
• Sequential I/O
• A dedicated disk is recommended
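The append-only behavior described above can be sketched in a few lines. This is an illustrative toy, not Cassandra's actual commit log (which batches mutations and syncs on a configurable schedule); the class and method names are ours.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Commit-log sketch: every write is appended sequentially to a log file
// so it can be replayed after a crash. Names are illustrative only.
public class CommitLogSketch {
    private final Path logFile;

    public CommitLogSketch(Path logFile) {
        this.logFile = logFile;
    }

    public void append(String rowKey, String column, String value) throws IOException {
        String entry = rowKey + "," + column + "," + value + "\n";
        // APPEND keeps the I/O strictly sequential, which is what makes
        // the commit log cheap even on spinning disks.
        Files.writeString(logFile, entry,
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

Because the log only ever grows at the tail, a dedicated disk sees purely sequential I/O, which is why one is recommended.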
Memtables
• In-memory Key/Value data structure
• Implemented with ConcurrentSkipListMap
• One per column family
• Very fast inserts
• Columns are merged in memory for the same key
• Flushed at a certain threshold, into an SSTable
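A minimal sketch of the structure described above, using the same ConcurrentSkipListMap the talk mentions. The class name, the flush threshold, and the string-typed columns are our simplifications, not Cassandra's internals.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Memtable sketch: a sorted, concurrent map of row key -> columns.
// Sorted order matters: a flush can then write rows to disk sequentially.
public class Memtable {
    private final ConcurrentNavigableMap<String, ConcurrentSkipListMap<String, String>> rows =
        new ConcurrentSkipListMap<>();
    private static final int FLUSH_THRESHOLD = 1_000_000; // illustrative value

    // Columns for the same key are merged in memory: writing an existing
    // column again simply overwrites the previous value.
    public void put(String rowKey, String column, String value) {
        rows.computeIfAbsent(rowKey, k -> new ConcurrentSkipListMap<>())
            .put(column, value);
    }

    public Map<String, String> get(String rowKey) {
        return rows.get(rowKey);
    }

    public boolean shouldFlush() {
        return rows.size() >= FLUSH_THRESHOLD;
    }
}
```

Inserts are fast because they never touch the disk; durability comes from the commit log, not the memtable.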
Dumping a Memtable on disk
[Diagram: a full Memtable in the JVM is flushed to a new SSTable on disk; an empty Memtable replaces it, alongside the commit log]
The SSTable
• One file, written sequentially
• Columns are in order, grouped by row
• Immutable once written, no updates!
SSTables start piling up!
[Diagram: the Memtable in the JVM, with the commit log and a growing pile of SSTables on disk]
SSTables
• Can’t keep all of them forever
• Need to reclaim disk space
• Reads could touch several SSTables
• Scans touch all of them
• In-memory data structures per SSTable
Compacting SSTables
Compaction
• Merge SSTables of similar size together
• Remove overwrites and deleted data (timestamps)
• Improve range query performance
• Major compaction creates a single SSTable
• I/O intensive operation
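The merge step above can be sketched with two sorted maps standing in for SSTables. This is a toy under heavy assumptions (real compaction streams files and keeps tombstones around for a grace period before dropping them); all names here are ours.

```java
import java.util.TreeMap;

// Compaction-merge sketch: combine two sorted "SSTables" (TreeMaps of
// key -> timestamped cell), keep only the newest version of each key,
// and drop deleted data. Illustrative, not Cassandra's implementation.
public class CompactionSketch {
    record Cell(long timestamp, String value, boolean deleted) {}

    static TreeMap<String, Cell> merge(TreeMap<String, Cell> a, TreeMap<String, Cell> b) {
        TreeMap<String, Cell> out = new TreeMap<>(a);
        // Timestamps decide the winner when both inputs hold the same key.
        b.forEach((key, cell) -> out.merge(key, cell,
            (oldC, newC) -> newC.timestamp() > oldC.timestamp() ? newC : oldC));
        // Reclaim space: keys whose newest version is a deletion are removed.
        out.values().removeIf(Cell::deleted);
        return out;
    }
}
```

Since both inputs are sorted, the output is written in one sequential pass, which is why compaction is I/O heavy but never random.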
Recent improvements
• Pluggable compaction
• Different strategies, chosen per column family
• SSTable compression
• More efficient SSTable merges
Reading from Cassandra
• Reading all these SSTables would be very inefficient
• We have to read from memory as much as possible
• Otherwise we need to do 2 things efficiently:
• Find the right SSTable to read from
• Find where in that SSTable to read the data
First step for reads
• The Memtable!
• Read the most recent data
• Very fast, no need to touch the disk
[Diagram: the read path so far: the Memtable in the JVM, the row cache off-heap (no GC), and the SSTable and commit log on disk]
Row cache
• Stores a whole row in memory
• Off-heap, not subject to Garbage Collection
• Size is configurable per column family
• Last resort before having to read from disk
Finding the right SSTable
[Diagram: many SSTables on disk; which one holds the key we want?]
Bloom filter
• Saved with each SSTable
• Answers “contains(Key) :: boolean”
• Saved on disk but kept in memory
• Probabilistic data structure
• Configurable proportion of false positives
• No false negatives
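The properties listed above fall out of the structure itself, as a minimal sketch shows. This is a generic Bloom filter, not Cassandra's (which uses murmur-based hashing); the class name and the double-hashing trick are our choices.

```java
import java.util.BitSet;

// Bloom filter sketch: k hash functions set/test bits in a shared BitSet.
// contains() may return a false positive, but never a false negative:
// if any of a key's bits is clear, the key was definitely never added.
public class BloomFilterSketch {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public BloomFilterSketch(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k indexes from two base hashes (a common double-hashing trick).
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9e3779b9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(index(key, i));
    }

    public boolean contains(String key) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(key, i))) return false;
        return true; // might be a false positive
    }
}
```

Sizing the BitSet relative to the number of keys is what makes the false-positive rate configurable.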
[Diagram: each SSTable on disk has its own Bloom filter kept in memory; before reading, we ask each one “exists(key)?” and get back true or false]
Reading from an SSTable
• We need to know where in the file our data is saved
• Keys are sorted, why don’t we do a binary search?
• Keys are not all the same size
• Jumping around in a file is very slow
• Log2(N) random I/O, ~20 for 1 million keys
Reading from an SSTable
Let’s index key ranges in the SSTable:
Key: k-128 → position 12098; Key: k-256 → position 23445; Key: k-384 → position 43678
SSTable index
• Saved with each SSTable
• Stores key ranges and their offsets: [(Key, Offset)]
• Saved on disk but kept in memory
• Avoids searching for a key by scanning the file
• Configurable key interval (default: 128)
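Because only every Nth key is indexed, a lookup finds the greatest indexed key at or below the target and scans forward from its offset. A minimal sketch of that idea, with our own class and method names:

```java
import java.util.Map;
import java.util.TreeMap;

// Sparse SSTable-index sketch: store (key, file offset) for every Nth key.
// floorEntry() gives the closest indexed key <= the target, so at most one
// index interval has to be scanned on disk. Illustrative names only.
public class SparseIndexSketch {
    private final TreeMap<String, Long> index = new TreeMap<>();

    public void addEntry(String key, long offset) {
        index.put(key, offset);
    }

    // Returns the file offset to start scanning from, or -1 if the key
    // sorts before every indexed key (and so cannot be in the file).
    public long scanStart(String key) {
        Map.Entry<String, Long> e = index.floorEntry(key);
        return e == null ? -1 : e.getValue();
    }
}
```

Keeping the index in memory turns the Log2(N) random seeks of an on-disk binary search into roughly one disk read per lookup.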
[Diagram: the SSTable index joins the Bloom filter alongside each SSTable on disk]
SSTable index
Sometimes not enough
• Storing key ranges is limited
• We can do better by storing the exact offset
• This saves approximately one I/O
[Diagram: the key cache joins the SSTable index and Bloom filter on the read path]
The key cache
Key cache
• Stores the exact location in the SSTable
• Stored in heap
• Avoids having to scan a whole index interval
• Size is configurable per column family
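A bounded cache of exact offsets can be sketched with a standard LRU map. The eviction policy and names are our assumptions; Cassandra's real key cache is a purpose-built structure, not a LinkedHashMap.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Key-cache sketch: an LRU map from row key to its exact byte offset in
// an SSTable, capped at a configurable number of entries. A hit skips the
// index-interval scan entirely. Illustrative, not Cassandra's class.
public class KeyCacheSketch extends LinkedHashMap<String, Long> {
    private final int capacity;

    public KeyCacheSketch(int capacity) {
        super(16, 0.75f, true); // access-order gives LRU behavior
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}
```

The trade-off is heap usage: every cached key costs memory, which is why the size is tunable per column family.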
[Diagram, animated over several slides: the full read path in six numbered steps, through the Memtable in the JVM, the row cache (off-heap, no GC), the Bloom filters, the key cache, the SSTable index, and finally the SSTable on disk]
Distributed counters
• 64-bit signed integer, replicated in the cluster
• Atomic inc and dec by an arbitrary amount
• Counting with read-inc-write would be inefficient
• Stored differently from regular columns
Consider a cluster with 3 nodes, RF=3
Internal counter data
• List of increments received by the local node
• Summaries (Version, Sum) sent by the other nodes
• The total value is the sum of all counts
[Diagram: one node’s counter data: its local increments (+5, +2, -3) and summaries received from the two other replicas (version 3, count 5 and version 5, count 10)]
Incrementing a counter
• A coordinator node is chosen
• Stores its increment locally (local increments: +5 +2 -3 +1)
• Reads back the sum of its increments
• Forwards a summary to other replicas: (v.4, sum 5)
• Replicas update their records: received from coordinator, version 4, count 5
Reading a counter
• Replicas return their counts and versions
• Including what they know about other nodes
• Only the most recent versions are kept
[Diagram: each replica returns its counts and versions, including what it knows about the other nodes, e.g. {v.3, count 5; v.6, count 12; v.2, count 8} and {v.3, count 5; v.5, count 10; v.4, count 5}; only the most recent version for each node is kept]
Counter value: 5 + 12 + 5 = 22
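The read-side merge can be sketched directly from the slide's numbers: for each node, keep the shard with the highest version, then sum the surviving counts. The class and record names are ours; this is the merge rule, not Cassandra's code.

```java
import java.util.HashMap;
import java.util.Map;

// Counter-read sketch: each replica reports, per node, a (version, count)
// pair; only the most recent version per node is kept, and the counter
// value is the sum of those counts. Illustrative names only.
public class CounterReadSketch {
    record Shard(long version, long count) {}

    @SafeVarargs
    static long merge(Map<String, Shard>... replicaViews) {
        Map<String, Shard> freshest = new HashMap<>();
        for (Map<String, Shard> view : replicaViews) {
            // Higher version wins for the same node.
            view.forEach((node, shard) -> freshest.merge(node, shard,
                (a, b) -> b.version() > a.version() ? b : a));
        }
        return freshest.values().stream().mapToLong(Shard::count).sum();
    }
}
```

Fed the two replica views from the slide, the merge keeps (v.3, 5), (v.6, 12), and (v.4, 5), giving the value 22 shown above.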
Storage problems
Tuning
• Cassandra can’t really use large amounts of RAM
• Garbage Collection pauses stop everything
• Compaction has an impact on performance
• Reading from disk is slow
• These limitations restrict the size of each node
Recap
• Fast sequential writes
• ~1 I/O for uncached reads, 0 for cached
• Counter increments read on write, columns don’t
• Know where your time is spent (monitor!)
• Tune accordingly
Questions?
http://www.flickr.com/photos/kubina/326628918/sizes/l/in/photostream/
http://www.flickr.com/photos/alwarrete/5651579563/sizes/o/in/photostream/
http://www.flickr.com/photos/pio1976/3330670980/sizes/o/in/photostream/
http://www.flickr.com/photos/lwr/100518736/sizes/l/in/photostream/
• In-kernel backend
• No Garbage Collection
• No need to plan heavy compactions
• Low and consistent latency
• Full versioning, snapshots
• No degradation with Big Data