REDIS ENTERPRISE AND VMAX ALL FLASH
Performance Assessment Tests and Best Practices June 2017
VMAX® Engineering White Paper
ABSTRACT
This white paper provides details on the performance assessment tests and best
practices for deploying Redise Pack with Dell EMC VMAX All Flash storage arrays.
H16119.1
This document is not intended for audiences in China, Hong Kong, Taiwan,
and Macao.
White Paper
Copyright
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. Other trademarks may be the property of their respective owners.
Published in the USA June 2017 H16119.1.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
Contents
Contents
Executive summary ........................................................................................................................ 4
VMAX All Flash storage array product overview ......................................................................... 5
Redis Enterprise product overview ............................................................................................... 6
Benefits of running Redise with VMAX All Flash .......................................................................... 9
Performance assessment tests ................................................................................................... 11
Using SnapVX to create and restore Redise cluster gold copies ............................................. 16
Summary ........................................................................................................................................ 18
References ..................................................................................................................................... 18
Executive summary
Redis is the leading open-source, in-memory database platform for high-performance
operational, analytics, or hybrid use cases, powering e-commerce, mobile, social,
personalization, and other real-time applications.
Redis Enterprise (Redise) from Redis Labs increases the power of Redis with an
enhanced set of technologies that deliver effortless scaling, always-on availability and
significant cost reduction. Redise Pack is the downloadable version of Redise and can be
installed in your environment of choice.
The Dell EMC VMAX™ All Flash family of all-flash arrays is designed and optimized for
high performance, while providing ease of use, reliability, availability, security, and a
robust set of data services. VMAX All Flash delivers unparalleled performance as a
mission-critical multi-controller platform. VMAX management is easy using Dell EMC
Unisphere™, CLI, or REST APIs. The data is protected with T10-DIF (data integrity field),
and can be encrypted with D@RE1. With Dell EMC SnapVX™, local snapshots can be
created or restored in seconds (regardless of data capacity), and Dell EMC SRDF™
provides consistent remote replications to any distance.
The combination of Redise Pack with VMAX All Flash offers a unique set of benefits
relative to other deployments such as a public cloud or direct-attached storage (DAS).
Each of the following items is explained in the paper:
• Support for hundreds of thousands of ACID ops/sec per cluster node, while maintaining sub-millisecond I/O latency
• Full write persistency while maintaining sub-millisecond database operation latency
• Enterprise-grade availability and scale that reduce cluster complexity and data footprint
• Improved geographical resiliency
• Fast creation of gold copies and test/dev/reporting environments
Together Redise Pack and VMAX All Flash provide a strong set of features that are unique
to this deployment, and help customers achieve their goals in a secure and
easy-to-manage environment.
Audience
This white paper is intended for system administrators, storage administrators, and
system architects who are responsible for implementing Redise Pack in environments with
VMAX All Flash storage systems. Readers should have some familiarity with Redis and
with VMAX storage arrays.
1 D@RE refers to Data at Rest Encryption, an optional feature of VMAX All Flash.
VMAX All Flash storage array product overview
The VMAX family of storage arrays is built on the strategy of simple, intelligent, modular
storage. It incorporates a Dynamic Virtual Matrix interface that connects and shares
resources across all VMAX engines, enabling the storage array to seamlessly grow from
an entry-level configuration into the world’s largest storage array. It provides the highest
levels of performance, scalability, and availability, and features advanced hardware and
software capabilities.
VMAX All Flash family
In 2016, Dell EMC announced the new VMAX All Flash 250F, 450F, and 850F arrays. In
May 2017 Dell EMC introduced VMAX 950F, which replaces the VMAX 450F and 850F,
and provides higher performance at a similar cost. VMAX All Flash, as shown in Figure 1,
provides a combination of ease of use, scalability, high performance, and a robust set of
data services that makes it an ideal choice for database deployments.
Figure 1. VMAX All Flash 950F (left) and 250F (right) storage arrays
VMAX All Flash storage arrays provide the following benefits:
Ease of use—VMAX uses virtual provisioning to create new storage devices in
seconds. All VMAX devices are thin, consuming only the storage capacity that is
actually written to, which increases storage efficiency without compromising
performance. VMAX devices are grouped into storage groups and managed as a unit
for operations such as: device masking to hosts, performance monitoring, local and
remote replications, compression, host I/O limits, and more. In addition, you can
manage VMAX by using Unisphere for VMAX, Solutions Enabler CLI, or REST APIs.
High performance—VMAX All Flash is designed for high performance and low
latency. It scales from one up to eight engines (V-Bricks™). Each engine consists of
dual directors, where each director includes two-socket Intel CPUs, front-end and
back-end connectivity, hardware compression module, InfiniBand internal fabric, and
a large mirrored and persistent cache.
All writes are acknowledged to the host as soon as they are registered with VMAX
cache2. Writes are subsequently destaged to flash, often after multiple updates. Reads also
benefit from the VMAX large cache. When a read is requested for data that is not
already in cache, FlashBoost technology delivers the I/O directly from the back-end
(flash) to the front-end (host). Reads are only later staged in the cache for possible
future access. VMAX also excels in servicing high bandwidth sequential workloads
that leverage pre-fetch algorithms, optimized writes, and fast front-end and back-end
interfaces.
Data services—VMAX All Flash offers a strong set of data services. It natively
protects all data with T10-DIF from the moment data enters the array until it leaves
(including replications). With SnapVX™ and SRDF, VMAX provides many topologies
for consistent local and remote replications. VMAX provides optional D@RE,
integrations with Data Domain™, such as ProtectPoint™, or cloud gateways with
CloudArray™. Other VMAX data services include Quality of Service (QoS)3,
compression, the “Call-Home” support feature, non-disruptive upgrades (NDU), non-
disruptive migrations (NDM), and more. In virtual environments VMAX also supports
vStorage APIs for Array Integration (VAAI) primitives such as write-same and xcopy.
Note: While outside the scope of this paper, you can also purchase VMAX as part of a Converged
Infrastructure (CI). For details, refer to Dell EMC VxBlock™ System 740 and Vblock System 740.
Redis Enterprise product overview
Redis Enterprise (Redise) encapsulates the open source Redis with a shared-nothing
architecture that decouples the data-path from the cluster management path and provides
seamless scalability, instant auto-failover, built-in multi-tenancy, enhanced performance,
fast data persistence, multi-geo-region replication, backups and disaster recovery.
Redise comes with multiple deployment models: Redise Cloud, Redise Cloud Private, Redise
Pack, and Redise Pack Managed. In this white paper we used the Redise Pack, which is
downloadable Redise software that can run on any environment.
Note: For simplification, future references in this paper to Redise imply Redis Enterprise
Pack. References to plain Redis imply the open-source version of the Redis database.
Redise cluster architecture
Redise consists of one or more cluster nodes, where each node is made up of several
components, as explained below:
Redis Shard—Each node can run one or more Redis shards. A shard is a Redis
instance that manages the entire dataset or part of a dataset. A node can run
shards from multiple databases or from a single database. Each shard can have a
master or slave role, but master and slave shards of the same dataset will never
2 VMAX All Flash cache is large (from 512 GB-16 TB, based on configuration), mirrored, and
persistent due to the vault module that protects the cache content in case of power failure, and
restores it when the system comes back up.
3 Two separate features support VMAX QoS. The first relates to Host I/O limits that enable placing
IOPS and/or bandwidth limits on “noisy neighbors” applications (set of devices) such as test/dev
environments. The second relates to slowing down the copy rate for local or remote replications.
run on the same node. Redise shards come with enhanced functionality that will be
discussed later in this document.
Proxy—Each node includes a multi-threaded proxy that masks the cluster complexity
from the Redis clients. The proxy acts as a smart load balancer: it manages the
database endpoints, enforces security rules, and forwards client requests to the
relevant database shard. Furthermore, the proxy accelerates Redis
performance by creating an optimal number of long-lived connections with each Redis
shard that it is connected to. It then multiplexes requests from different clients on the
same long-lived connections in a fully secured and consistent manner to achieve
longer pipeline operations on the Redis side, which significantly boosts Redis
performance.
Cluster manager—Responsible for multiple cluster and node operations, such as:
database provisioning, watchdogs (at the node level and at the cluster level), failover,
shard-migration, re-sharding, rebalancing, auto-scaling, and so on.
Secured UI, API, CLI—Each node runs a secured web server with access to the
management user interface. Therefore, the entire cluster can be monitored and
managed from any node. Each node has a CLI (rladmin) and allows REST API
communication with the cluster for ease of management.
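As an illustration, cluster state can be inspected from any node. The commands below are a sketch: `rladmin status` is the standard status command, while the REST port (9443), endpoint path, and credentials shown are typical defaults that should be verified against your installation.

```shell
# Check cluster, node, and shard health from any cluster node
# (rladmin ships with the Redise Pack installation):
rladmin status

# Similar information is available over the REST API; port and
# endpoint are typical defaults -- verify for your environment:
curl -k -u "admin@example.com:password" https://cluster-node:9443/v1/bdbs
```

Because every node serves the UI and API, management traffic can be directed at any healthy node, which removes the need for a dedicated management host.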
Figure 2. Redise cluster architecture
Redis persistence with RDB
Because host cache (DRAM) is vulnerable to power loss and host failures, in-memory
databases require a way to persist their data. Redis provides two main methods for data
persistence. The first method is RDB, which persists the data using periodic snapshots of
the entire database at specified intervals. RDB creates a single compact file with each
snapshot and is therefore faster to recover from. The RDB snapshot is created by a
background process after forking the Redis server process. When using RDB, all data
updates since the last complete snapshot will be lost in case of a host or database failure.
Redis persistency with AOF
The other persistence method is Append Only File (AOF). The AOF stores all update
operations to the Redis server in a file that keeps increasing in size. It can therefore
replay these updates upon system restart. Thus, AOF reduces the exposure to data loss
due to server failure, although this can take longer to recover the database. When using
the AOF, there are different storage synchronization policies, referred to as fsync4: fsync
at every write (full persistence), fsync once a second (small exposure), and no fsync at all
(leaving it to default file system persistence behavior). While full persistence provides a
way to avoid data loss, the requirement to persist each write relies on the storage’s ability
to perform such activity without introducing performance overhead.
Note: For more information on Redis persistency, refer to Documentation – Redise Pack. A guide
to Redise Pack installation, operation, and administration.
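For reference, in open-source Redis these persistence options map to redis.conf directives (Redise manages the equivalent settings through its UI and API rather than redis.conf). The fragment below is a minimal illustrative sketch; the `save` schedule shown is an example, not a recommendation.

```shell
# redis.conf fragment (open-source Redis only)

# RDB: example schedule -- snapshot if at least 1000 keys
# changed within 60 seconds
save 60 1000

# AOF: log every update operation
appendonly yes

# fsync policy:
#   always   = full persistence (fsync on every write)
#   everysec = at most ~1 second of exposure
#   no       = leave flushing to the file system
appendfsync always
```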
Redise and AOF persistence options
Redise includes several built-in enhancements that enable better performance in an AOF
configuration (with fsync on every write or fsync every second):
With Redise, it is easy to create a master-shard only cluster and have all shards
running on just a few nodes. Since Redis is single-threaded, running multiple shards
on the same node helps to better utilize a multi-core node. Additionally, this will also
help to determine how much throughput is possible from a single node and then scale
the cluster accordingly, if more throughput is needed.
Since the size of the AOF grows with every ‘write’ operation, an AOF rewrite process
is needed to control the size of the file and reduce the recovery time from disk. By
default (the threshold is configurable), Redis triggers a rewrite operation when the size of
the AOF reaches double the size of the RAM. In a write-intensive scenario, the AOF
rewrite operation can therefore block the main loop of Redis (as well as other Redis
instances that are running on the same node) from executing ongoing requests.
Redise uses a greedy AOF rewrite algorithm that attempts to postpone AOF rewrite
operations as much as possible without violating the SLA for recovery time (a
configurable parameter) and without reaching the disk space limit. The advantage of
this approach is that throughput with Redise is higher, especially in an ‘fsync every
write’ configuration, because the rewrite process is optimized.
The Redise storage layer allows multiple Redis instances to write to the same
persistent storage in a non-blocking way; for example, a busy shard that is constantly
writing to disk (such as while an AOF rewrite is in progress) will not block other
shards from executing durable operations.
Redise slaves and persistency options
When Redise is configured to work in a full HA mode with in-memory replication and data-
persistence, by default, data-persistence operations will be taken care of by the slave
shards only. This offloads the master shards from dealing with writing to persistent
storage and, as a result, significantly increases performance and reduces latencies. In
a write-intensive scenario, when the slave shards cannot keep up with the rate of write
operations, Redise will add shards to the cluster, using a re-sharding mechanism,
until the slaves can sustain the required write performance.
The Redise cluster also supports a configuration in which both master and slave are
dealing with durable write operations, but this can have a significant impact on
4 Fsync is the process in which a filesystem transfers (“flushes”) all modified data (and metadata) to
the underlying storage, making it persistent.
performance. Redise supports tunable consistency, in which any write operation can have
relaxed consistency characteristics or strong consistency characteristics:
Relaxed consistency refers to a configuration in which the master shard
acknowledges the write operation without waiting to receive acknowledgment from a
slave. This guarantees better performance but may also impose data loss if the
master shard fails after acknowledging the write operation to the client and before the
write was executed by the slave.
Strong consistency refers to a configuration in which the master waits for the slave to
execute the write operation before acknowledging to the client. If the slave is
configured for ‘fsync every write’ to persistent storage, this configuration
guarantees that the master shard will only acknowledge the client after the slave has
executed the write operation and the operation was written to persistent storage.
Strong consistency is achieved by following a write operation with the Redis WAIT
command.
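The pattern can be sketched with redis-cli. `WAIT <numreplicas> <timeout-ms>` blocks the issuing connection until the preceding writes have been acknowledged by at least the requested number of replicas (or the timeout expires) and returns the number of replicas that acknowledged. The endpoint and port below are placeholders.

```shell
# Strong-consistency sketch: a write followed by WAIT on the same
# connection. WAIT 1 100 blocks until at least 1 replica acknowledges
# the preceding write, or 100 ms elapse.
redis-cli -h <db-endpoint> -p <port> <<'EOF'
SET user:1001 active
WAIT 1 100
EOF
```

Running both commands in one redis-cli session matters: WAIT applies to the writes previously issued on the same connection.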
Benefits of running Redise with VMAX All Flash
The combination of Redise with VMAX All Flash offers a unique set of benefits relative to
other deployments such as a public cloud or direct-attached storage (DAS). The
combination includes the following benefits:
Full write persistency at sub-millisecond latencies
All writes to VMAX are acknowledged to the host as soon as they are registered with the
VMAX cache5. As a result, Redise can achieve full persistence (fsync for every write) with
Append Only File (AOF), while maintaining sub-millisecond transaction time.
This advantage is important because public cloud access to storage often has slower
protocols and even with solid-state drive (SSD) storage cannot achieve such low write
latencies. Similarly, DAS deployments with SSD often do not use a DRAM-based write-back
cache, and writes to SSD are slower than DRAM. The outcome is that such deployments
often cannot rely on full Redis persistency and an outage can result in some data loss.
Furthermore, because VMAX cache is persistent, VMAX can accept many data updates
(re-writes) prior to writing to the flash media (VMAX Write Folding feature). When VMAX
eventually performs the writes, it can aggregate many small writes into a single optimized
large write (VMAX Write Coalescing feature). VMAX All Flash design significantly reduces
the effect of SSD write-amplification, and avoids SSD garbage-collection delays that often
plague other storage solutions, limiting their SSD media lifespan and adding I/O latencies.
Availability, scale, and reduced complexity
VMAX All Flash provides an enterprise storage platform which is highly scalable, resilient,
available, and secure. It uses non-disruptive upgrades (NDU) and non-disruptive
migrations (NDM). Redise features, together with the use of shards, improve node
resource utilization while maintaining high performance and cluster high-availability.
5 VMAX All Flash cache is large (from 512GB-16TB, based on configuration), and DRAM-based,
mirrored, and persistent due to the vault module that protects the cache content in case of power
failure, and restores it when the system comes back up.
The combination of Redise with VMAX All Flash creates a deployment that is both high
performance and highly available. At the same time, it maintains overall deployment
simplicity and a smaller footprint, as fewer slaves are required to either add availability or
performance.
This is important because in a shared-nothing DAS deployment, where the disks are
located in each commodity server, there is a higher chance of node failure. To
accommodate this, more nodes are typically deployed with a higher level of replication.
This increases cluster complexity and requires additional copies of the database on the
slaves.
Improved geographical resiliency
Redise cluster can use the ‘replica-of’ feature for geographic resiliency. In this
configuration, another Redise cluster can be configured as a replica of the source cluster
in a different geo-region with a highly optimized compressed and secured replication link.
VMAX SRDF provides an optional alternative solution with consistent synchronous and
asynchronous replications, from as little as a few devices to spanning multiple arrays.
Storage-based replications simplify management (compared to many servers), provide a
consistent storage state across all the servers and multiple databases and applications,
and utilize either FC- or IP-based networks.
Fast creation of database snapshots
VMAX SnapVX can create gold copies of production environments, and create and
refresh test, development, or reporting environments. VMAX snapshots are created in
seconds, regardless of database size, and do not consume any host resources.
The VMAX snapshots also do not consume additional storage capacity (except for data
changes after the snapshot is taken) and can be linked to up to 1,024 targets. This allows
for a robust way to create and refresh copies of the primary database and introduce them
to other environments such as test and development.
Performance assessment tests
We used the following hardware and software configuration to assess the performance of
Redise with Dell PowerEdge R730 servers and VMAX All Flash storage.
Note: A large Redis cluster deployment will include additional hosts and cluster nodes. The goal
of the following tests was to achieve an understanding of the performance of a single node cluster
running with 16 shards. Linear scalability can be achieved by simply adding nodes and shards to
the cluster. The VMAX All Flash can scale to support many such nodes.
Physical configuration
Figure 3 shows the physical test environment. It consists of a single V-Brick VMAX
950FX, two Dell R730 servers (Redis client and Redise cluster node), network and SAN
switches.
Each of the servers uses two dual-port host bus adapters (HBAs) for a total of four initiators per host connected to the SAN switches. The servers use two networks; a 1 GbE public network for user connectivity and management, and a 10 GbE private network for inter-node communication. Jumbo frames and VLANs are configured for the 10 GbE network.
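As an illustrative sketch of the jumbo-frame configuration on the private network (the interface name and peer address are placeholders, not the lab's actual values):

```shell
# Enable jumbo frames on the bonded 10 GbE private interface
# (bond0 is an example interface name):
ip link set dev bond0 mtu 9000

# Verify the MTU end-to-end by sending a non-fragmenting ICMP payload
# of 8972 bytes (9000 minus 28 bytes of IP/ICMP headers):
ping -M do -s 8972 <peer-node-private-ip>
```

For the setting to be effective, every device in the private path (NICs, bond, VLAN interfaces, and switch ports) must carry the same jumbo MTU.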
Figure 3. Physical configuration
Table 1 describes the hardware and software components used for the performance
assessment tests.
Table 1. Performance assessment tests hardware and software components

VMAX 950FX (quantity: 1)
  Configuration: 1 x V-Brick (32 x SSD in RAID5); 1 x 2 TB device masked to Redis server; HYPERMAX OS 5977.1124
  Description: VMAX 950F All Flash single engine (V-Brick)

Redise cluster node (quantity: 1)
  Configuration: Dell R730; Red Hat Enterprise Linux (RHEL) 7.2; 2 x 12-core Intel Xeon E5-2690 v3 @ 2.60 GHz; 128 GB memory; 2 x 10 GbE network ports (bonded); 2 x dual-port 16 Gb HBAs (total of 4 ports); Redise Pack version 4.4.2-30; XFS file system; Dell EMC PowerPath™ 6.1
  Description: Redise cluster node configured for full write persistency (fsync for every write)

Redis client (quantity: 1)
  Configuration: Dell R730; Red Hat Enterprise Linux (RHEL) 7.2; 2 x 12-core Intel Xeon E5-2690 v3 @ 2.60 GHz; 128 GB memory; 2 x 10 GbE network ports (bonded); 2 x dual-port 16 Gb HBAs (not used)
  Description: Redis client used for running the Memtier Benchmark tool
Redis cluster node setup
We created a single 2 TB device and masked it to the Redise cluster node. We created a
single partition on the device and made sure it was aligned at sector 2048 (a 1 MB offset),
per VMAX best practices. We created an XFS file system on the partition and
mounted it as /mnt/redis, which was later used as the Redise data persistence location.
Note: We performed preliminary tests with a striped Logical Volume Manager (LVM) over multiple
smaller devices as opposed to a single device, which showed no performance advantage using
LVM. For that reason, a single device was used for the data persistency path on the Redise cluster
node.
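The device preparation described above can be sketched as follows. The PowerPath pseudo-device name (/dev/emcpowera) is an example placeholder; substitute the name assigned in your environment.

```shell
# Create a GPT label and a single partition starting at sector 2048
# (1 MB offset), per VMAX best practices:
parted -s /dev/emcpowera mklabel gpt mkpart primary 2048s 100%

# Create an XFS file system and mount it as the Redise data
# persistence location:
mkfs.xfs /dev/emcpowera1
mkdir -p /mnt/redis
mount /dev/emcpowera1 /mnt/redis
```

Adding the mount to /etc/fstab (or an equivalent mount unit) ensures the persistence path is available after a reboot, before the Redise services start.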
We installed Redise Pack using the downloaded RPM package. After installing the
package, we performed the setup using Redise management user interface (UI). For
persistent storage, we used the VMAX device mount point of /mnt/redis. Redis
ephemeral (non-persistent) storage was not used in our testing.
We created a new Redis database with a 50 GB RAM limit and 16 shards. We set data
persistence to Append Only File (AOF) – fsync every write. We entered a password for
the database, along with an available endpoint port number.
Note: For more information on Redise installation and setup, see A guide to Redise Pack
installation, operation, and administration on the Redis Labs website.
Redis client setup
For the Redis client, VMAX storage was unnecessary, as the client was used only to
generate transactions. The Redis client host communicated with the Redise cluster node
using the private 10 GbE bonded network. We installed the Memtier Benchmark on the client
host to generate various load patterns against the Redis database cluster node. The
Memtier Benchmark is an open source benchmarking tool developed by Redis Labs to
measure system performance. Among other things, the tool can control the ratio of GET
and SET operations for a specified test run and modify the pipeline size.
Note: For more information on Memtier Benchmark see memtier_benchmark: A High-Throughput
Benchmarking Tool for Redis & Memcached blog on the Redis Labs website.
Test overview
For this white paper, we performed a series of tests using Memtier Benchmark with varied
read/write ratios to assess the performance of a single node Redis cluster with VMAX All
Flash storage.
We ran the tests with write-to-read ratios of 1:1 (50% write, 50% read), 1:9 (10% write,
90% read), and 9:1 (90% write, 10% read). We used the pipeline parameter to set the
number of concurrent requests, with values of 20 or 1. We used object sizes of 100 and
6,000 bytes to simulate common use cases.
These parameters created four test cases:
1. 100 bytes object size with pipeline of 20
2. 100 bytes object size with pipeline of 1
3. 6KB object size with pipeline of 20
4. 6KB object size with pipeline of 1
Within each test case, we chose a number of clients and threads that best fit that
test case, and kept them constant while we ran the tests with the different read/write
ratios. This made the results easier to analyze.
Note: Changing the number of clients/threads within a test case would have shown better results
(especially with 90% read), but would have made the result tables harder to read and compare.
We ran each test for 900 seconds, and collected results once the Redise management UI
performance graphs showed a steady state. A steady state was reached in each test case
almost immediately.
Note: We collected test results primarily from the Redise management UI, focusing on database
latencies and throughput, as they best represent the end-user experience and make the results
more meaningful. In addition, we captured VMAX performance statistics from Unisphere to
provide storage I/O latencies as well.
Test results for 100-byte records
Table 2 shows the results for the test case using 100-byte records.
Table 2. Test results for 100-byte records

Write/read ratio | Pipeline | Clients | Threads | Data (bytes) | Runtime (sec) | OPS/sec (k) | Avg. latency (ms) | % write | Write RT (ms) | IOPS | Avg. write (KB) | MB/sec
1:1 | 20 | 70 | 1 | 100 | 900 | 662 | 0.84 | 100% | 0.1 | 41,167 | 4.8 | 194
1:9 | 20 | 70 | 1 | 100 | 900 | 670 | 0.71 | 100% | 0.1 | 35,200 | 3.9 | 133
9:1 | 20 | 70 | 1 | 100 | 900 | 640 | 0.86 | 100% | 0.1 | 40,233 | 5.7 | 223
1:1 | 1 | 20 | 30 | 100 | 900 | 513 | 0.89 | 100% | 0.1 | 42,402 | 4.6 | 188
1:9 | 1 | 20 | 30 | 100 | 900 | 551 | 0.68 | 100% | 0.1 | 26,869 | 3.8 | 97
9:1 | 1 | 20 | 30 | 100 | 900 | 513 | 0.89 | 100% | 0.1 | 41,817 | 5.3 | 213

Note: The first six columns are test parameters; OPS/sec and Avg. latency are reported by the Redise management UI; % write, Write RT, IOPS, Avg. write, and MB/sec are reported by Unisphere.
As the table shows, in all the tests the Redise management UI reported an average
response time of less than 1 ms. For VMAX arrays, the average write latency was about
100 microseconds (0.1 ms). As can be expected, 90% read tests always provided the
best operations per second within their test case (compared to the 1:1 or 9:1 tests). Write
IOPS, as measured from Unisphere, ranged from 26K to 42K, depending on the specific test.
Test results for 6 KB records
Table 3 shows the results for the test case using 6,000-byte (6 KB) records.
Table 3. Test results for 6,000-byte records

Write/read ratio | Pipeline | Clients | Threads | Data (bytes) | Runtime (sec) | OPS/sec (k) | Avg. latency (ms) | % write | Write RT (ms) | IOPS | Avg. write (KB) | MB/sec
1:1 | 20 | 70 | 1 | 6000 | 900 | 90 | 0.64 | 100% | 0.1 | 35,027 | 11.3 | 395
1:9 | 20 | 70 | 1 | 6000 | 900 | 128 | 0.19 | 100% | 0.1 | 17,305 | 7.2 | 120
9:1 | 20 | 70 | 1 | 6000 | 900 | 80 | 0.85 | 100% | 0.1 | 35,663 | 16 | 563
1:1 | 1 | 20 | 30 | 6000 | 900 | 111 | 0.70 | 100% | 0.1 | 26,972 | 15.8 | 413
1:9 | 1 | 20 | 30 | 6000 | 900 | 113 | 0.17 | 100% | 0.1 | 11,589 | 8.5 | 95
9:1 | 1 | 20 | 30 | 6000 | 900 | 107 | 0.88 | 100% | 0.2 | 35,069 | 21 | 687

Note: The first six columns are test parameters; OPS/sec and Avg. latency are reported by the Redise management UI; % write, Write RT, IOPS, Avg. write, and MB/sec are reported by Unisphere.
As the table shows, in all the tests the Redise management UI reported an average
response time of less than 1 ms. For VMAX arrays, the reported average write latency
was about 100 microseconds (0.1 ms). As can be expected, 90% read tests always
provided the best operations per second within their test case (compared to the 1:1 or 9:1
tests). Write IOPS, as measured from Unisphere, ranged from 11.5K to 35K, depending on
the specific test.
Sample performance graphs
Figure 4 shows a portion of the Redise management UI performance graphs. The straight
lines indicate a steady-state run, with the minimum and maximum values at the bottom of
the graph confirming that latency metrics are under 1 ms.
Figure 4. Sample performance graphs
Figure 5 shows a portion of the VMAX Unisphere performance graphs of the same test as above. The straight lines indicate a steady state run, where the latency is about 100 microseconds (0.1 ms) and IOPS are about 40,000.
Figure 5. VMAX Unisphere performance graphs
Test conclusion
For in-memory databases such as Redis, when the data requires persistence, storage
write latency is a critical factor. Redise Pack together with VMAX All Flash enables fsync
for every write (full data persistency) while still maintaining sub-millisecond operation
latency. Latency was kept under 1 ms both in the smaller 100-byte tests, which are more
typical of Redis workloads, and in the larger 6,000-byte tests.
Note: The performance numbers achieved are not VMAX All Flash performance limits; they
reflect only the test environment and configuration used during the tests.
Using SnapVX to create and restore Redise cluster gold copies
16 Redis Enterprise and VMAX All Flash Performance Assessment Tests and Best Practices White Paper
Using SnapVX to create and restore Redise cluster gold copies
The purpose of this test is to show how to create and restore storage
snapshots of the Redise cluster using VMAX SnapVX.
VMAX SnapVX snapshots are created and restored in seconds, regardless of
data size. Each snapshot gets a user-defined name and an optional
expiration (time-to-live). Because snapshots are pointer-based, they do not
consume any capacity when taken; only changes made after the snapshot is
taken add capacity.
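The capacity behavior described above can be illustrated with a toy model
of pointer-based snapshots. This is our own simplified sketch of the
general redirect-on-write idea, not an implementation of SnapVX internals:

```python
# Toy model of pointer-based snapshots: taking a snapshot copies only
# pointers to blocks, so it consumes no data capacity by itself.
# Writes after the snapshot give the live volume new data while the
# snapshot keeps referencing the old contents.

class Volume:
    def __init__(self, blocks):
        # block id -> data for the live volume
        self.blocks = dict(enumerate(blocks))
        self.snapshots = {}

    def snapshot(self, name):
        # A snapshot is just a set of pointers to the current blocks
        self.snapshots[name] = dict(self.blocks)

    def write(self, block_id, data):
        # Only post-snapshot changes like this one add capacity
        self.blocks[block_id] = data

    def restore(self, name):
        # Restoring points the live volume back at the snapshot
        self.blocks = dict(self.snapshots[name])

vol = Volume(["A", "B", "C"])
vol.snapshot("gold")
vol.write(1, "B2")   # change after the snapshot
vol.restore("gold")
print(vol.blocks[1])  # -> B (the pre-snapshot value)
```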
Some key SnapVX benefits include:
- Create Redise cluster "gold" copies as backups, or as a save-point prior
  to an operation such as a data load or another change to the database
  that may require rolling back if something goes wrong.
- Link SnapVX snapshots to a matching set of target devices, and use these
  devices as a source for test, development, or reporting environments that
  can be refreshed periodically from production.
SnapVX snapshots are inherently consistent and therefore can be taken while
the Redis workload is running. You can snap all the devices simultaneously,
regardless of whether they are located in a single array or span multiple
VMAX arrays.
Test overview
The following test shows an example of the steps involved in creating a
point-in-time snapshot of an active Redise cluster whose shards point to
the /mnt/redis mount point, and then restoring the database from that
snapshot.
While the test case used a single Redise cluster node, it can easily be
applied to many nodes, as SnapVX can operate on a group of LUNs at once,
maintaining consistency across them during the snapshot operation.
SnapVX can be managed using the VMAX Solutions Enabler Command Line Interface
(CLI), Unisphere UI, or REST API. In the following examples, CLI is used.
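For scripted environments, the CLI invocations used in the steps below can
be assembled programmatically. The following helper is hypothetical (the
function name and structure are our own); it uses only the symsnapvx flags
that appear in this paper's examples:

```python
# Hypothetical helper that assembles the Solutions Enabler symsnapvx
# command lines used in this test. Only flags shown in the paper's own
# examples (-sg, -name, -snapshot_name, establish/restore/verify,
# -restore) are used.

def symsnapvx_cmd(sg, action, name=None, snapshot_name=None, extra=None):
    parts = ["symsnapvx", "-sg", sg]
    if snapshot_name:
        parts += ["-snapshot_name", snapshot_name]
    if name:
        parts += ["-name", name]
    parts.append(action)
    if extra:
        parts.append(extra)
    return " ".join(parts)

# The three operations used in the test steps:
establish = symsnapvx_cmd("dbcf0217_sg", "establish",
                          name="redis_217_snap")
restore = symsnapvx_cmd("dbcf0217_sg", "restore",
                        snapshot_name="redis_217_snap")
verify = symsnapvx_cmd("dbcf0217_sg", "verify",
                       snapshot_name="redis_217_snap", extra="-restore")

print(establish)
# -> symsnapvx -sg dbcf0217_sg -name redis_217_snap establish
```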
Test steps
1. To simulate existing user data we created two Redis keys, Company and
Product, with values of Dell and VMAX_All_Flash, respectively:
# redis-cli -p 12345 -a <pwd> mset Company Dell Product VMAX_All_Flash
OK
# redis-cli -p 12345 -a <pwd> mget Company Product
1) "Dell"
2) "VMAX_All_Flash"
2. To simulate activity during the snapshot operation (which is taken while
the database is active), a Memtier Benchmark workload was run. We then
created a snapshot. The symsnapvx command specifies the storage group that
includes the source devices for the snapshot (dbcf0217_sg in this case,
which contains the /mnt/redis mount-point device). The user also provides a
snapshot name,
which in this case is redis_217_snap. Alternatively, the name can describe the
cluster name or application that is being snapped, the purpose of the snapshot,
and so on. The establish command tells the CLI to create a new snapshot:
# symsnapvx -sg dbcf0217_sg -name redis_217_snap establish
3. After creating the snapshot, we deleted both keys from the Redis database,
simulating a failure:
# redis-cli -p 12345 -a <pwd> del Company Product
(integer) 2
# redis-cli -p 12345 -a <pwd> mget Company Product
1) (nil)
2) (nil)
4. After stopping the Redis application and unmounting the file system, we restored
the database using the snapshot created earlier. Restoring the database enabled
us to access the deleted keys, simulating fast access to data prior to the failure.
Note: Before restoring the snapshot, the Redis database was stopped and the
file system unmounted, to avoid any remaining open files or locks on the
file system that could corrupt the restored data.
Note: VMAX snapshots are always protected, and can be re-used as many times as
necessary regardless of host changes after the snapshot is restored or linked. Even if the
user forgot to unmount the file system first, they can repeat the process correctly.
a. We restored the snapshot with the deleted keys:
# symsnapvx -sg dbcf0217_sg -snapshot_name redis_217_snap restore
b. We verified that the devices had been restored:
# symsnapvx -sg dbcf0217_sg -snapshot_name redis_217_snap verify -restore
All devices in the group 'dbcf0217_sg' are in 'Restored' state.
c. We remounted the file system:
# mount -t xfs -o noatime,nodiratime,nobarrier,nodiscard,nodev /dev/emcpowerbu1 /mnt/redis
d. We restarted Redis:
# redis_ctl start-all
# supervisorctl start all
e. We verified that the deleted keys had been restored:
# redis-cli -p 12345 -a <pwd> mget Company Product
1) "Dell"
2) "VMAX_All_Flash"
This step confirmed that the snapshot was restored successfully, including the
deleted key values.
Test conclusion
SnapVX makes it simple to create multiple space-efficient, point-in-time,
storage-based snapshots of a running Redise cluster.
Summary
The tests and guidelines provided in this paper demonstrate the ability of
Redise Pack, together with VMAX All Flash, to provide a high-performance
cluster environment that is highly available and enriched with data
services from both the database and the storage. The combination also
provides unique advantages in performance, scale, and availability that are
hard to achieve in other environments.
References
Dell EMC documentation
VMAX All Flash data sheet
Dell EMC Product Online Support
Dell EMC VxBlock™ System 740 and VBlock System 740
Redis documentation
Redislabs - Products
Redise Pack download page
A guide to Redis Pack installation, operation, and administration
Redise Pack Documentation – Database Persistence
EBook – Redis in Action – Persistence Options
Memtier Benchmark documentation
Redis Labs / Memtier Benchmark blog
Redis Labs / Memtier Benchmark page