CLARiiON Foundations R29 (slide transcript, posted 19-Jan-2016)
© 2009 EMC Corporation. All rights reserved.
CLARiiON Foundations
Course Objectives
Upon completion of this course, you will be able to:
Identify the models, components, and basic architecture of a CLARiiON
Identify supported CLARiiON RAID types
Identify CLARiiON data integrity features
Identify CLARiiON data availability features
Identify CLARiiON management options
Identify CLARiiON storage provisioning objects
Module 1: CLARiiON Models, Components, and RAID Types
Upon completion of this module, you will be able to:
Identify the models, components, and basic architecture of a CLARiiON disk array
Identify the operation of CLARiiON components
Identify supported CLARiiON RAID types
CLARiiON Models – Timeline
2002: CX200, CX400, CX600
2003: CX300, CX500, CX700
2005: CX300i, CX500i
2006: CX3-20, CX3-40, CX3-80
2007: CX3-10c, CX3-20c, CX3-20f, CX3-40c, CX3-40f
2008: CX4-120, CX4-240, CX4-480, CX4-480P, CX4-960
Mid-tier Storage: Defined
Non-disruptive everything – upgrades, operation, and service
Predictable performance
Availability
Functionality – replicate any amount of data, across any distance, without impact to service levels
Flexibility – capacity, performance, multi-protocol connectivity, workloads, etc.
Manage service levels – centralized management of the storage environment
CLARiiON CX4 Models 120, 240, 480, and 960
Latest generation, full Fibre Channel networked storage running FLARE Operating Environment
Flexible drive and I/O configurations
UltraFlex I/O modules to grow connectivity
– Populate with different I/O options (FC or iSCSI)
Scalable processing power
– Dual- or quad-core processors support advanced storage-based functionality
Industry-leading performance and availability
Cross-generational software support
Non-disruptive hardware replacement and software upgrades
CLARiiON CX4 UltraFlex I/O Modules
Provides the Storage Processor with 4 Fibre Channel (FC) ports or 2 iSCSI ports
Any FC port can be configured as either front-end or back-end connection
Each FC port can operate at 1, 2, 4, or 8 Gb/s
iSCSI ports can operate at 10/100/1000 Mb/s or 10 Gb/s speeds
CLARiiON CX4 Series Architecture
[Architecture diagram] Two Storage Processors in the SPE, linked by the CLARiiON Messaging Interface (CMI) over a multi-lane PCI-Express bridge link. Front end: 1, 2, 4, or 8 Gb/s FC I/O modules and 10/100/1000 Mb/s or 10 Gb/s iSCSI I/O modules. Back end: 2/4/8 Gb Fibre Channel loops to DAE3P enclosures, each with dual 4 Gb/s Link Control Cards (LCCs).
Modular Building Blocks
DAE3P Disk Enclosure
– Supports up to 15 low-profile FC, SATA II, low-power SATA II, or EFD disk drives
– Drive fillers must be installed in empty slots
Uses same chassis and power supply/cooling module as DAE2P enclosure
Uses same cables as DAE2P enclosure
Two 4 Gb/s Link Control Cards (LCCs)
CLARiiON SATA II Disks
Lower cost per MB for backup or bulk storage
Alternative to Fibre Channel disks
Same software capability as FC
Full HA features
– Dual-ported access
– Redundant power and LCCs
– Hot swap capability
Uses FC interconnect
– Mix FC, ATA, and SATA II in the same cabinet (not within the same DAE3P)
– First DAE3P must contain Fibre Channel drives (except the CX4-120 and CX4-240, which can use SATA drives for vault disks)
Enterprise Flash Drives
Enterprise Flash Drives (EFD)
– Terminology consistent with Symmetrix
Supported on all CLARiiON CX4 models
Drives can be configured on different buses
– Drives can be used in any 4 Gb/s enclosure
– EFDs can be mixed with FC drives in the same enclosure
– EFDs and SATA drives must be in separate enclosures
Storage Processor Introduction
Storage Processors are configured in pairs for maximum availability
One or two CPUs per Storage Processor board
UltraFlex I/O modules (FC and iSCSI protocols)
Dual-ported Fibre Channel disk drives at the back end
– One, two, four, or eight Arbitrated Loop connections
Mirrored write cache
– Uses the CLARiiON Messaging Interface (CMI)
– Persistent cache
– Write caching accelerates host writes
Ethernet connection for management
[Diagram: Storage Processor with mirrored cache, dual CPUs, dual FC-AL back-end connections to LCCs, a CMI link, and an FC or iSCSI I/O module]
CLARiiON RAID Types
Individual disk
– No protection
– JBOD
RAID-0: Stripe
– No protection
– Performance
RAID-1: Mirroring
– Some performance gain by splitting read operations
– Protection against single disk failure
– Minimal performance hit during failure
CLARiiON RAID Types (Cont.)
RAID-1/0: Striped Mirrors
– Performance of stripes combined with split read operations
– Protection against single disk failure
– Minimal performance hit in failure mode
RAID-3: Striped Elements
– Each data element striped across disks; parity kept on the last disk in the RAID Group
– Extremely fast read access from the disk
– Used for streaming media
– Parity protection against single disk failure
– Performance penalty during failure
CLARiiON RAID Types (Cont.)
RAID-5: Striping with Parity
– Performance of striping
– Protection from single disk failure
– Parity distributed across member drives within the RAID Group
– Write performance penalty
– Performance impact if a disk fails in the RAID Group
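The single-disk protection described above comes from XOR parity: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be recomputed from the survivors. A minimal sketch (block and stripe sizes are illustrative, not CLARiiON defaults):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# A 4+1 RAID 5 stripe: four data blocks plus one parity block
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)

# Simulate losing data disk 2 and rebuilding it from survivors + parity
survivors = [blk for i, blk in enumerate(data) if i != 2]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == b"CCCC"
```

The same XOR property explains the write penalty: every small write must also read and update the parity block for its stripe.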
CLARiiON RAID Types (Cont.)
RAID-6: Dual Distributed Parity – High Fault Tolerance
– Protection from double drive failures
– Parity distributed across member drives using diagonal and row parity within the RAID Group
– Write performance penalty
– Configure for availability, NOT performance
Hot spare
– Takes the place of a failed disk within a RAID Group
– Must have equal or greater capacity than the disk it replaces
– Can be located anywhere except on vault disks
– When the failed disk is replaced, the hot spare copies the data back to the replacement disk and returns to the hot spare pool
Module 1 Summary
Key points covered in this module:
Models, components, and basic architecture of a CLARiiON disk array
Operation of CLARiiON components
Supported CLARiiON RAID types
Module 2: CLARiiON Features
Upon completion of this module, you will be able to:
Identify CLARiiON high availability design
Identify CLARiiON data integrity features
Identify CLARiiON data availability features
Identify CLARiiON performance features
Identify CLARiiON power saving features
Flexible, High Availability Design
Fully redundant architecture with a 64-bit operating system
Multi-protocol support
Continuous dual I/O paths with non-disruptive failover
Leader in data integrity
– Mirrored write cache
– Destage of write cache to disk upon power failure
– SNiiFF Verify
– Background Verify – per RAID Group
No single point of failure, modular architecture
Fibre Channel, SATA II, EFD, and ATA disk drives
Flexibility
– Individual disk
– RAID levels 0, 1, 1/0, 3, 5, 6
– Mix drive types
– Mix RAID levels
– Thin LUN provisioning
Data Integrity
Mirrored write cache
RAID protection
Vault
CLARiiON disk sector format
Mirrored Write Caching
Write cache size is user configurable and is allocated in pages
How much write cache is used by each SP is dynamically adjusted based on workload
All write requests to a given SP are copied to the other SP
Data integrity is maintained through hardware failure events
CMI used to communicate between the SPs
Persistent cache support
[Diagram: each SP holds its own write cache plus a mirror of the peer SP's write cache, along with read cache, connected over the CMI]
Data Integrity – Mirrored Write Cache Example
[Diagram: a host WRITE lands in SPA's write cache, is mirrored to SPB's write cache over the peer bus (CMI, PCI Express), and an ACK is then returned to the host; each SP later destages to the disk drives]
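The mirrored write cache flow shown above can be sketched as a small model: a write lands in the owning SP's cache, is copied to the peer SP over the CMI, and only then is the host acknowledged. The class and method names below are illustrative, not CLARiiON internals:

```python
class StorageProcessor:
    def __init__(self, name):
        self.name = name
        self.cache = {}      # local write cache: lba -> data
        self.mirror = {}     # mirror of the peer SP's write cache
        self.peer = None

    def host_write(self, lba, data):
        self.cache[lba] = data          # cache the write locally
        self.peer.mirror[lba] = data    # copy to the peer over the "CMI"
        return "ACK"                    # acknowledge only after both copies

spa, spb = StorageProcessor("SPA"), StorageProcessor("SPB")
spa.peer, spb.peer = spb, spa

assert spa.host_write(100, b"0101101") == "ACK"
assert spb.mirror[100] == b"0101101"   # data survives an SPA failure
```

Because the acknowledgement is sent only after both copies exist, a single SP failure never loses acknowledged data.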
Data Integrity – The Vault
Reserved area found on specific protected disks
– Drives 0-4 in the first enclosure on CX series
– Drives 0-9 in the first enclosure on FC series
Hidden from hosts (and administration utilities)
RAID protected
Holds write cache content in the event of a failure
Data Integrity – Persistent Cache (CX4 Series)
Write cache data is maintained under these scenarios:
– Non-disruptive upgrades
– Single Storage Processor (SP) restart
– Single Storage Processor (SP) removed
– SP or I/O module replacement or repair
– Standby Power Supply (SPS) single SP hard fault or transient hardware failure
– Power Supply failure
Data Integrity – Vault Operation: Power Failure (CX, CX3, CX4)
• SP detects failure of required hardware
• SP disables cache and copies cache content into the vault
[Diagram sequence: POWER FAILURE → CACHE DISABLED → CACHE DUMPED (write cache copied over the back-end Fibre loops to THE VAULT) → ARRAY SHUTDOWN]
Data Integrity – Vault Operation: Hardware Failure (CX, CX3, CX4)
• SP detects failure of required hardware
• SP disables cache and copies cache content into the vault
[Diagram sequence: FRU FAILURE → CACHE DISABLED → CACHE DUMPED (write cache copied over the back-end Fibre loops to THE VAULT) → SYSTEM CONTINUES]
Data Integrity – RAID Protection
CLARiiON offers a choice of redundant RAID levels
– 1, 1/0, 3, 5, and 6
Data protected by parity or mirroring
Other RAID Group types are non-redundant
– RAID 0, single disk, hot spare
Data Integrity – Sector Format
Disks formatted with 520 bytes per sector
– Only 512 bytes seen by hosts
Extra 8 bytes include a 2-byte longitudinal redundancy checksum (LRC)
[Diagram: 512 bytes of user data followed by 8 bytes of patented high-availability extensions – checksum, write stamp, time stamp, and shed stamp]
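The slides do not give the actual LRC algorithm, but the idea of a 2-byte longitudinal redundancy checksum can be illustrated by XOR-folding the 512 data bytes 16 bits at a time, the classic LRC construction. This is only a sketch of the concept, not EMC's proprietary implementation:

```python
def lrc16(sector: bytes) -> int:
    """Illustrative 2-byte longitudinal checksum: XOR of all 16-bit words."""
    assert len(sector) == 512
    checksum = 0
    for i in range(0, 512, 2):
        word = (sector[i] << 8) | sector[i + 1]
        checksum ^= word
    return checksum

sector = bytes(range(256)) * 2          # 512 bytes of sample user data
stored = lrc16(sector)                  # would live in the extra 8 bytes

corrupted = bytearray(sector)
corrupted[7] ^= 0x01                    # flip a single bit on "disk"
assert lrc16(bytes(corrupted)) != stored   # corruption is detected
```

Any single-bit corruption flips exactly one bit of an XOR checksum, so it is always detected; the write/time/shed stamps guard against other failure modes such as stale or misplaced writes.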
Two Levels of Verify
Sniff Verify
– Low-priority check of an entire storage system
– Very low impact on performance
Background Verify (BV)
– High-priority check of a LUN or a RAID Group
– May be initiated automatically by the system (non-vol verify)
– May be initiated manually by the operator
– Once started, it runs to completion and cannot be interrupted
Data Availability
Hardware redundancy
RAID protection
Global hot spare disks
Error reporting capability
Data Availability – Hardware Redundancy
No single point of failure
Dual or n+1 components
– Dual-ported disks
– Dual LCCs (Link Control Cards)
– Dual SPSs (Standby Power Supplies)
– Dual PSUs (Power Supply Units)
– Dual SPs (Storage Processors)
– n+1 fans in fan modules
– Dual CMI channels on CX, CX3, and CX4 Series
Data Availability – RAID
RAID Group can survive the loss of a disk
– RAID 1, 1/0, 3, and 5
– RAID 6 can survive two disk failures
RAID 1/0 may survive the loss of multiple disks
– Mirrored data (cannot survive the loss of a mirrored pair)
Data may be reconstructed from the remaining disks
– Rebuilt onto a new disk or hot spare if one exists
Data Availability – Global Hot Spare Disks
May take the place of any failed disk
– A single hot spare must be as large as the largest disk in the array
Hot spare sizes may be mixed in the array
May not be a vault disk
Proactive hot sparing
Hot spare operation
– Disk drive in a protected RAID Group fails
– Data is rebuilt onto the hot spare (if available)
– Failed drive is replaced
– Once the hot spare rebuild is complete, data is copied to the replaced disk
– Hot spare returns to the "ready" state
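The hot spare operation above can be sketched as a small state machine; the state names and method names are illustrative, not FLARE internals:

```python
class HotSpare:
    """Lifecycle of a global hot spare, following the slide's steps."""

    def __init__(self):
        self.state = "ready"

    def on_disk_failure(self):
        self.state = "rebuilding"      # data rebuilt onto the hot spare

    def on_rebuild_complete(self):
        self.state = "in_use"          # serving in place of the failed disk

    def on_drive_replaced(self):
        self.state = "copying_back"    # data copied to the replacement disk

    def on_copy_back_complete(self):
        self.state = "ready"           # returns to the hot spare pool

hs = HotSpare()
hs.on_disk_failure()
hs.on_rebuild_complete()
hs.on_drive_replaced()
hs.on_copy_back_complete()
assert hs.state == "ready"
```

The key point the cycle captures is that the spare is only a temporary stand-in: once the copy-back finishes, it is available again for the next failure.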
Data Availability – Error Reporting
Event Monitor has several notification modes
Integrated with Navisphere Manager
CLARalert/onALERT
Allows for proactive repair of problems
Alerts (Needs Attention)
CLARiiON Array Performance Features
Cache
Back-end Fibre Channel loops
Dual Storage Processors (SPs)
Array Performance – Cache Benefits
Burst smoothing – absorbs bursts of writes into memory, avoids disks becoming a bottleneck
Locality – merge several (RAID 5) writes to the same disk area (stripe) into a single operation
Write caching is optimized for burst smoothing
Read caching is optimized for immediacy
Array Performance – Cache Configuration
Array level
– Administrator enables/disables write cache
– Allocates amounts of read and write cache
SP level
– Administrator enables/disables read cache
LUN level
– Administrator enables/disables read and write cache
Array Performance – Cache Organization
Pages – 2 KB, 4 KB, 8 KB, or 16 KB
– Smallest portion of memory allocated in cache
– Page dedicated to one I/O
– Global – read and write cache on both SPs
– Includes the additional 8 bytes per sector
– Best practice: match the filesystem I/O size
Array Performance – Flushing Write Cache
To maintain free space in write cache, pages are flushed from write cache to the drives
Three levels of flushing:
– Idle flushing
– Watermark processing
– Forced flushing
For maximum performance:
– Provide a "cushion" of unused cache for I/O bursts
– Minimize/avoid forced flushes
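The three flushing levels can be sketched as a simple decision function: idle flushing when the array is quiet, watermark processing once the cache fills past a high watermark, and forced flushing when the cache is completely full. The threshold value is an illustrative assumption, not a FLARE default:

```python
HIGH_WATERMARK = 0.80   # illustrative threshold, not a FLARE default

def flush_mode(dirty_fraction, io_idle):
    """Pick a flush level from cache fullness and array activity."""
    if dirty_fraction >= 1.0:
        return "forced"      # cache full: incoming host writes must wait
    if dirty_fraction > HIGH_WATERMARK:
        return "watermark"   # flush down toward the low watermark
    if io_idle and dirty_fraction > 0:
        return "idle"        # opportunistic flushing while the array is quiet
    return "none"            # keep the "cushion" for the next I/O burst

assert flush_mode(1.0, io_idle=False) == "forced"
assert flush_mode(0.9, io_idle=False) == "watermark"
assert flush_mode(0.3, io_idle=True) == "idle"
```

The performance advice follows directly: keeping `dirty_fraction` below the point where "forced" is returned means host writes never block behind flushes.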
Array Performance – Read Cache
Read cache predicts the next data to be requested, based on the currently requested data
– Fetches data in advance: prefetch
Two types of prefetching
– Constant: prefetches a fixed amount of data
– Variable: amount of data prefetched is a multiple of the size of the host request
A Least Recently Used (LRU) algorithm determines which data is discarded when read cache is full
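The two ideas above, variable prefetch and LRU eviction, can be sketched briefly. The multiplier, cap, and cache capacity are illustrative assumptions, not CLARiiON defaults:

```python
from collections import OrderedDict

def variable_prefetch(request_blocks, multiplier=4, max_prefetch=512):
    """Prefetch a multiple of the host request size, capped at a maximum."""
    return min(request_blocks * multiplier, max_prefetch)

class LRUReadCache:
    """Read cache that evicts the least recently used page when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, page, data=None):
        if page in self.pages:
            self.pages.move_to_end(page)   # mark as most recently used
            return self.pages[page]
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False) # evict least recently used
        self.pages[page] = data
        return data

assert variable_prefetch(16) == 64       # 4x the host request

cache = LRUReadCache(capacity=2)
cache.access(1, "a")
cache.access(2, "b")
cache.access(1)                          # touch page 1; page 2 becomes LRU
cache.access(3, "c")                     # evicts page 2
assert 2 not in cache.pages and 1 in cache.pages
```

The LRU policy keeps recently read (and recently prefetched) data resident, which is exactly what makes sequential read-ahead pay off.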
Array Performance – Back-end Fibre Channel
Number of loops depends on the model
– Up to eight loops on higher-end models
– Single loop on lower-end models
2 Gb/s or 4 Gb/s speeds
Accessible by both SPs
Simultaneously active
Array Performance – Dual SPs
Front-end FC connections at 1, 2, 4, or 8 Gb/s rates
Front-end iSCSI connections at 10/100/1000 Mb/s or 10 Gb/s
Uses SFPs for front-end and back-end connections
– SFPs are built into the back-end cable
Auto sensing
Simultaneously active
LUNs may be balanced between SPs– Auto
– Assign to default owner
CLARiiON Power Saving Features
Storage system power savings
Storage pool power savings
Drive spin-down
CLARiiON Power Saving
CLARiiON Power Saving – Drive Spin Down
Disk drives can transition to a low-power state when idle
– In FLARE R29, the spindle motor spins down (0 RPM)
– The electronics remain at full power
– Power savings of 55% – 60% when drives are spun down
Drives in slots 0 – 4 (vault drives) are not eligible for spin down
Drive spin-down functionality is only available on CX4 arrays
This feature is not available with layered features
– MetaLUNs, thin LUNs, WIL, RLP, CPL, etc.
– Mutually exclusive on a RAID Group basis
Only drives qualified for the spin-down low-power state participate in power savings
CLARiiON Power Saving – RAID Groups
CLARiiON Power Saving – Thin Pools & EFDs
Thin Pools use storage resources better by providing on-demand storage
– Allows storage to be consumed only when needed
– Reduces power cost by optimizing storage system resources
Enterprise Flash Drives use less power than standard drives
– No moving parts means less power is needed
Module 2 Summary
Key points covered in this module:
CLARiiON high availability design
CLARiiON data integrity features
CLARiiON data availability features
CLARiiON performance features
CLARiiON power saving features
Module 3: CLARiiON Management Options
Upon completion of this module, you will be able to:
Identify CLARiiON management software suite
Identify CLARiiON management options
Identify CLARiiON storage provisioning objects
EMC Navisphere Management Software
Centralized management for CLARiiON storage throughout the enterprise
Allows user to quickly adapt to business changes
Key features
– Multiple server support
– Management framework integration
Navisphere software suite
– Navisphere Manager
– Navisphere Secure CLI
CLARiiON Management Options
Navisphere Secure CLI (Command Line Interface)
– naviseccli commands are entered from the command line and can perform all management functions
Navisphere GUI (Graphical User Interface)
– Navisphere Manager is the graphical interface for all management functions on the CLARiiON array
Navisphere Service Taskbar
– Hardware and software registration and configuration of the array
Navisphere Wizards
– Easy-to-use wizards for management of the CLARiiON
CLARiiON Secure CLI
Secure CLI syntax
– naviseccli -h <SP address> <commands>
Entered from the command line
Performs all management functions
Navisphere Manager
Discover
– Discovers all managed CLARiiON systems
Monitor
– Shows status of storage systems
– Provides centralized alerting
Apply and provision
– Configures volumes and assigns storage to hosts
– Configures snapshots and remote mirrors
– Sets system parameters
Report
– Provides extensive performance statistics via Navisphere Analyzer
Navisphere Service Taskbar (NST)
Navisphere Wizards
Storage Configuration and Provisioning
Understanding application and server requirements and planning configuration is critical
Storage pools are a collection of physical disks
– RAID protection level is assigned to all disks within the pool
– Thin Pools
– RAID Groups
LUNs are created from space within a storage pool
Storage groups are collections of LUNs that a host or group of hosts can access
Which RAID Level is Right for You?
RAID 0 – data striping
– No parity protection, least expensive storage
– Applications using read-only data that require quick access, such as data downloading
RAID 1 – mirroring between two disks
– Excellent availability, but expensive storage
– Transaction logging or record-keeping applications
RAID 1/0 – data striping with mirroring
– Excellent availability, but expensive storage
– Provides the best balance of performance and availability
RAID 3 – data striping with a dedicated parity disk
RAID 5 – data striping with parity spread across all drives
– Very good availability and inexpensive storage
RAID 6 – dual distributed parity across drives
– Suitable for larger RAID Groups where data availability is a priority
Mixed RAID types are supported in the same chassis
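The "expensive" versus "inexpensive" storage trade-off above is just usable capacity. A quick sketch of the usable fraction per RAID level (the formulas are standard RAID arithmetic, not CLARiiON-specific figures):

```python
def usable_fraction(raid, n_disks):
    """Fraction of raw capacity available to hosts for a RAID Group."""
    if raid in ("1", "1/0"):
        return 0.5                       # every block is mirrored
    if raid in ("3", "5"):
        return (n_disks - 1) / n_disks   # one disk's worth of parity
    if raid == "6":
        return (n_disks - 2) / n_disks   # two disks' worth of parity
    return 1.0                           # RAID 0 / individual disk

assert usable_fraction("5", 5) == 0.8    # 4 data + 1 parity
assert usable_fraction("6", 8) == 0.75   # 6 data + 2 parity
assert usable_fraction("1/0", 8) == 0.5  # half the disks hold mirrors
```

This is why RAID 6 is pitched at larger RAID Groups: the two-disk parity overhead shrinks as the group grows, while the double-failure protection stays.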
Creating Storage Pools
RAID protection levels are set when the storage pool is created
Physical disks can be part of only one storage pool
– Drive types cannot be mixed in a RAID Group
May include disks from any enclosure
RAID types may be mixed in an array
Some storage pools can be expanded
Users do not access storage pools directly
[Diagram: a 5-disk RAID 5 RAID Group]
Storage Pools – Thin
Thin Pools
– Collection of disks dedicated for use by thin LUNs
– Can contain a few disks or hundreds of disks
– Consist of any supported Fibre Channel or SATA disk drive
– RAID 5 and RAID 6 only
– Smallest pool size is three drives for RAID 5 and four drives for RAID 6
– Monitored using “threshold alerts”
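The thin pool sizing rules above can be captured in a small validation helper; the function name is illustrative:

```python
def validate_thin_pool(raid, n_drives):
    """Check a proposed thin pool against the slide's rules:
    RAID 5 and RAID 6 only, with minimums of 3 and 4 drives."""
    minimums = {"5": 3, "6": 4}
    if raid not in minimums:
        raise ValueError(f"RAID {raid} is not supported for thin pools")
    return n_drives >= minimums[raid]

assert validate_thin_pool("5", 3)        # smallest valid RAID 5 pool
assert not validate_thin_pool("6", 3)    # RAID 6 needs at least 4 drives
```

Encoding the rules this way makes the constraint explicit: a pool request for RAID 1/0, for example, fails outright rather than silently creating an unsupported configuration.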
Storage Pools – RAID Groups
RAID Groups
– Limited to 16 disks
– All capacity is allocated
– Consist of any supported Fibre Channel, SATA, or EFD drives
– All existing RAID types are supported
– Defragmentation is allowed except for RAID 6 Groups
Creating a Storage Pool – General
Creating a LUN (Binding)
Creating a LUN (referred to as binding) is the process of building LUNs onto storage pools
Limited number of LUNs per storage pool
Limited number of LUNs per CLARiiON array
LUNs are assigned to one SP at a time
– The SP owns the LUN
– The SP manages the RAID protection of the LUN
– The SP manages access to the LUN
LUNs occupy a part of each disk in the storage pool
– Same sectors on each disk
Create LUN Operations – Setting Parameters
Fixed LUN parameters
– Disk numbers, RAID type, LUN number, element size
– Cannot be changed without unbinding and rebinding
Variable parameters
– Cache enable, rebuild time, verify time, auto assignment
– Can be changed without unbinding
– Thin LUNs have cache property settings
Bind operation
– Fastbind is the almost instantaneous bind achieved on a factory system or implemented in the latest code
Storage Groups
Storage Groups are a feature of Access Logix and are used to implement LUN masking
Storage Groups define the LUNs each host can access
– A Storage Group contains a subset of LUNs grouped for access by one or more hosts and inaccessible to other hosts
– Without Storage Groups, all hosts can access all LUNs
– Access Logix controls which hosts have access to a Storage Group
– Hosts access the array and provide information through the Initiator Registration Records process
Storage Group planning is required
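The LUN masking behavior described above can be sketched as a simple model: each Storage Group pairs a set of LUNs with the hosts allowed to see them, and a host's visible LUNs are the union over its groups. The class and names are illustrative, not the Access Logix data model:

```python
class StorageGroup:
    def __init__(self, name):
        self.name = name
        self.luns = set()    # LUN numbers in this group
        self.hosts = set()   # hosts granted access to this group

def visible_luns(host, groups):
    """LUNs a host can access: the union over its Storage Groups."""
    luns = set()
    for group in groups:
        if host in group.hosts:
            luns |= group.luns
    return luns

prod = StorageGroup("prod")
prod.luns, prod.hosts = {0, 1}, {"web01"}

test = StorageGroup("test")
test.luns, test.hosts = {2}, {"dev01"}

assert visible_luns("web01", [prod, test]) == {0, 1}
assert visible_luns("dev01", [prod, test]) == {2}   # masked off prod LUNs
```

A host in no Storage Group sees nothing, which is the inverse of the unmasked case the slide warns about, where every host sees every LUN.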
Module 3 Summary
Key points covered in this module:
CLARiiON management software suite
CLARiiON management options
CLARiiON storage provisioning objects
Course Summary
Key points covered in this course:
Models, components, and basic architecture of a CLARiiON
Supported CLARiiON RAID types
CLARiiON data integrity features
CLARiiON data availability features
CLARiiON management options
CLARiiON storage provisioning objects