Storage Basics
Agenda/learning objectives
Introduce the components of the computer and show how they request and store data
Introduce RAID technology and RAID protection schema
Introduce Storage Area Networks and Network Attached Storage
Introduce different data protection capabilities available
Show how all the components fit into the Information Lifecycle Management vision
The Input / Output Machine

[Diagram: the CPU, or Central Processing Unit (server), handles a stream of inputs and outputs — inputs such as "Print this e-mail," "Open Forecast.doc," and "Save changes to Forecast.doc," each followed by the corresponding output]
Where Data Is Stored

Main Memory
Very fast access: no moving parts
Very expensive compared to mechanical or magnetic storage
Volatile: represents ones and zeros with a positive or negative electrical charge; data is lost if there is no source of power
Provides instructions and data to the CPU and stores the results of CPU calculations; information is constantly changing during processing
Where Data Is Stored

Non-Volatile Magnetic Memory: Tape and Disk
Storage surface is coated with a magnetic substance
Ones and zeros are represented by positive or negative magnetic polarization
Retains magnetic polarization even without power
A mechanical operation positions a read / write head over a specific area of the magnetic surface to:
– Write data: the write head changes the magnetic pole to positive or negative to represent a one or zero
– Read data: the read head senses the positive or negative pole that represents a one or zero
First Magnetic Tape Drive - 1952
Where Data Is Stored

Tape
Organizes data sequentially on the tape, in the order it receives the information
More general and simplistic formatting allows tapes written by one system to be read by a different system in many cases
Cannot directly access each piece of data: it reads from the beginning of the tape until it gets to the data requested
Sequential access provides good performance for reading or writing large amounts of data from start to finish, but very poor performance for random access
The tape is independent of the tape drive, making it easily portable to other systems or to a safe location
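The sequential-vs-random trade-off above can be illustrated with a toy model in Python. The timings and block counts here are invented for illustration only; they are not the specifications of any real tape drive:

```python
def tape_read_time(start_block, n_blocks, head_position=0,
                   seek_per_block=0.001, read_per_block=0.001):
    """Toy tape model: the head must wind past every block between its
    current position and the requested data before it can read."""
    wind = abs(start_block - head_position) * seek_per_block
    read = n_blocks * read_per_block
    return wind + read

# Sequential: one large read from the current head position is cheap.
sequential = tape_read_time(0, 100_000)

# Random-ish: 100 reads of 1,000 blocks scattered across the tape pay
# the winding cost over and over again.
scattered = sum(tape_read_time(start, 1_000)
                for start in range(0, 1_000_000, 10_000))

print(sequential < scattered)  # sequential access wins by a wide margin
```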
First Magnetic Storage Devices for Computers
Where Data Is Stored

Disk
Organizes data into specific and addressable areas to read or write data directly
The disk must be formatted to match the disk addressing structure of the operating system
Direct access provides fairly consistent performance for mixed tasks of reading and writing sequential and random groups of data
Disk performance can be impacted by the length of idle time necessary to position the read / write head over the area being addressed
Disk is physically connected to the system—impractical or impossible to move the disk to a new location or new system
Physical Disk Connections

[Diagram: System Bus connecting CPU, Main Memory (RAM), ROM (Read-Only Memory), and HBAs]

Rules for physical connection
– Type of cable
– Number of paths
– Physical connectors
Rules for logical connection
– To identify a read or write command vs. data
Format of drive
– Addressing scheme
Controller system or circuit card
– ESCON for mainframe
– Host bus adapters for open systems
– Proprietary cards for AS/400
How the I/O Works

Initiating the Read Request
[Diagram: System Bus connecting CPU, Main Memory (RAM), ROM, and HBA]
How the I/O Works

Completing the Read Request
[Diagram: System Bus connecting CPU, Main Memory (RAM), ROM, and HBA]
A Smarter Way to Use Main Memory and CPU

[Diagram: System Bus connecting CPU, Main Memory (RAM), ROM, HBA, and a cache. The host reads Customer 1's meter reading, then Customer 2's; the cache reasons, "Let's see, customer 1, then customer 2, what might be next? ... I predict customer 3," and prefetches Customer 3's meter reading.]
A Smarter Way to Use Main Memory and CPU

[Diagram: the reads of the Customer 1, 2, and 3 meter readings are now satisfied from the cache that sits between the CPU and the System Bus]
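The sequential-prefetch idea these slides describe can be modeled in a few lines of Python. This is an illustrative sketch, not a real array cache; the class name and the meter-reading data are invented:

```python
class PrefetchCache:
    """Toy read cache that detects a sequential access pattern and
    prefetches the next block before the host asks for it."""

    def __init__(self, backend):
        self.backend = backend          # maps block number -> data
        self.cache = {}
        self.last = None
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            data = self.cache[block]
        else:
            self.misses += 1
            data = self.backend[block]
            self.cache[block] = data
        # "Customer 1, then customer 2 ... I predict customer 3"
        if self.last is not None and block == self.last + 1:
            nxt = block + 1
            if nxt in self.backend and nxt not in self.cache:
                self.cache[nxt] = self.backend[nxt]   # prefetch ahead
        self.last = block
        return data

disk = {n: f"customer {n} meter reading" for n in range(1, 4)}
cache = PrefetchCache(disk)
for n in (1, 2, 3):
    cache.read(n)
print(cache.hits, cache.misses)  # block 3 was prefetched: 1 hit, 2 misses
```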
How the I/O Works

Initiating the Write Command
[Diagram: System Bus connecting CPU, Main Memory (RAM), ROM, and HBA]
How the I/O Works

Completing the Write Command
[Diagram: System Bus connecting CPU, Main Memory (RAM), ROM, and HBA]
A Smarter Way to Use Main Memory and CPU

[Diagram: a write command, "The customer's completed monthly bill," lands in the cache. The write confirmation is issued as soon as the data and write command are secure in a completely fault-tolerant area.]
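That fast-write behavior — acknowledging as soon as the data is safe in a fault-tolerant (e.g. mirrored) cache area, then writing to disk later — can be sketched as follows. This is a toy model under stated assumptions, not any vendor's implementation:

```python
class WriteBackCache:
    """Toy fast-write cache: a write is acknowledged once it sits in two
    independent (mirrored) cache copies; the slow disk write happens later."""

    def __init__(self):
        self.cache_a = {}   # mirrored, fault-tolerant cache copies
        self.cache_b = {}
        self.disk = {}

    def write(self, block, data):
        # Secure the data in a completely fault-tolerant area first ...
        self.cache_a[block] = data
        self.cache_b[block] = data
        return "ack"        # ... then acknowledge immediately

    def destage(self):
        # Later, flush the cached blocks to the (slow) disk.
        for block, data in self.cache_a.items():
            self.disk[block] = data

cache = WriteBackCache()
assert cache.write(7, "monthly bill") == "ack"   # ack before any disk I/O
assert cache.disk == {}                          # disk not yet updated
cache.destage()
assert cache.disk == {7: "monthly bill"}
```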
Peripheral Components of a Computer System

[Diagram: System Bus connecting CPU, Main Memory (RAM), and ROM; HBAs connect to a Storage Array, a Tape Drive Device, and a SAN Switch; a NIC connects to a Network Router]
Data Storage: A Closer Look
Disk Drive
Memory Board
Tape Cartridge
The Disk Drive: A Closer Look
Formatting the Drive for Direct Access

A uniquely addressable area within a disk drive is a Cylinder, Head, and Sector

Track: the disk platter is segmented into a number of concentric rings, called Tracks
Cylinder: a specific track in the same position on all of the disk platters in a spindle, taken together, is called a Cylinder
Sector: the disk platter is also segmented into individual wedge-shaped sections, called Sectors
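The classic arithmetic for turning a Cylinder / Head / Sector address into a flat logical block number follows directly from this geometry. The example geometry below (8 heads, 63 sectors per track) is hypothetical:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Classic mapping from a Cylinder / Head / Sector address to a flat
    logical block address (sectors are conventionally numbered from 1)."""
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))

# Illustrative geometry: 4 platters read on both sides = 8 heads,
# 63 sectors per track.
print(chs_to_lba(0, 0, 1, 8, 63))   # first sector on the drive -> LBA 0
print(chs_to_lba(1, 0, 1, 8, 63))   # first sector of the next cylinder -> LBA 504
```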
Disk Drive Access Time

Seek Time:
The average amount of time necessary to move the actuator arm to position the read / write head over the track
Disk Drive Access Time

Latency:
The average amount of time necessary to wait for the data to arrive at the read / write head as the disk spins
Also called rotational delay
Disk Drive Access Time

Transfer Rate:
The amount of time necessary to read data from, or write data to, the platter and move the data through the disk drive
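These three components add up to the service time of a single I/O. A quick calculation, using illustrative figures for a hypothetical drive (average latency is half a revolution):

```python
def avg_access_time_ms(seek_ms, rpm, transfer_mb_s, io_kb):
    """Average time to service one I/O: seek + rotational latency + transfer.

    Latency is half a revolution on average; all figures are illustrative."""
    latency_ms = 0.5 * 60_000 / rpm                  # half a rotation, in ms
    transfer_ms = io_kb / 1024 / transfer_mb_s * 1000
    return seek_ms + latency_ms + transfer_ms

# A hypothetical 15,000 RPM Fibre Channel drive, 3.5 ms average seek,
# 100 MB/s interface, 4 KB I/O:
print(round(avg_access_time_ms(3.5, 15_000, 100, 4), 2))  # 5.54 (ms)
```

Note how the mechanical parts (seek and latency) dominate a small I/O, which is why the slides stress RPM and seek time over interface speed.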
Disk Drive Performance Variables

Seek time speed
RPM speed of the disk platters
– Faster RPM reduces latency
– Faster RPM has minor impact on transfer rate
Disk drive interface speed
– Ultra SCSI: 40 MB/sec
– Fibre Channel: 100 MB/sec
Evolution of Disk Technology

Drive capacities continue to increase dramatically from increased data density
Performance increasing marginally
– Increased RPM speed
– Increased use of memory and cache at the drive level
Disk drive interfaces driven by industry standards
– Ultra SCSI
– Fibre Channel
– ATA
Industry challenge
– Higher capacity per disk drive reduces cost, but…
– Reduces the number of actuators for a given capacity
EMC Storage Offerings

Symmetrix (SAN / NAS): DMX800, DMX1000-M2, DMX1000, DMX2000-M2, DMX2000, DMX3000-M2, DMX3000
CLARiiON (SAN / NAS / Backup-to-Disk): CX700, CX500, CX300, AX100
Centera (CAS): Centera
NAS: NetWin 110, NS700/G, Celerra CNS
Tape & Tape Emulation: ADIC Scalar Series, DL700
Inside the Disk Arrays

[Diagram: host interface, fault-tolerant cache memory, array controller, and disk directors inside the array]
RAID Technology

Redundant Arrays of Independent Disks
RAID 0: Striping Data Across Many Disks without Adding Redundancy

Without RAID: three physical drives defined to the host computer, each holding the beginning, middle, and end of one volume (Volume 1, 2, or 3)

RAID 0: defined to the host computer as above, but the data is physically spread to balance activity; each drive now holds one stripe (beginning, middle, or end) of every volume
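The striping layout reduces to simple modular arithmetic. This sketch assumes one logical block per stripe element, which is an illustrative simplification:

```python
def stripe_location(block, n_disks):
    """RAID 0: block n lands on disk (n mod N) at offset (n div N)."""
    return block % n_disks, block // n_disks

# Distribute nine logical blocks across three disks:
layout = {}
for block in range(9):
    disk, offset = stripe_location(block, 3)
    layout.setdefault(disk, []).append(block)
print(layout)   # {0: [0, 3, 6], 1: [1, 4, 7], 2: [2, 5, 8]}
```

Consecutive blocks land on different spindles, so a large sequential read keeps all three actuators busy at once.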
RAID 1 or Mirroring

Without RAID: three physical drives defined to the host computer

RAID 1: a mirrored pair is created for each physical volume; every volume's beginning, middle, and end exist on two drives
RAID 0 + 1: Performance and Redundancy

Without RAID: three physical drives defined to the host computer

RAID 1 + 0: the volumes are striped across the drives as in RAID 0, and a mirrored pair is created for each physical volume
Data Parity

DATA + DATA + DATA = Parity (addition is modulo 2, i.e. XOR)
Group 1: 0 + 1 + 1 = parity 0
Group 2: 0 + 1 + 0 = parity 1
Group 3: 1 + 1 + ? = parity 1, so the lost data bit must have been 1
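Because the parity "addition" is modulo 2, both generating parity and rebuilding a lost bit are XOR operations. The slide's three groups check out in Python:

```python
from functools import reduce
from operator import xor

def parity(bits):
    """Even parity = XOR of the data bits (so 0 + 1 + 1 = 0, as above)."""
    return reduce(xor, bits)

def reconstruct(surviving_bits, parity_bit):
    """A lost bit is the XOR of the surviving bits and the parity bit."""
    return reduce(xor, surviving_bits, parity_bit)

assert parity([0, 1, 1]) == 0        # group 1
assert parity([0, 1, 0]) == 1        # group 2
# Group 3: 1 + 1 + ? = 1  ->  the lost bit must have been 1
assert reconstruct([1, 1], 1) == 1
```

The same XOR runs per stripe in RAID 3/4/5: when a drive fails, every missing block is rebuilt from the surviving data blocks plus parity.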
RAID 5

Without RAID: three physical drives defined to the host computer

RAID 5: a group of drives is bound together as a physical volume; the data and the parity for each group (1st, 2nd, 3rd) are striped across the drives
3636
0 Striping with no Parity Large Block Performance, No Redundancy
1 Mirrored Disks Highest Availability and Performance Simple Implementation
2 Hamming Code Large Block PerformanceMultiple Check Disks Availability, Poor Cost
3 Striping with Parity Large Block PerformanceSingle Check Disk Availability at Less Cost
4 Independent Read/Write Transaction Processing, High Availability,Single Parity Disk High
Percentage of Reads
5 Independent Read/Write Transaction Processing, High Availability, Independent Parity Disks High
Percentage of Reads
6 Independent Read/Write Transaction Processing, High Availability, Multiple Independent Parity Disks High
Percentage of Reads
Raid Level Technique Application
RAID Levels
Storage Consolidation

Typical customer challenges in the server/storage environment:
Server/storage islands due to the distributed computing model
Difficult to manage with reduced manpower
Poor utilization of storage
Integration of infrastructure due to merger/acquisition is difficult
Asset management is difficult
What is a Storage Area Network (SAN)?

…A dedicated network carrying block-based storage traffic

[Diagram: users / application clients reach servers / applications over an IP network of LAN switches; the servers reach storage / application data over a Fibre Channel network of SAN switches and directors]
SAN Benefits

High availability and manageability
– All servers access the same storage
– Simplified management
– Service for multiple platforms
Application performance
– SAN provides a dedicated network
– DBMS / transaction processing
– Fastest record access
Fast scalability
– Hundreds of servers
– Hundreds of storage devices
– Leverages existing infrastructure
– Overcomes distance limitations
Better replication and recovery options
Storage consolidation optimizes TCO
What is Network Attached Storage (NAS)?

…A network carrying file-based traffic

[Diagram: users / application clients and servers / applications share an IP network of LAN switches; a NAS gateway bridges to storage / file data over a Fibre Channel network of SAN switches and directors]
NAS Benefits

Global access to information
– File sharing
– Any distance
– Many to one, or one to many
– Access from multiple platforms
Consolidation minimizes TCO
Collaboration
– Improve time to market
– Improve product quality
Information management
– Leverage existing security
– Leverage existing personnel
– Leverage existing infrastructure
Replication and recovery options
Scalable without server changes
High Availability

Typical customer issues:
Mission-critical data requires 7x24 uptime
– No single point of failure (SPOF)
Time-to-market requirements are tighter
– Development cycle is shorter
– Development of technology is quicker
– Competition is everywhere
Amount of data is growing while backup windows are shrinking
Meet Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO) more easily
Data Path Protection
What is Path Management Software?

Allows you to manage multiple paths to a device to maximize application uptime
– Path
• Refers to the route traveled by I/O between a host and a logical device
• Comprises a host bus adapter (HBA), one or more cables, a switch or hub, an interface and port, and a logical device
– Multi-pathing
• Configuring multiple paths to a single logical device
Redirect I/O
– For load balancing
– For path failover
Monitor
– HBAs, paths, and devices
Manage
– Priorities and policies for information access
– Reconfiguration
– Component repair
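Load balancing and failover across multiple paths can be sketched in a few lines. This is an illustrative toy, not PowerPath or any real multipathing driver; the path names echo the device names used on the next slide:

```python
class MultipathDevice:
    """Toy multipathing: round-robin load balancing across healthy paths,
    with failover to the survivors when a path dies."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.healthy = set(paths)
        self.turn = 0

    def fail_path(self, path):
        self.healthy.discard(path)

    def send_io(self):
        # Round-robin over the healthy paths only.
        candidates = [p for p in self.paths if p in self.healthy]
        if not candidates:
            raise RuntimeError("no path to device")
        path = candidates[self.turn % len(candidates)]
        self.turn += 1
        return path

dev = MultipathDevice(["c1t2d1", "c1t2d2", "c2t2d1", "c2t2d2"])
print([dev.send_io() for _ in range(4)])   # I/O spread over all four paths
dev.fail_path("c1t2d1")                    # e.g. an HBA or switch port fails
print([dev.send_io() for _ in range(3)])   # remaining paths carry the load
```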
Path Management Overview

[Diagram: a server with HBA0 and HBA1 connects through switches to storage processors SP A and SP B, each with mirrored cache and ports 0–3]

Four paths configured
– Two from each HBA, through the switch, to each SP
Provides data access
Provides failover upon HBA, switch, or SP failure
Provides potential for load balancing
Four native devices configured: c1t2d1, c1t2d2, c2t2d1, and c2t2d2
Local Replication Protection
SnapShots: Logical Point-in-Time Views

Pointer-based copy of data
– Takes only seconds to create a complete snapshot
– Requires only a fraction of the original file system's space
Snapshots can be persistent across re-initialization of the array
Can be used to restore source data
Up to eight snapshots can be created per source LUN

[Diagram: production information with multiple snapshots, each a logical point-in-time view]
4848
SP Memory
OriginalBlock C
Snap Shot “Copy-on-First-Write”
Block A
Block B Reserved LUN
Secondary Host
Production Host
Block C in the Reserved LUN now reflects the change that the Production Application made and the pointer is updated to point to the Reserved LUN for Block C
Source LUN
0 0 0
0 0 00 000 0 0
0 00
00 0
00
UpdatedBlock C
Block D 1
Block A
Block B
Block C
Block D
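The copy-on-first-write mechanism can be sketched as follows. This is a simplified illustration of the general technique, not the actual SnapView implementation; all names are invented:

```python
class Snapshot:
    """Toy copy-on-first-write snapshot: reads follow pointers into the
    source LUN until a block is first overwritten, at which point the
    original block lives in a reserved area instead."""

    def __init__(self, source):
        self.source = source
        self.reserved = {}          # "Reserved LUN": saved original blocks

    def read(self, block):
        # Point-in-time view: the saved original if the block has changed
        # since the snapshot was taken, otherwise the live source block.
        return self.reserved.get(block, self.source[block])

def host_write(source, snapshots, block, data):
    for snap in snapshots:
        if block not in snap.reserved:      # copy on the FIRST write only
            snap.reserved[block] = source[block]
    source[block] = data

lun = {"A": "a0", "B": "b0", "C": "c0"}
snap = Snapshot(lun)
host_write(lun, [snap], "C", "c1")   # production updates block C
print(lun["C"], snap.read("C"))      # c1 c0: point-in-time view preserved
print(snap.read("A"))                # a0: unchanged blocks read via pointer
```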
BCVs: Full Image Copies

Physically independent point-in-time copies of the source volume
– Available after initial synchronization
– Once established, no performance impact between source and BCV
– Can be used to restore or replace the source in the event of hardware or software error
Can be incrementally re-established
– Only changed BCV data is overwritten by the source
Up to eight BCVs can be established against a single source LUN concurrently
– Can be any RAID type or drive type (regardless of source)

[Diagram: production information with multiple BCV clones, each a full image copy]
Remote Replication Protection
Remote Replication Business Drivers
Primary applications
– Disaster recovery
– Business continuance
Secondary applications
– Backup
– Testing
– Data warehousing / mining
– Content distribution
– Report generation
Recovery Objectives

Recovery Point Objective (RPO)
– How far back in time does data need to be recovered if a disaster occurs?
Recovery Time Objective (RTO)
– How much time will pass after a disaster before operations are online again?
Replication Models

Synchronous
Asynchronous
– Periodic incremental update
– Traditional asynchronous
– Semi-synchronous
– Full copy
Replication Model: Synchronous

No data exposure
Unit of transfer is the individual I/O
– Transfer trigger is receipt of the I/O from the host
– No acknowledgement of the I/O to the host until the remote copy is updated
Write ordering
– I/Os are applied to the target in the same order they were received
High bandwidth and low latency are critical
– Distance is limited
RPO: zero (no data exposure)
Synchronous Model

1. I/O from host to local storage system
2. I/O from local storage system to remote (target) system
3. Acknowledgement back from remote to local system
4. Acknowledgement from local storage system back to host
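The four steps collapse naturally into a single function: the host's acknowledgement is simply not returned until the remote copy is current. A minimal sketch, with dictionaries standing in for the two storage systems:

```python
def synchronous_write(local, remote, key, value):
    """Sync replication: the host is not acknowledged until the remote
    copy is updated. Illustrative sketch, not a real replication protocol."""
    local[key] = value            # 1. I/O from host to local system
    remote[key] = value           # 2. I/O from local to remote system
    remote_ack = True             # 3. ack from remote back to local
    return remote_ack             # 4. ack from local back to host

local, remote = {}, {}
assert synchronous_write(local, remote, "txn", 42)
assert local == remote == {"txn": 42}   # RPO is zero: no data exposure
```

The cost is visible in the structure: every host write waits on a full round trip to the remote site, which is why distance is limited.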
Replication Model: Asynchronous Periodic Update

Unit of transfer is the Delta Set
– What's changed since the last transfer
– Transfer trigger can be a discrete event or a time cycle
– I/O is acknowledged to the source host immediately
Write ordering is not an issue
– Most recent changes are applied to the destination
– Updates are applied atomically
RPO: user defined, business driven
Link bandwidth and latency requirements are flexible
Asynchronous Periodic Update Model

1. I/O from host to storage system
2. Acknowledgement from local storage system back to host
3. Trigger event
4. Delta Sets from local storage system to remote (target) system
5. Acknowledgement back from remote to local system
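The delta-set idea can be sketched the same way. Note how a block written twice between triggers is shipped only once, with its latest value; all names here are illustrative:

```python
class AsyncReplica:
    """Toy periodic-update replication: writes are acknowledged at once
    and accumulate in a delta set; a trigger ships the set atomically."""

    def __init__(self):
        self.local = {}
        self.remote = {}
        self.delta = {}             # what's changed since the last transfer

    def write(self, key, value):
        self.local[key] = value
        self.delta[key] = value     # only the most recent change is kept
        return "ack"                # host is acknowledged immediately

    def trigger(self):
        # Apply the whole delta set to the target in one atomic update.
        self.remote.update(self.delta)
        self.delta = {}

r = AsyncReplica()
r.write("a", 1); r.write("a", 2); r.write("b", 3)
print(r.remote)        # {}: the remote lags until the trigger (RPO > 0)
r.trigger()
print(r.remote)        # {'a': 2, 'b': 3}: only the latest 'a' is shipped
```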
Backup/Restore Options
Backup to Tape Today

Over 80% of all data backed up today goes to tape
Typical backup operation
– Full backups usually done weekly
– Incremental: backup of all data changed since the last backup (full or incremental)
– Differential: backup of all data that has changed since the last full backup
– For example, a customer performs a full backup on Sunday and an incremental backup each remaining day of the week
Customers typically keep multiple "copies" of their backed-up data
– Average number of "copies" is 8.5, some at more than 16:1
– EMC best practice is 4:1 or lower
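The three backup modes differ only in which timestamp they compare against. A minimal selection function makes the distinction concrete (file names and timestamps are invented for the example):

```python
def files_to_back_up(files, last_full, last_backup, mode):
    """Which files a backup run copies, given each file's modification time.

    full: everything; incremental: changed since the last backup of any
    kind; differential: changed since the last full backup."""
    if mode == "full":
        return set(files)
    since = last_backup if mode == "incremental" else last_full
    return {name for name, mtime in files.items() if mtime > since}

files = {"a.doc": 1, "b.doc": 3, "c.doc": 5}
# Full backup ran at t=0 (Sunday); the most recent backup ran at t=4:
print(sorted(files_to_back_up(files, 0, 4, "incremental")))   # ['c.doc']
print(sorted(files_to_back_up(files, 0, 4, "differential")))  # ['a.doc', 'b.doc', 'c.doc']
```

Incrementals stay small but a restore needs the full plus every incremental since; a differential is bigger but a restore needs only the full plus the latest differential.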
What is Backed Up?

Operating environments
– Servers
– Desktop PCs
– Laptop PCs
Applications
– ERP (i.e. SAP, Oracle Apps, PeopleSoft, etc.)
– CRM (i.e. Siebel, etc.)
– Databases (Oracle, UDB, MS SQL)
– Homegrown (using DB technology, file systems, etc.)
– Messaging (Microsoft Exchange, etc.)
Application data
– For all of the above
– Miscellaneous other end-user data
Logs and journals
– Application transaction logs, database journals, file system journals
What is Operationally Restored?

Most restores are at the file and volume level
– Frequency ranges from several per day to monthly
Full system restores are rare
Most common restores
– Email
– Files
– Application data
EMC Backup-to-Disk Solution: Total Solution with SAN and / or LAN

Use the SAN and / or LAN to centralize all backup
Provide higher service levels
Do full backups and incrementals to disk
Regularly copy from disk to tape and move offsite
Extend the life of the remaining tape infrastructure

[Diagram: in the data center, servers running the backup application send weekly full backups and nightly incremental backups of primary data over the SAN and WAN to a CLARiiON with ATA and an NS600G; copies go from disk to tape and are moved offsite]
Operational Backup-to-Disk Process

[Diagram: 1. Application data is backed up over the SAN to disk (CLARiiON with ATA); 2. Restores come from disk; 3. Data is migrated to tape; 4. Tape is vaulted on / offsite]
Introducing CLARiiON Disk Library

Plug and play
– Supports existing backup environment
– Appears as a tape library (Fibre Channel attached)
– Stores data in native tape format
– Complete compatibility with existing operation
High performance
– Single-stream performance up to 80 MB/s
– Up to 425 MB/s sustained performance
Cost effective, reliable, and highly scalable
– Data compression (up to 3:1)
– Capacity:
• DL300: 12.5 TB / 37.5 TB
• DL700: 58 TB / 174 TB
– Built on proven CLARiiON ATA technology
Creates native tape for offsite storage
– Policy-based implementation
– Automatic single process for moving data from the backup environment to native tape
– Offloads creation of the second copy from the backup server

[Diagram: application / backup hosts connect over the SAN to the CLARiiON Disk Library and a standard tape library]
CLARiiON Disk Library Implementation

[Diagram: backup servers on the LAN send data over the SAN to the CLARiiON Disk Library, which emulates tape; physical tape libraries / drives import and export native tape. Storage and tape devices can be connected to the CLARiiON Disk Library appliance via Fibre Channel.]
Backup to Disk and Disk Libraries: A New Service-Level Option

Disk-to-disk replication (CLARiiON): highest service levels
– Fastest possible backup and recovery
– Least impact to production systems
– Highest flexibility: backup images can be repurposed for testing, reporting, and more
Tape library emulation and backup to disk (CLARiiON Disk Library): moderate service levels
– Faster backup and recovery than tape
– Moderate flexibility
• Disk libraries support integration with the legacy tape environment
• CLARiiON with ATA supports integration with existing CLARiiON primary storage
Tape libraries: lowest service levels, least flexibility
Example: Storage Management

Before:
[Diagram: separate SAN, NAS, and DAS islands at 81%, 78%, 72%, and 60% utilization]
• Data gathering from multiple sources?
• Incomplete, uncorrelated information?
• Different tasks and tools for each vendor?
• Complex and time consuming?
• Making assumptions and mistakes?

After:
A consistent view of the networked storage environment: plan and provision, monitor and report, device management
Why Storage Management

Easier to meet service levels
– Increase application availability
– Expedite problem isolation and resolution
– Improve time-to-provision
Helps drive down storage environment costs
– Reduce IT staffing costs
– Proactive storage and SAN management
– Automated provisioning
– Current, consistent, correlated information
– Increase storage utilization; reclaim capacity
Facilitates compliance with new regulations
– Common and consistent storage management information and processes
Enabling ILM: EMC's Offering

Information Infrastructure Management
– Storage Management: ControlCenter Family, Visual Family, Replication Manager Family
Information and Content Management
– Structured Information Management: DatabaseXtender
– Enterprise Content Management: Enterprise Document Mgmt, Web Content Mgmt, Digital Asset Mgmt, Records Mgmt/Compliance, Collaboration, ApplicationXtender
Data Movement
– Data Migration Tools, SAN Copy, OnCourse
Intelligent Data Management
– AVALONidm, DiskXtender Family
Protection and Recovery
– Remote Replication: SRDF Family, MirrorView, RepliStor, Celerra Replicator
– Local Replication: TimeFinder Family, SnapView, Celerra SnapSure
– Backup / Recovery: NetWorker
– Availability: PowerPath, AAM, CoStandby Advanced
Tiered Storage Platforms
– Symmetrix, CLARiiON
– SAN: Connectrix
– NAS: Celerra, NS Series, NetWin
– Tape Emulation: CLARiiON Disk Library
– CAS: Centera
– Tape: ADIC Scalar Series
– ATA Technology
Services and Partners
Summary
After completing this session you should:
Be familiar with the components of a computer and how they work to retrieve and store data
Be familiar with RAID technology and understand the different RAID protections
Be familiar with Storage Area Networks and Network Attached Storage as connection options for storage
Be familiar with the different data protection options that are available
Understand how all the pieces fit into the Information Lifecycle Management vision