Developing a Storage Strategy for the Future

an Internet.com Storage eBook


Contents

This content was adapted from Internet.com's Enterprise Storage Forum, Enterprise IT Planet, and InternetNews Web sites. Contributors: Richard Adhikari, Judy Mottl, Henry Newman, Drew Robb, Jennifer Schiff and Paul Shread.

Developing a Storage Strategy for the Future, an Internet.com Storage eBook. © 2008, Jupitermedia Corp.

The Data Pileup: Save Money or Save Data? (Judy Mottl)
Three Acronyms That Could Change the Storage World (Henry Newman)
Sorting Out Your Storage Options (Jennifer Schiff)
Choosing the Right High-Performance File System (Drew Robb)
Managing Storage in a Virtual World (Drew Robb)
The Trouble with Virtual Disaster Recovery (Richard Adhikari)


The Data Pileup: Save Money or Save Data?
By Judy Mottl

Given how cheap storage has become, it's understandable that enterprises are expanding arrays to house growing data. But stocking up on hardware and software to hold more and more information is a costly misstep, according to a Gartner report.

New regulations and legal concerns are likely prompting IT to keep every bit and byte of data just in case some litigation issue arises, and since storage costs are decreasing, the urge to push another box into play can be tempting.

The problem is that data growth will very quickly outpace the savings in storage, according to Whit Andrews, an analyst at Gartner.

"It's time for companies to modernize storage strategies and understand how information access technology can be a good tool for making sure they need what they're keeping," Andrews told InternetNews.com.

According to Gartner, the highest reported price for managed storage in the first quarter of 2008 was $12.50 per gigabyte per month, and the lowest was $0.29 per gigabyte per month for archive storage needs.

A survey of Gartner clients found that none expected their storage budgets to decrease in 2008, and that 67 percent expected the budget for storage hardware to increase. Of those polled, 64 percent also expected storage software costs to increase.

Just a quick look at backup storage provides a clear view of how storage costs are decreasing: prices dropped by about 30 percent from 2006 to 2008, according to Gartner.

But cheap storage isn't really cheap when additional management costs and increased power and cooling costs are factored in.

Enterprises that choose to retain everything run the risk of significant future costs, Gartner reported. And the longer information is saved, the harder it is to discern its value, according to the survey.

Information-Access Tools

Companies need to draw a clear distinction between data that should be saved on primary storage and data that can be housed on cheaper secondary storage, as the costs vary greatly in terms of hardware and software.

Gartner provided a scenario using a rough estimate of $5 per gigabyte for backup storage and a generation rate of 10 gigabytes per employee per year. A 5,000-worker company faces costs of $1.25 million for five years of storage under those assumptions. Cutting the amount of data by 80 percent could save about $1 million over five years and lower the organization's liability, the report noted.
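As a quick back-of-the-envelope check, the figures in Gartner's scenario work out as follows (the dollar and gigabyte inputs are from the report as described above; the arithmetic sketch below is ours):

COST_PER_GB = 5.0          # dollars per gigabyte of backup storage
GB_PER_EMPLOYEE_YEAR = 10  # data generated per employee per year
EMPLOYEES = 5_000
YEARS = 5

data_gb = GB_PER_EMPLOYEE_YEAR * EMPLOYEES * YEARS   # 250,000 GB after five years
full_cost = data_gb * COST_PER_GB                    # $1,250,000
trimmed_cost = full_cost * (1 - 0.80)                # keeping only 20% of the data

print(f"Five years of data: {data_gb:,} GB costs ${full_cost:,.0f}")
print(f"After an 80% cut: ${trimmed_cost:,.0f}, saving ${full_cost - trimmed_cost:,.0f}")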

Information-access technologies include a wide range of tools such as enterprise search, content analytics, and social search. Integration and deployment typically require some expertise, such as an information architect, to get the tools in place and working well, Andrews explained. And it's not cheap, either: products can range from $10,000 to millions of dollars for advanced applications.

Still, companies can offset the costs through storage savings, as well as through benefits from improved business processes.

The first step, according to Gartner, is to initiate and develop a content valuation process. "This is determining what's important to keep and how a company decides what to keep," explained Andrews.

"It means establishing criteria on what data is to be stored, where, and why," he added. "Cheap storage is expensive when it's storing data that doesn't need to be stored."

A good best practice is establishing a content-valuation policy for legacy data and making sure that what's stored justifies the storage investment.

While some enterprises are using information-access technologies, the majority are not at this point, according to the research firm. But sooner or later companies will realize the waste taking place in storage and the costs of retaining data that has no value, Andrews said. "Is what you're storing on tape valuable at this point? Because if it's not, then you don't need it."


Three Acronyms That Could Change the Storage World
By Henry Newman

A lot of claims have been made lately about disruptive storage technologies, but saying a particular company is disruptive is a long way from Clayton Christensen's original definition. Very few individual companies have changed the industry, and one big reason is that everyone wants a standards-based product, and standards require multiple companies to create them. Once a product is created that might be a disruptive technology, lots of other players jump into the mix.

Clearly, disruptive technologies are not an everyday event, nor are they easy to predict. Let's examine some technologies that might significantly change enterprise storage and disrupt the market. I won't adhere to the strict definition, but I am going to suggest some technologies that, if adopted, could change the enterprise storage market. As I said, I think very few companies are going to be able to create a new technology market from a technology without a standard that others can use. Even Microsoft, for example, supports all types of standards, from SATA (T13) and FC/SCSI (T11) to IETF standards. No company can be an island today.

So without further ado, here are three things that I think will be truly disruptive to the enterprise storage market.

FCoE

Fibre Channel over Ethernet (FCoE) is my No. 1 pick for a technology that could change enterprise storage in dramatic ways.

Today, any higher performance or higher reliability storage data moves over Fibre Channel. Fibre Channel has been around for 10 years or so as the de facto storage medium in the enterprise. iSCSI, in my opinion, has never taken reasonable market share because of the overhead both for CPU and packetization (the TCP/IP encapsulation uses a significant part of the packet for small I/O requests).

If FCoE happens, Fibre Channel connectivity to storage will be a thing of the past and we will have one network fabric for communications and storage. Even this year, as FC interface-based disk drives are being replaced by SAS, FC chipset shipments are declining. FC chipsets never achieved the cost factor that Ethernet chipsets achieved because FC was never considered a commodity technology — it was always a higher-priced storage interconnect. Every computer from your laptop to a large SMP server has Ethernet built in. That is not true, and has never been true, for Fibre Channel.
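To make the packetization point concrete, consider how much of each frame goes to headers when a small I/O request is wrapped in iSCSI over TCP/IP. This is an illustrative sketch, not a figure from the article: the header sizes are the standard protocol constants, and the request sizes are assumed examples.

ETH_OVERHEAD = 14 + 4   # Ethernet header plus frame check sequence (bytes)
IP_HEADER = 20          # IPv4 header, no options
TCP_HEADER = 20         # TCP header, no options
ISCSI_BHS = 48          # iSCSI Basic Header Segment

def header_fraction(payload_bytes: int) -> float:
    """Fraction of the wire frame consumed by encapsulation for one PDU."""
    headers = ETH_OVERHEAD + IP_HEADER + TCP_HEADER + ISCSI_BHS
    return headers / (headers + payload_bytes)

for size in (512, 4096, 65536):
    print(f"{size:>6}-byte I/O -> {header_fraction(size):.1%} header overhead")

# A 512-byte request spends roughly 17% of the frame on encapsulation;
# a 64 KB transfer, under 1% (ignoring segmentation across frames).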

FCoE will reduce costs in a number of ways:

• Cost per port: Although 10GbE is likely a bit higher than 4Gbit FC in cost per GByte/sec today, that trend will not last long. I suspect this will be changing, as does most of the industry.

• Personnel: Today most large organizations have a storage networking group and an IP networking group. They are separate, as the people must deal with different technologies, training, patches, pricing, and so on. Having a single group of people that can do the same things will save money.

• Broad buy-in: In my opinion, much of the Fibre Channel community sees the writing on the wall; otherwise they would not have such broad participation in the FCoE community and standards.

FCoE, when deployed, will change the storage networking world. The first steps will be the host-side connections, switches, and RAID controllers; then will come other peripheral devices such as tape drives. FCoE means that Ethernet gets a larger market, and, in my opinion, it will likely mean the end of the line for InfiniBand, as the combined FC and Ethernet market is just far too large a commodity market.

OSD

I have been writing about object-based storage for several years now, and I am a big proponent of T10 OSD, given the problems I see regularly with fragmentation.

OSD has a long way to go before it could be disruptive. There is not as much momentum behind OSD as there is behind FCoE. Part of the problem, I think, is that the problems OSD solves are not as easily understood as the problems FCoE solves, and because OSD is solving bigger, more complex problems, it requires a larger infrastructure change spanning file systems, storage controllers, and disk drives. I still believe that OSD solves many of the bigger problems most sites face in managing the life of data from creation to backup/archiving, restoration, and deletion, and everything in between, including data protection and security. I believe OSD is coming to a system near you, but it is going to take some time.

pNFS

I am a big proponent of this technology, and it has some broad implications.

In today's world we have SAN storage and NAS storage. Everyone knows that SAN-based storage is faster than NAS for lots of good reasons, not the least of which is that the NFS protocol was not really designed to deliver high-performance streaming I/O. NFS was designed to solve a different problem.

When NFSv4.1 is implemented and released, the ability to have SAN performance on NAS equipment could become a reality. Of course, the NAS equipment would need to be redesigned to deliver SAN performance, and most NAS equipment is not designed that way, as NFS is the bottleneck, but this would allow a merging of the technologies. In addition, many environments are moving to shared file systems for clusters of systems. NFSv4.1, if it lives up to its billing, would allow high-performance access from many nodes to a file system.

Of course, you will need a high-performance file system to support the high-speed access, and that could be a problem for some vendors, but the tools are there. I believe NFSv4.1 will be disruptive, as it will merge the SAN and NAS worlds over time (yet another argument in favor of an IP-based storage world). NAS vendors are going to have to build faster hardware and better file systems, and SAN vendors are going to have to team with file system vendors to develop joint products. This will all be very interesting, and I believe it could also help OSD, as larger, higher performance file systems will likely have more of the issues that are solved by OSD.

I am very skeptical of claims by vendors that their technology is disruptive, as I have seen far too many such claims never pan out, but we've covered a few technologies here that could turn out to be genuinely disruptive, and the implications for storage networks are very interesting.


Sorting Out Your Storage Options
By Jennifer Schiff

DAS, SAN, NAS... RAID, MAID, solid state technologies, grid storage, hard disk drive storage, tiered storage, tape storage... active storage, archival storage, remote storage, disaster recovery... self-healing disk drives, virtualization, de-duplication, thin provisioning...

It would take pages just to list all the types, makes, and models of enterprise storage options currently on the market. Then add a list of the features and benefits of each one, and it's almost enough to make a storage administrator in search of a new, additional, or supplemental storage system long for the days when a storage solution was whatever came with your server. Almost.

To make it easier for you to cut through at least some of the storage-decision-making clutter and make an informed purchasing decision, EnterpriseStorageForum.com spoke with a few storage analysts to gather advice to narrow down the number of choices and help you find a storage solution that's right for your enterprise.

What's Your Problem?

Before you even talk to a vendor, "you have to determine what problem is it that your enterprise is trying to solve... and define your pain points and requirements," said Ashish Nadkarni, principal consultant at GlassHouse Technologies.

You also need to know what it is you are actually storing, that is, how much and what kind of data (e.g., file-based, block-based, structured or unstructured), said Mark Peters, an analyst with Enterprise Strategy Group.

Other good questions to ask yourself and the people who will be using the storage, he said, include: How do you plan on utilizing this storage system? Is it for active storage or backup or archiving, or remote storage or disaster recovery? What applications are you running? Do you want the system to be automated? Do you need it to be scalable? How important are speed and performance?

"You need to start from what you want rather than what a vendor or group of vendors is trying to tell you


[you need]," said Peters, who added that "the challenge for the user is to actually know and define what it is they want."

To aid in that process, Greg Schulz, founder of and senior analyst at Storage IO, highly recommended drawing up a list divided into three columns or categories: in the first column, the features and functionality you must have; in the second, those things you want or need to have; and in the third, the features that would be nice to have.

For Schulz, must-haves include "availability, reliability, some level of performance, some level of capacity and scalability," specifically "RAID 1, RAID 5, RAID 6, failover, redundant controllers, ease of management, tiered storage, different types of drives (fast drives and slow drives) and tape." Yes, even tape, which all three analysts said isn't going away any time soon — and is actually a good, economical, "green" storage solution.

Things that fall into Schulz's want-to-have or nice-to-have bucket include de-duplication, thin provisioning, and snapshots: features that have generated a lot of buzz and may be very helpful but aren't absolutely essential to storing data.
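One way to put Schulz's three-column exercise to work is to write it down as a checklist you can score each vendor against. The structure below is our illustrative sketch, populated only with the example features named above:

storage_requirements = {
    "must_have": [
        "availability", "reliability",
        "required performance level", "required capacity and scalability",
        "RAID 1 / RAID 5 / RAID 6", "failover", "redundant controllers",
        "ease of management", "tiered storage",
        "mixed drive types (fast and slow)", "tape",
    ],
    "want_to_have": ["de-duplication", "thin provisioning", "snapshots"],
    "nice_to_have": [],  # features you'd take at the right price
}

def must_have_coverage(vendor_features: set[str]) -> float:
    """Fraction of the must-have column a given vendor actually covers."""
    required = storage_requirements["must_have"]
    return sum(f in vendor_features for f in required) / len(required)

print(f"{must_have_coverage({'availability', 'failover', 'tape'}):.0%}")  # 27%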

Above all, said all three analysts, stay focused on the essentials. If you happen to find a solution that meets all of your must-have requirements and can also provide you with some of your want-to-have or nice-to-have features — at the right price — then go for it.

Remember, "it's what you want out of a solution, not what a company wants to sell you," stressed Nadkarni. For example, if a certain amount of capacity is a must-have requirement, focus on that. If compliance is your main issue, make sure the solution you choose has a good track record when it comes to compliance. If you are looking for a disaster recovery solution, stay focused on that. And be sure to validate vendor claims by checking with customers and reviewing test results (for things like performance) if a company is new.


"Cloud computing" has been ill-defined and over-

hyped, yet storage vendors have been quick to trot

out their own "cloud storage" offerings and end

users are wondering whether there's significant

cost savings in these services for them, particularly 

in tough economic times.

"Cloud-speak" can be downright confusing. A recent

Storage Networking World conference track pro-

claimed that clouds "are an evolving approach to

providing users transparent IT services through ashared infrastructure of pools of systems and serv-

ices." Clouds "provide a vision of a frictionless econ-

omy enabled by lowering the barrier for entry and

reducing the penalty for failure." And clouds "are a

 vehicle to deliver infinite resources, a commitment

proportional to need, and cost-effective economies

of scale," albeit with a few caveats on existing infra-

structure, manageability, and security.

Surprisingly, Gartner considers the amorphous

nature of the term to be good news: "The very con-

fusion and contradiction that surrounds the term

'cloud computing' signifies its potential to change

the status quo in the IT market," the IT researchfirm said earlier this year. Gartner perhaps didn't

help matters any by defining cloud computing as a

"style of computing where massively scalable IT-

related capabilities are provided 'as a service' using 

Internet technologies to multiple external cus-

tomers."

 John Webster, principal IT advisor at Illuminata,

simplifies matters by advising users to "think of the

cloud as the Internet," delivering services and com-

puting resources.

Oddly enough, storage vendors developed some of 

the earliest cloud services, although it took another decade for the economics of Internet-based storage

to make sense.

"The application's storage is whatever sits up there

on that 'ethereal thing,'" said Webster. There are dif-

ferent ways to access that storage, whether as an

external service or setting up your own "cloud"

inside your enterprise firewall.

continued 

The Cloud Offers Promise

for Storage UsersByMarty Foltyn


Watch for the Warning Signs

While the analysts we spoke with believed good enterprise solutions far outweigh the bad, it's still possible to make a bad choice, particularly if you ignore your must-have list, base your decision on marketing hype, are too emotionally involved with a vendor or brand, act too quickly, or go with the low-ball quote without taking into account the total cost of ownership and whether the system actually addresses most of your needs.

So what are some easy ways to avoid making a bad storage decision?

"When you hear the words revolutionary, the only, the first, or the fastest or the most reliable, the alarm bells should be going off," said Schulz. If vendors make claims about having the best or the fastest performance, "have them back it up by showing you [using test results from organizations like SPEC, Microsoft ESRP, SPC, or TPC] and by comparing their performance to others.... And make sure it's an apples-to-apples comparison, not apples to oranges."

Another safeguard is to test all the systems you are considering — or at least see one in action at a customer site — and to speak with customers who've been using that solution for at least a few months.

"Don't settle for a WebEx demo," said Schulz. "Get your hands on a system if you can. Ask questions. Ask for references... but ask to hear about a story that didn't quite go well, though the customer still ended up buying."

The truth is, he said, that "every vendor out there will have problems at some point or another. And any vendor that tells you they've never had a problem, they've never had an interruption, that's an alarm bell. All vendors have issues. All technologies have issues at some point in time. What separates the vendors is how they respond to those issues. How do they prevent them from recurring? How do they manage them? And then also, have they improved their technology?"

Speaking of which, because technologies get updated or replaced all the time, before you buy anything, "grill the vendors on what their product road map is," said Nadkarni. "For example, if they just released a product a year ago, then there's a very good chance that in the next year they're not going to have anything very drastic coming out that will replace that




product. [But] if the product has been in the market for a few years, the vendor may be [coming out] with a brand new, completely redesigned product that's going to replace any and all products," which could pose a problem, he said. That's why it pays to see a product road map: it helps you determine whether the system you are considering is going to need to be upgraded or replaced sooner rather than later.

As an example, Nadkarni points to modular storage arrays. Vendors, he said, have been moving "away from the old loop-based backend drives to a point-to-point system. [But] there are a lot of arrays out there in the market that have loop-based drives — and they're all being replaced, slowly, by point-to-point-based drives. So if you're buying a modular storage array, definitely check if that storage array is due for a refresh, because once that point-to-point drive comes out, [your] loop-based one is going to be obsolete, and you're going to have a disruptive upgrade to go from one to another."

That's why establishing a good rapport with a vendor is important — and why you should talk to customers, to see if that vendor will be there for you when you need help, not just during installation.

"We tend to get so embroiled in needs and feeds and speeds that the down-to-earth relationships can get missed," said Peters. "Reputation, references, and (where relevant) experience are crucial... to storage system choice."

And if you do not feel comfortable making an important storage decision on your own, get help in the form of an independent consultant.

Take Your Time

Above all, be patient when choosing a critical storage system. "Patience is a virtue," said Nadkarni, who thinks that phrase should be an operational guideline. "Never hurry into a large decision. If you are proactive about how you manage your [storage] environment, you will know ahead of time what you need to do — and to purchase — to keep it running.

"If you are under the gun to make a decision quickly, chances are you're going to make a mistake," he said. "But when you have time on your side, you can make sure all your i's are dotted and your t's are crossed — and be more assured that you're making the right decision."


Choosing the Right High-Performance File System
By Drew Robb

There are a lot of high-performance file systems out there: Sun QFS, IBM GPFS, Quantum StorNext, Red Hat GFS, and Panasas, to name a few. So which is best? It depends on whom you ask and what your needs are.

"We typically compete with NetApp OnTap or OnTap GX, EMC, IBM GPFS, HP PolyServe or Sun's open source research project called Lustre," said Len Rosenthal, chief marketing officer of Panasas Inc. "Although we have replaced systems running Sun's QFS, we have never really competed with them in sales situations."

Rosenthal claims that Quantum StorNext and HP PolyServe can only deal with a maximum of 16 clustered NFS servers, so they don't tend to compete in scale-out NAS bids. Similarly, he said that IBM GPFS and Sun Lustre, which are both parallel file systems like Panasas PanFS, are mainly used by universities and government research organizations for scratch storage, as they don't provide high enough I/O rates or a sufficient range of data management tools such as snapshots.

Tough talk indeed from Panasas. So how do its rivals respond to these claims?

Todd Neville, GPFS offering manager at IBM, said the GPFS installation base is diverse, including HPC, retail, media and entertainment, financial services, life sciences, healthcare, Web 2.0, telco, and manufacturing. Neville is also dismissive of the I/O rate claims.

Greg Nuss, director of the software business line at Quantum, is more emphatic, stating that the statement by Panasas about StorNext's capabilities is completely false.

"Each node in a StorNext cluster can act as an NFS server, each presenting the common file system namespace at the back end," he said.


"Today our stated node support is 1,000 nodes, and we support both SAN-attached as well as LAN-attached nodes in the cluster. We have practical installations in the 300 to 400 node range deployed today. We don't typically run into Panasas in the market because StorNext is not typically deployed in scale-out NAS configurations, but rather in high-performance workflow and archive configurations."

HP, meanwhile, also took umbrage at the Panasas claims. The company said that HP Scalable NAS does not have an architectural limit on the number of NAS File Services server nodes that a customer can use in their clusters.

"The stated 16 server node limit is a test limit only," said Ian Duncan, director of marketing for NAS for HP StorageWorks. "HP has a number of NAS File Services customers using clusters with more than 16 server nodes."

Duncan said Panasas, Sun QFS, IBM GPFS, and Quantum StorNext are not true symmetrical file systems, but are cluster file systems based on master servers — whether for metadata operations, locking operations, or both — which are relatively easy to implement as an extension of traditional, single-node systems. However, Duncan believes they suffer from performance and availability limitations inherent in the master server's singular role.

"As servers are added, the load on the master server increases, undercutting performance and subjecting more nodes to loss of functionality in the event of a master server's failure," said Duncan. "By contrast, the 4400 Scalable NAS File Services uses the HP Clustered File System (CFS), which exploits multiple, independent servers to provide better scalability and availability, insulating the cluster from any individual node's failure or performance limitation."

With that out of the way, let's take a closer look atsome of these file systems.

Panasas PanFS

The Panasas PanFS parallel file system is an object-based file system designed for scale-out applications that require high performance in both I/O and bandwidth. Unlike NFS or CIFS, which Panasas also supports, PanFS uses the parallel DirectFLOW protocol, which is the foundation of pNFS (Parallel NFS), the major advance in the upcoming NFS version 4.1. The key benefit of Panasas parallel storage is said to be superior application performance.

Where NFS servers require that all I/O requests go through a single NAS filer head, PanFS enables parallel transfer of data directly from the clients or server nodes into the storage system. With Panasas, the NAS head is removed from the data path and is no longer the I/O bottleneck. Case in point: Panasas parallel storage is installed with Roadrunner, the world's highest-performance computer system, at Los Alamos National Lab in New Mexico. It generates close to 100 GB/s to a single shared file system.

"As a result of this architecture, Panasas parallel storage systems scale to thousands of users/servers and tens of petabytes, and can generate over 100 GB/s in bandwidth," said Rosenthal. "Other key features include its software-based RAID architecture that enables parallel RAID reconstructions that are 5X to 10X faster than most storage systems."

PanFS also includes Panasas Tiered Parity technology, which automatically detects and corrects unrecoverable media errors, which is important during reconstructions. Finally, this file system is optimized for use with many simulation and modeling applications.

Note, though, that Panasas systems are designed for file storage, not block storage. Therefore, PanFS is typically not installed for transaction-oriented applications such as ERP, order entry, or CRM. Instead, it tends to be deployed where a large number of users or server nodes need shared access to a common pool of large files.

HP File Services

HP claims superiority by pushing symmetry over parallelism. The product is aimed at medium-sized customers who need to seamlessly increase application throughput far in excess of traditional NAS products and easily grow storage capacity online without service disruption. HP StorageWorks 4400 Scalable NAS File Services includes an HP StorageWorks 4400 Enterprise Virtual Array with dual array controllers and 4.8 TB of


storage, three file serving nodes, management and replication software, and support for Windows or Linux. With three file serving nodes and dual array controllers, the 4400 Scalable NAS File Services does not have a single point of failure.

Downsides?

"The 4400 Scalable NAS File Services is less suitable for high-performance computing applications that require more than 6 GB/sec of throughput," said Duncan.

Quantum StorNext

StorNext is certainly the platform of choice for anyone using Apple. Further, in media-rich environments where Apple, Windows, and other systems must interact, StorNext appears to have the market cornered. For example, StorNext is commonly used in demanding video production and playback applications because of its ability to handle the large capacity and frame rates of high-definition content. How does it do beyond that niche?

"The key differentiators between StorNext and other shared file systems are our tight level of integration with the archive tier (StorNext Storage Manager) along with the robust tape support, as well as the broad OS platform support," said Nuss. "No other file system can support varieties of Linux, Unix, Apple, and Windows within a single cluster environment."

The StorNext file system is a heterogeneous, shared file system with integrated archive capability. It enables systems to share a high-speed pool of images, media, content, analytical data, and other files so they can be processed and distributed rapidly, whether SAN- or LAN-connected. According to Nuss, it excels at both high performance data rates and high capacity, in terms of both file size and the number of files in the file system.

IBM GPFS

The General Parallel File System (GPFS) from IBM has been out for a few years.

"GPFS is a high-performance, shared disk, clustered file system for AIX and Linux," said John Webster, an analyst at Illuminata Inc.

Originally designed for technical high-performance computing (HPC), it has since expanded into environments that require performance, fault tolerance, and high capacity, such as relational databases, CRM, Web 2.0 and media applications, engineering, financial applications, and data archiving.

"GPFS is built on a SAN model where all the servers see all the storage," said Neville. "To allow data access from systems not attached to the SAN, GPFS provides a software simulation of a SAN, allowing access to the data using general purpose networks such as Ethernet."

Data is striped across all the disks in each file system, which allows the bandwidth of every disk to be used to serve a single file or to produce aggregate performance across multiple files. This performance can be delivered to all the nodes that make up the cluster. GPFS can also be configured so that there are no single points of failure. On top of the core file service features, GPFS provides functions such as the ability to share data between clusters and a policy-based information life cycle management (ILM) tool that migrates data among different tiers of storage, which can include tape.
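Here is a minimal sketch of that striping idea: blocks laid out round-robin across the disks, so a large sequential file naturally draws on every spindle. The block size and disk count are assumed round numbers for illustration, not GPFS parameters.

BLOCK_SIZE = 256 * 1024   # bytes per stripe block (assumed)
NUM_DISKS = 8             # disks backing the file system (assumed)

def block_location(file_offset: int) -> tuple[int, int]:
    """Map a byte offset to (disk index, block index on that disk)."""
    block_no = file_offset // BLOCK_SIZE
    return block_no % NUM_DISKS, block_no // NUM_DISKS

# A 2 MB sequential read touches all eight disks, one block apiece:
disks_hit = {block_location(off)[0] for off in range(0, 2 * 1024 * 1024, BLOCK_SIZE)}
print(sorted(disks_hit))  # [0, 1, 2, 3, 4, 5, 6, 7]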

In addition, GPFS can be used at the core of a file-serving NAS cluster where all the data is served via NFS, CIFS, FTP, or HTTP from all nodes of the cluster simultaneously. Nodes or storage devices can be added to or removed from the cluster as demands change. The IBM Scale Out File Services (SoFS) offering, based on GPFS, includes additional functionality.

"As file-centric data and storage continue to expand rapidly, NAS is expected to follow the trend of HPC, Web serving, and other similar industries into a scale-out model based on standard low-cost components, which is a core competency for GPFS," said Neville.


Managing Storage in a Virtual World
By Drew Robb

Demand for storage has been growing rapidly for some time to meet ever-expanding volumes of data. And it seems that the more common virtualized servers become, the more storage is required. Together, the two trends — data growth and virtualization — are becoming a potent combination for storage growth.

"Storage capacity continues to grow at a rate of nearly 60 percent per year," said Benjamin Woo, an analyst at IDC. "2008 is likely to represent an inflection point in the way applications and storage will be interfaced. And virtual servers will emerge as the killer application for iSCSI."

Are virtual machines (VMs) accelerating storage growth? According to Scott McIntyre, vice president of software and customer marketing at Emulex, VMware is typically given a larger storage allocation than normal. This acts as an extra reserve to supply capacity on demand to the various virtual machines as they are created. In fact, VMware actually encourages storage administrators to provision far more storage than is physically present, for example, giving each of 20 VMs a 25 percent share of capacity. It also makes it easier to provision away an awful lot of storage.
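The arithmetic behind that example shows why such allocation is called overprovisioning. The VM count and per-VM share come from McIntyre's example; the physical pool size below is an assumed round number:

PHYSICAL_TB = 10      # assumed physical capacity of the pool
VMS = 20
SHARE_PER_VM = 0.25   # fraction of the pool promised to each VM

promised_tb = VMS * SHARE_PER_VM * PHYSICAL_TB   # 50 TB promised
overcommit = VMS * SHARE_PER_VM                  # 5x the storage that exists
print(f"{promised_tb:.0f} TB promised against {PHYSICAL_TB} TB physical ({overcommit:.0f}x)")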

In theory, this is supposed to make storage more efficient by improving utilization rates. But could it inadvertently be doing the opposite?

"VMware virtualized environments do not inherently need more storage than their physical counterparts," said Jon Bock, VMware's senior product marketing manager. "An important and relevant point is that customers do often change the way they use and manage storage in VMware environments to leverage the unique capabilities of VMware virtualization, and their storage capacity requirements will reflect that."

What seems to be happening is that companies are adapting their storage needs to take advantage of the capabilities built into virtual environments. For example, the snapshot capability provided by VMware's storage interface, VMFS (Virtual Machine File System), is used to enable online backups, to generate archive


copies of virtual machines, and to provide a known good copy for rollback in cases such as failed patch installs, virus infections, and so on. While you can do a lot with it, it also requires a lot more space.

Solving Management Headaches

Perhaps the bigger problem, however, is the management confusion inherent in the collision of virtual servers and virtual storage.

"The question of coordinating virtualized servers and virtualized storage is a particularly thorny issue," said Mike Karp, an analyst with Enterprise Management Associates. "The movement toward virtualizing enterprise data centers, while it offers enormous opportunities for management and power use efficiencies, also creates a whole new set of challenges for IT managers."

Virtualization, after all, is all about introducing an abstraction layer to simplify management and administration. Storage virtualization, for example, refers to the presentation of a simple file, logical volume, or other storage object (such as a disk drive) to an application in a way that hides the physical complexity of the storage from both the storage administrator and the application.

However, even in one domain — such as servers — this "simple layer" can get pretty darn complicated. Just look at what it does to the traditional art of CPU measurement. Take as an example an IBM micropartition in an AIX simultaneous multi-threaded (SMT) environment that consists of two virtual CPUs in a shared processor pool, with a single process running that uses, say, 45 seconds of a physical CPU in a 60-second interval. Measuring such an environment presents some challenges: the results can differ depending on whether SMT is enabled or disabled, and whether the processor is capped or uncapped.

The CPU statistic %busy represents the percentage of the virtual processor capacity consumed. In this example, it might come out as 37.5 percent. Now take another CPU measurement, this time by LPAR (logical partition), known as %entc. This represents the percentage of the entitled processor capacity consumed, and it comes out as 75 percent. Take another metric, %lpar_pool_busy, the percentage of the processor pool capacity consumed: it comes out at only 18.75 percent. Or %lpar_phys_busy, the percentage of the physical processor capacity consumed: it scores 9.38 percent. And there are other metrics that might show completely different results.
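All four percentages describe the same 45 seconds of work, just divided by different capacities. In the small worked example below, the interval, CPU-seconds, and virtual CPU count come from the article's example; the entitlement, pool size, and machine size are values we inferred to reproduce the quoted percentages:

INTERVAL = 60.0      # measurement interval, seconds
CPU_USED = 45.0      # physical CPU-seconds consumed by the process
VIRTUAL_CPUS = 2     # virtual processors in the micropartition
ENTITLEMENT = 1.0    # entitled processors (inferred)
POOL_CPUS = 4        # processors in the shared pool (inferred)
PHYS_CPUS = 8        # physical processors in the machine (inferred)

def pct_of(capacity_cpus: float) -> float:
    """Percent of a given capacity consumed over the interval."""
    return 100 * CPU_USED / (capacity_cpus * INTERVAL)

print(f"%busy           = {pct_of(VIRTUAL_CPUS):.2f}")  # 37.50
print(f"%entc           = {pct_of(ENTITLEMENT):.2f}")   # 75.00
print(f"%lpar_pool_busy = {pct_of(POOL_CPUS):.2f}")     # 18.75
print(f"%lpar_phys_busy = {pct_of(PHYS_CPUS):.2f}")     # 9.38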

"A capacity planner might look at one score and thinkutilization is low, whereas another takes a different viewand sees an entirely different picture," said Jim Smith,an enterprise performance specialist at TeamQuestCorp. of Clear Lake, Iowa. "So who's right? It's not aneasy question to answer with virtualized processors.Each answer is correct from its own perspective."

Finding the Root Cause

To make things more challenging, there is the ongoing trend of marrying virtual servers with virtual storage. That means having to manage across two abstraction layers instead of one. Now suppose something goes wrong. How do you find out where the problem lies? Is it on the application server, on the storage, on the network, or somewhere in between?

"Identifying the root cause of a problem that potentially could be in any one of several technology domains (storage, servers, network) is not a problem for the faint of heart and, in fact, is not a problem that is always solvable given the state of the art of the current generation of monitoring and analysis solutions," said Karp. "Few vendors offer solutions with an appropriate set of cross-domain analytics that allow real root cause analysis of the problem."

EMC — majority owner of VMware — now looks pretty smart for its acquisition of Smarts a little while back. It is heading down the road of being able to provide at least some of the vitally needed cross-virtualization management. And NetApp is heading down the same road with its acquisition of Onaro.

"Onaro extends the NetApp Manageability Software family, as SANscreen's VM Insight and Service Insight products help minimize complexity while maximizing return," said Patrick Rogers, vice president of solutions marketing at NetApp. "These capabilities make Onaro a key element in NetApp's strategy to help customers improve their IT infrastructure and processes."


For virtual machine environments, VM Insight provides virtual machine-to-disk performance information to optimize the number of virtual machines per server. For large-scale virtual machine farms, this type of cross-domain analytics assists in maintaining application availability and performance. SANscreen Service Insight makes it easier to map the resources used to support an application in a storage virtualization environment. It provides service-level visibility from the virtualized environment to the back-end storage systems.

Meanwhile, the management of multiple virtualization technologies is coming together under the banner of enterprise or data center virtualization. This encompasses server virtualization, storage virtualization, and fabric virtualization.

"IT managers are increasingly considering the prospect of a fully virtualized data center infrastructure," said Emulex's McIntyre. "One of the characteristics of enterprise data centers is the existence of storage area networks. There is a high degree of affinity between SANs and server virtualization, because the connectivity offered by a SAN simplifies the deployment and migration of virtual machines."

SAN-based storage can be shared between multiple servers, enabling data consolidation. Conversely, a virtual storage device can be constructed from multiple physical devices in a SAN and made available to one or more host servers. Not surprisingly, then, not only are storage devices being virtualized, but there is increasing interest in virtualizing the SAN fabric itself, in order to consolidate multiple physical SANs into one logical SAN or segment one physical SAN into multiple logical storage networks.

Emulex, for example, is providing the virtual plumbing to handle some of the connectivity gaps between storage and server silos. Emulex LightPulse Virtual HBA technology virtualizes SAN connections so that each virtual machine has independent access to its own protected storage.

"The end result is greater storage security, enhanced management and migration of virtual machines, and the ability to implement SAN best practices such as LUN masking and zoning for individual virtual machines," said McIntyre. "In addition, Virtual HBA technology allows virtual machines with different I/O workloads to co-exist without impacting each other's I/O performance. This mixed workload performance enhancement is crucial in consolidated, virtual environments where multiple virtual machines and applications are all accessing storage through the same set of physical HBAs."

No doubt, over time, more and more pieces of the virtual plumbing and a whole lot more analytics will have to be added to the mix to make virtualization function adequately in an enterprise-wide setting. Until then, get ready for an awful lot of complexity in the name of simplification.

"It is absolutely necessary to understand the topology, in real time — or at the very least, in near real-time — in order both to identify problems and to manage the entire environment proactively as a system and preempt problems," said Karp. "In a best-case scenario, a constantly updated topology map would be available for each process being monitored."


The Trouble with Virtual Disaster Recovery
By Richard Adhikari

As enterprises virtualize their data centers to cut costs and consolidate their servers, they may be setting themselves up for big trouble.

According to the latest disaster recovery research report from Symantec, based on surveys of 1,000 IT managers in large organizations worldwide, 35 percent of an organization's virtual servers are not included in its disaster recovery (DR) plans.

Worse yet, not all virtual servers included in an organization's DR plan will be backed up. Only 37 percent of respondents to the survey said they back up more than 90 percent of their virtual systems.

When companies virtualize, they need to overhaul their backup and DR plans, Symantec says; the survey found that 64 percent of organizations are doing so.

"That's no surprise, because virtualization has had a huge impact on the way enterprises do disaster recovery," said Dan Lamorena, Symantec senior product marketing manager for high availability and disaster recovery.

So why are virtual servers being left out of DR plans or, if they're included, not being backed up? Because enterprise IT just does not have the right tools to back up virtual servers, according to Symantec.

The biggest problem for 44 percent of North American respondents was the plethora of different tools for physical and virtual environments. There are so many that IT doesn't know what to use and when.

Another 41 percent complained about the lack of automated recovery tools. Much of the disaster recovery process is manual, although VMware recently unveiled a tool to automate the run book.

Another 39 percent of respondents said the backup tools available are inadequate.

Hewlett-Packard, IBM, CA, and smaller vendors such as ManageIQ, Avocent, and Apani offer tools to manage both the virtual and physical environments.


And companies like Hyperic are bringing out new tools.

However, virtual server management tools, being relatively new, are not as sophisticated as their counterparts for the physical environment. Nor have they been around long enough for users to be familiar with them. Provisioning, or setting up, virtual machines from physical ones, and vice versa, can also be a problem, and tools for this have only recently emerged.

"Virtualization makes some aspects of backup and disaster recovery more difficult," said Eric Schou, Symantec senior product marketing manager for NetBackup. "IT shops are still struggling with the steep learning curve."

Porting over solutions from the physical environment won't work, Schou said. "IT shops need to get solutions that are finely tuned for virtualization," he added.

Failing DR

Judging from the results of the survey, IT is still not as familiar with DR as it should be, and DR testing is a mess.

A whopping 30 percent of respondents said their DR tests failed. That's better than the 50 percent failure rate in 2007, but it's still pretty scary.

For 35 percent of the respondents, the tests failed because "people didn't do what they were supposed to do," Lamorena said. This means that much of recovery is still a manual process, and companies must begin looking at automation, he said.

Another cause is that tests are not run frequently enough. That's because "when you run a test, it disrupts employees and customers," Lamorena said. He added that 20 percent of the respondents said their revenue is hurt by DR tests, so "the tests cause the same pain to their customers as if they had a real disaster."

Finally, the survey found that top-level executive involvement in DR planning has fallen. "Last year, the C-level involvement on disaster recovery committees was 55 percent; this year, it's 33 percent," Lamorena said. C-level executives are CIOs, CTOs, and CEOs.

Lamorena finds the reduction in top-level involvement disturbing because it could lead to more problems with DR. "That's a huge drop, and we've been thinking about this day and night," he said. "What's alarming is, companies may be getting a little lax and don't think they'll be affected by a disaster."
