CIO May 15 2007 Issue


Technology, Business, Leadership


Page 1: CIO May 15 2007 Issue


Page 2: CIO May 15 2007 Issue

From the Editor

Why is it that, at what seems like the end of a long day, I still find CIOs slogging it away in office? When I asked a CIO from Mumbai this, he quipped: "The only thing that is certain in a CIO's life is that no two days end in quite the same way."

A bit of the issue, I figure, stems from IT's legacy as a support function in organizations. If the CIO role is that of just another business unit head, albeit with a different set of skills, why should he feel the heat more? How is the CIO's role strategic if he's in office at 8.30 a.m. and leaves at 2.30 a.m. through the five months of a big-bang SAP rollout — Saturdays, Sundays and other holidays be damned?

Some CIOs feel that with an increasing dependency on technology in enterprises, business has begun to view applications like today's gadgets, looking for quick, low-cost deployment. ('Why can't you develop the app in-house' is a common refrain that CIOs seem to face.) The result: endless rounds of meetings, crises, firefighting and extended days.

Another IT leader I was speaking with recently shared his concern that some of his peers are beginning to feel the heat and possibly heading for burnout. He was also categorical that the stress levels that many CIOs in India face are a result of their playing technically-skilled entertainers providing a high-energy show, juggling multiple roles and hats in the process.

Given this, it was refreshing to speak to a CIO in Bangalore who doesn't show up in office on weekends and at least thinks in terms of work-life balance. The issue is more one of availability than a 24x7 physical presence, he feels, adding that the key to beating stress as a CIO is effective delegation and good governance.

He's currently in the process of setting up a program management office to oversee projects, liaise with stakeholders and manage vendors. This, he feels, is going to help him stop dealing with daily operational issues, and instead allow him to focus on those of a more strategic nature, like working on the application roadmaps of his organization. And give him time to practice for the Bangalore Marathon.

Let me ask a final question: do you find your role stressful? Write in and let me know your thoughts.

How is the CIO’s role strategic if he’s in office at 8.30 a.m. and leaves at 2.30 a.m., month after month?

Why does a CIO’s day never seem to end?

Vijay Ramachandran,[email protected]

Are You Working Tonight?



Page 3: CIO May 15 2007 Issue

Cover: Virtualization 101 | iSCSI, The Rising Star in the Enterprise | Lean and Mean | The Three Pillars of Data

Cover Story 30 | Virtualization 101: Keep pace with ever-increasing storage requirements without skipping a service level. Feature by Steve Norall

Columns

26 | Data Showdown: In the land where data is multiplying uncontrollably, you can either outlaw it or play lawmaker. Column by Mark Hall

28 | Moving to File Virtualization: If you are going to be at the frontlines, this is what you need to know. Column by Bert Latamore

Content


May 15 2007 | Vol/2 | Issue/13

Features

38 | iSCSI: The Rising Enterprise Star. iSCSI won't replace Fibre Channel anytime soon. But for SMBs and remote offices, the low price and overhead are just right. Feature by Mark Leon

44 | The Pillars of Data: Apart from being the very driver of enterprise storage, data is the lifeblood of your business. Here's how to keep it healthy and safe, while ensuring its availability to users. Feature by Paul F. Roberts, Peter Wayner & Doug Dineley

52 | Lean and Mean: How do you tame your storage costs? By reining in server sprawl and corralling those space and energy requirements. Here's how three cowboys did it. Feature by Stacy Collett

Cover: Illustration by Anil T



Page 4: CIO May 15 2007 Issue

Content (cont.)

Trendlines | 20
IT Report | No Space for Data
Industry | Tape Library Sales Slip
Virtualization | Fostering Virtualization
Innovation | Bacteria to Store Data
IT Management | Upgrade Worries Outweigh Integration

Intelligence | Top 10 Blogs
Virtualization | Virtualize to Cut Costs
Budget | 7 Ways to Smaller Bills
Wireless | Unplug Your Backups

Essential Technology | 68
Storage Innovation | Stacking IT Up By Mary K. Pratt

Storage | The Trouble With Storage Management By Mario Apicella

From the Editor | 2 Are You Working Tonight? | Why does a CIO’s day never seem to end? By Vijay Ramachandran

Inbox | 16


Departments

NOW ONLINE

For more opinions, features, analyses and updates, log on to our companion website and discover content designed to help you and your organization deploy IT strategically. Go to www.cio.in


Govern | E-Governance Contrarian | 64: In spite of all the interest surrounding e-governance in India, a vast majority of these projects fail. Professor Rahul De', who holds the Hewlett-Packard chair on ICT for Sustainable Economic Development at IIM Bangalore, takes a critical look at the reasons behind the failures, and offers suggestions to improve the success rate. Interview by Balaji Narasimhan



Page 5: CIO May 15 2007 Issue

Management
Publisher & Editor: N. Bringi Dev
COO: Louis D'Mello

Editorial
Editor-in-Chief: Vijay Ramachandran
Executive Editor: Bala Murali Krishna
Bureau Head - North: Sanjay Gupta
Special Correspondents: Balaji Narasimhan, Kanika Goswami
Senior Correspondent: Gunjan Trivedi
Chief Copy Editor: Kunal N. Talgeri
Senior Copy Editor: Sunil Shah

Design & Production
Creative Director: Jayan K Narayanan
Designers: Binesh Sreedharan, Vikas Kapoor, Anil V.K., Jinan K. Vijayan, Sani Mani, Unnikrishnan A.V., Girish A.V., MM Shanith, Anil T, PC Anoop, Jithesh C.C., Suresh Nair, Prasanth T.R
Photography: Srivatsa Shandilya
Production: T.K. Karunakaran, T.K. Jayadeep

Marketing and Sales
VP, Intl' & Special Projects: Naveen Chand Singh
VP Sales: Sudhir Kamath
Brand Manager: Alok Anand
Marketing: Siddharth Singh
Bangalore: Mahantesh Godi, Santosh Malleswara, Ashish Kumar, Kishore Venkat
Delhi: Nitin Walia, Aveek Bhose, Neeraj Puri, Anandram B, Muneet Pal Singh, Gaurav Mehta
Mumbai: Parul Singh, Chetan T. Rai, Rishi Kapoor
Japan: Tomoko Fujikawa
USA: Larry Arthur, Jo Ben-Atar
Singapore: Michael Mullaney
UK: Shane Hannam

Events
General Manager: Rupesh Sreedharan
Managers: Chetan Acharya, Pooja Chhabra

Advertiser Index

All rights reserved. No part of this publication may be reproduced by any means without prior written permission from the publisher. Address requests for customized reprints to IDG Media Private Limited, 10th Floor, Vayudooth Chambers, 15–16, Mahatma Gandhi Road, Bangalore 560 001, India. IDG Media Private Limited is an IDG (International Data Group) company.

Printed and Published by N Bringi Dev on behalf of IDG Media Private Limited, 10th Floor, Vayudooth Chambers, 15–16, Mahatma Gandhi Road, Bangalore 560 001, India. Editor: N. Bringi Dev. Printed at Rajhans Enterprises, No. 134, 4th Main Road, Industrial Town, Rajajinagar, Bangalore 560 044, India

Avaya 4 & 5
Canon IBC
EMC 59
Emerson 11
Fujitsu 35
GE 25
HP 1
IBM Split Cover, 18 & 19
Krone 9
Lenovo BC
Microsoft IFC, 14, 15, 23, 32 & 33
Oracle 13
Seagate 61
Siemon 3
Wipro 6 & 7

This index is provided as an additional service. The publisher does not assume any liabilities for errors or omissions.

Advisory Board

Abnash Singh, Group CIO, Mphasis
Alaganandan Balaraman, Executive VP (IT & Corporate Development), Godfrey Phillips
Alok Kumar, Global Head - Internal IT, Tata Consultancy Services
Anwer Bagdadi, Senior VP & CTO, CFC International India Services
Arun Gupta, Customer Care Associate & CTO, Shopper's Stop
Arvind Tawde, VP & CIO, Mahindra & Mahindra
Ashish K. Chauhan, President & CIO - IT Applications, Reliance Industries
C.N. Ram, Head - IT, HDFC Bank
Chinar S. Deshpande, CIO, Pantaloon Retail
Dr. Jai Menon, Director (IT & Innovation) & Group CIO, Bharti Tele-Ventures
Manish Choksi, Chief - Corporate Strategy & CIO, Asian Paints
M.D. Agrawal, CM - IT, Refineries, Bharat Petroleum Corporation Limited
Rajeev Shirodkar, VP - IT, Raymond
Rajesh Uppal, Chief GM IT & Distribution, Maruti Udyog
Prof. R.T. Krishnan, Professor, Corporate Strategy, IIM-Bangalore
S. Gopalakrishnan, President, CEO and Joint MD, Infosys Technologies
Prof. S. Sadagopan, Director, IIIT-Bangalore
S.R. Balasubramanian, Group CIO, ISG NovaSoft
Satish Das, CSO, Cognizant Technology Solutions
Sivaramakrishnan, Executive Director, PricewaterhouseCoopers
Dr. Sridhar Mitta, MD & CTO, e4e
S.S. Mathur, GM - IT, Centre for Railway Information Systems
Sunil Mehta, Sr. VP & Area Systems Director (Central Asia), JWT
V.V.R. Babu, Group CIO, ITC



Page 6: CIO May 15 2007 Issue

An Evolving Breed

The three data points you mentioned in your editorial (Threatened Existence, May 1, 2007) rightly signify the evolving world of business technology. However, the inference that these are threats to the existence of the CIO is a stretch.

The first two data points — that top management expects CIOs to be more business savvy and, second, that business managers are getting more clued into IT — clearly point to the evolution of the IT function in the organization. We have come a long way from operating as a backend support function. Today, it is one of the core business functions, far more integrated into the business processes and sometimes creating the new business processes.

The outsourcing phenomenon is not restricted to IT. Even manufacturing is being contracted out. Fundamentally, principles of core versus context are in play, which are equally applicable to the IT function. The core IT functions (vision, strategy, business alignment, emerging technology evaluation, advising businesses on relevant technologies and opportunities to leverage them, leveraging organizational synergy, facilitating knowledge management, managing change, IT governance, IT portfolio management, vendors and partners management, project and program management) will continue to remain in-house. The CIO's role thus is evolving into a new profile more akin to a core business function and becoming an essential part of top management.

It may be easier to appreciate the CIO role if we do not view it as an evolution from the earlier EDP head, but rather as an emerging business function managing information technology in the new economy — dynamic, global, uncertain, ruthless, enabled and driven by information technology.

Arvind Tawde
VP & CIO, Mahindra & Mahindra

In reference to your editorial (Threatened Existence, May 1, 2007), I believe that the role of the CIO has just started evolving. Over half a decade ago, this role was virtually unheard-of. With the passage of time, the role is becoming more complex and the question of its disappearance is nowhere on the horizon.

It is a fact that CIOs are expected to be more business savvy. This is especially so with CIOs who tread the technical path in the technology sector. They cannot expect the business and top management to pat their backs or applaud them for a brilliant technical feat. Technologically, research-oriented CIOs will have to shape up to the business realities or the environment will forcibly shape them.

There is another class of CIOs who have come without any technological background. They have become de facto CIOs due to an HR decision or management's inability to get the right person from the market. Such CIOs need to be tech savvy. The CIO role is a techno-business role. It will not disappear due to outsourcing.

Though business is getting tech savvy, this is true mainly at the user level. In today's competitive world, the selection of technology that maps to the business can make or mar the business.

It would be suicidal to outsource this. A consultant can give you a superficial view of which technology to use, but a techno-business CIO who knows the business can make a great difference to the organization.

Even selection of the outsourcing consultant can make a difference. That is also the CIO’s job.

Who will play the strategic role of applying technology? Obviously, the CIO. I do not think we can outsource that. If we can, then we may as well outsource the MD or the chairman. Business is more complex than this. The CIO role will evolve and grow, not diminish.

Bhushan Akerkar

Executive Director, IS & Technology

AC Nielsen, South Asia/India


Reader Feedback

What Do You Think?

We welcome your feedback on our articles, apart from your thoughts and suggestions. Write in to [email protected]. Letters may be edited for length or clarity.


"Who will play the strategic role of applying technology? Obviously, the CIO. I do not think we can outsource that. If we can, then we may as well outsource the MD or the chairman."



Page 7: CIO May 15 2007 Issue

businesses of all sizes, agencies, governments and associations will be responsible for the security, privacy, reliability and compliance of at least 85 percent of that same digital universe. In 2006, just the e-mail traffic from one person to another (excluding spam) accounted for 6 exabytes (or 3 percent) of the world’s data.

Notably, TheInfoPro’s Wave-9 Survey of companies showed about 70 percent of corporate data are duplicates. TheInfoPro did not survey small companies or small home offices, the ranks of which represent 700,000 companies with revenue of Rs 900 crore or less. But Robert Stevenson, managing director at TheInfoPro, estimates those small companies have from 500GB to 1TB of data today, and that they are experiencing the same exponential data growth as larger companies.

“That’s why there’s so much excitement around data de-duplication technology,” says Stevenson. De-duplication technology eliminates copies of data, storing only one unique version.
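The mechanics are easy to sketch: carve incoming data into chunks, fingerprint each chunk with a hash, and keep a chunk only the first time its fingerprint appears; later copies are stored as references. A minimal Python illustration of that content-hash idea (the fixed 4KB chunks, SHA-256 and the class name are assumptions made for this sketch; shipping products typically use variable-size chunking and far more careful indexing):

import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept exactly once."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}                       # fingerprint -> chunk bytes
    def write(self, data: bytes) -> list:
        """Split data into chunks, store only unseen ones, return the recipe."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.chunks:          # first time this content is seen
                self.chunks[fp] = chunk
            recipe.append(fp)                  # duplicates cost only a reference
        return recipe
    def read(self, recipe: list) -> bytes:
        return b"".join(self.chunks[fp] for fp in recipe)

store = DedupStore()
r1 = store.write(b"same backup payload " * 1000)
r2 = store.write(b"same backup payload " * 1000)   # second copy adds no new chunks
assert store.read(r1) == store.read(r2)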

According to IDC, most of the data today is being created by three major analog to digital conversions: film to digital image capture, analog to digital voice and analog to digital TV.

For example, two-year-old video-sharing website YouTube hosts 100 million digital video streams a day, and more than a billion digital songs a day are shared over the Internet in MP3 format. The CIO of Chevron, John E. Bethancourt, says his company accumulates 2TB of data every day.

There is so much data out there that 20 percent of Fortune 1000 firms hand the responsibility for storing all that data off to a third-party managed storage provider, according to TheInfoPro.

Images captured by more than 1 billion devices in the world, from digital cameras and camera phones to medical scanners and security cameras, comprise the largest component of the digital universe. They are replicated over the Internet on private organizational networks, by PCs and servers, in data centers, in digital TV broadcasts, and on digital projection movie screens, according to IDC.

—By Lucas Mearian

(From Page 17)



...and Most of it Is Personal

Fostering Virtualization

Virtualization: The first wave of network-based storage virtualization fell apart. It was too new, too untested and required companies to install it in one of the most sensitive parts of the corporate infrastructure: in front of expensive storage arrays and behind mission-critical, highly visible applications. This was a strategy destined for failure.

Fast forward to today, and network-based storage virtualization is hanging tough, growing up and spawning a second wave that this time should gain some momentum.

The reasons for this growing momentum are practical. Almost every storage manager in any size shop manages tens or hundreds of servers with multiple terabytes behind them. And with 1TB disk drives now a reality, the amount of storage they will manage is certain to increase.

Yet, as storage capacities increase, it can cause storage managers to take their eye off the ball. Configuring and installing new capacity and decommissioning old storage arrays is a time-consuming task and results in routine tasks like data migrations becoming a logistical nightmare. 10-gigabit Ethernet and iSCSI complicate this scenario.

Many corporate servers are still not SAN-connected since it is still too expensive to connect them. 10-gigabit Ethernet and iSCSI remove the cost barrier, but with more server and storage network connections, storage complexity also increases. Unfortunately, low-cost network connectivity does not translate into simpler and lower costs of storage management. If anything, it has the opposite effect.

So, why will network-based storage virtualization take off this time? As companies SAN-connect more of their servers using lower-cost Ethernet connections, their storage provisioning and data migration problems will become more acute. This will force many companies to re-evaluate network-based storage virtualization's value proposition and why it now makes sense, whereas just a few years ago, its risks outweighed its rewards.

—by Jerome Wendt


Page 8: CIO May 15 2007 Issue


IT Management: Storage users value interoperability, and worry more about upgrades than about integrating new hardware. That was among the highlights of the third annual survey, 'Storage Interoperability: So What's the Problem,' by the Storage Networking Industry Association (SNIA) End User Council (EUC).

A major focus of the survey dealt with the struggle by users to identify interoperability issues in multi-vendor storage environments. Sixty-two percent of survey respondents said they would pay more for an interoperable solution — an average of 7.6 percent more — while 23 percent would avoid a product that did not offer interoperability, according to Norman Owens, a member of the EUC governing board.

The issue of upgrades was a point of contention for many attendees at SNW. End users worry more about upgrades than integrating new hardware, said the report. The survey showed that only half of vendor-recommended fixes correctly solve a problem. A quarter of respondents said that fixes actually make a problem worse.

Among the barriers to storage cited in the EUC survey were interoperability and integration, heavy reliance on vendor road maps, support between operating system and hardware vendors, significant certification and testing, and vendor/product consolidation. In terms of testing, 30 percent of respondents said that testing occurs if time permits; only 17 percent always test.

Reliance on a vendor road map was a nonissue for Blake Golliher, storage architect at Facebook, which runs a social networking site. Golliher says users should push vendors to improve their products to better meet users' specific needs.

"I'm not new at this," says Golliher. "You know what to expect from a vendor. It's a guideline and something you have to deal with. You can influence a vendor road map if you can convince them you'll buy a lot of their storage if they put this feature in [their product]. They'll do it if other users [follow suit]."

Upgrades, however, can be a different matter. "Invariably, the flawless, smooth upgrades users wouldn't notice, they do notice, even if it's an online upgrade. You just can't take [a system] down," he says. "I've been burned before."

— by brian Fonseca


Upgrade Worries Outweigh Integration

Innovation: A Japanese university recently announced that scientists there have developed a new technology that uses bacterial DNA as a medium for storing data long-term, even for thousands of years.

Keio University Institute for Advanced Biosciences and Keio University Shonan Fujisawa Campus announced the development of the new technology, which creates an artificial DNA that carries more than 100 bits of data within the genome sequence, according to the JCN Newswire.

The universities say they successfully encoded "e=mc2 1905!" — Einstein's theory of relativity and the year he enunciated it — on the common soil bacterium, Bacillus subtilis.

While the technology would most likely first be used to track medication, it could also be used to store text and images for many millennia, thwarting the longevity issues associated with today’s disk and tape storage systems — which only store data for up to 100 years in most cases.

The artificial DNA that carries the data to be preserved makes multiple copies of the DNA and inserts the original as well as identical copies into the bacterial genome sequence. The multiple copies work as backup files to counteract natural degradation of the preserved data, according to the newswire.

Bacteria have particularly compact DNA, which is passed down from generation to generation. The information stored in that DNA can also be passed on for long-term preservation of large data files, the scientists said.
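The newswire item does not describe Keio's actual encoding, but a common textbook scheme packs two bits into each nucleotide, which is enough to see how a short message fits in on the order of a hundred bits. A purely hypothetical Python sketch (the 2-bits-per-base mapping and ASCII packing are assumptions for illustration, not the researchers' published method):

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(text: str) -> str:
    """Turn ASCII text into a string of nucleotides, 2 bits per base."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    """Reverse the mapping: bases back to bits, bits back to ASCII."""
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

msg = "e=mc2 1905!"                  # 11 characters = 88 bits = 44 bases
dna = encode(msg)
assert decode(dna) == msg
print(len(msg) * 8, "bits ->", len(dna), "bases")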

—By Lucas Mearian

Bacteria to Store Data

Illustration by MM Shanith



Page 9: CIO May 15 2007 Issue



Intelligence: Ever felt the need to do a quick catch-up on the network storage industry? Then you probably know that it's impossible to ignore the blogs of analysts, vendors and consultants. A round-up.

Dave's Blog (blogs.netapp.com/dave): Dave Hitz founded Network Appliance in 1992 and is now executive VP, responsible for strategy and future direction of the SAN/NAS company. Hitz blogs about everything from NetApp's financial results in the last quarter to how VMware is changing the data center.

Hu Yoshida's Blog (blogs.hds.com/hu): Hu Yoshida is CTO of Hitachi Data Systems. He blogs on 20-year-old storage architectures, storage performance, and capacity and utilization.

Mark's Blog (marksblog.emc.com): Mark Lewis, executive VP and chief development officer at EMC, is writing one of the most useful and entertaining blogs about storage, even if it is EMC-style.

Steve's IT Rants (esgblogs.typepad.com/steves_it_rants): Steve Duplessie, senior storage analyst and founder of the Enterprise Strategy Group (ESG), writes a no-holds-barred, equal-opportunity-offender blog on data loss, Brocade's stock-backdating troubles or EMC's recent reorganization.

StorageMojo (storagemojo.com): Storage consultant Robin Harris is a former Sun employee. This blog is the most frequently updated of the bunch and covers the widest variety of topics.

Inside System Storage (www-03.ibm.com/developerworks/blogs/page/InsideSystemStorage): Tony Pearson is the manager of brand marketing strategy for IBM System Storage. He blogs about file-area networks, ILM for iPods and where the storage industry is headed.

Storage Thoughts (storagethoughts.blogspot.com/2006/10/seeing-whats-next-in-storage-industry.html): Ken Gibson is a past director of storage engineering at Sun. He writes about how iSCSI is good for now, who is using object-based storage and the performance of NAS versus SAN systems.

DrunkenData.com (drunkendata.com): Outspoken analyst Jon Toigo opines on a host of storage and data management topics. Topics on the blog range from Microsoft's Unified Data Storage Server, to the IP wars, to data archiving.

Duncan Campbell's Blog (h20325.www2.hp.com/blogs/campbell): Duncan Campbell is head of worldwide marketing for HP's StorageWorks division. He mostly blogs about HP storage topics.

Microsoft Storage Community Blogs (www.microsoft.com/technet/community/en-us/storage/default.mspx): This is a collection of blogs written by storage managers. You'll find stuff on Windows Storage Server and the compliance features in 2007 Microsoft Office.

— by Deni Connor

Top 10 Blogs

Virtualization: Storage virtualization technologies, showing signs of maturity, have become an appealing option for some large companies looking to make better use of installed physical resources to keep up with escalating storage demands.

In interviews at the Storage Networking World conference, several IT managers talked about how storage virtualization is cutting technology and management costs.

Gary Berger, VP of technology solutions at Banc of America Securities Prime Brokerage, says his company began looking at virtualization because it found it was unable to fully utilize a fragmented IT environment running in multiple data centers. The firm's problems, Berger says, were caused by silos of direct-attached storage, coarse whole-disk allocation issues and over-provisioning. Prior to implementing virtualization, the company's storage systems failed four or five times a month, says Berger. But the virtualized system hasn't failed since it was installed a year ago, he adds.

Since it adopted the virtualized model, Berger says, his company has cut its storage administration costs by 95 percent and the need for more storage capacity by 50 percent. He also credits virtualization’s service delivery and disaster recovery replication capabilities with further reducing IT costs.

UC Davis Health System implemented a virtualized storage system late last year as part of an effort to comply with HIPAA. Alejandro Lopez, storage manager for technical support and information and communication services, says the regulations require the academic medical center to use a system that can securely house and move medical images and records.

Last December, it began using the native capabilities in a newly-installed Hitachi Data Systems TagmaStore Universal Storage Platform to virtualize tape on the mainframe and storage systems, says Lopez. The IT operation can now store 4TB of cardiology images and PDF documents, previously held on optical disc systems, in the virtual AIX environment, he adds.

“By moving to cheaper storage [through virtualization], we saved about 40 percent in costs,” says Lopez.

Lopez says he discussed the project with a number of users at SNW and concluded that several didn’t fully understand the concept of virtualization. “I think there’s a misconception by people who think of virtualization as a solution instead of as a tool,” he says.

— By Brian Fonseca

Virtualize to Cut Costs


Page 10: CIO May 15 2007 Issue


Budget: Most storage vendors claim their technology cuts power and cooling needs. These claims have at least a grain of truth, since reducing data volume also reduces energy costs. Here are seven ways to save.

1. Information life-cycle management (ILM) is as much a management philosophy as a technology and involves moving data to less-expensive systems as its value falls. Many vendors provide software and services to help customers implement ILM.

2. Another common data center strategy — virtualization — makes more efficient use of both servers and storage by combining physical devices into logical 'pools' that can be more completely utilized than separate devices. This can cut power demands by reducing the number of physical servers and storage. But it can also increase power demands by using very dense racks of servers and storage that max out the power capacity of data centers long before they are physically full.

3. De-duplication and compression can yield a 20-to-1 reduction in storage needs by storing only the differences between old and new copies, or by reading data as it is written to the backup device and storing only the unique data.

4. By allocating as much 'logical' capacity on drives as apps are expected to require, but adding physical capacity only as the apps actually need it, storage servers from some vendors postpone the need for more drives. This approach, also known as thin provisioning, gives storage managers more time to juggle capacity and buy drives (a minimal sketch follows this list).

5. More possible power savings come from MAID (massive array of idle disks) storage. This technology spins up a drive only when data is required from it. But the extra time required to spin up the drive can make MAID unfeasible for apps such as transaction processing, which require high performance.

6. Still other vendors tackle overall storage management costs, and therefore power issues, with combinations of specially designed hardware and software. One claims its file server, which can serve as a front-end for multiple SAN or NAS storage systems, generates one-third the heat and provides twice the performance of competing high-end NAS hardware.

7. Some vendors have built temperature sensors into storage arrays and other hardware to direct cooling to where it's needed most. One claims that its cooling system can save 25 percent to 40 percent of a data center's energy use.
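A minimal Python sketch of the thin-provisioning idea in item 4 above: the volume advertises a large logical size up front but consumes physical space only for blocks that have actually been written. The class name, 4KB block size and volume size are illustrative assumptions, not any vendor's interface.

class ThinVolume:
    """Advertises logical_blocks up front; allocates physical space on first write."""
    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks
        self.backing = {}                      # logical block -> data actually stored
    def write(self, block: int, data: bytes):
        if not 0 <= block < self.logical_blocks:
            raise IndexError("write past advertised capacity")
        self.backing[block] = data             # physical space is consumed only here
    def read(self, block: int) -> bytes:
        return self.backing.get(block, b"\x00" * 4096)   # unwritten blocks read as zeros
    @property
    def allocated_blocks(self) -> int:
        return len(self.backing)

vol = ThinVolume(logical_blocks=1_000_000)     # apps see 1M blocks (about 4 GB at 4 KB each)
vol.write(0, b"metadata")
print(vol.allocated_blocks, "of", vol.logical_blocks, "blocks actually consume disk")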

—Robert L. Scheier

Wireless: Mention wireless storage, and people tend to think of accessing storage using network-attached storage (NAS) over Wi-Fi-based networks. But there are several other forms of wireless networks that can be used in conjunction with accessing storage or moving data on a local or remote basis. In addition to Wi-Fi, other wireless network transports to support storage applications include microwave, free space optics and WiMax, along with emerging Wireless USB.

Wireless USB is a technology designed to operate at good performance for consumer and SOHO environments while addressing cabling management issues. Wireless USB is based on USB 2.0, which operates at up to 480Mbit/sec. at distances of three meters or 120Mbit/sec. at 10 meters.

Wireless USB addresses short-distance cabling complexities as opposed to being a general-purpose network like Wi-Fi. Wireless USB has industry heavyweights behind it, looking to leverage Ultra-Wideband (UWB) to reduce the number of radio transmitters and to reduce cost while pushing volume.

Jeff Ravencraft of the Wireless USB industry forum sees the sweet spot for Wireless USB in consumer and SOHO environments for attachment of USB-based storage, MP3 players, digital cameras and other peripherals. Security features of Wireless USB include encryption, along with mechanisms to ensure affinity between a computer and peripherals.

According to Rajeev Bhardwaj, director of Cisco's storage product marketing, the reasons for using wireless networking transport technologies include cable management, last-mile issues or limited availability of other networking bandwidth.

Some considerations pertaining to wireless storage:
Security, including encryption and access control
Performance, with an emphasis on latency and bandwidth
Full-duplex or half-duplex modes of operation to balance performance vs. cost
What performance is supported over what distances
Shared transport that supports multiple protocols, or technology specific to certain functions
Technology transparency with apps and networking components

—by Greg schulz

7 Ways to Smaller Bills

Unplug Your Backups




Page 11: CIO May 15 2007 Issue

Moving to File Virtualization: If you are going to be at the frontlines, this is what you need to know.

As the network has become the center of the IT infrastructure, it has created benefits that are helping to drive productivity and new capabilities in the workplace. One of those areas is storage, where network-centricity is creating a revolution in file access and management called 'file virtualization'. Here are seven things you need to know about the new trend.

1. Virtualization Works Across Devices: Ever since ENIAC, 60 years ago, file access and management has focused on the physical layer. For the first time, file virtualization has raised this to the next logical level, the name space, and moved it off the server and onto the network. Basically, says Brad O'Neill, senior analyst at Taneja Group, virtualization allows both end-users and database managers to look across all databases — flat file and relational, network-attached storage (NAS) and storage-area network (SAN) — on the network.

Instead of creating folders containing files from one server, for instance, the database administrator can create name spaces based on logical business subjects and assign files from multiple servers, even those running different operating systems and database software. For example, says O’Neill, a CFO can organize, access, combine and manipulate any of the data he is authorized to access without caring where the files are physically located.
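A rough mental model of that name space is a table that maps a logical path to whatever physical server and file currently hold the data; clients resolve against the table and never see the physical layout. A hypothetical Python sketch (the class, server and path names are invented for illustration, not any vendor's API):

class GlobalNamespace:
    """Logical path -> (physical server, physical path); clients see only logical paths."""
    def __init__(self):
        self.table = {}
    def publish(self, logical: str, server: str, physical: str):
        self.table[logical] = (server, physical)
    def resolve(self, logical: str):
        return self.table[logical]
    def migrate(self, logical: str, new_server: str, new_physical: str):
        # data can move between servers; the logical path users rely on never changes
        self.table[logical] = (new_server, new_physical)

ns = GlobalNamespace()
ns.publish("/finance/q1-close.xls", "nas-mumbai-02", "/vol7/cfo/q1.xls")
ns.publish("/finance/q1-erp-extract", "san-blr-01", "/lun14/erp/q1_extract.dat")
print(ns.resolve("/finance/q1-close.xls"))
ns.migrate("/finance/q1-close.xls", "nas-chennai-01", "/vol2/archive/q1.xls")  # users unaffected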

2. It Simplifies Management: File virtualization also simplifies file management. Because it works over all the resources on the network, it makes tasks like de-duplication much easier. It can also provide a key to increasing server utilization, particularly in high data growth environments.

Bert Latamore EXECUTIVE DECISIONS

Illustration by MM Shanith




Page 12: CIO May 15 2007 Issue

And it can simplify problems inherent in combining two IT infrastructures after a merger. When a storage server needs to be taken down for upgrade or repair, the data admin can do so without disrupting users. It also makes tiered file management much easier, allowing administrators to assign files to different levels of servers based on their access characteristics.

3. Virtualization Eliminates Geographical Issues: In the traditional infrastructure, data had to be fairly close to users. This became a problem as business became global. It's not uncommon, for instance, for work groups to span the world, with people on multiple continents collaborating on a project. However, the need to provide duplicate copies of data to each site greatly complicates this collaboration. Virtualization eliminates this issue. As long as the data resides somewhere on the corporate network, it can be added to the name space for any work group that needs it, and anyone with the security authorization can use it.

4. There Are Two Approaches: Two totally separate technologies have been developed to provide forms of file virtualization, each with its own trade-offs. The older, and more mature, of these is the platform-based approach, typified by a Distributed File System (DFS). This, says O'Neill, is a software-based technology that lives above the native file system and acts as a proxy to present a common name space for multiple files on different servers to users.

As the more mature technology, DFS and the vendors who supply it are more stable. However, it's also an older technology and doesn't provide support across heterogeneous server populations. For instance, DFS works only with Microsoft Windows servers. It also doesn't support real-time management activities such as de-duplication. And DFS-based approaches can have problems providing high-performance access and coordination across large geographic areas. "So, about four years ago, some smart entrepreneurs began looking at this problem from a network-centric viewpoint," O'Neill says. Their approach is to put a physical device in the data stream on the network in front of the NAS and SAN servers. That box becomes the access point for all the end clients.

This Global Unified Namespace (GUN) technology provides the full benefits of file virtualization, across heterogeneous populations of file servers, combining NAS and SAN access, and across any geographies. But, it’s less mature, and the vendors are small. Second, it puts a single point of failure, the physical file virtualization box, directly in the data stream. If the box fails, it can take down all data access until a spare takes over. So, users need to decide which technology meets their overall needs.

5. The Market Is Confused: Another issue with file virtualization is confusion in the marketplace. "Users are confused by the proliferation of vendors with products that all have some overlaps," says O'Neill. He decided that what the marketplace needs is a reference architecture that the vendors can work to, so that their products can become less confusing to users and interface more smoothly with each other. To accomplish this, he created the File Area Network (FAN) model, and he has obtained buy-in from all the major vendors. Over time, this should simplify the marketplace, making it easier for users to enter.

6. The Technology Is Limited to File Systems: It's important to understand that file virtualization is limited to unstructured data in file systems. It can't work with e-mail, IM or formatted documents in word processing or PDF-type formats, for example. Thus, promoters of this technology in the enterprise need to manage end-user expectations carefully so they understand what they can and cannot expect from it.

7. Users Need to Plan Carefully Before Moving to File Virtualization: O'Neill provides four specific recommendations for users interested in file virtualization:

First, start by defining the problem you want to solve. Is it an end-user file management problem, or are you interested in improving the infrastructure on your NAS or file server environment, or both? “How you answer that question points you to the solution,” O’Neill says. Second, decide how comfortable the organization is with putting a new device in the data path. That can determine if the enterprise will accept the newer virtualization technology. Thirdly, once the technology is selected, introduce it on a project-by-project basis. Lastly, develop a refined total cost of ownership and ROI story. That becomes important to sell the investment to the CFO. It’s easier to quantify a positive impact on server utilization rates — if utilization goes up, you can save X rupees by delaying the purchase of more servers. “If you architect this correctly and get this technology, you get very strong ROI. So the rewards often far outweigh the issues,” says O’Neill. CIO

Send feedback on this column to [email protected]


It's relatively easy to quantify a positive impact on server utilization rates — if utilization goes up, you can save X rupees by delaying the purchase of more servers.



Page 13: CIO May 15 2007 Issue

Data Showdown: In the land where data is multiplying uncontrollably, you can either outlaw it or play lawmaker.

As part of the IT team at the Virginia State Police department, Lt. Pete Fagan's job is to ensure that criminal investigators, police officers in the field and other authorities get the most accurate, timely and detailed crime-related information possible. Crime never stops, so the data volumes are huge, dynamic and getting bigger every day. Criminal records often stretch back decades and include source material from countless systems in myriad formats. Multimedia content is a growing necessity. Metadata, a new necessity, is gobbling up precious disk space. Storage capacity, as you might imagine, is an ongoing problem for the department.

But That's Not the Half of It: Fagan's goal is to record every request for information on the police department's storage-area network. Not just a log notation of the request, but the entire transaction as it happened. According to the lieutenant, the system will store everything the requestor sees at the moment the request is fulfilled. If the requestor gets to view dozens of JPEGs, MP3 files, videos and other capacity-hogging content, Fagan wants all that information stored separately from the original data store.

Fagan isn’t a data pack rat. He’s doing his job with the tools at hand. For him, knowing exactly what was known and when can prove crucial to the criminal justice process. Just having the most recent files won’t do. Because those criminal files are always in flux, Fagan needs a storage system that can function as a time machine to retrieve not just a facsimile of the past, but the actual files and data from another time. And he needs to store those exact past experiences for more than 300 million requests per year.
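One way to picture what Fagan describes is an append-only archive in which every fulfilled request gets its own timestamped snapshot of exactly the content that was returned, held apart from the live data store. A purely hypothetical Python sketch (the names and structure are invented for illustration; this is not the department's system):

import hashlib
import json
import time

class RequestArchive:
    """Stores, per request, an immutable snapshot of exactly what the requestor saw."""
    def __init__(self):
        self.snapshots = []                     # kept separate from the live data store
    def record(self, requestor: str, query: str, returned_items: list) -> str:
        snap = {
            "ts": time.time(),
            "requestor": requestor,
            "query": query,
            "items": returned_items,            # the full returned content, not a log line
        }
        snap["id"] = hashlib.sha256(json.dumps(snap, sort_keys=True).encode()).hexdigest()[:16]
        self.snapshots.append(snap)
        return snap["id"]
    def replay(self, snap_id: str) -> dict:
        """Return the case exactly as it looked when the request was fulfilled."""
        return next(s for s in self.snapshots if s["id"] == snap_id)

archive = RequestArchive()
sid = archive.record("investigator-17", "case 4411", ["report.pdf bytes", "mugshot.jpg bytes"])
later_view = archive.replay(sid)                # the past as it was, not a reconstruction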

Mark Hall MAKING I.T. WORK

Illustration by Unnikrishnan A.V.




Page 14: CIO May 15 2007 Issue

Fagan’s time machine is emblematic of an impending storage crisis facing IT. More broadly, will it be a harbinger of loftier roles for CIOs? Or will it — and projects like it — ultimately condemn IT executives to being perceived as technologists with tactical, not strategic, value to the business?

Content Explosion: The Virginia State Police department is in the midst of updating its 1TB storage infrastructure with new gear from Fujitsu to handle the estimated 11TB of data capacity that will be needed in the near future. While 11TB pales in comparison with the petabyte of data stored on average by Fortune 1000 companies, the department's growth rate is much faster than that of larger organizations. That puts it in the middle of the inexorable march to the global zettabyte storage requirement. According to an IDC study, 161 exabytes of digital information were created and stored worldwide last year. (An exabyte, you'll recall, is 1 billion gigabytes.) IDC projects that by 2010, global data creation and storage will reach 988 exabytes, a mere 12 billion gigabytes shy of a zettabyte.
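The arithmetic behind that last figure, using the decimal units the article itself uses:

\[
1\ \text{ZB} = 1000\ \text{EB} = 10^{12}\ \text{GB}, \qquad
1000\ \text{EB} - 988\ \text{EB} = 12\ \text{EB} = 12 \times 10^{9}\ \text{GB},
\]

so the 2010 projection falls 12 billion gigabytes short of a full zettabyte, roughly six times the 161 exabytes created and stored in 2006.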

When Lucas Mearian of Computerworld.com (a sister website to CIO) reported on the IDC study, he wrote, "The data explosion means the role of IT managers will expand considerably." The IDC report was funded by EMC, and when I chatted about it with EMC executive Chuck Hollis, he speculated that a savvy CIO can take this growth as an opportunity to become "less of a technologist and more of an informationist." IT leaders, he argued, can emerge as the primary arbiters of information standards and processes — much like CFOs raised themselves to powerful executives sitting at the right hand of the CEO.

Wish It Were True: Even if IDC is right about storage demands skyrocketing in the next three years, you won't see CIOs casting a strategic eye on their companies' information policies or dictating data-retention policies to their business units. Instead, they'll be running around with their hair on fire trying to keep up with the ever-increasing amounts of information pouring into their shrinking corporate SANs. Sure, the CIO can outlaw certain file types or act as a trusted adviser who shows business leaders how to approach information life-cycle management. But a storage capacity crisis won't enhance IT's reputation. It will undermine it.

The only way CIOs will improve their standing during a global storage shortage is to ensure that their businesses don’t suffer from it. I don’t care how clever a CIO thinks a new information management scheme might be or how powerful he thinks it will make him. Such an approach will be meaningless to people like Lt. Fagan, who need data to do their jobs. And I wouldn’t want to be the IT executive who tries to force a new storage policy on Fagan. After all, the lieutenant does carry a gun. CIO

Mark Hall is a Computerworld editor at large. Send feedback on this column to [email protected]


Web Exclusive Resources

Best Practices for Software: Find out how to reduce legal liability, ensure IT compliance, cut costs.

Improving IT Compliance: Frequency of audits, time allocated to compliance by IT and IT spending.

Download more web exclusive whitepapers from www.cio.in/resource

Features

Profiling Spam: There's a new way to spot spam and it takes a cue from detective work.

CEOs Rate IT: Steady But Uncreative. If your CEO is happy with your work, then you should be happy too, right?

Read more of such web exclusive features at www.cio.in/features

News | Features | Columns | Top View | Govern | Essential Technology | Resources

Log In Now! CIO.in

Columns

True Colors: Character is an essential element of leadership. Here's how to develop yours and let it shine.

Lessons for the Mentor: You too can get extra resources while learning to help young IT professionals shine.

Read more of such web exclusive columns at www.cio.in/columns


Page 15: CIO May 15 2007 Issue


Page 16: CIO May 15 2007 Issue

Virtualization 101

Keep pace with ever-increasing storage requirements without skipping a service level.



Page 17: CIO May 15 2007 Issue

After some years of false starts and false hopes, storage virtualization, also known as block virtualization, is finally proving its worth. All the major vendors have embraced it, most notably IBM, EMC, and HDS (Hitachi Data Systems); the solutions themselves have improved; and customers — typically large shops managing large SANs with intense data availability requirements — understand how to deploy it and where to get good ROI. No longer a technology in search of a problem, storage virtualization offers a way to address a wide range of storage management woes.

Storage virtualization creates an abstraction layer between host and physical storage that masks the idiosyncrasies of individual storage devices. When implemented in a SAN, it provides a single management point for all block-level storage. To put it simply, storage virtualization pools physical storage from multiple, heterogeneous network storage devices and presents a set of virtual storage volumes for hosts to use.
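In code terms, that abstraction layer is little more than a map from virtual volumes to extents on whichever physical arrays back them; hosts address the virtual volume, never the array. A minimal Python sketch (the array names and the simple extent-allocation scheme are illustrative assumptions, not how any particular product works):

class VirtualizedSAN:
    """Pools capacity from heterogeneous arrays behind uniform virtual volumes."""
    def __init__(self):
        self.arrays = {}        # array name -> free capacity in GB
        self.volumes = {}       # volume name -> list of (array, GB) extents
    def add_array(self, name: str, capacity_gb: int):
        self.arrays[name] = capacity_gb
    def create_volume(self, name: str, size_gb: int):
        """Carve the virtual volume out of whichever arrays have free space."""
        extents, needed = [], size_gb
        for array, free in self.arrays.items():
            take = min(free, needed)
            if take:
                extents.append((array, take))
                self.arrays[array] -= take
                needed -= take
        if needed:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = extents            # the host sees one volume, not the arrays

san = VirtualizedSAN()
san.add_array("emc-clariion-1", 500)
san.add_array("hds-ams-2", 300)
san.create_volume("oracle-data", 650)           # spans both arrays transparently
print(san.volumes["oracle-data"])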

In addition to creating storage pools composed of physical disks from different arrays, storage virtualization provides a wide range of services, delivered in a consistent way. These stretch from basic volume management, including LUN (logical unit number) masking, concatenation, and volume grouping and striping, to data protection and disaster recovery functionality, including snapshots and mirroring. In short, virtualization solutions can be used as a central control point for enforcing storage management policies and achieving higher SLAs.

Perhaps the most important service enabled by block-level virtualization is non-disruptive data migration. For large organizations, moving data is a near-constant fact of life. As old equipment comes off lease and new gear is brought online, storage virtualization enables the migration of block-level data from one device to another without an outage. Storage administrators are free to perform routine maintenance or replace aging arrays without interfering with applications and users. Production systems keep chugging along.
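How a volume can move while hosts keep writing is easier to see in miniature. The sketch below shows one common technique, a dirty-block set that is drained by repeated copy passes; it is a simplification for illustration, not any product's algorithm, and the callables and toy data are hypothetical.

# Simplified sketch of one way a virtualization layer can migrate a volume
# without an outage: writes keep landing on the old array, a dirty set
# records them, and the copier loops until that set drains. A real product
# would briefly quiesce I/O for the final pass and then atomically repoint
# the virtual volume at the new array.

def migrate(volume_size_blocks, read_old, write_new, drain_new_writes):
    """read_old(block) / write_new(block, data) move data between arrays;
    drain_new_writes() returns the set of blocks written since last call."""
    dirty = set(range(volume_size_blocks))   # first pass copies everything
    while dirty:
        for block in sorted(dirty):
            write_new(block, read_old(block))
        dirty = drain_new_writes()            # blocks touched during the pass

# Toy usage with in-memory "arrays":
old = {b: f"data{b}" for b in range(8)}
new = {}
pending = [{3, 5}, set()]                     # writes arriving mid-copy
migrate(8, old.__getitem__, new.__setitem__,
        lambda: pending.pop(0) if pending else set())
assert new == old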

Virtualization can also help you achieve better storage utilization and faster provisioning. The laborious processes for provisioning LUNs and increasing capacity are greatly simplified — even automated — through virtualization. When provisioning takes 30 minutes instead of six hours and capacity can be re-allocated almost on the fly, you can make much more efficient use of storage hardware. Some shops have increased their storage utilization from between 25 and 50 percent to more than 75 percent using storage virtualization technology.

Four Architectural Approaches

In a virtualized SAN fabric, there are four ways to deliver storage virtualization services: in-band appliances, out-of-band appliances, a hybrid approach called SPAID (Split Path Architecture for Intelligent Devices), and controller-based virtualization. Regardless of architecture, all storage virtualization solutions must do three essential things: maintain a map of virtual disks and physical storage, as well as other configuration metadata; execute commands for configuration changes and storage management tasks; and of course transmit data between hosts and storage. The four architectures differ in the way they handle these three separate paths or streams — the metadata, control, and data paths — in the I/O fabric. The differences hold implications for performance and scalability.

An in-band appliance processes the metadata, control, and data path information all in a single device. In other words, the metadata management and control functions share the data route. This represents a potential bottleneck in a busy SAN, because all host requests must flow through a single control point. In-band appliance vendors have addressed this potential scalability issue by adding advanced clustering and caching capabilities to their products. Many of these vendors can point to large enterprise SAN deployments that showcase their solution’s scalability and performance.

An out-of-band appliance pulls the metadata management and control operations out of the data path, offloading these to a separate compute engine. The hitch is that software agents must be installed on each host. The job of the agent is to pluck the metadata and control requests from the data stream and forward them to the out-of-band appliance for processing, freeing the host to focus exclusively on transferring data to and from storage. The sole provider of an out-of-band appliance is LSI Logic through its acquisition of StoreAge. The StoreAge product can be adapted to both out-of-band or SPAID usage.

3 Benefits of Storage Virtualization

Storage virtualization solutions create an abstraction layer between hosts and physical storage, masking the idiosyncrasies of individual storage devices and presenting a homogeneous pool of virtual storage.

Easy Storage Provisioning: Virtual disks can be created, resized, and assigned to hosts in a fraction of the time it takes to provision physical storage.

Non-disruptive Data Migration: Perhaps the greatest benefit of storage virtualization is the ability to migrate data from old equipment to new gear, or from one storage tier to another, without bringing systems offline and disrupting applications and users.

Simpler Storage Management: Virtualization brings a central management point and standard set of services to heterogeneous storage devices, simplifying tasks such as mirroring and replication.

A SPAID system leverages the port-level processing capabilities of an intelligent switch to offload the metadata and control information from the data path. Unlike an out-of-band appliance, in which the paths are split at the host, SPAID systems split the paths in the network at the intelligent device. SPAID systems forward the metadata and control path information to an out-of-band compute engine for processing and pass the data path information on to the storage device. Thus, SPAID systems eliminate the need for host-level agents.

Typically, SPAID-based virtualization software will run in an intelligent switch or a PBA (purpose built appliance). SPAID controllers must be deployed in conjunction with an intelligent switch.

Array controllers have been the most common layer where virtualization services have been deployed. However, controllers typically have virtualized only the physical disks internal to the storage system. This is changing. A twist on the old approach is to deploy the virtualization intelligence on a controller that can virtualize both internal and external storage. Like the in-band appliance approach, the controller processes all three paths: data, control, and metadata.

Competition among all of these vendors continues to heat up. SPAID holds a lot of promise for scalability, but the products aren't there yet. EMC is playing catch-up to the in-band solutions in both features and functionality. And although IBM is reasonably far up the feature and function curve, it still could do more. Most of the vendors have the basics covered, but many do not have advanced capabilities such as thin provisioning and continuous data protection.

SAN Virtualizers
Most storage virtualization solutions today take the in-band, appliance-based approach. The split-path architecture is catching on, while HDS virtualizes internal and external storage in the array controller.

Vendor | Product | Architecture
DataCore Software | SANsymphony | In-band appliance
EMC | Invista | SPAID (Brocade and Cisco switches)
FalconStor Software | IPStor | In-band appliance
Hitachi Data Systems | TagmaStore | In-band array controller
IBM | SAN Volume Controller | In-band appliance
Incipient | Incipient Network Storage Platform | SPAID (Cisco switches)
LSI Logic | StoreAge Storage Virtualization Manager | Out-of-band appliance (QLogic) or SPAID

File Virtualization

Just as block virtualization simplifies SAN management, file virtualization eliminates much of the complexity and limitations associated with enterprise NAS systems. We all recognize that the volume of unstructured data is exploding, and that IT has little visibility into or control over that data. File virtualization offers an answer.

File virtualization abstracts the underlying specifics of the physical file servers and NAS devices and creates a uniform namespace across those physical devices. A namespace is simply a fancy term referring to the hierarchy of directories and files and their corresponding metadata. Typically with a standard file system such as NTFS, a namespace is associated with a single machine or file system. By bringing multiple file systems and devices under a single namespace, file virtualization provides a single view of directories and files and gives administrators a single control point for managing that data.
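A toy sketch makes the namespace idea tangible: clients see one directory tree, and a routing table decides which filer actually serves each branch. The filer names and exports below are invented; a real network file manager also handles locking, failover and protocol details.

# Toy sketch of a virtualized namespace: clients see one tree, while a
# routing table maps each branch to whichever filer actually holds it.
# Moving a branch to another filer only changes the table.

class VirtualNamespace:
    def __init__(self):
        self.routes = {}   # virtual prefix -> (filer, export path)

    def mount(self, prefix, filer, export):
        self.routes[prefix] = (filer, export)

    def resolve(self, virtual_path):
        """Pick the longest matching prefix and rewrite the path."""
        best = max((p for p in self.routes if virtual_path.startswith(p)),
                   key=len, default=None)
        if best is None:
            raise FileNotFoundError(virtual_path)
        filer, export = self.routes[best]
        return filer, export + virtual_path[len(best):]

ns = VirtualNamespace()
ns.mount("/corp/engineering", "filer-a.example.com", "/vol/eng")
ns.mount("/corp/finance",     "filer-b.example.com", "/vol/fin")
print(ns.resolve("/corp/finance/2007/q1.xls"))
# -> ('filer-b.example.com', '/vol/fin/2007/q1.xls')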

Many of the benefits will sound familiar. Like storage virtualization, file virtualization can enable the non-disruptive movement and migration of file data from one device to another. Storage administrators can perform routine maintenance of NAS devices and retire old equipment without interrupting users and applications.

File virtualization, when married with clustering technologies, also can dramatically boost scalability and performance. A NAS cluster can provide several orders of magnitude faster throughput (MBps) and IOPS than a single NAS device. HPC (high performance computing) applications, such as seismic processing, video rendering, and scientific research simulations, rely heavily on file virtualization technologies to deliver scalable data access.

Three Architectural Approaches

File virtualization is still in its infancy. As always, different vendors’ approaches are optimally suited for different usage models, and no one size fits all. Broadly speaking, you’ll find three different approaches to file virtualization in the market today: platform-integrated namespaces, clustered-storage derived namespaces, and network-resident virtualized namespaces.

Platform-integrated namespaces are extensions of the host file system. They provide a platform-specific means of abstracting file relationships across machines on a specific server platform.


These types of namespaces are well suited for multi-site collaboration, but they tend to lack rich file controls and of course they are bound to a single file system or OS.

Clustered storage systems combine clustering and advanced file system technology to create a modularly expandable system that can serve ever-increasing volumes of NFS and CIFS requests. A natural outgrowth of these clustered systems is a unified, shared namespace across all elements of the cluster. Clustered storage systems are ideally suited for high performance applications and to consolidate multiple file servers into a single, high-availability system.

Network-resident virtualized namespaces are created by network-mounted devices (commonly referred to as network file managers) that reside between the clients and NAS devices. Essentially serving as routers or switches for file-level protocols, these devices present a virtualized namespace across the file servers on the back end and route all NFS and CIFS traffic between clients and storage. NFM devices can be deployed in-band. Network-resident virtualized namespaces are well suited for tiered storage deployments and other scenarios requiring nondisruptive data migration.

File and block storage virtualization may be IT's best chance of alleviating the pain associated with the ongoing data tsunami. By virtualizing block and file storage environments, IT can gain greater economies of management and implement centralized policies and controls over heterogeneous storage systems. The road to adoption of these solutions has been long and difficult, but these technologies are finally catching up to our needs. You will find the current crop of file and block virtualization solutions to be well worth the wait. CIO

Reprinted with permission. Copyright 2007. InfoWorld.

Send feedback about this feature to [email protected]

Q: Why aren't more CIOs taking up virtualization?

A: I think it's across the world that people are not taking to virtualization, and I think it is because there isn't one common tool that will provide all the services to meet the requirements that an organization has.

A: I don't think [nationalized] banks are aware of the benefits of virtualization. Especially the administrative offices — not really the branches — have a lot of servers, and I think top-level awareness is not there.

A: I think it is a question of maturity — of the storage industry. People are waiting and watching. I think it will happen and have good ROI.

Q: Do you see ROI in virtualization?

A: Yes, big time! It is an advantage in a company like ours, where we set a goal of storage capacity not growing more than 65-70 percent this year. I don't think we can set such a cap without virtualization.

(Responses from Subrat Kumar Kunungo, staff manager-IT, Qualcomm; Harish Pai, assistant GM (systems), Kurlon; Sanjay Kumar Kulkarni, system manager, Wipro; and Manohar Hedge, senior manager-IT, Karnataka Bank.)


iSCSI: The Rising Star in the Enterprise

iSCSI won't replace Fibre Channel anytime soon. But for SMBs and remote offices, the low price and overhead are just right.

By Mark Leon


Fibre Channel was definitely not top of mind when Chris Brown hit the wall on disk space and, in mid-2005, decided to go shopping for a SAN. Brown is IT manager for DeltaValve, a division of Curtiss-Wright Flow Control. "I have an IT staff of two," he explains, "and we do not have the resources to support Fibre Channel."

Instead, Brown opted for the convenience and low cost of an IP-based network storage system. To that end, he bought five NSM 150s from LeftHand Networks as building blocks for a new iSCSI SAN. Each unit comes with four 250GB drives, so he had a cluster of five terabytes of new storage that, unlike a Fibre Channel SAN, would require little in the way of specialized skills to maintain.

The value proposition of iSCSI storage has always been its simplicity and low cost compared to Fibre Channel. All you need is capacity on a Gigabit Ethernet network — no special training in an esoteric protocol, just a little education on top of basic networking skills. An iSCSI SAN can also smooth the path to data replication and disaster recovery, especially over long distances. And if speed is an issue, 10GbE (10 Gigabit Ethernet) is already here, if somewhat pricey.
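On Linux, for instance, attaching iSCSI storage typically comes down to two open-iscsi commands, discovery and login. The short sketch below simply wraps them; it assumes the iscsiadm utility is installed, root privileges, and a placeholder portal address.

# Illustrative sketch: discovering and logging in to an iSCSI target with
# the Linux open-iscsi tools. Requires the iscsiadm utility and root
# privileges; the portal address below is a placeholder.
import subprocess

PORTAL = "192.168.10.50"   # hypothetical target portal

def discover_targets(portal):
    """Ask the portal which target IQNs it exposes."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True).stdout
    # each line looks like: "192.168.10.50:3260,1 iqn.2007-05.com.example:vol1"
    return [line.split()[-1] for line in out.splitlines() if line.strip()]

def login(target_iqn, portal):
    """Log in; the new block device then shows up like any local disk."""
    subprocess.run(["iscsiadm", "-m", "node", "-T", target_iqn,
                    "-p", portal, "--login"], check=True)

if __name__ == "__main__":
    for iqn in discover_targets(PORTAL):
        login(iqn, PORTAL)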

Drue Reeves, research director with the Burton Group, calls the iSCSI SANs from such vendors as LeftHand, DataCore, and FalconStor "software-only targets." "They are relatively cheap," he explains, "because they run on standard hardware. They are powerful because you can cluster them and add storage virtualization on top, so a LUN [logical unit number] can fail over to another target, and the user never knows."

It was clear that iSCSI had arrived when Microsoft put an initiator in Windows Server 2003. What was not so clear was where iSCSI would go. Fibre Channel still rules the SAN market, and Microsoft didn’t follow suit with an iSCSI target until last year.

But last fall iSCSI got a huge boost when VMware, the hottest name in virtualization, added iSCSI support. To get the most out of virtualization, you need a SAN — and now you can do it without Fibre Channel.

The Virtualization Connection

DeltaValve's Brown, who isn't afraid of getting under the hood, quickly saw the potential of his new SAN. "I moved everything — SQL Server, Navision, SharePoint, Exchange Server, and an Oracle database that runs our PLM (Product Lifecycle Management) System — onto them," he says.

Brown’s willingness to tinker also took him deep into the world of virtualization. “About three months after we got the SAN running, I brought VMware into the mix. Virtual storage from the SAN and server virtualization from VMware go hand in hand.”

Brown has two host VMware servers (both homegrown, Quad AMD Opteron-powered boxes) and runs eight virtual servers on each. "The beauty," he says, "is that if one host goes down, we can use the other host to mount the same volume and be up in a matter of minutes."

Thane Morgan, director of information technology for the town of Fishers, a suburb of Indianapolis, had virtualization on his mind from the start. He also looked at LeftHand but did not like the way that software vendor made the hardware decision for you. "Being dependent on their [LeftHand's] control of hardware drives my cost up," he says.

So Morgan bought DataCore's SANMelody software, which is hardware-agnostic. "For about $60,000 (Rs 27 lakh), I got two brand-new, dual-core Dell 1950s to be my new app servers. Then I was able to load SANMelody onto the best two of my old servers to create my SAN."

Hanging SATA drive cages off the converted SAN boxes gave him six terabytes of storage. “I am licensed for 16 terabytes with DataCore,” Morgan says. “With LeftHand, for the same amount of money, I would have been stuck at two four-terabyte nodes without my new app server boxes.”

He bought the hardware and software and did prototyping last summer, so when VMware announced support for iSCSI in September, he was ready. “We got it all running in December, and, finally, my server is completely decoupled from the hardware.”

Clearly, Morgan is excited by the power of combining server virtualization with SAN technology. “If I had virtual servers and no SAN, it would be easy to back up and restore the server on another machine if, for example, I needed to do maintenance. But this still takes time and probably means taking some applications offline. When you add the SAN, you can do the same thing with no interruption of service.”

Consultants such as Jamie Anderson, president of Emergent Networks, a consultancy and VAR, are finding the combination of iSCSI and virtualization is enough to convince hesitant clients to make the leap to shared storage.

Anderson cites a recent engagement with a small bank. “This is a new bank,” he says. “Without iSCSI they would not have considered a SAN. But we just put in an EMC Clariion and two virtual servers. Right now they only have 800GB of data, but they are in good shape to grow, and there was almost no learning curve since it is all Ethernet based.”

Anderson adds that virtualization was one of the drivers behind Chief Manufacturing’s selection of an iSCSI SAN. “We did this about 18 months ago,” he says. “We use iSCSI to mount local drives to Exchange and SQL Server.”


The Ascent of iSCSI
Even with a compound annual growth of 74 percent, iSCSI will net a small slice of the SAN market through the end of the decade. (Worldwide disk storage systems revenue, in millions of dollars.)

Year            2005    2006    2007    2008    2009    2010
iSCSI             296     568   1,259   2,172   3,440   4,824
Fibre Channel   9,054   9,952  10,429  10,755  10,986  11,062

Note: Worldwide Disk Storage Systems 2006-2010 Forecast Update, Nov. 2006
Source: IDC


Easing the Pain

Virtualization is a key iSCSI selling point, but it’s just one of several. As director of professional services for Intelenet Communications, a hosting firm, Jeff Stein is always searching for ways to streamline provisioning and maintenance. To that end, he started looking at iSCSI about three years ago. “We did not find it suitable for our managed server offering at the time,” he says.

Fast-forward to today, and Stein is an iSCSI fan. “The technology got flatter, and prices dropped,” he says. “iSCSI made it possible for us to go to a diskless managed server. Our OS and our customers’ data now reside on the SAN, and we deliver about 1,000 servers via our iSCSI environment. We have cut our provisioning and response times by a factor of six going to an iSCSI SAN.”

He had already provisioned Fibre Channel SAN infrastructure as a special service for contract customers, but Intelenet’s generic offering was all DAS. “We looked at extending Fibre Channel to all our customers,” Stein explains, “but it would have cost 10 times as much as iSCSI. Then there is the skill-set issue. No member of our 24/7 staff is afraid to get on these [iSCSI] devices. I don’t think this would be true for Fibre Channel.”

Intelenet built its SAN on data arrays from EqualLogic, an iSCSI storage hardware vendor. EqualLogic also ships with virtualization software running on clustered storage.

EqualLogic went head to head with the Clariion CX Series from storage giant EMC in a final, proof-of-concept test at Safeway Insurance Group. Mike Leather, network services manager there, says, “Even after sending out three teams, the EMC folks could not show me the performance stats I needed, so we went with the PS line from EqualLogic.”

Leather needed better performance on his primary business system, a Microsoft SQL Server cluster. The I/O to the system’s DAS was the bottleneck, which immediately suggested a SAN solution. “We looked at Fibre Channel, but iSCSI is so much cheaper,” Leather says, “and you don’t need to buy any HBAs or Fibre Channel switches.”

“We had it up and running by mid-October,” Leather says, “but did not go live until February 2006. We were extra-cautious since our whole business runs off this system.”

Since then he has migrated storage for other systems to the SAN, including secondary SQL databases and Microsoft Exchange. In summer 2006, he started a new project, replication, for which he bought a new PS 3600 and EqualLogic replication software. “We now have two equal architectures, and we are replicating our database every five minutes.”

Right now both of these are at the home site, but in April, Leather says one will move more than 800 miles to a secondary site. "This will be iSCSI over 10MB metro Ethernet," he says. "Robust disaster recovery. By the second half of 2007, we plan to have full two-way replication, two fully redundant hot sites. By the end of the year, we plan to have load balancing as well." In that same time frame, Leather also plans to start testing virtualization.

So far, Leather has had no major problems, but he recommends that before you consider iSCSI, you look carefully at one thing. “If you have not made major improvements to your network infrastructure in the last three years, you had better do it. We were OK since we had put in Gigabit Ethernet with a new switch a few years back.”

The right infrastructure helped sell Dan Brinegar, IT administrator for beer distributor House of LaRose, on iSCSI. “We had moved into a new building and so were able to design our own network, all Gigabit Ethernet,” Brinegar says. “We also bought Catalyst 4500 switches that allow routing.”

The routing let Brinegar segment his iSCSI traffic from his production network. This is an important consideration as iSCSI’s greatest strength, the fact that it can run over your standard IP lines, can also be a weakness if storage traffic drags down mission-critical systems.

Brinegar uses IPStor software from FalconStor, which transforms a high-end Linux box into a RAID array with four terabytes of storage. “Our main goal was CDP, continuous data protection, and I had a limited budget of about $40,000 (Rs 18 lakh). iSCSI was really the only option given my performance requirements.”

He now has continuous backup of a data warehouse, ADP payroll, and a fleet-maintenance system. "We have a lot of Novell, but that is still on Fibre Channel," Brinegar says. "NetWare 5.1 just does not like iSCSI. When we upgrade to 6.5, we will probably convert that to iSCSI as well."

“This [iSCSI] really changed the way we do things. Before, we had a separate tape drive for every server. My life is a whole lot easier now,” Brinegar says.

The Need for Speed

At the high end, Fibre Channel is still SAN king, and speed is obviously one of the main reasons. Affordable Ethernet is still pretty much a one-gig horse, whereas Fibre Channel runs at four gig, with eight on the way.

There are ways to squeeze more performance out of iSCSI. Intelenet’s Stein, for example, plugs iSCSI HBAs from QLogic into his servers. This is one of the technologies that got 'flatter', in his opinion, making iSCSI both fast and cheap enough to move his datacenter from DAS to SAN.


“You can enhance iSCSI with HBAs,” Reeves says. “You can also add, on the target side, TOE (TCP offload engine) cards or DMA (direct memory access) cards. On the initiator side, you can get NICs with iSCSI initiators built in.”

Anderson of Emergent Networks chose to use QLogic iSCSI HBAs for the engagement with Chief Manufacturing. "We could have stuck with the Ethernet ports, and Microsoft's iSCSI Initiator [the QLogic HBAs come with a separate iSCSI Initiator]," he says, "but we went this route in order to offload the TCP/IP from the processor onto the iSCSI cards."

Now, Anderson is not sure the performance boost from that is worth the extra expense and hassle. "Some of the Ethernet cards from Intel and Broadcom now come with TCP/IP offload cards, so, in my opinion, the lines between iSCSI HBAs and standard Ethernet interfaces are blurring."

Meanwhile, there is no doubt that cheaper 10GbE is on the way, and iSCSI is one of the drivers. In January, Bell Micro, a distributor of storage and computing technologies, signed a distribution agreement with Chelsio, a provider of 10GbE Ethernet adapters and ASIC solutions.

Also in January, Brocade Communications bought Silverback Systems, a company that makes network processors to help to accelerate the speed and performance of storage traffic in networked storage environments. Brocade cited Silverback’s technology and expertise in iSCSI as a main reason for the acquisition.

Stephanie Balaouras, analyst with Forrester Research, thinks all this 10GbE action will make the SAN market very interesting. “It [10GbE] is too expensive right now,” she says. “I expect the cost to come down to affordable levels in three to four years.”

Don’t, however, expect to see Fibre Channel beat a hasty retreat. “Storage buyers tend to be the most conservative,” Balaouras says, “so even if iSCSI is competitive in price and performance, you won’t see people ripping out their Fibre Channel.”

And Reeves says there are other issues. “Don’t forget that the high-end storage arrays are still built for Fibre Channel. Sure, EMC says they support iSCSI on their Symmetrix line, but this is essentially an add-on. I think these vendors will eventually embrace iSCSI, but they are going to protect their high profit margins on expensive Fibre Channel equipment for as long as they can.” CIO

Reprinted with permission. Copyright 2007. InfoWorld.

Send feedback about this feature to [email protected]


The iSCSI Trajectory
Experts debate where iSCSI will find its niche.

Where is iSCSI headed? Good question. A recent blogstorm (infoworld.com/5072) with posts from well-known names at EMC, EqualLogic, and NetApp among others leaves the answer in doubt. True, shipments of iSCSI gear continue to climb steadily, but conventional analyst wisdom dictates that iSCSI's slice of the SAN market may remain quite thin.

So where will iSCSI find its niche? Clod Barrera, distinguished engineer and chief technical strategist for IBM Storage, toes the party line when it comes to iSCSI versus Fibre Channel. "iSCSI is a good complement to Fibre Channel," he says. "If your servers cost less than $25,000 (Rs 11.25 lakh) each, then iSCSI is probably a good choice for your SAN. Otherwise you are most likely going to want Fibre Channel."

Barrera expects to see a new generation of multi-protocol switches that will support hybrid SANs: Fibre Channel locally and iSCSI across long distances. "Look at what Cisco is doing here. And there are other unannounced products."

And Barrera sees other interesting possibilities on the horizon: "When 10GbE gets cheap enough, I think, with some customers the iSCSI/Fibre Channel debate will get really hot."

John Fanelli, VP of marketing for LeftHand Networks, takes that line of thinking a step further: "What we are seeing is the market moving away from Fibre Channel to iSCSI." Fanelli also suggests that such trends as server virtualization make clustered storage solutions built around an iSCSI network more desirable.

Ashish Nadkarni, principal consultant at GlassHouse, also believes interest in server virtualization will push iSCSI adoption. Nadkarni adds that as more powerful processors become popular, the CPU overhead in processing the TCP/IP stack, a common argument against iSCSI, will lessen as an issue.

Although a sudden surge in iSCSI popularity seems highly unlikely, just about everyone agrees that iSCSI deployments will continue to gain market share, although whatever progress iSCSI makes will depend partly on complementary technologies, such as virtualization and clustered storage. iSCSI's star is rising, albeit slowly.

— M.L. and Mario Apicella


The Three Pillars of Data

Apart from being the very driver of enterprise storage, data is the lifeblood of your business. Here's how to keep it healthy and safe, while ensuring its availability to users.

By Paul F. Roberts, Peter Wayner & Doug Dineley


Data is your most precious asset. Regulations governing retention, security, and retrieval now carry severe penalties for mishandling. And brave new IT architectures — in which siloed applications give way to service-oriented ones that span the enterprise — demand consistent, constantly available data independent of the software people originally used to create it.

Here, we examine the three pillars of data: security, quality and availability. The intent is to foster best practices that ensure your data receives the attention it deserves. No one can achieve zero defects. But the advice here could bring you one step closer.

Secure Your Enterprise Data
Regulations and a fear of banner headlines put the focus on data, not network security.

For DuPont, Gary Min may have seemed a model employee. A research chemist at DuPont's research laboratory in Circleville, Ohio, Min was a naturalized US citizen with a doctorate from the University of Pennsylvania who had worked for DuPont for 10 years, even earning a business degree from Ohio State University with help from his employer. During that time, he had moved up the ranks within the company, taking on various responsibilities on research and development projects within its Electronic Technologies business unit. He specialized in the company's Kapton line of high-performance films, which are used, among other places, in NASA's Mars Rover.

But Min's veneer of respectability began to crack on December 12, 2005, when he told his employer he would be leaving his job. According to a civil complaint filed by DuPont against Min, a company search the next day revealed that Min had recently been an avid user of the company's electronic document library, accessing almost 23,000 documents between May and December 2005, including more than 7,300 records in the two weeks prior to his giving notice. Alarmingly, Min had strayed from his area of specialization, rummaging through sensitive documents related to Declar, a DuPont polymer that competed directly with PEEK, a product made by Min's future employer, Victrex.

With Min indicating he would relocate to a Shanghai office of Victrex, DuPont appealed to both law enforcement and the civil courts that it was worried its former researcher was absconding with a treasure trove of trade secrets for Victrex and perhaps other Chinese companies.

DuPont is not alone. The broad outlines of the Min case — his Chinese nationality, his links to companies operating in that country, and the broad scope of his attempted intellectual-property heist from DuPont — are in keeping with what the FBI says is an epidemic of state-sponsored economic espionage. By one estimate, there are as many as 3,000 front companies in the United States whose sole purpose is to steal secrets and acquire technology for China’s booming economy.

Welcome to the brave new world of enterprise security, circa 2007. It's a world where the troubles of yesteryear — loud and stupid Internet worms and viruses such as MSBlaster, Sobig, or SQL Slammer — seem trivial. In their place are rogue insiders with legitimate credentials, armed with Trojans and rootkits controlled from afar that may lurk for years without detection, bleeding companies of sensitive information. It's a world in which premeditated plunder of specific data, rather than the mere breaching of the perimeter, is the point of network intrusions. And that means companies, more than ever, must monitor and secure data to prevent it from falling into the wrong hands.

Higher Value, Freer Flow

“This is a problem of the evolving value of data,” says Marv Goldschmitt, vice president of business development at Tizor, a data auditing and protection firm. “Data has taken on a value beyond what it originally had, and individuals don’t know how to deal with that,” he says. Moreover, the migration of almost all intellectual property and critical data to purely digital form, as well as the interconnectedness of corporate networks with each other and the Internet, stand in the way of discovering when data has been pilfered or that anything has gone awry, Goldschmitt says.

Security experts are painfully aware that clamping down on insider threats and data leaks is an order of magnitude more difficult than stopping malware. And while recognition of the data-security problem is spreading fast within enterprises, very few have taken steps to lock down their sensitive data and intellectual property.

“In our experience, most firms are far from addressing it,” says Phil Neray, vice president of marketing at Guardium, a database threat and security monitoring firm. “These companies have hundreds of systems installed around the world but very few installed to protect intellectual property.”

“The risk level is still very high,” says Steve Roop, vice president of products and

Vol/2 | ISSUE/134 6 m a y 1 5 , 2 0 0 7 | REAL CIO WORLD

Secure your enterpriSe dataregulations and a fear of banner headlines put the focus on

Ill

US

tr

at

Ion

S b

y p

c a

no

op

performance films, which are used, among other places, in NASA’s Mars Rover.

But Min’s veneer of respectability began to crack on December 12, 2005, when he told his employer he would be leaving his job. According to a civil complaint filed by DuPont against Min, a company search the next day revealed that Min had recently been an avid user of the company’s electronic document library, accessing almost 23,000 documents between May and

December 2005, including more than 7,300 records in the two weeks prior

to his giving notice. Alarmingly,

with PEEK, a product made by

With Min indicating he would relocate to a Shanghai office of Victrex, DuPont appealed

to both law enforcement and

data, not network security.

SPECIALSTORAGE

100001 1

SPECIALSTORAGE

100001 1

Feature - 04 The Three Pillars o46 46Feature - 04 The Three Pillars o46 46Feature - 04 The Three Pillars o46 46

Page 31: CIO May 15 2007 Issue

marketing at Vontu, one of a slew of smaller DLP (data-leak prevention) firms.

According to data accumulated from Vontu risk assessments on customer networks, approximately 2 percent of all sensitive or confidential files are exposed to theft by unauthorized personnel, and around one of every 400 e-mails that leave a company exposes sensitive data — either sent to an unauthorized recipient or sent to an authorized recipient in an insecure form that can be sniffed or otherwise stolen.

Companies usually overlook that exposed data because their security posture is still focused on network perimeters, not on what might be going on behind the firewall or even over secure connections with business partners and suppliers, says Paul Stamp, an analyst at Forrester. “The perimeter around data is shrinking. Between joint ventures and collaborative [business to business] stuff and remote users, the perimeter has become highly porous.”

Exposure via business partners and third-party contractors is a top concern at Communications Data Services (CDS), a subscription service bureau that’s part of Hearst, says Paul McCarthy, director of information services. In its databases, CDS maintains files (including credit card numbers) for 155 million active subscribers to publications such as Better Homes and Gardens, U.S. News and World Report, Vogue, and Readers’ Digest. Much of that sensitive data comes to CDS through channels that can be difficult to police, such as agents and third-party contractors, as well as over the phone and via the Web, McCarthy says.

Regulatory Imperatives

Securing critical data that may be used in a variety of contexts is a daunting prospect for any enterprise. But the harsh reality of regulations such as Sarbanes-Oxley and the PCI (Payment Card Industry) data security standard are helping set priorities for enterprises that might otherwise remain in denial.

In particular, Sarbanes-Oxley's requirement that companies audit the access of privileged users to sensitive data — and PCI's requirement to track user identity information whenever credit card data is touched — are pushing companies to home in on where sensitive data resides and how it is being used, Goldschmitt says.

At CDS, PCI and Sarbanes-Oxley prompted the company to take a close look at all of its processes for handling subscriber data, McCarthy says. In addition to doing its own SAS (Statement on Auditing Standard) 70 audits of internal security controls, CDS is regularly audited by third parties.

Increasingly, audits are forcing enterprises such as CDS to push security measures closer to where data resides, whether on laptops, in databases, or in shared directories, Stamp says. It’s a simple prescription but one that’s difficult to implement because most companies start out with a hazy understanding of what their sensitive data is, let alone where it resides on their networks.

“Companies wake up and realize, ‘We don’t know anything!’” Goldschmitt says. “We’ve had companies come to us and say, ‘We have 20,000 data servers and absolutely no idea which of them have sensitive data on them’.”

Zeroing In and Locking Down

When the panic subsides, the hard work of discovery begins. Fortunately, enterprises have more data security tools at their disposal today than ever before.

Most companies in the DLP space, including Vontu and Tizor, can audit network activity to find sensitive data such as credit card numbers, magnetic-stripe data, or intellectual property on database and file servers, and monitor user access to that data. Firms such as PointSec — now part of CheckPoint — and startup Provilla can perform similar audits at the desktop level, monitoring file copying to portable storage devices, as well as e-mail and Internet-based file transfers.

Once that key data has been identified, DLP firms offer various strategies for securing it — from tagging key intellectual property with signatures that raise alarms whenever they pass outside of the company's control to blocking USB ports to prevent data transfer to portable devices. None of those approaches is sufficient to protect data without larger organizational changes, experts say.

“There are really cultural changes that need to occur,” Guardium’s Neray says. “You’ve got to focus on insiders and trust — trust and verify.”

Companies need to define security policies that cover critical data and educate employees about acceptable behavior. "If you've got an SAP application, your company might access the database 22,000 times a day as part of your normal business processes. But if someone's using Microsoft Excel and bogus credentials to access SAP, that's a violation of policy," Neray says, adding that traditional perimeter defenses and identity- and access-management products also play a vital role in data security. In particular, companies should use their identity-management platforms and strict policies to link specific IP addresses to specific users, rather than allowing shared credentials to muddy the waters should a forensic examination need to take place. "The problem is you've got applications like SAP and Oracle eBusiness Suite, which have privileged credentials to access the database, and those are widely available in the IT environment. Developers are using them, [database administrators], and the help desk," he says.

Enterprises also need to build practical, bottom-up policies that actually get enforced, rather than imposing unrealistic, top-down security policies that just get ignored, Stamp says. "Once you have a handle [on] where your data is and where it's going, you can start shoring up your infrastructure from the ground up."

Building Barriers

Some of those measures can be straightforward. Companies seeking to protect data on laptops and other mobile devices have been a boon to top-tier data encryption vendors such as RSA and PGP.

Even at PKWare, makers of PKZip, simple encryption features that work across diverse platforms have helped drive sales. Data security now accounts for half of the company’s business, compared with just 20 percent three years ago, says Todd McLees, vice president of marketing.

As CDS has discovered, start with the obvious and build from there. The company used a layered approach to get a handle on external security — with standard security measures such as firewalls, VPNs, and SSL encryption — then added configuration control technology from Tripwire. More recently, McCarthy says, CDS has deployed outbound filtering technology from Palisade Systems that can do packet-level inspection and spot data such as credit card numbers that might be traversing the company’s network or leaving the company over FTP or HTTP.
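The content inspection such filters perform usually boils down to pattern matching plus a checksum. The sketch below shows the generic approach, a regular expression for card-like digit runs filtered by the Luhn check; it is not any vendor's actual engine, and the sample strings are invented.

# Sketch of the core of many content filters: find card-like digit runs
# and keep only those that pass the Luhn checksum, cutting false alarms
# from order numbers and phone extensions. Not any vendor's actual engine.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("order 1234, card 4111 1111 1111 1111, ext 5551212"))
# -> ['4111111111111111']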

CDS has gone further than tackling sensitive data as it flows among authorized employees inside the company. It also has determined the behavior of hundreds of companies that contract with the magazines CDS works with, many of which pay far less attention to data security — and may send spreadsheets or CDs with sensitive subscriber data to the company.

Nonetheless, the threat of a Gary Min-style rogue insider looms large. The goal, McCarthy says, is to put up enough barriers that it becomes almost impossible for a lone insider to do significant damage.

“You want to reduce it to the point where nobody can act alone and do something,” McCarthy says, “where you need a conspiracy of persons to make it happen.”

Improve Your Data
Ensuring data quality is always hard, but new tools are making the toughest task in IT a bit easier.

When I was a programmer at an investment bank, my desk was next to the department of 'data integrity', a small group with the thankless job of making sure that the databases held accurate records of stock transactions. The bank's computers could process millions of transactions in seconds, but a mis-typed key or a missing value could jam the entire assembly line for data.

At the time, the bank didn't want insight or truth in their databases — they just wanted the books to balance. It was almost as if data integrity were an afterthought. That view has changed. Data integrity has become a hot topic in many IT departments. The CEO who used to be impressed by the website with forms for customers to fill out is now wondering why the data is such a mess. The marketing group wants real leads backed by real data, not a bit dump filled with inconsistency and inaccuracies.

A number of software vendors are tackling the problem by offering tools and packages that treat data as more than a pile of bits. After all, the problems of data quality exist because bits can never be perfect reflections of the underlying information.

Scrubbing Data Clean

These systems often have a sophisticated gloss but are typically practical tools designed to help an IT shop remove the most glaring and expensive problems. So, the solutions generally take the form of plain old if-then-else statements. The systems scrub, or cleanse, the data by applying rules that remove all possibilities for false duplication.

One of the oldest and most common apps for data quality software is address 'cleansing', the process whereby a company takes a mailing list and ensures that all of the addresses are current, valid, and as complete as possible. Pitney Bowes Group 1 Software helped the US Postal Service develop the technology for parsing and correcting. It aggregates rules for understanding addresses into a modular application that can recognize errors, correct them, and add the most complete ZIP code. It can distinguish between the two identical abbreviations in 'St. Paul's St.' and understand that 'Saint Pauls Street' is the same road.
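The if-then-else flavor of such scrubbing rules is easy to picture. Below is a deliberately tiny, hypothetical rule set in that spirit; real engines such as Group 1's carry thousands of rules plus postal reference data.

# Deliberately tiny, hypothetical scrubbing rules in the if-then-else
# spirit described above; real address-cleansing engines ship thousands
# of rules and postal reference data.
import re

ABBREVIATIONS = {
    r"\bstreet\b": "St", r"\bsaint\b": "St",   # the two meanings of 'St.'
    r"\bavenue\b": "Ave", r"\broad\b": "Rd",
}

def cleanse(address):
    addr = address.replace(".", "").replace("'", "").replace(",", " ")
    addr = " ".join(addr.split())              # collapse stray whitespace
    for pattern, canonical in ABBREVIATIONS.items():
        addr = re.sub(pattern, canonical, addr, flags=re.IGNORECASE)
    return addr.title()

# Both spellings normalize to the same record key:
print(cleanse("Saint Pauls   Street"))   # -> 'St Pauls St'
print(cleanse("St. Paul's St."))         # -> 'St Pauls St'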

After early success with cleaning up addresses, Group 1 is now working to open up its tools so that they can help other parts of the enterprise. Navin Sharma, director of product management, explains that one big opportunity is in straightening out customer records, consolidating them when necessary. Its latest offering helps the sales force straighten out mistakes: when a new customer record arrives, Sharma explains, “We standardize it, we validate it and complete it. Is this customer already in the master data hub? Do I already have information? If so, I want to synchronize all of my systems with the latest information; otherwise, I want to add him as a new customer.”

Such cleansing processes can be complicated. Jeff Jonas, chief scientist at IBM’s Entity Analytics Solutions, says, “There are some risks if one overcleans the data — especially if trying to decide which incorrect values can be discarded — because you may end up dropping useful data.” At IBM, they avoid throwing out any data by venturing a best guess, not a permanent decision, about which values are 'clean'.

Business Makes the Call

Getting the input to make decisions about what is correct, or clean, is getting easier, because many of the new products have simple user interfaces that enable everyone in the enterprise to pitch in, a process that takes the weight off the shoulders of the IT department. Karen Hsu, principal product manager for data quality at Informatica, says her company is working to open up its tools to the people at all levels of the corporation.

“What we’ve heard from customers is, ‘I’m asked to look into why a customer name isn’t correct and that isn’t my expertise’,” Hsu says. “So we’ve let the business take on the responsibility. Those types of rules are things that the business can create and monitor continually. If there was a missing part, they would be notified by a dashboard rather than waiting for IT to do it.”

Informatica's latest offering, like many in the space, offers a visual programming language that can create rules and workflows for cleansing data. They make it easier for non-programmers to add rules and tweak the existing ones to cope with changing business conditions.

IBM has its own data quality solutions, WebSphere Product Center and Customer Center, which are designed to help customers create a single, correct version of the truth so that data can be used in a variety of applications without inconsistencies.

The structure and role for such tools is changing rapidly. The original tools were designed to work in the background to remove inaccuracies by parsing information, applying rules, and matching disparate sources. New versions from vendors work within an SOA, providing answers immediately, a process that allows developers to eliminate ambiguities or inaccuracies before they occur.

Compliance Alert

Vendors are also building dashboards that flag problems and let managers drill down into the data set to examine them. One of the biggest new applications for such tools is regulatory compliance. Software to ensure data quality can reduce workloads and prevent companies from inadvertently ignoring the law. Kathleen Hondru, VP of marketing at Innovative Systems, says her company is helping clients in banks and insurance companies scrutinize client lists and look for matches against government watch lists. The company’s matching engine can screen against all of the possible variations on a name and associate all of the potential 'aliases' with the original record.

This application is a good example of how tool vendors offer systems that do more sophisticated matching operations than can be easily accomplished with traditional relational databases. The tools preprocess information, and ensure that the matching is faster, simpler and more consistent.
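At its core, that kind of screening is normalization followed by a similarity score and a threshold. The bare-bones sketch below uses only the Python standard library; the watch-list entries and threshold are invented, and commercial engines add phonetics, alias tables and transliteration rules.

# Bare-bones sketch of watch-list screening: normalize each name, then
# flag candidates whose similarity clears a threshold.
from difflib import SequenceMatcher

WATCH_LIST = ["Mohammed al-Rashid", "Jon Peterson"]   # hypothetical entries

def normalize(name):
    return " ".join(sorted(name.lower().replace("-", " ").split()))

def screen(customer, threshold=0.85):
    cust = normalize(customer)
    hits = []
    for entry in WATCH_LIST:
        score = SequenceMatcher(None, cust, normalize(entry)).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Muhammed Al Rashid"))   # close variant still scores a hit
print(screen("Jonathan Petersen"))    # below threshold: no match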

These applications of different kinds of computer science research show that the domain is just beginning to enter the mainstream of the IT world. IT managers now ask whether data cleansing can help them produce more accurate results. The compliance officers who once asked for simple tracking and alarm bells are now wondering whether better tools can provide more comprehensive oversight.

The Future of Quality

Better tools for a variety of data quality applications are in the works. Theresa DeRycke is a so-called data therapist for CRMfusion, a company that specializes in data quality solutions for on-demand CRM. "Once the data is cleaned up, you have to think about maintaining it," she says. "I think the next hot topic is execution of the data — territory management. How do we divvy up all the clean data?"

One company, Silver Creek Systems, is taking automation of data matching to the next level with semantic technology. Its DataLens solution separates such complex data as product information into content groups, standardizes it, and creates taxonomies to minimize human intervention.

But, it’s important to note that humans can never be taken out of the equation. Contradictory or incomplete data strewn around in various databases is the ugliest problem in IT. Reconciling and normalizing all that data is hard, tedious work. There’s no silver bullet, but new solutions are going a long way toward enabling enterprises to create a single version of the truth without driving IT insane.


Vol/2 | ISSUE/135 0 m a y 1 5 , 2 0 0 7 | REAL CIO WORLD

Ask an expert about data availability and how to ensure it, and the conversation quickly turns to human error. Not that IT mistakes are the leading cause of unplanned downtime. Gartner identifies software failures as the chief culprit, and 'operator error' as the second most common cause, ahead of hardware outages and site disasters. But of all of these major causes, human error is the one that IT can really do something about.

IT folks close to the action generally agree with Gartner’s ranking, although some suggest that it may even have underestimated the role of mistakes.

Software failures often result from configuration errors, and sometimes they arise as the result of improper testing: an incompatibility isn't discovered because an application was tested on a different system configuration than the one in production, for example, or performance testing didn't give the app the workout it would get in real life.

Even many hardware failures can be laid at the feet of IT malpractice. If systems aren't cooled properly, if they're improperly racked, or if the procedure for starting them up and shutting them down isn't followed correctly, equipment life is shortened and premature failures can result. Even for dumb hardware, it pays to read the manual.

But whether it’s software testing practices, hardware maintenance procedures, or the plain old boneheaded mistake lurking in the dark, the question is what to do about it.

Goofproofing

If you've recently suffered from a blunder-induced outage, you might be tempted to ask, "Why me?" Mauricio Daher, a principal consultant with the storage services provider GlassHouse Technologies, can tell you: not enough red tape. In Daher's line of work, which is helping large IT organizations prepare for disaster and recover from outages, he's seen his fair share of glitches attributable to human error. "Out of those," he says, "it is mostly, 'Gee, somebody reconfigured a LUN [logical unit number] that was actually a production LUN but they thought it was something else.' These are simple things that I see happening again and again because of the nature of my business."

You might think human error is an equal-opportunity affliction, but these sorts of slips just don’t happen in better-run enterprises, Daher points out. “By the time you get to a point where you can input those commands, you’ve been through so many bits of red tape that it’s impossible to make a mistake,” he says. “That type of mistake really doesn’t happen in a mature organization, because there are so many safeguards.”

Daher and GlassHouse use the CMM (Capability Maturity Model) to evaluate datacenters. Essentially, CMM is a model for process improvement that measures maturity level on a five-point scale. When Daher assesses an IT organization, he looks for standard operating procedures, whether SLAs are in place, how the organization measures against those SLAs, and whether there is accountability at various points in the personnel chart. Training, documentation, and standardization are the essential ingredients of process success. Falling short on the CMM scale has more to do with a lack of discipline than a shortage of skills.

“At one end [of the CMM scale], you might have some superstars who do a really good job of managing [the datacenter], and they’re indispensable, but unfortunately they haven’t documented fully, and if one of those guys gets hit by the proverbial bus, you’re in trouble,” Daher says. “And the other extreme is a fully documented environment where everything is automated, and if something’s not automated, there is a manual procedure in place that runs like clockwork.”

Which of those descriptions hits closest to home? Choosing a well-known standard such as ITIL (Information Technology Infrastructure Library) is helpful in that new hires already versed in it will get up to speed in your environment faster, although Daher notes that many successful datacenters had similarly rigorous practices in place years before ITIL became fashionable. The key is that your internal standards be rigorous, well documented, and drilled into everyone in the organization. And those standards should extend down to simple tasks such as configuring a switch and even to the naming conventions used for your zone sets.


Ironing Out the Process

For Tom Ferris, manager of servers and storage for an international financial institution that prefers to remain nameless, the success of his company’s high-availability initiative depends as much on implementing standardization and controls as it does on traditional disaster-recovery planning.

Most of the problems his group experiences are, he says, due to inadequate testing, misconfiguration, or other mistakes, and they are revamping their processes to address them. “A lot of the emphasis of the high-availability program is on putting the technology in place for redundancy and fail-over capabilities and that type of thing, but in my mind that doesn’t really get you high availability,” he says. “Most of the outages we’ve experienced, and if you look at what the analysts say, most of the outages in general, are not caused by the technology, they’re caused by people making changes.”

The high-availability program dovetails with a utility computing initiative also going on at the company, giving Ferris and his group an opportunity to change the processes for application provisioning and administration in a way that serves both. The goal is to move away from dedicated servers for each application to a shared infrastructure model, in which the application owners will purchase a set of services — compute, storage, availability, and so on — from the IT group.

Each of the IT services will be available in gold, silver, and standard service levels. Before deploying an application, the owners will need to determine how much computing resource it needs, how much storage it needs, and the level of availability it requires, all of which will determine whether the app is deployed on a stand-alone machine, into a cluster with local fail-over, or into a cluster that supports both local fail-over and fail-over to a business continuity site 30 miles away.

While each service level maps to a specific standard configuration, the administrative model will be consistent across all three tiers. The consolidated infrastructure dramatically lowers hardware costs, especially for high-availability configurations and, as Ferris notes, especially if you are faced with different groups having their own separate test and dev, staging, and production servers.
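As a sketch of how such a service catalog might be captured in code, the snippet below encodes three tiers and the deployment each maps to. The tier names and deployment targets follow the description above; the recovery-point figures are assumptions added purely for illustration.

# Hypothetical service catalog: tier name -> standard deployment configuration.
from typing import Optional
from dataclasses import dataclass
@dataclass(frozen=True)
class ServiceTier:
    deployment: str                 # standard configuration for this tier
    failover_site: Optional[str]    # remote site, if any
    rpo_minutes: int                # recovery point objective (assumed values)
CATALOG = {
    "gold": ServiceTier("cluster with local fail-over", "business continuity site 30 miles away", 15),
    "silver": ServiceTier("cluster with local fail-over", None, 60),
    "standard": ServiceTier("stand-alone server", None, 24 * 60),
}
def provision(app_name, tier_name):
    # Map an application onto the standard configuration for its chosen tier.
    tier = CATALOG[tier_name]
    target = tier.deployment
    if tier.failover_site:
        target += ", with fail-over to the " + tier.failover_site
    return f"{app_name}: {target} (RPO {tier.rpo_minutes} min)"
print(provision("oracle-finance", "gold"))

The point is that the catalog, not the application owner, decides the configuration, which is exactly what keeps one-off, error-prone setups out of production.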

"Especially when you get into high availability," he says, "[having all of your apps running on their own servers] becomes very unwieldy. If you can take all of your Oracle databases and combine them on, let's say, a three-node cluster, like we're doing, you can house a lot of databases there. You don't have to have 15 separate database servers, and based on the requirements of the application you can configure the database for the type of fail-over you need pretty easily, because you've already got your cluster built."

One key element is standardizing on configurations for production servers and ensuring that the servers in test and development match it. A central group responsible for release management will usher any new code or changes into production, making sure they are bundled up from test and development, put into staging, run through a checklist of tests, and finally promoted into production.

"In the staging and production environments, the application developers and application owners won't have administrative access anymore," Ferris explains. "They might not even have administrative access in test and development." If they do, Ferris says, the environment would be closely managed to ensure that the configurations in testing match those of production servers. The IT group uses BladeLogic to manage those configurations and control releases, and to run compliance reports to check for variance from standard configurations. The controls help prevent mistakes from impacting production servers, and the standard system images help speed up provisioning — a benefit that extends to disaster recovery. "We've packaged the configuration of [our] Veritas cluster server, the baseline OS, and the Oracle database into a reusable configuration that makes it easy to rebuild the environment from scratch," says Ferris. "You can set variables for IP addresses, so it's easy to re-create a multitier application in a new environment."
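The reusable, variable-driven configuration Ferris describes can be illustrated with a generic template. The sketch below is not BladeLogic or Veritas syntax; the field names and addresses are invented, but it shows how substituting a handful of variables re-creates the same stack at a new site.

# Generic environment template: write the definition once, substitute per site.
from string import Template
ENVIRONMENT_TEMPLATE = Template(
    "cluster_name: $cluster\n"
    "node_1_ip: $node1_ip\n"
    "node_2_ip: $node2_ip\n"
    "db_listener: $db_vip:1521\n"
    "baseline_image: solaris-oracle-vcs-baseline\n"
)
def render_environment(cluster, node1_ip, node2_ip, db_vip):
    # Produce a concrete environment definition from the shared template.
    return ENVIRONMENT_TEMPLATE.substitute(cluster=cluster, node1_ip=node1_ip, node2_ip=node2_ip, db_vip=db_vip)
print(render_environment("finance-prod", "10.1.0.11", "10.1.0.12", "10.1.0.20"))
print(render_environment("finance-dr", "10.9.0.11", "10.9.0.12", "10.9.0.20"))  # same stack, recovery site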

Investing in Availability

While many of the associated costs are coming down, keeping datacenters running will always require significant investment in the people that maintain them, not to mention the time and effort poured into improving the processes by which the whole infrastructure is managed. Training, standards, and careful management of changes will only increase in importance as applications continue to become more complex and more interdependent.

You might find a good lesson in the famous case of the missing NetWare server that ran for four years after being sealed behind a wall by construction workers: the best thing you can do for a system is to leave it alone. Of course, that’s not possible for most business applications, especially in these days of rapid change. But if you can’t build a wall, you can at least start laying down some red tape. CIO

Reprinted with permission. Copyright 2007. InfoWorld.

Send feedback about this feature to [email protected]


Lean and Mean

How do you tame your storage costs? By reining in server sprawl or by corralling space and energy requirements. Here's how three cowboys did it.

By Stacy Collett

No matter how vital stored data might be, storage managers still have to abide by the mantra "Do more with less."

So with growing amounts of data and rising power costs, companies are turning to new technologies that help consolidate hardware, reduce space and power demands, and lower costs. Here's how a few storage champions are using the latest technologies to get lean and mean.

MONEY-SAVER

Approach 1: Standardize your storage equipment, streamline your storage-area network, eliminate direct-attached storage, and deploy a tiered-storage strategy.

Atlantic Health, a non-profit health care system in New Jersey, took a hard look at its storage systems last year when faced with the cost and space challenges of adding a replicated hot site about 20 miles away from its Morristown headquarters.

When Pat Zinno, director of infrastructure services and support, assessed how his mix of SAN and captive storage systems were being used, the results surprised him.


While the SAN storage operated efficiently, at 98 percent utilization, less than half of the 30TB available on locally attached storage was being used. “A handful of servers were always getting hammered with data. They were running out of disk space, and right next to it, there’s a server with 200GB of free space, but I can’t use it because it’s captive in another box,” Zinno recalls.

The financial picture wasn’t pretty, either. Though the cost of local disk space was cheap, at about Rs 2.25 per MB, Zinno still ended up spending Rs 6.75 crore to get 11TB of usable space. With the more efficient SAN storage, even at double the cost of local, “it was actually $402,000 (Rs 181 lakh) cheaper than our locally attached storage when you look at the cost per usage,” Zinno explains.
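A back-of-the-envelope calculation shows why the comparison favored the SAN despite the higher unit price. The per-MB prices and the 30TB/11TB figures come from the article; decimal units and the 98 percent SAN utilization are assumptions used only to illustrate the arithmetic.

# Cost per usable GB = total spend / capacity actually usable.
def cost_per_usable_gb(price_per_mb_rs, raw_tb, utilization):
    total_cost = price_per_mb_rs * raw_tb * 1_000_000   # decimal MB per TB
    usable_gb = raw_tb * 1_000 * utilization
    return total_cost / usable_gb
das = cost_per_usable_gb(2.25, 30, 11 / 30)   # Rs 6.75 crore spent, only 11TB usable
san = cost_per_usable_gb(4.50, 30, 0.98)      # double the unit price, ~98 percent utilization
print(f"Direct-attached: about Rs {das:,.0f} per usable GB")   # ~6,136
print(f"SAN:             about Rs {san:,.0f} per usable GB")   # ~4,592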

Atlantic Health took drastic measures to overhaul its entire storage system. First, Zinno created a dedicated storage team to oversee all current and future storage needs. Next, the health care system standardized on EMC devices, streamlined its SAN, eliminated direct-attached storage and deployed tiered storage — most of which is consolidated within a single cabinet.

In this tiered-storage structure, data is classified as mission-critical, business-critical or business-important and stored on an EMC Symmetrix DMX-3 system. This forms the basis for Atlantic Health's recovery time objectives and recovery point objectives during a disaster. The mission-critical systems, such as patient registration, medical charts, emergency room systems and Microsoft Exchange, are all directly attached to the DMX via Fibre Channel. Business-critical data, such as financial management, payroll and intranet data, are called up via iSCSI using a network-attached storage gateway. Storage classified as business-important also uses iSCSI and is backed up to disk using EMC NetWorker software.

Making choices about what data goes where based on its value has allowed the health care facility to reduce its storage acquisition costs.

Zinno also chose an EMC Centera archiving platform and an e-mail archiving component to connect with the DMX system. “From a performance standpoint, we shrunk the database size of Exchange data down to what’s locally on the server, so our Exchange system is performing better,” he says. And the majority of the mail that people never accessed now sits out on a much cheaper Centera disk. Zinno is also working with business units to determine what other noncritical data can be moved to the Centera for long-term storage.

The result: Zinno hasn’t had a full year to measure results, but some immediate benefits are already visible. “We have gained the ability to start doing disk-to-disk backups instead of using tapes. Our early estimates are showing a 35 percent reduction in the time it takes to complete a backup,” he says.

While Zinno says he expects to achieve the cost and space savings he had forecast, he sees even more cost benefits down the road.

With storage consolidated into a single cabinet, Zinno says he can allocate the storage needed for a specific application and business unit and then accurately charge back the cost to the business unit. "It gives us a better way to paint the picture of where money is spent in the business rather than one big IT storage purchase," Zinno says.


The Lean Storage Machine

Data stores are getting fat. In fact, the amount of stored data worldwide is heading toward a zettabyte in the next 10 years. A storage crisis looms, say experts. In a recent survey we did, IT managers estimated that their companies' storage capacity will jump a hefty 43 percent on average in the next 12 months.

Burgeoning storage demands, coupled with the rising cost of power, are forcing companies to be ruthless about keeping their storage systems lean. Some are re-jiggering their overall storage structures, and others are employing new technologies. In both cases, the goal is to consolidate hardware, reduce space and power demands, and lower costs.

Conxerge, for example, cut costs with a SAN shared-disk system that uses less power and fewer disks and servers. The company's power consumption has dropped by 70 percent — an annual savings of Rs 94,500 per rack. At Fotolog, storage servers can expand volumes as needed, saving Rs 27 lakh in administrative costs a year.

Lean storage is catching on. Among the respondents to our survey, 62 percent said they are taking steps to reduce costs related to their companies' overall storage hardware footprints. And 53 percent said power and cooling requirements are a moderate to major consideration when buying data storage equipment.

So, what does this mean for IT? Columnist Mark Hall argues that despite what some in the industry say, the impending storage crisis won't elevate the status of IT executives. What IT must do, he says, is make sure workers have as much information as they need.

Whether IT reputations will be polished or tarnished is yet to be determined. But following the examples of companies like Conxerge and Fotolog and saving thousands in storage costs is a sure way to come out of the capacity crunch looking like a star.

— S.C.


SPACE-SAVER

Approach 2: Implement modular storage with 4Gbit/second Fibre Channel port connectivity in a three-tiered storage model.

Intellidyn, a marketing database company in Hingham, Massachusetts, relies on millions of intricate information records to deliver customized consumer data to its clients. Last year, technical director Rajeev Kumar Gagneja realized that the agency's three master databases and vast client data warehouses were outgrowing the company's storage system.

"Our storage requirement has gone from 500GB to almost 40TB of data in less than five years. Two years from now, I estimate that it will be a petabyte," Gagneja says. Additional servers would require additional hardware, more contract maintenance workers and more physical space at the agency's Secaucus, New Jersey, data center.

“The real estate in the data center is extremely expensive. So we really needed to manage the rack space,” plus power and cooling costs, he adds.

Highly available clustered SANs had been implemented to address scalability (2TB to 30TB). But current configurations wouldn’t allow hard disk drives to be mixed within the same storage frame, which prevented Gagneja from moving data as needed.

To save physical space, consolidate stored data and streamline its storage infrastructure, Intellidyn implemented Hitachi Data Systems’ AMS500 modular storage system. It has 4Gbit/second Fibre Channel port connectivity in a three-tiered storage model for the high-availability clustered servers. The system was tooled on Sun servers, running Sun Solaris 8, with Veritas Storage Foundation software.

First-tier storage, configured as RAID 5 on 15,000-rpm Fibre Channel drives, holds Intellidyn's customer data and information marts. The second tier is configured with RAID 5 and mid-performance Fibre Channel for client data warehouses that provide historical data snapshots and time-series analysis. Near-line backups to disk are handled by third-tier RAID 6 Serial ATA drives.

The results: “I’m doing the same thing in one rack versus what I would’ve done with three racks. Rack space savings over a three-year period is almost $60,000 (Rs 27 lakh) per rack,” Gagneja says. Add to that about $85,000 (Rs 38.25 lakh) in power savings and another $85,000 for support fees, he notes.

Gagneja plans to add virtualization components for disk-based backup this year. "The key factor in the whole solution was the design," Gagneja says. In addition to being scalable enough to handle Intellidyn's growing data requirements, he says, "it has to address the tiered-storage model with integration for future virtualization technologies that we will be rolling out."

ENERGY-SAVER

Approach 3: Deploy a SAN shared-disk system using blade servers to reduce the number of drives being used, and decrease the overall power usage.

Philip Skeete never gave much thought to the energy-saving features touted by some storage devices. That is until Skeete, president and CEO of Conxerge, a Houston-based managed services provider, decided to move the company's data storage to a new collocation facility in Dallas.

"The cost of power became more expensive than the cabinet price and the bandwidth put together," Skeete says. The collocation facility charges as much as Rs 900 per amp. With the average storage cabinet requiring at least 30 amps, "you were looking at $600 (Rs 27,000) of power alone per cabinet" each month, Skeete recalls. Multiply that by 50 servers in 10 cabinets, and energy costs could easily reach Rs 1.35 lakh per month.

The company couldn’t afford to run underutilized servers in the facility, so Skeete began to look for ways to fully utilize its storage systems. He went with a SAN shared-disk system from EqualLogic, using blade servers from IBM, HP and Appro International, to reduce the number of drives being used and lower the overall power usage.

The result: “If you consider even with the high-density systems we’re using, those blade servers each have two drives per blade, 50 blades in a cabinet — that’s 100 drives. [With shared disks] we can cut that down to 28 drives to provide storage for that many servers,” Skeete says.

What’s more, the storage array itself turned out to have very low power consumption during internal tests — about 2 amps, “which is probably about the same as a high-end desktop computer,” he adds.

Skeete says he can also boot an entire cabinet of blades from a single EqualLogic array. “We get a much better level of data protection with snapshots and data replication,” Skeete says. “We can also deploy new blades faster than any other method by simply cloning an existing server. This takes literally seconds, and the server being cloned doesn’t even need to be shut down.”

Less power and fewer disks and servers have cut Conxerge's power consumption by 40 percent to 70 percent, he says, for an annual energy savings of Rs 94,500 per rack. CIO

Reprinted with permission. Copyright 2007. Computerworld. Send feedback about this feature to [email protected]


The Search for Storage ROI

The desire to quantify storage benefits could be the difference between a CIO driving enterprise benefits — with management — and him merely overseeing storage implementations in the organization.

IT costs today are escalating due to increasing demands for data. Applications and storage environments that companies depend upon have become critical drivers of business processes and the decisions that impact organizational growth and profitability.

Quite naturally, CIOs are under great pressure to justify IT investments based on business value. CIO research shows that many IT heads have issues with either justifying investments on storage or deriving ROI from existing storage deployments.

To explore this, CIO India organized panel discussions on the topic 'Deriving Storage ROI' as part of the CIO Focus-Storage series of events in Bangalore, Delhi and Mumbai. The overriding sentiment was that ROI measurement needs to be built into an enterprise's storage strategy — although the panel in Delhi differed from the other two cities on whether an IT organization needs to justify storage investments at all.

The Delhi chapter agreed that storage constitutes a part of business practices today, so there mustn’t be a question mark on returns. Justification would be required if storage is perceived as part of IT upgrades, noted Ajay Khanna, CIO of Eicher Motors.

Further, the panel agreed on the scope of roles within an enterprise vis-à-vis justifying costs. Typically, ROI is of concern to the CFO. To the CEO of a company, it will be a business issue and as long as it aids growth, efficiency and business returns, storage expenditure would be completely justified in his eyes. “It has to be a business justification. ROI cannot be used for IT purposes,” said S.R. Balasubramanian, executive VP of ISG Novasoft.

On the contrary, the Mumbai and Bangalore chapters were in favor of a CIO justifying storage investments. While all three panelists in Mumbai agreed that ROI, even if unquantifiable, is the best way to justify purchase of storage solutions, the Bangalore panel took the discussion forward by mooting methods to measure ROI in some detail.

"ROI is karma of the past life — it just sticks to you," said Bhushan Akerkar, executive director-IS & IT of AC Nielsen. Hence, it's better to deal with it, he asserted. After all, everyone puts money into a business to make profits out of the investment, so it shouldn't surprise anyone that the heads of finance would want to know the monetary returns.

“Even an NGO needs to justify its expenditure. So, I don’t see anything wrong in having to give an ROI for storage investment if my CFO wants one,” said Akerkar. “There exists a plethora of metrics to measure the value that we derive. The fact is: attempts to justify our investments are natural. If there’s no attempt, the laws of economics are not holding good for you,” he opined.

Jethin Chandran, head-IT, infrastructure planning & PMO of Wipro Technologies, concurred at the Bangalore event. “Whether you like it or not, any IT investment needs to have justification. This is to ensure that you are spending the right amount on storage,” he said.

In Mumbai, Sunil Mehta, senior VP & area systems director of JWT, provided anecdotal insights. He explained how storage in the advertising industry can mean anything from boxes of show reels in VHS form or CDs or DVD formats, which in most cases require physical space. Storage solutions are, therefore, an integral part of the system.

JWT has an audit system in place wherein threshold limits have been set up for storage, he said. Once those are crossed, additional storage is acquired. This process in itself is justification or ROI in terms of costs of additional storage. So, justification and ROI exist in such industries, but they are built into the system rather than being a one-time exercise. The regular audit and updating save on other costs, too.


Manish Bapat, EMC's national manager-NAS & CAS, gave an overview of storage strategies to deal with burgeoning volumes of data. Later, Rajesh T. Nair, a consultant from CA, put the spotlight on regulatory compliances as external drivers of storage.

Considering that it's in support of a business project, should IT justify storage investments? — Vijay Ramachandran, Editor-in-Chief, CIO


Compiling Data on Benefits

Prior to the panel discussions in Bangalore and Mumbai, Susmita Shukla, senior manager (IT & SDE-PEIL) of Philips Electronics India, made a presentation on 'Justifying ROI for Storage' at each event. She spoke at fair length on storage consolidation and application server consolidation as two approaches for IT organizations to be cost-effective vis-à-vis storage. Later, as a panelist, she asserted that it is in the best interests of a CIO to provide data on ROI.

In the absence of backup, for instance, a CIO becomes answerable if and when there is a data crash. “If we give the CEO and CFO justification in terms of returns and also in terms of downtime in case of a crash, they can realize the urgency,” said Shukla, highlighting the need for ROI data.

Her ROI model was created precisely for this reason, she said. "When I joined Philips, I saw rows upon rows of servers, but not enough backup storage. When I approached the management, the reaction was 'Sorry…but why?'" she recalled.

“With support from the CIO, I had to drill down data right from when the company started — how the head count had grown and, along with it, data had grown, what was the backup window. Based on that, we did a trend analysis,” she explained. Business then gave her the need and armed with these metrics, Shukla was able to convince everyone. Subsequently, the approach changed from ‘No…’ to ‘How?’

Wipro Technologies' Chandran agreed with Shukla — the latter featured in the Bangalore chapter, too. Chandran, in fact, believes that a CIO must show ROI on storage within three years. "For this, it is important to have a dialogue with business and know where they are going." This helps a CIO determine what is critical.

He cited his experience of signing a service-level agreement (SLA) for e-mail. The SLA clearly demanded 100 percent availability of e-mail — not even one e-mail should be lost. “For this, you have to reduce backup time and increase the number of swappable disks,” Chandran pointed out. It paved the way for a discussion on the tools to calculate ROI.

R.K. Upadhyay, deputy GM-IT of BSNL’s Bangalore Telecom District, said he should be able to see real cash flows in order to justify ROI. “If my disaster recovery needs 50 servers and I consolidate, I can store in different locations. So, even if one location fails, another will work,” he added.

But, what is the best way to convince management? Is it good to raise the fear of downtime to get the big bosses to sign a cheque? Shukla felt it can work. “Suppose the cost of one engineer is $250 and if, because of downtime problems 100 engineers are unable to work, this has a huge impact upon the functioning of the company. So, it is easy to justify this with the cost of downtime,” she explained. Chandran felt this is not always a good way of doing it, but ultimately this is reality. “I don’t think that CFOs are dumb to fall for downtime all the time. You have to provide them with data and tell the CFO that this downtime will impact business,” he said, pointing to the need for CIOs to be proactive.
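Shukla's downtime argument is easy to turn into arithmetic. The article quotes a cost of $250 per engineer without stating the period; treating it as a daily figure below is purely an assumption for illustration.

# Productivity lost while systems are down: idle headcount x daily cost x duration.
def downtime_cost_usd(engineers_idle, cost_per_engineer_per_day, days_down):
    return engineers_idle * cost_per_engineer_per_day * days_down
# 100 engineers idle for even half a day already dwarfs the price of extra backup capacity.
print(downtime_cost_usd(100, 250, 0.5))   # 12500.0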

In Akerkar's opinion, ROI is nothing but benefits being monitored. It can be done in terms of opportunity costs or, better still, the 'If we did not have this storage, how much time will it take to retrieve? And in case a disaster happens, how much time will it take for us to be back on our feet?' concept. So, the loss in terms of operating income can be the approximate ROI. "The best thing is to deal with it, take the help of the financial guys, keep a band of error and, on that basis, calculate," Akerkar advised.

S.K. Sehgal, GM-IT (core banking), State Bank of India, seated in the audience in Mumbai, felt that no justification seems necessary in banking. However, Akerkar insisted that the whole reason for operating in the core banking domain is to provide a better customer experience. That itself is the ROI on the investment in storage.

Since storage is a part of business practices today, there mustn't be a question mark on returns, asserted the Delhi panel that comprised (from left) Sandeep Parikh, GM (internal IT & procurement) of Keane India; Rajeev Seoni, CTO of Ernst & Young; S.R. Balasubramanian, executive VP of ISG Novasoft; and Ajay Khanna, CIO of Eicher Motors.

Up Against Downtime

The subject of downtime arose in Delhi too, where the panel said most of the importance attached to IT applications and upgrades stems from the criticality of data or its usage. The impact of downtime adds to the criticality factor. Information — about people, processes or databases — is critical to any business process.

All panelists agreed that if databases crash, it affects business revenues. Sandeep Parikh, GM (internal IT & procurement) of Keane India, pointed out that in foreign organizations, downtime is typically costed in terms of full-time-equivalent manpower. In India, manpower is cheaper, so, he said, we need to evolve our own models based on hardware criticality or software concerns.

The Mumbai panel, in fact, felt that the cost of downtime itself is an ROI or justification for investment. Sunil Mehta felt that if storage investment helped cut down on resources, it would be justified anyway. Besides, why wait for a disaster to happen? Projecting losses for disaster and using that figure as an ROI is a more feasible idea, he noted.

In the service industry, the client wants answers to certain questions: do the companies have a proper security audit, and is there a data security procedure laid down? Based on these answers, the deal is made or broken, and sometimes more business comes in. The idea is that if a company has best practices, it can be trusted. And, therein lies your ROI: client satisfaction. The interesting contour of the downtime debate is that the criticality of data varies. For instance, criticality of a production process data will be very high, so storage has to be maintained even if it is at a high cost. Storage for non-critical data can be accorded less expensive storage models.

A large number of pressure tactics also ride in to justify storage. “It could be compliance or regulatory issues as pitches by vendors, but in most cases these are not really relevant,” said Rajeev Seoni, CTO of Ernst & Young and another member of the Delhi panel.

The idea is to achieve a good balance between benefit, need and risk for the organization that storage can provide. Seoni explained: “If we find that right balance, we go ahead even if it’s a risky solution. Typically, it’s not always about money or compliance, but how you make sure it’s optimal for the organization at that point of time and if you can scale up going forward. That’s what we really look at.”

ISG Novasoft’s Balasubramanian agreed but insisted that though this balance is important, it is also prudent to consider maintenance costs over time, valued against the importance of the data stored. That equation will justify acquisition of storage. After all, management always understands business justifications better than technical ones.

Even TCO should be calculated very carefully, said Khanna of Eicher Motors. "Since the lifetime of applications has come down to three years from the earlier five, we also have to consider upgrade costs and options in three years' time," he explained. In most cases, at the end of three years, the cost of upgrading is higher than the cost of buying a new system.

This brought the panel to the question: what is the right investment for storage? To arrive at this, a CIO needs to collect all relevant data, ascertain criticality, work out the sizing of this data, look at the solutions available, and talk to vendors.

Storage must have a justification to ensure that an enterprise is spending the right amount on it, noted the Bangalore panel that featured Jethin Chandran, head-IT, Infrastructure Planning & PMO of Wipro Technologies; Susmita Shukla, senior manager (IT & SDE-PEIL) of Philips; and R.K. Upadhyay, deputy GM-IT of BSNL’s Bangalore Telecom District.


Balasubramanian had an idea to add: “Consider the volume of data we have now and the rate of growth we anticipate in the next few years. Plot it across and then arrive at a total storage value that is needed…”
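Balasubramanian's 'plot it across' exercise amounts to a compound-growth projection. The sketch below uses the 43 percent annual growth figure cited in the survey mentioned earlier; the 40TB starting point is a placeholder.

# Project capacity needs year by year, assuming steady compound growth.
def project_storage(current_tb, annual_growth, years):
    return [round(current_tb * (1 + annual_growth) ** y, 1) for y in range(1, years + 1)]
print(project_storage(40, 0.43, 3))   # [57.2, 81.8, 117.0]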

Utilization of storage was another key point of the discussions. Independent research shows that 35-40 percent of storage is either underutilized or remains unutilized. Since many companies have different islands of storage based on different vendors’ products, the problem is not easy to overcome, said Chandran. Further, one should have some unutilized storage as a buffer for future growth, he added.

The Path Forward

The panels also identified storage virtualization as a problem area because very little is happening on that front in India. It will start happening in organizations that have reached a high maturity level with different storage products, noted Susmita Shukla at the Bangalore forum.

Chandran, who is pursuing storage virtualization because he has multiple storage systems from multiple vendors, said that in several cases his customers resist it: irrespective of the security in place, they are not comfortable with all their data residing on a single physical box. Upadhyay, however, reiterated the importance of storage virtualization in the context of telecom, because CIOs in the sector have to store data on mobile calls, user locations and tower locations. This can run into several gigabytes per mobile user, he said. "So, we are going in for both storage virtualization and server consolidation."

The Delhi panel also mooted the issue of data. Unused or irrelevant data need to be cleaned out at regular intervals, opined the panel. Typically, most data are irrelevant after 90 days. So, a large part of the optimization is to retain only what is actually critical to the business.

Ajay Khanna mooted a solution: "We are using a technology for backing up users' data onto the repositories that only copies incremental data. Changes are copied, and a log of the last three changes is kept. These are low, inexpensive discs that sit on the central storage. This data is not very high I/O-intensive, so we can leverage on that technology. For production data, which is I/O-intensive, we are using Fibre Channel discs for storing design data, using IP SAN storage."

The subject of virtualization as an answer for data storage woes cropped up in Delhi too, but the panel agreed that it costs a huge packet; not all companies are comfortable with that kind of expenditure. Sandeep Parikh of Keane India talked about alternatives to this expensive technology. “We do have networks in place. We have server pools that allow applications to operate on, and we’ve got virtual storage. It’s not exactly virtualization, but it's there in a crude form,” he explained.

Another option is offline storage devices like discs and pen drives, but that would be at an elementary level. A member of the Delhi audience pointed out that a study of advantages of offline storage devices as against online devices could solve the problem of ROI and justification.

On storage audits, the Delhi panel felt it was not a great idea because few companies want to add another audit to their already existing bag of woes. A company like Keane India has eight to nine audits annually. But in most companies, a storage audit is a regular affair and as awareness grows, controlling of data expansion will also help.

Akerkar of AC Nielsen, however, pointed out that audits and tests are hardly carried out for something that is working well. “Whatever is working fine has very little face value. So, no one bothers. Only when it gives trouble, maintenance is done.”

The conclusions reinforced the value of storage and that justification should be required only after optimal utilization of existing storage. Increasing awareness of criticality and the ability to separate the wheat from the chaff will also help save data storage resources in the long run.

Even if it's hard to quantify, ROI is the best way to justify the purchase of storage solutions, felt Sunil Mehta (left), senior VP & area systems director of JWT, and Bhushan Akerkar, executive director-IS & IT of AC Nielsen, who were part of the Mumbai panel.


In spite of all the interest surrounding e-governance in India, a vast majority of these projects fails. Professor Rahul De’, who holds the Hewlett-Packard chair on ICT for Sustainable Economic Development at IIM Bangalore, takes a critical look at the reasons behind the failures, and offers suggestions to improve the success rate.

By Balaji Narasimhan

Stakeholders of any e-governance project must be actively involved in defining its scope and requirements, asserts Prof. Rahul De’, who holds the H-P Chair on ICT for Sustainable Economic Development at IIM Bangalore.

CIO: Can you throw some light on your new paper that evaluates the impact of e-governance on marginalized people?

Rahul De': In the paper that you mention (The Impact of Indian E-Government Initiatives: Issues of Poverty and Vulnerability Reduction, to appear this year), I have looked at the impact of e-government on certain marginal groups in our country, such as dalits and landless farmers. My findings do not report positive impacts. Most e-government systems, particularly those meant to serve large numbers of citizens — such as land records delivery systems or urban utility payment systems — do not serve the needs of the poor and the marginal. I found that e-government systems fall far short of enhancing basic capabilities of such marginal groups and, in that sense, do not contribute to their development. These systems largely benefit those who are relatively well-off.

You’ve observed that stakeholders in e-governance projects are rarely included in ascertaining their requirements. How does this affect the system?

One of the central tenets of information systems analysis and design, as also of software engineering, is that user requirements have to be included while designing the system. For e-government systems, the most important users are the citizens, the demand-side stakeholders who finally consume the services of the system. They are rarely included in the design of the system.

The context is important here. I am talking about citizen-facing systems, the ones like Bhoomi, which are used by millions of citizens. The e-governance systems that are used by government departments internally also fail to do sufficient requirements-analysis of users. However, I am not including them in my comments here.

The drawbacks of not including citizens for requirements are many. First, there is a narrowing of the scope of the system, where citizens' concerns about access, availability and charges are not taken into consideration, and the system is designed in a manner best suited for the use of the department. Further, systems are designed to fit an available budget and the time constraints of officials championing them.

Second, the exclusion of citizens from analysis also excludes the variety and diversity of citizen needs. Statewide e-government systems assume that a common standard of language, format and interface for all types of electronic services is useful and acceptable to everyone, although this is hardly ever the case.


Citizen involvement in the initial stages of a project would facilitate a buy-in from them, which could help the eventual acceptance of the system. With systems designed at the supply end, citizen acceptance is harder to get.

How often is the deeper impact of e-governance projects measured, as opposed to the immediate effects?

There are hardly any e-government projects that measure second-order effects. All the studies I have seen focus entirely on the first-order effects of increase in volumes and increase in the speed of processing. ROI is calculated only on the revenues generated from the immediate services that the system provides. There is very little analysis of the broader impact of the system.

I believe that e-government projects have to be treated as development projects, such as projects for improving literacy or for increasing employment, and their impact has to be measured in that manner. If an e-government project is designed to help farmers have better access to land records, then the right question to ask about the project is not how easy it is to get the records, but what impact the system has had on the lives of the farmers, or how easier access to land records has improved their lives. Has it improved their access to credit? Has it helped them sell their produce at better prices? Has it helped them access government subsidies?

Similarly, if a system is helping urban citizens pay their water and electricity bills at a convenient location and at low expense to them, the questions to ask about the system are not only about how much easier and faster it is to pay the bills, but whether the system has helped them get better access to water and electricity.

“We see computers everywhere except in the productivity statistics,” said Nobel Prize winning economist Robert Solow. How true is this in the context of e-governance projects?

Solow’s statement came to be known as the ‘productivity paradox’ in academic literature. The problem that Solow pointed to, as research showed later, was that of measurement and of the time required for the effects to appear. The same is true for e-government systems in India.

Let me explain with the help of an example. The Bhoomi system of Karnataka helps farmers obtain RTC (record of rights, tenancy and crops) certificates and also make mutation requests electronically. RTC certificates are essentially used to get bank loans and for title verification, whereas mutation is a long drawn-out procedure used to change ownership details on the title.

One of the productivity issues here is farming output. If farmers can use the efficiency of Bhoomi to obtain loans more easily and use these as inputs for their farms and improve productivity, then the system could show a tangible effect. But, these effects will only show in the long term as second-order effects, and will be difficult to measure. (We will have to account for the impact of other government programs for agriculture, droughts and loan melas to show the distinct impact of Bhoomi.) So far, my research does not show any long-term impact of Bhoomi on agricultural productivity. But, it may be too early to tell.

Another impact that Bhoomi could have, owing to the mutation queue feature that it has, is to impact the mutation rate of land parcels. What this basically means is that it makes it easier for farmers to sell land. Is this happening? Again, this is hard to measure and separate out from other economic activity that could also affect land sales. Also, these effects show up in the long term.

But these are the very effects we have to seek and measure.

The World Bank says that 33 percent of e-governance projects in developing countries are failures, while 50 percent are partial failures. What do you attribute such high rates of failure to?

Robert Schware (lead informatics specialist-global ICT Department, The World Bank) stated that the failures are due to lack of planning, setting projects to election cycles, not having a clear business case, not having a change management plan and not having a top-down approach. These are broad reasons that explain, partly, the failures across the world. For India, where the rate of failure is equally high, the reasons are similar. However, the reasons for failure go beyond those factors. As I mentioned earlier, Indian systems are designed by the supply-side for their convenience and not for the convenience of citizens.

Resistance to computerization is said to be a problem with e-governance. How far is this still true?

I have done extensive research on this issue. It is very clear that most employees of government organizations do not resist computerization per se, particularly if they can visualize how it will help them with their work. They also don't resist change in the existing ways of doing work. From what I've seen, many employees welcome change. The problem comes when they see that the technology introduction is going to change their basis of power. This is when resistance emerges.

All employees of government organizations have a certain power relation within their organization and with people from outside. This power may have a basis in how much access they have to others, how much information they control and how many documents they manage. If a technology is introduced that removes this basis of power, then the change is heavily resisted. There are instances and examples of such resistance in every e-government system I have studied.

It is also believed that e-governance can reduce corruption. In your experience, how true is this?

It is not necessary that e-governance will reduce corruption. It all depends on the manner in which e-governance affects the root causes of corruption. The argument that e-governance will increase transparency, reduce interaction with officials, and increase efficiency of services — and hence will reduce corruption — is a bit naïve.

Some types of corruption are caused by the extensive procedures that have to be followed by government departments to provide a service. Citizens are willing to pay 'grease money' to move things along faster. While e-governance can make a small change here by enabling faster processing, greedy officials can still slow things down. Unless the laws are changed to reduce paperwork and official involvement, there is not much that e-governance can do.

Other types of corruption are caused by lack of accountability among officials and entire departments. Officials can willy-nilly extract rents from citizens; they are not afraid of consequences because their supervisors are also on the take. The most that e-governance can do in situations like this is shift the location of corruption, literally, from one department or window to another.

There is some hope that with the RTI Act in place, citizens will demand accountability. Here, e-governance can help to make available information. However, I am yet to see any concrete examples.

What else can be done? For example, about 3 percent of citizens paid bribes despite Bhoomi, according to one study.

The study refers to bribes paid after Bhoomi was implemented. Before Bhoomi, a much higher number of people paid bribes. But this has to be understood in context. Before Bhoomi, farmers obtained the RTC from the village accountant who was, traditionally, paid a small service fee, although there was no charge for the RTC copy itself. After Bhoomi, the farmers pay Rs 15 for the RTC certificate, which is printed out by kiosk operators and signed by a village accountant. So, IT did reduce corruption in this case. However, the process that is more prone to corruption is mutation. Bhoomi enforces a queue discipline for mutation processing — however, there is no substantive impact on reduction of corruption. Mutation is a legal process requiring the involvement of several officials.

In a study, you stated that problems have to be addressed politically and through wide participation, rather than with technical solutions. Is this being done?

No, this is not being done. The issue here is the same as including demand-side stakeholders in the design of the system. Political participation drives governance reform, as is the case in many European countries, and this prompts governments to design and implement systems that citizens want and demand (and not the other way around).

Which, in your opinion, is the best e-governance initiative in India? What makes it so?

I would not like to take a stand on this. There are now many e-government systems in India that are functioning and serving a purpose. Their implementation was difficult and it took a lot of effort on the part of the officials to see them through successfully. To this extent, they deserve to be celebrated. However, there is much more that can be done in terms of design and inclusion of broader citizen needs.

Much more thought must go in towards ensuring that the figures quoted by the World Bank do not persist. CIO

Special correspondent Balaji Narasimhan can be reached at [email protected]


Stacking IT Up

What four next-gen storage technologies have in store to help you with those present worries.

By Mary K. Pratt

Storage innovation | It’s a simple equation: as data storage needs grow, so do storage costs. In fact, even as prices continue to come down, storage equipment now accounts for 19 percent of the IT hardware budget, according to a report from Forrester Research. And that figure doesn’t include costs such as energy and management.

“Disk might be cheap, but storing the increasingly high volumes of data that companies generate isn’t. It’s actually quite expensive,” says Forrester analyst Andrew Reichman.

And as the costs for physical space and energy (for both powering up and cooling down the hardware) continue to rise, storage efficiency will become a higher priority. Here are four next-generation technologies that could help.

Solid-state Disk/Flash Technology

Definition: Data storage devices that rely on non-volatile memory, such as NAND flash, rather than spinning platters and mechanical magnetic heads found in hard disk drives. Vendors include Adtron, Samsung Electronics and SanDisk.

Until recently, solid-state disk found its home in niche markets where the need for speed outweighed costs. But dropping prices and technology advances have increased interest.


“Cost is always going to be the driver here,” says Dave Russell, an analyst at Gartner. “The cost is coming down, and to the extent that holds true, that is going to help the market really take off.”

That price drop has been steep: solid-state disk prices fell 66 percent in 2006 and are expected to drop 60 percent this year, according to Gartner analyst Joseph Unsworth. Yet, hard drives are still cheaper, Unsworth says, therefore deployments of solid-state disk have been limited, usually to specialized uses in industrial, military and aerospace organizations.

Solid-state disks have been around for well over a decade, says Mike Karp, an analyst at Enterprise Management Associates. They look like regular disks, but without the characteristic spinning motion. And because there are no moving parts, they’re faster, he says. They also require less energy, although Karp says energy savings are a minor part of the cost equation. Because organizations don’t have large-scale deployments of this technology, they won’t see large-scale energy savings, either, he says.

But the use of NAND flash technology with solid-state disk could edge up the number and types of deployments, extending the technology beyond enterprise storage for use in laptops, for example. Unsworth estimates that solid-state disk with NAND flash technology could mean a 5 percent to 10 percent energy savings over a conventional notebook hard drive; it also offers faster performance in a smaller space.

“It could be important in ultraportable notebooks, but it’s not an advantage in desktop systems,” Unsworth says. “It’s still a very niche market because of cost. Right now, consumers and IT managers don’t know why they should pay a premium for such technology.”

High-density Disks
Definition: As their name suggests, high-density disks can hold more data than conventional storage options. They do so by packing more bits into the same space, either by storing bits vertically instead of using the traditional horizontal pattern or by storing information in three dimensions, creating a hologram read by laser. Vendors include Seagate Technology and Hitachi Global Storage Technologies.

Higher density represents the next step in the evolution of storage, with perpendicular storage and holographic storage giving IT managers new options.

“They’re increasing the density per square inch, which to the end user increases the space and price efficiency of the solutions,” says Brian L. Garrett, an analyst at Enterprise Strategy Group.

Perpendicular storage takes the areal density and increases it by layering the bits vertically, says Karp. “Bits actually do have physical length, so instead of lying down, you stand them up on the disks,” he explains.

The potential savings with this technology are high, says Dianne McAdam, a consultant at The Clipper Group. Perpendicular storage promises to increase storage in the same physical space by a factor of 10, she says. “It also saves on energy, because we’ll need one-tenth the number of disk drives to store the same amount,” says McAdam.
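
A rough back-of-the-envelope sketch makes that factor-of-10 claim concrete. The Python snippet below is purely illustrative; the environment size, drive capacity and per-drive wattage are assumptions, not figures from the article or any vendor.

import math

# Illustrative only: how a 10x increase in areal density cuts drive count and power.
TOTAL_DATA_TB = 100                 # assumed amount of data to store
OLD_DRIVE_TB = 0.3                  # assumed capacity of a conventional drive
NEW_DRIVE_TB = OLD_DRIVE_TB * 10    # perpendicular recording: roughly 10x the density
WATTS_PER_DRIVE = 10                # assumed average draw of one spinning drive

old_drives = math.ceil(TOTAL_DATA_TB / OLD_DRIVE_TB)   # 334 drives
new_drives = math.ceil(TOTAL_DATA_TB / NEW_DRIVE_TB)   # 34 drives

print(f"Conventional: {old_drives} drives, ~{old_drives * WATTS_PER_DRIVE} W")
print(f"High-density: {new_drives} drives, ~{new_drives * WATTS_PER_DRIVE} W")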

Similarly, holographic storage promises to pack more into a smaller space by moving storage from 2-D to 3-D. “You start to look at [them not as] bits on a surface, but as being a cube. If you look at things in two dimensions, you have an x and y axis. But in three dimensions, you have not only the x and y axis, but a z axis, too,” Karp explains.

One of the few holographic storage devices currently on the market is the Tapestry 300R from InPhase Technologies. The drive costs Rs 8.1 lakh, and the 1.5mm-thick platters are Rs 8,100 apiece.

Hybrid Hard Drives
Definition: These use non-volatile flash memory as a large buffer to cache data before storing it on a traditional spinning drive, allowing the platters on the hard drive to rest most of the time. Vendors include Seagate and Samsung.

Hybrid hard drives are another evolutionary step in storage that could bring some important savings to IT.

The concept is fairly straightforward: “It’s sort of cache memory attached to a hard disk drive,” McAdam says, noting that she sees a future for this technology not only in PCs, but also in enterprise systems.

Data will write to a cache memory and, when the cache fills up, move to a hard drive. That concept isn’t new. Hard drives already have a buffer, says Garrett. But those buffers are in the 4MB to 8MB range; with hybrid hard drives, the buffer can reach 1GB. “It’s just a larger buffer, and it’s non-volatile,” Garrett explains.
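
A small, hypothetical simulation shows why that larger buffer matters: the less often the buffer fills, the less often the platters have to spin up to absorb a flush. The buffer sizes match those quoted above, but the write workload is an assumption made up for illustration.

# Hypothetical illustration: counting how often each buffer fills (and so forces
# the platters to spin up for a flush) under the same made-up write workload.

def spin_ups(buffer_mb: int, writes_mb: list) -> int:
    used, flushes = 0, 0
    for write in writes_mb:
        used += write
        if used >= buffer_mb:
            flushes += 1     # buffer full: spin up the platters and flush to disk
            used = 0
    return flushes

workload = [2] * 5000        # assumed workload: 5,000 writes of 2MB each (10GB total)

print("8MB buffer :", spin_ups(8, workload), "spin-ups")      # 1,250
print("1GB buffer :", spin_ups(1024, workload), "spin-ups")   # 9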

Like other advances in storage technology, hybrid hard drives could save energy and space. It takes energy to power up and keep disks spinning, McAdam says. Because spinning creates heat, the disks need to be cooled. In hybrid disk drives, the hard drive is spun down so it requires less energy.

“If the disks aren’t spinning [all the time], it costs less to power, and you can pack more disks more closely together because they don’t generate as much heat,” says Russell.

Analysts aren’t ready to quantify how much money this technology may save, however. “It’s still too new, and we don’t have all the specs on this,” McAdam says. “They’re just coming to market now.”

Moreover, McAdam sees some circumstances where this technology could require more power than conventional disks do.







“If for some reason you have to keep powering this thing up and down and up and down, you may not be able to see some savings,” she says.

Storage Resource Management Software
Definition: It provides a centralized view of a company’s storage environment. The software enables better control, management and provisioning of, as well as more accurate reporting on, storage resources. Vendors include EMC, HP and Symantec.

For many organizations, stored data is something of a black hole: it keeps expanding, and no one has a complete understanding of what it holds.

In fact, the average utilization rate of storage capacity is 40 percent to 50 percent, Russell says.

“No one thinks we should run at 100 percent — you want to have some reserves. But running at 80 percent to 90 percent would have enormous savings in utility costs and floor space,” he says.

Storage resource management software can help organizations reach that target, says Russell. “It’s about optimizing what you have and delaying future investments. There will be a time when you’ll have to add more resources but you’d like to get more out of what you’ve already deployed,” he says.

This software looks at all storage in a company and allows it to be managed as one pool, McAdam says. “With this software, because we virtualize how it looks, we can drive up utilization. If we drive up utilization, we can get away with less physical storage.”

“The potential is huge,” adds Reichman. Bumping up utilization just 10 percent could translate into a 10TB reduction in the storage capacity needed. And at a cost of Rs 31.5 lakh per terabyte for high-class storage, that’s a Rs 315 lakh savings.
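
The arithmetic behind that figure is easy to reproduce. The sketch below assumes a 100TB deployment (implied by the 10TB reduction Reichman cites) and uses the Rs 31.5 lakh per terabyte quoted above.

# Reproducing the savings estimate quoted above. The 100TB deployment size is
# an assumption implied by the 10TB figure; the cost per TB is from the article.

deployed_tb = 100            # assumed total deployed capacity
utilization_gain = 0.10      # utilization improved by 10 percentage points
cost_per_tb_lakh = 31.5      # high-end storage, Rs lakh per terabyte

avoided_tb = deployed_tb * utilization_gain      # 10 TB you no longer need to buy
savings_lakh = avoided_tb * cost_per_tb_lakh     # Rs 315 lakh

print(f"Capacity avoided: {avoided_tb:.0f} TB")
print(f"Savings: Rs {savings_lakh:.0f} lakh")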

Storage resource management software offers a broad range of capabilities, including provisioning, capacity planning and performance management, says Bob Laliberte, an analyst at Enterprise Strategy Group. The software used to just focus on the storage array, he says, “but what you’re seeing these days is more of an emphasis on looking at the whole stack.” Some analysts have also lobbied to expand the term to include something such as infrastructure resource management or infrastructure services management.

But Laliberte says IT shops are interested in capacity planning to improve their use of storage they’ve already deployed. “The larger, more complex environments will get more value from this,” he says. “You could be saving millions a year — especially the larger shops. But really, any size company can benefit from what this has to offer.” CIO

Send feedback on this feature to [email protected]

Is Flash Memory Ready for PCs?

Flash memory is a wonderful thing: it’s shock-resistant, it doesn’t have moving parts, it uses battery power more efficiently than disk drives do and its price is crashing. Even so, it isn’t likely to replace hard drives on mainstream PCs anytime soon.

Financial analysts are watching, among other things, to see how far Apple expands its use of flash memory in lieu of disk drives in iPods. The continuing decline in flash memory prices also is renewing speculation on when PC makers may start producing mainstream laptops and notebooks — not just niche systems — that include the technology.

Such systems are beginning to arrive. Last month, Fujitsu Computer Systems announced two laptop models with NAND flash memory, one supporting 16GB and the other 32GB. Fujitsu touted flash memory’s lack of drive heads that could crash and other moving parts that could fail, saying that solid-state drives based on the technology “are noise-free, generate virtually no heat and weigh half as much as traditional notebook hard drives.”

What may help push more systems with solid-state storage is the decline in flash prices. The Semiconductor Industry Association (SIA) reported recently that worldwide sales of semiconductors in February increased 4.2 percent from the same month a year ago. John Greenagel, an SIA spokesman, said that the industry group thinks intense price competition is affecting revenue. “There is just a very competitive market,” he said.

And the flash market is getting more competitors. For example, last month Intel introduced its first solid-state device, with a storage capacity of up to 8GB. But flash memory is still seen by analysts as being too expensive for use in notebook PCs that have 40GB or larger disk drives.

Robert Semple, an analyst at Credit Suisse Group, said in a report issued last month that he doesn’t expect the per-gigabyte price of NAND flash memory and disk drives to intersect “at mainstream capacities in notebooks” until sometime beyond 2012. And the storage-capacity sweet spot for notebook disk drives is a moving target. “We believe the actual sweet spot in 2010 will be closer to 250GB,” wrote Semple.

In addition, he voiced some caution about NAND technology’s ability to manage data-write demands on PCs. “Simply put, data is more read-orientated within cell phones and MP3 players and less write-intensive, which plays to NAND’s strengths,” Semple wrote. “Managing data within a PC is inherently more difficult, and NAND has not performed as well in these environments.”

Power savings would be a major reason for Apple to use flash technology. According to a report released in February by Prudential Equity Group LLC, a 30GB disk drive can store about 40 hours’ worth of videos but supports only 3.5 hours of battery life to watch them. Replacing the hard drive with flash would allow for a 60 percent increase in battery life (to roughly 5.6 hours), Prudential said.

Flash memory prices declined 60 percent annually over the past two years and fell another 30 percent in this year’s first quarter, according to Prudential.

—Patrick Thibodeau





The Trouble With Storage Management
Enterprises lack tools to provide a comprehensive, storage-inclusive view of their IT services.
By Mario Apicella

Pundit

Storage | It may sound hasty to dismiss a technology that many companies are yet to deploy or even evaluate, but some of the vibes I am getting lately from vendors suggest that storage management applications may become obsolete before becoming mainstream.

It’s hard to give a short answer to the question of what’s wrong with storage management, but perhaps the beginning of the end for the technology is the limited scope revealed by its name.

Does it make sense to devote so much effort to reining in just a single piece of the infrastructure puzzle? Shouldn’t storage be orchestrated in harmony with other important pillars such as servers, networks, and, above all, applications?

To be fair, some vendors have not spared efforts to integrate storage management with other disciplines. But quickly finding the source of a problem when it lies within the storage labyrinth remains a difficult-to-nearly-impossible task for many systems administrators.

The fact is, in many organization charts, titles such as 'database administrators' belong to a different box than their storage counterparts. That often makes problem resolution an adversarial affair rather than a cooperative effort.

But conflicts between administrators are more an effect of the divide between storage and other resources than its cause. The main problem is that companies lack tools to provide a comprehensive, storage-resource-inclusive view of their IT services, which complicates overall management, monitoring, and planning.

Companies such as Onaro say there’s a remedy. Onaro’s recently released SANscreen Foundation 3.5 is a suite of applications that promises to fill the gap between storage and other resources in your datacenter.

Visit Onaro's website and you’ll find a Flash demo of SANscreen. Deployment starts with an automated discovery of the topology of your SAN, similar to many SRM applications. Next, you add a service model to that static picture, specifying the access, capacity, performance, and recovery characteristics delivered by your SAN.

From there, you begin creating policies to define what the SAN should deliver: application A should have dual path access to its database, no less than 1GB of free capacity, remote replicas, etcetera. Once that is done, SANscreen stands on your network like a referee at a tennis match, ready to intercept, record, and call out any policy violation or service level degradation.
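
The article doesn’t show SANscreen’s actual policy language, but the referee idea can be sketched in a few lines. The Python below is a hypothetical illustration only; the policy fields, names and checks are assumptions for the sake of the example, not Onaro’s API.

from dataclasses import dataclass

# Hypothetical sketch of the 'referee' role described above: declare what the SAN
# should deliver for an application, then flag any observed state that violates it.

@dataclass
class Policy:
    app: str
    min_paths: int           # e.g. dual-path access to the database
    min_free_gb: int         # e.g. no less than 1GB of free capacity
    remote_replica: bool     # e.g. remote replicas required

def violations(policy: Policy, observed: dict) -> list:
    problems = []
    if observed["paths"] < policy.min_paths:
        problems.append(f"{policy.app}: only {observed['paths']} path(s) to storage")
    if observed["free_gb"] < policy.min_free_gb:
        problems.append(f"{policy.app}: free capacity below {policy.min_free_gb}GB")
    if policy.remote_replica and not observed["replicated"]:
        problems.append(f"{policy.app}: no remote replica found")
    return problems

policy_a = Policy(app="Application A", min_paths=2, min_free_gb=1, remote_replica=True)
state_a = {"paths": 1, "free_gb": 5, "replicated": True}    # assumed observed SAN state

for problem in violations(policy_a, state_a):
    print("POLICY VIOLATION:", problem)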

However, SANscreen provides much more than just a list of fouls. In addition to examining a list of violations, you can drill down to specific details, review the policy for correctness, and quickly identify the affected applications and storage components.

Moreover, you can run what-if scenarios to test the impact of a new policy or the effect of adding a new application server. I should also stress that despite areas of overlap, SANscreen doesn’t replace or compete with storage management applications. For tasks like storage provisioning or zoning, you should rely on the usual device-specific or third-party tools.

Judging from what I have seen of SANscreen, I'd say Onaro succeeds in bridging the gap between storage and the rest of IT. But it is worth noting that the application has also attracted the interest of vendors including Cisco, Hitachi Data Systems, and Oracle, suggesting that vendors both outside and inside the storage spectrum think it’s time to move beyond conventional storage management. CIO

Mario Apicella is a senior analyst at InfoWorld. Send feedback on this column to [email protected]




