
Catalyst
Dell technology accelerating business growth / Issue No. 3, 2012

Energy exploration demands state-of-the-art IT (Page 12)

The infrastructure issue. Do you need 10GbE? 16 / High-powered workstations 18 / Efficient data center upgrade 22 / When virtual desktops make sense 15

Program Team
Russell Fujioka, Christopher Ratcliffe, Tom Kolnowski, Susan Fogarty

Editorial Team
Editor: Susan Fogarty
Contributing Writers: Ken Drachnik, Tom Farre, Susan Fogarty, David Reoch, Erik Schmude

Ryan Partnership Team
Carole Ambauen, Roland Ambrose, Calvin Lew, Cathy O’Leary, Jacqueline Teele, Henry Wong, Derrick Martin, Janet McKasson, Kay Elliott, Dave Higdon

Great attention has been paid to the content of the articles published in this magazine. However, the editorial board and publisher do not accept liability for any incomplete or incorrect copy. None of the articles or information from this magazine may be copied, published or used elsewhere without explicit prior approval from the editorial board.

If you do not wish to receive this marketing communication from Dell in the future, please let us know by sending us your name, company name and postal address. Post: Dell – Suppressions, PO Box 59, Ross-on-Wye, HR9 7ZS, United Kingdom. Email: [email protected]. Tel: 0800 3283071. Follow us at @DellCatalyst.

Infrastructure: As important as ever

Welcome letter from our editor. Catalyst magazine is a Dell publication.

I know what you’re thinking. Why is Catalyst doing an issue about infrastructure, when everyone else in the industry can’t stop talking about cloud, Windows® 8 and the latest i-device? Today it’s all about the applications, right?

Not so fast. You and I know that deep down, we all love infrastructure. And the plain truth is that the slickest device or the most sophisticated cloud application will do absolutely nothing without the back-end equipment to support it. In fact, most end-user organizations and providers require large upgrades in order to leverage advancements like virtualization, cloud, convergence and automation.

When these companies deploy state-of-the-art network, server and storage equipment, what amazes them is the way it combines complexity and simplification in one package. New infrastructure makes the most of advanced, high-performance technology, so that IT departments can design and manage their environments in a more streamlined, plug-and-play way.

Take the case of customer Spectrum ASA, a player in the oil and gas industry, profiled on page 12. Spectrum was having trouble transmitting the large amounts of data necessary to process its seismic imaging surveys. The company installed new network switches that create a distributed core fabric. The fabric provides high-capacity networking and also produces a redundant and scalable environment for the company’s high-performance server clusters.

We also look into the story of mobile search provider Easou on page 22. Based in China, Easou is a leading mobile search engine experiencing dramatic growth due to increasing mobile internet traffic. The midsized company is on track to add 1,000 new servers every year to its four data centers. Read about how it is implementing Dell™ PowerEdge™ server technology to help keep up with all that traffic.

For those of you with more application-oriented minds, we have plenty to offer as well. Be sure to read expert Erik Schmude’s advice on migrating to the cloud on page 11. And don’t miss our article on how you can use systems management software to help comply with PCI regulations (see page 24).

Happy reading! As always, please contact me with your comments and story ideas. I’d love to hear from you.

Susan Fogarty
Editor, Catalyst magazine
[email protected]
@DellCatalyst


Want a digital version of Catalyst? Download @ dell.com/catalyst. Subscribe @ zinio.com/catalyst.

6 Market Trends: High-tech threats create security hazards.
8 Dell Innovators
10 Expert Advisors: Advice on virtualization assessment and cloud migration.
12 Network supports growth of seismic proportion: An upgrade is helping Spectrum ASA meet high demands for oil.
15 Is VDI right for your business?
16 10GbE offers increased affordability, performance and reliability
18 Production studio doubles video output: Dell™ workstations help streamline concert video production.
20 Keeping mobile users happy and secure
22 Easou harnesses mobile evolution: Mobile search engine provider delivers with data center upgrade.
24 Compliance made easier with systems management
26 Products: Server improves database and collaboration performance
28 News: Solution Centers offer hands-on contact
30 New Products


Market Trends

High-tech threats plus mobility, social sharing equals security hazards

The proliferation of mobile devices, combined with increased connectivity and data sharing as well as the growing sophistication of hackers and malware, makes it clear that every business requires solid security policies and education of employees and IT staff. New global research from CompTIA reflects the concerns of your peers and how they are addressing them.

Data is courtesy of non-profit IT association CompTIA, from the Information Security Trends Ninth Annual Report, published in February 2012. The study was conducted with an international sample of 1,183 IT and business executives directly involved in setting or executing information security policies and processes in their organizations. Countries covered include Brazil, India, Japan, South Africa, the U.K. and the U.S.

Assessing the cybersecurity landscape

Security threat (Serious / Moderate)
Malware: 59% / 35%
Data loss/leakage: 52% / 38%
Hacking: 51% / 38%
Understanding security risks of emerging areas: 46% / 43%
Social engineering/phishing: 45% / 42%
Intentional abuse by insiders: 44% / 40%
Physical security threats: 43% / 40%
Lack of/inadequate enforcement of policy: 40% / 44%
Lack of budget/support for security: 39% / 42%
Human error among IT staff: 36% / 47%
Human error among end users: 35% / 53%

Perceived security risk of mobility

Perceived risk (Serious / Moderate)
Employee downloading apps to mobile devices: 42% / 42%
Mobile ads infected with malware: 40% / 40%
USB flash drives: 40% / 45%
Use of open WiFi networks: 39% / 40%
Theft/loss of corporate mobile devices: 38% / 42%
Shortened URLs infected with malware: 35% / 47%
Employees using personal devices for business purposes: 34% / 48%
Auto-dialing/texting malware: 33% / 43%

Factors driving cybersecurity concerns
Greater interconnectivity of devices, systems and users: 46%
Greater availability of easy-to-use hacking tools: 45%
Criminalization of hackers: 44%
Rise of social networking: 44%
Increased reliance on Internet-based applications: 42%
Sophistication of threats exceeding IT expertise: 42%
Consumerization of IT: 37%
Continued use of legacy operating systems, web browsers, etc.: 37%
Volume of threats exceeding capacity to thwart them: 36%
Challenges in finding/training employees: 34%

Mobile security strategies in use (U.S. data)
Passcodes: 76%
Encryption of data devices: 40%
Requiring updates/patching to OS and apps: 33%
Disallowing jailbreaking of OS: 26%
Ability to track and/or wipe lost device: 25%

Types of corporate data affected by data loss/leakage
Confidential financial data: 56%
Corporate intellectual property: 44%
Confidential data about employees: 42%
Confidential customer data: 26%
Unidentified data type: 22%

Where data loss/leakage occurs
Data at rest: 51%
Data in motion: 46%
Data in use: 37%

Data loss prevention steps firms plan to take
Stricter separation of work and personal devices/communication: 53%
Reinforcing/creating policy on sharing company information via social media: 53%
Encryption of files on mobile devices and portable media: 49%
Reinforcing/creating policy on mobile device safety: 48%
More restriction on access to sensitive corporate data: 44%
Less employee use of consumer applications for data storage/file sharing: 32%
Greater focus on preventing spyware: 21%

CompTIA is the voice of the world’s information technology (IT) industry. Its members are the companies at the forefront of innovation, and the professionals responsible for maximizing the benefits organizations receive from their investments in technology. CompTIA’s Public Advocacy group focuses on issues affecting the IT industry, with particular emphasis on representing the interests of small and mid-sized IT companies and entrepreneurs. For more information, visit comptia.org or follow CompTIA at facebook.com/CompTIA and twitter.com/comptia.

[Chart: Organizations catch many security incidents, but not all. For the U.S., U.K., Brazil, India and Japan: companies that definitely or probably experienced a security incident in 2011, average number of incidents, number of incidents classified as serious, and likelihood of experiencing an undetected security incident.]

Outside of requiring passcodes, few organizations have implemented any comprehensive mobile security strategy.


Dell Innovators

Innovators exist everywhere — around the globe and in every industry. These businesses not only rely on Dell technology, they take advantage of its many benefits to make their businesses more productive, efficient and profitable. Know a Dell Innovator that should be highlighted in a future issue of Catalyst? Nominate them at @DellCatalyst.

ToonBox Entertainment, Toronto, Canada

Who they are: ToonBox Entertainment is an animation house dedicated to creating the best-quality content, including 3D stereoscopic film, using state-of-the-art technology.

How they innovate: ToonBox is taking advantage of technology efficiencies to deliver seamless animation with a stereoscopic look to its customers for a much lower price than traditional films produced in Hollywood. The company’s first feature-length film, The Nut Job, required significantly increasing its IT infrastructure. In its render farm, ToonBox installed 25 Dell™ PowerEdge™ C6100 servers with dual Intel® Xeon® processors, each with six cores. On the front end, the company invested in Dell Precision™ T5500 workstations because of their ability to produce animation of very high quality in a short period of time.

Watch the video for the whole story on ToonBox: http://dell.to/KDIkPD

Softwerx (Pvt) Ltd, Colombo, Sri Lanka

Who they are: Softwerx is an IT services and software development consultancy focused on designing, implementing, and managing custom IT infrastructure and software for mission-critical environments.

How they innovate: Softwerx prided itself on creating reliable IT for its clients, but when it received a project request for a high-performance development environment, the company knew it was facing a challenge. Softwerx contracted with Dell and its partner, Softlogic, to deploy a new high-performance server test laboratory that would boost the company’s capability to handle and attract more sophisticated customers. The company installed Dell PowerEdge T710 and 1900 servers with Intel® Xeon® processors, lowering total cost of ownership by 40% and reducing server power consumption by up to 90%.

To learn more about Softwerx, download the entire article: http://bit.ly/JjZhCo

Navicure, Inc., Duluth, Georgia

Who they are: Navicure is an internet-based medical claims service provider. The company takes on account management for medical caregivers, allowing them to focus on their patients.

How they innovate: Between storing active customer data and meeting HIPAA compliance regulations, Navicure has extremely high storage demands, which were formerly outsourced. The company turned to on-site Dell EqualLogic™ PS6000 series iSCSI storage arrays, installing 34 arrays. The system can be managed in a few hours per week and has achieved a 1-hour RPO and 1-hour RTO. Navicure also deployed a VMware®-based virtual infrastructure on Dell PowerEdge R710 servers.

To learn more about Navicure, download the entire article: http://bit.ly/IUgb9Q


Expert Advisor: Virtualization

Virtualization assessment: The pain is worth the gain
By David Reoch

If you want to build a patio deck off the back door of your house, going down to the lumber store and hauling a big load of lumber back to the house as your first step is probably not the best idea. You almost always want to start out with a plan. You’ve got to take measurements, think about the deck’s relationship to the house, how it will be supported and the maintenance it will need. But the fact is, a lot of us do want to just gather up a bunch of material and start cutting wood and driving nails. It’s not fun to plan, scope, gather data and create detailed lists.

Obviously, that’s a bit of satire. But planning and assessment is one of the longest and most difficult parts of any significant project, including sizing a transformation from physical to virtual infrastructure or scaling an existing virtual infrastructure. Fortunately, there are best practices and open source or off-the-shelf tools that can help you get the job done.

Taking inventory

There are many elements involved in an inventory of IT assets. The low-hanging fruit is physical devices, including servers, storage and networking components. Once physical devices are captured, it’s time to correlate overall IP address inventory, resource utilization per system (network, CPU, RAM, disk), and actual system workload, which, simply put, refers to the applications that a server is running. There are three more key inventory items to capture: power consumption, cooling requirements, and rack space utilized by the existing hardware. These items will be used in your final assessment, when you’ll calculate a cost-based justification for the move to virtualization as you realize cost savings from reductions in these three categories.
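To make that cost-based justification concrete, here is a minimal sketch in Python; every server, wattage and rate below is an invented placeholder, not data from any real assessment:

    # Hypothetical inventory rows: (name, server watts, cooling watts, rack units)
    inventory = [
        ("web-01", 450, 300, 2),
        ("web-02", 450, 300, 2),
        ("db-01", 750, 500, 4),
        ("file-01", 350, 250, 2),
    ]

    KWH_RATE = 0.10        # $/kWh, placeholder utility rate
    RACK_UNIT_COST = 15.0  # $ per rack unit per month, placeholder
    HOURS_PER_MONTH = 730

    def monthly_cost(servers):
        """Total power, cooling and rack-space cost for a list of servers."""
        power_kw = sum(w + c for _, w, c, _ in servers) / 1000.0
        energy = power_kw * HOURS_PER_MONTH * KWH_RATE
        rack = sum(units for *_, units in servers) * RACK_UNIT_COST
        return energy + rack

    current = monthly_cost(inventory)
    # Assume the same workloads consolidate onto one virtualization host.
    virtualized = monthly_cost([("virt-01", 1100, 700, 2)])
    print(f"current ${current:.0f}/mo, virtualized ${virtualized:.0f}/mo, "
          f"saving ${current - virtualized:.0f}/mo")

The point is not the numbers but the shape of the calculation: power, cooling and rack space from the inventory feed directly into the savings estimate.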

Workload targets

As the application inventory is gathered, additional fields will need to be mapped to each application to determine if it is a valid target for virtualization. Is the application code compatible with a virtual platform? Can the virtual platform support the performance requirements of the application? As you get ready to map workloads to future virtual machines, a mandatory data point derived during the inventory will be the current workload-to-resource utilization, including average and peak usage data. This data will be used to determine the number of processors and cores that will be required to support each workload. In the end, you will have a list of in-scope workload targets and workloads that will require dedicated physical hardware.
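As an illustration of how that utilization data turns into a core count, here is a minimal sketch in Python; the workload figures, per-core clock, 5% overhead and 80% ceiling are all invented assumptions:

    import math

    # Hypothetical measured workloads: (name, peak CPU demand in GHz)
    workloads = [("erp-app", 6.4), ("mail", 3.1), ("intranet", 1.2), ("reporting", 4.8)]

    CORE_GHZ = 2.9        # per-core clock of the target host, assumed
    TARGET_UTIL = 0.80    # keep 20% headroom on every core, assumed
    VIRT_OVERHEAD = 1.05  # ~5% hypervisor overhead per workload, assumed

    def cores_required(peak_ghz):
        """Cores needed so the workload's peak fits under the utilization ceiling."""
        return math.ceil(peak_ghz * VIRT_OVERHEAD / (CORE_GHZ * TARGET_UTIL))

    for name, peak in workloads:
        print(f"{name}: {cores_required(peak)} cores")
    print("total cores:", sum(cores_required(p) for _, p in workloads))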

How many servers should be virtualized?

How many virtual machines can you run in your environment? In most cases, optimization and cost savings can be realized as your assessments reveal under-utilized physical hardware. But the final answer will lie in a few more considerations: Do you want to run as many workloads as possible on the absolute least number of physical servers? That will result in high server costs.

What else should be virtualized?

We all understand the benefit of standardizing and automating our infrastructures. Additional ease of management and more predictable costing of your data center can be obtained as you look beyond typical server workloads as optimization targets. Also investigate virtual firewalls, load balancers, and WAN optimization devices.

There is no magic formula or calculator that will automatically discover what you have, measure it, and funnel it into a machine that produces a virtualization plan for your business. But once you put forth the effort, you will quickly find that the up-front assessment pays back a hundred-fold with an easy-to-scale, easy-to-manage, and easy-to-predict infrastructure.

Recommended assessment tools

Use these free tools to help assess your server infrastructure and plan for virtualization:

• Microsoft Assessment & Planning Toolkit: http://bit.ly/JnMuPs
• Ganglia server monitoring system (open source): http://bit.ly/IAA8AO
• SolarWinds Virtual Capacity Planning and Management (free trial): http://bit.ly/KCyKml

David Reoch is an enterprise technologist at Dell, specializing in cloud and virtualization strategies for SMB customers.

Expert Advisor: Cloud

Planning for the cloud: Essential steps
By Erik Schmude

The march toward cloud computing continues unabated as organizations look to cut costs and improve efficiency. The cloud represents a fundamental shift in the way IT organizations provide services to users, and in how users consume those services. But how do you get there? What should you consider when evaluating a migration to the cloud? Consider the following major issues before you plan a cloud migration.

Understand business goals and how IT plays a part in achieving those goals. Evaluate your existing infrastructure and determine where changes make sense. Perhaps it’s best to move certain applications completely to the cloud, as with Software-as-a-Service (SaaS), or to “rent” compute capacity and resources to handle fluctuating demand, as with Infrastructure-as-a-Service (IaaS).

Decide which applications are best suited, or easiest, for an initial cloud migration. Which applications can be easily duplicated in the cloud? For most companies, non-mission-critical and low-demand applications make the most sense. Applications that see big cyclical swings in user demand are also obvious choices, since these tend to be a drain on internal resources when not in use. New areas of expansion or business may also be well served by the cloud.

Decide on a migration strategy. Which applications will be transferred, and when? Make sure you have a plan to integrate cloud applications with on-premise applications to ensure they work together seamlessly and provide users with the most up-to-date information when they need it, regardless of where it lives. Further, do you have the appropriate analytics in place for both on-premise and cloud environments? You may need to enlist specialized expertise.

Evaluate cloud providers. There are many factors to consider here, including where data is stored, who is managing it, the security and reliability of the provider’s Internet connections, how fast you can get back online in the event of an outage, and how deep your levels of security will be. Security should be evaluated from the aspects of physical data center security, image security, anti-virus and intrusion prevention. Strong service-level agreements are critically important when moving corporate data and applications to the cloud.

Evaluate ROI and understand the trade-off between CAPEX and OPEX. Spend the time to analyze whether a capital investment in internal IT resources may have financial advantages over an OPEX strategy. For example, it may be less expensive to make an asset purchase (for servers running heavily used applications) and depreciate it over time, rather than pay as you go in a cloud model.
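For a feel of that trade-off, here is a minimal break-even sketch in Python; every figure is an invented placeholder to be replaced with your own quotes and utilization data:

    # Hypothetical costs for one heavily used application server.
    CAPEX = 9000.0         # purchase price, assumed
    OPEX_ONPREM = 120.0    # power, space and admin per month, assumed
    CLOUD_RATE = 0.60      # $/hour for an equivalent cloud instance, assumed
    HOURS_PER_MONTH = 730
    LIFETIME_MONTHS = 36   # straight-line depreciation horizon, assumed

    own = CAPEX / LIFETIME_MONTHS + OPEX_ONPREM   # monthly cost to own
    rent = CLOUD_RATE * HOURS_PER_MONTH           # monthly cost to rent 24/7
    print(f"own ${own:.0f}/mo, rent 24/7 ${rent:.0f}/mo")

    # Pay-as-you-go wins when the workload runs few enough hours per month:
    print(f"cloud is cheaper below {own / CLOUD_RATE:.0f} hours/month of use")

At these invented rates, a server that runs around the clock is cheaper to own, while a workload used only part of the month is cheaper to rent, which is exactly the pattern described above.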

Step-by-step cloud migration

When you have thoroughly considered the above factors, it is time to create a true migration plan for your selected application(s). A successful migration must account for changes in process and technology, and should include the following steps:

1. Rethink business processes to take advantage of cloud capabilities, but don’t change for the sake of change or expect people problems to be resolved
2. Cleanse your data to ensure it is accurate, complete and consistent
3. Determine the necessary data for the new cloud-based application
4. Define and create integration points and connections to other applications
5. Migrate data
6. Test
7. Train users
8. Go live
9. Work out the glitches
10. Solicit and implement ideas for improvement

Erik Schmude has been with Dell for 13 years, working with customers to develop their server, storage and network architectures.

To learn more about cloud computing, download the free e-book “Bringing the Cloud Down to Earth”: http://dell.com/Rhonda


Cover Story

Network supports growth of seismic proportion
By Susan Fogarty

A distributed network core is helping Spectrum ASA grow quickly to meet high demands for oil.

Gas prices are going through the roof. Most of us cringe at those words, but they cause Spectrum ASA to spring into action. When the market is high, the seismic imaging company must act quickly to scout and analyze new locations where oil and natural gas could occur below the earth’s surface.

For Spectrum, action means processing huge volumes of data. “Our job as a company is to do [oil] surveys in frontier areas,” says Andrew Cuttell, executive vice president of data processing at Spectrum. “We’ll do a survey in a particular area, process the data and sell the results to the oil and gas companies, who are deciding if they want to bid on licenses to explore these areas. Right now the oil price is high, so oil companies have money to spend, and they are quite encouraged to look at frontier areas to find the next big oil fields.”

Spectrum’s geophysicists are known for their success in handling difficult and challenging datasets from all over the world. Based in Oslo, Norway, Spectrum has eight offices on four continents, and is growing at a rapid pace. The company performs geophysical tests including land and marine processing, pre-stack migration in depth and time, AVO and AVAZ analysis, and inversion studies.

Intense data demands

All these tests produce reams of data, requiring a high-powered IT infrastructure that is centralized in the company’s data center in Houston, Texas. “We start out with very, very large volumes of data,” explains John Lyons, vice president of information technology at Spectrum. That data originates in the field, and most is relayed to Houston’s computing cluster for processing. “Every time we process a job, we’re creating thousands and thousands of data points that are put through several algorithms in order to produce an image of what the subsurface of the earth looks like. At least a few terabytes of data are passed around the cluster, have calculations done to them, and then have a new dataset generated of equivalent size,” Lyons says.

Spectrum needs to be able to process its data quickly and accurately to satisfy clients like Exxon, Shell, and Chevron, says Cuttell. When the petroleum market surged ahead in 2011, however, the company found that part of its data center was creating quite a bottleneck. Although the cluster had the ability to perform, and had been going through a rolling upgrade to keep it working well, the network infrastructure sending data to the cluster and between nodes could not keep up with processing performance.

“The cluster CPU [central processing unit] was being starved of data and we weren’t getting the maximum performance out of the systems,” says Lyons. “We are expanding very rapidly and need to expand the clusters to meet that.”

High-performance network needed

Part of Spectrum’s expansion has included installing new ultradense servers, says Lyons. The newer racks in the clusters are built with Dell™ PowerEdge™ C6100 four-node chassis servers. Each node in a chassis has two Intel® Xeon® X5675 processors, 96GB of memory, three 600GB SAS hard drives, and two 10Gb Ethernet ports. “A fully populated rack provides up to 1152 CPU cores,” Lyons calculates. “We currently have 12 racks in the cluster room.”

Spectrum needed a network that could support all of that computing power, and that could grow with the company’s needs. “What I was looking for was an environment that would significantly increase the throughput of the existing cluster but offer us the potential to scale up going forward,” says Lyons. Spectrum also uses Dell workstations and clients to help render data on the front end.

Lyons understood the problem and had clear goals he wanted the new network to achieve. The former system was based on a single core network switch with limited capacity, causing Spectrum’s data to bottleneck.
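Before looking at the fix, the cluster figures Lyons quotes imply a quick bit of arithmetic, sketched in Python below; the chassis-per-rack count is inferred from the stated numbers rather than given in the article:

    # Dell PowerEdge C6100: 4 nodes per chassis, 2 Xeon X5675 sockets per node,
    # 6 cores per socket (figures quoted in the article).
    cores_per_chassis = 4 * 2 * 6                  # 48 cores
    chassis_per_rack = 1152 // cores_per_chassis   # inferred: 24 chassis per full rack
    print(chassis_per_rack, "chassis per rack")
    print(12 * 1152, "cores if all 12 racks were fully populated")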

Network requirements defined

Lyons defined Spectrum’s new network backbone as one that would provide greater capacity, resiliency, and scalability. He found the best deployment in a non-blocking architecture based on Dell Force10 Z9000 core switches. Two Z9000 switches are connected in a distributed design along with four new Dell Force10 S4810 top-of-rack switches. Most of the legacy rack switches will gradually be replaced over the next few years, says Lyons. The plan also includes supplementing existing hardware as needed to meet demand.

[Photo: Spectrum processes large volumes of data to produce models that represent the earth’s subsurface.]

[Map: Areas analyzed in Spectrum’s data library. Spectrum ASA provides geophysical data analysis to the oil and gas industry.]


Increased capacity

The refreshed backbone delivers capacity in 40Gb connectivity between the core switches and top-of-rack switches, and between the core switches and a pair of switches that connect to storage, explains Lyons. The S4810 switches connect to the servers at 10Gb, eliminating throughput issues. “We were already buying cluster nodes with 10Gb connectivity. Having the performance there was very key to us,” Lyons affirms.

Improved resiliency

The redesigned network uses a distributed architecture that spreads out traffic loads and also provides redundancy for the system. According to Lyons, a failure in either of the Z9000 core switches will cause a slight performance drop, but will not affect routine business.

In addition, the S4810 switches have cross-connections that support any-to-any connectivity between server nodes at line rate and are designed to fail over if a fault occurs. The distributed core architecture also allows one node to be brought down or replaced without any impact on the overall switch fabric.

Scalability and open design

Lyons explains that the pair of Dell Force10 Z9000 core switches are designed for scalability and easy growth, and that was a critical factor in selecting the technology. “For every rack that we’re putting in, we have 320-gigabit capacity down to each rack. We can support eight racks from one of those Z9000s. In the future, by simply adding more Z9000s, we can scale the whole thing up, with no additional bottlenecks being introduced — all we’re doing is adding more paths,” he says. The distributed design approach is interoperable with all existing IP and Ethernet technologies and also allows the use of any standards-based Layer 3 protocols such as OSPF or BGP.
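A back-of-envelope check on those numbers, in Python; the node count per rack follows from the chassis count inferred earlier, and counting one 10GbE port per node is an assumption:

    UPLINK_GBPS = 320                 # per-rack capacity stated by Lyons
    nodes_per_rack = 24 * 4           # inferred from the C6100 figures earlier
    edge_gbps = nodes_per_rack * 10   # one 10GbE port per node, assumed
    print(f"oversubscription at the rack uplink: {edge_gbps / UPLINK_GBPS:.0f}:1")

Under those assumptions the rack uplink runs at roughly 3:1 oversubscription, with the distributed core itself adding no further bottleneck.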

Overall, Spectrum’s IT environment is running at peak and meeting its technical and business goals. Lyons reports that data processing time has been cut in half because his servers are no longer waiting for access to data.

Cuttell agrees, noting, “Reliable equipment that works well means that we don’t have so many errors, we don’t have delays, and we can then get the data out as quickly as possible.”

Susan Fogarty works for Dell as the editor of Catalyst magazine.


Download the white paper on distributed core architecture design: http://bit.ly/vq7mXK

[Chart: Meeting demands for energy exploration, 2007–2012. Amount of available seismic data grew from 500,000 km to 1,150,000 km in 2011.]

The Business of Technology

Is virtual desktop infrastructure right for your business?
By Tom Farre

IT management often involves a balance between user satisfaction and corporate goals. The increasing trend for employees to use personal devices for work purposes raises even more issues around data security and IT management.

Such concerns are addressed by virtual desktop infrastructure (VDI), a form of server-based computing that leverages virtualization technology to host multiple unique clients, including operating systems and applications, on a server in the data center. Desktops are delivered to users via the corporate network or the internet. Users gain flexibility because their desktops are accessible on different client devices, and IT benefits from centralized information security and client management.

Such pluses are building momentum for VDI solutions, but it’s still early in the adoption cycle. The following scenarios will help you decide whether VDI could be right for your company.

Scenario 1: “We need better security.” Improving security for the network and corporate data is the top reason to deploy a VDI solution. VDI gives a central point of control, and allows policy-based access control to the network, applications and data. It also becomes easier to implement software patches and anti-malware protection, making your company more secure.

Scenario 2: “IT staff is always putting out fires.” Centralized control of clients makes IT staffers more effective and efficient. Higher desktop reliability means fewer desk visits and easier software upgrades. Simpler backups and business continuity save time and money, freeing IT staff for more strategic tasks.

Scenario 3: “We want to bring our own devices.” If users are unhappy with a single flavor of corporate client, VDI can help. It frees them to use a wider range of devices, such as Windows and Linux notebooks, Android and iOS tablets, and even mobile smart devices.

Scenario 4: “We crave desktop mobility.” With VDI, clients can access the desktop from any network connection, a boon to users when they are on the road or working from home. Aside from the convenience, anytime-anywhere access can increase the hours that employees work.

Scenario 5: “We need to upgrade our PCs.” It makes sense to consider VDI during a PC refresh cycle. Because most processing occurs on the server, VDI can extend the life of older, less powerful PCs. A refresh project with VDI can also let you replace PCs with less expensive thin clients or other devices.

If any of these scenarios hits home, VDI might be a good fit for your organization. As with any technical solution, it pays to start with a clear strategy for solving end-user problems and adding business value.

Deployment options for VDI

Once you’ve decided VDI is right for your business, it’s important to explore different deployment options. Although VDI does simplify desktop support and security, it can add complexity to the data center, increase network traffic, and require the purchase of servers, virtualization software and middleware. So you’ll want to take stock of your current environment and consider the following deployment modes (a rough sizing sketch follows the list):

• Buying best-of-breed: An enterprise-class solution, where IT staff sizes, specifies, deploys and supports all solution components, is probably the most complex. With this option, it pays to work with a solution provider or vendor with deep VDI experience.

• Adding an appliance: Midsize firms with modest requirements can benefit from a VDI appliance. Here the virtualization software and middleware are bundled in a secure server and sized by number of virtual clients. Installation is generally easy.

• Calling on the cloud: Virtual desktops can also be sourced as a managed service from the cloud. This greatly lessens the need for IT support staff, while turning the capital expenses of server-based computing into an operational expense.
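The sizing sketch promised above, in Python; the per-desktop figures, overcommit ratio and host configuration are invented placeholders, not vendor sizing guidance:

    import math

    VCPU_PER_DESKTOP = 2       # assumed task-worker profile
    GB_RAM_PER_DESKTOP = 2
    CPU_OVERCOMMIT = 4.0       # virtual CPUs per physical core, assumed

    HOST_CORES = 16            # assumed host or appliance configuration
    HOST_RAM_GB = 192

    def desktops_per_host():
        """Desktops one host can carry; the tighter resource wins."""
        by_cpu = HOST_CORES * CPU_OVERCOMMIT / VCPU_PER_DESKTOP
        by_ram = HOST_RAM_GB / GB_RAM_PER_DESKTOP
        return int(min(by_cpu, by_ram))

    users = 250
    hosts = math.ceil(users / desktops_per_host()) + 1   # +1 spare for failover, assumed
    print(f"{desktops_per_host()} desktops per host; {hosts} hosts for {users} users")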

Tom Farre, a freelance journalist and the former editor of VARBusiness, has been covering information technology for 20 years.

This video will help you determine your goals for desktop virtualization: http://dell.to/KPsiU5


Best Practices

10GbE offers increased affordability, performance and reliability

Businesses using virtualization and with growing application demands can benefit from upgrading to 10GbE.

Today, several far-reaching IT trends — namely virtualization, cloud computing and network convergence — are rendering legacy network and data center architectures obsolete. Server virtualization and chatty web applications are profoundly increasing the volume of server-to-server traffic within the data center, scaling networking environments beyond what they can support. In addition, intensive workloads requiring additional devices and port counts are making the network increasingly inflexible and difficult to manage.

As a result, data centers — even in small businesses — must become more dynamic and complex, making the network vulnerable to failure. This leaves IT and network administrators struggling to maintain performance instead of innovating solutions around business drivers.

Thus many organizations are investigating network virtualization and convergence, and switching to 10 gigabit Ethernet (10GbE) as their standard server-to-network interface.

Virtualization burdens the infrastructure

Server virtualization initiatives are a top priority for today’s IT departments. Midsized organizations maintain aggressive plans to grow their already expansive stock of virtualized servers, which in turn will increase the number of virtual servers that must be integrated into the physical data center.

Virtualization decouples applications and operating systems from physical hardware, allowing multiple virtual machines (VMs) and operating systems to run on a single physical device. By eliminating the one-application-per-one-server model, virtualization enables organizations to run a greater application load on less server hardware, leading to greater server densities and more I/O throughput.

To account for the added workloads, virtualized servers are often clustered into shared pools so that data and resources can be provisioned and allocated automatically based on need. Though the migration of data and resources is designed to enhance performance, many legacy networks — most of which are optimized for server-to-client traffic — experience increased latency and operational failure with server-to-server communication.

Application demands grow

Growing deployments of web-based applications are also taxing existing networking infrastructure. More and more organizations are deploying service-oriented architectures (SOA) and web applications. As with server virtualization, these applications scale horizontally across server tiers and are challenging already overworked legacy networking equipment and processes.

As a result of these initiatives, maintaining network and application performance on the traditional networking framework is proving costly and time consuming. Managing the physical data center network infrastructure, manually configuring physical servers, and provisioning IP addresses has become a full-time job for IT and networking admins, who are forced to spend all of their time trying to keep the lights on rather than innovating. Companies need an alternative solution to the impending — even immediate — networking concerns plaguing the data center.

Networks get virtual

Software-defined networks (SDNs) offer an alternative model to traditional frameworks. While traditional models require that networking equipment and path policies be independently set up on a device-by-device basis, SDNs use software to virtualize networks and externalize the control plane. Virtual networks are intended for specific use cases like multitenant data centers, and they support VM mobility, data center orchestration, and centralized management.

However, while virtual networks limit manual processes and help streamline the live-migration capabilities of virtualization and SOA, legacy equipment may still create network bottlenecks. “The word in networking is speed,” says Zeus Kerravala, principal analyst at ZK Research. “Speed to move data in virtual machines and speed to move increasingly complex, richer data such as video.” To this end, organizations are investing more and more in upgrading networking to 10GbE.

Why upgrade to 10GbE?

10GbE differs from earlier Ethernet standards in that it functions only in full-duplex mode, meaning that collision-detection protocols are unnecessary. Ten times faster than 1Gb, 10Gb continues the evolution of Ethernet in speed and distance, allowing organizations to solve the increased bandwidth demands brought about by virtualization and SOA.

By generating more traffic, virtualization necessitates faster networks, and 1GbE can pose serious operational problems, warns Robin Layland of Layland Consulting. “Trying to save some money by using lower speed entails risk if the 1Gb link has capacity problems.”

With each deployment of next-generation Ethernet technology, deployment costs have trended downward, making 10GbE a cost-effective solution for even small to midsized organizations. If your organization is experiencing networking problems, consider upgrading to 10GbE. Implementing speed at a small markup will allow your organization to maintain a strong networking backbone and provide enough bandwidth for future increases in capacity.

“Just remember,” says Kerravala, “experience shows that no matter how much bandwidth you think you’ll need down the road, you will likely need more, so build accordingly.” Build now and have peace of mind later.
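The raw arithmetic behind “ten times faster” is worth a glance; a quick sketch in Python, with a 2TB working set as an invented example:

    # Time to move a hypothetical 2TB dataset, ignoring protocol overhead.
    DATASET_BITS = 2 * 10**12 * 8   # 2 TB in bits (decimal units)

    for name, gbps in [("1GbE", 1), ("10GbE", 10)]:
        hours = DATASET_BITS / (gbps * 10**9) / 3600
        print(f"{name}: {hours:.1f} hours")

At 1GbE the transfer takes roughly four and a half hours; at 10GbE, under half an hour.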

Watch the video to find out how virtualization is changing today’s networks: http://dell.to/MPbOix


Case Study

Production studio doubles video output

Arts+Labor replaced Apple equipment with Dell workstations to streamline concert video production.

For event organizer C3 Presents, supplemental video on YouTube has become a crucial component of both the Austin City Limits and Lollapalooza music festivals. The project includes live streaming and nearly real-time editing of performance recaps and interviews. In previous years, the firm used Apple® hardware and software, but this year it needed to simultaneously produce more content and accelerate the production process.

For both festivals, Arts+Labor deployed Dell Precision™ workstations with Intel® Xeon® and Intel Core™ processors running Adobe® Premiere® Pro video-editing software, Dell™ UltraSharp™ monitors and Dell PowerVault™ network-attached storage.

To produce the event on YouTube, festival organizer C3 Presents works closely with two visual production studios. Springboard Productions streams the show in real time, while Arts+Labor, based in Austin, Texas, produces video extras to support and surround the live broadcast. “We produce content for opening and wrapping up the show, and for filling breaks between acts,” explains Alan Berg, president and co-founder of Arts+Labor. “We use artist interviews, we do recap videos and we create other special video projects to fill any gaps in coverage.”

Need for faster video production

The high-end production extends the festival’s reach beyond Austin and beyond the weekend of the event, yet Arts+Labor must post its content as close to real time as possible. Cameras positioned throughout the festival space capture performances, crowd shots, interviews and other content. Stage cameras fed directly into the Arts+Labor production trailer using HD-SDI over fiber-optic connections. The festival’s other cameras provided video content through P2, SD or CF storage media.

In years past, Arts+Labor used MacBook® Pro laptops with Apple® Final Cut Pro® software to turn camera feeds into content. For storage, the firm used a series of FireWire hard drives connected in a daisy chain. “This solution worked OK, but it was cobbled together and wasn’t efficient,” says Erik Horn, the firm’s creative director. “Last year, ingesting and editing the video took several hours. As we planned this year’s broadcast, we looked for ways to reduce that time.”

The search for a new video-editing workflow became more urgent when C3 wanted to produce multiple simultaneous YouTube feeds. “So many viewers accessed previous years’ broadcasts that C3 wanted to create additional channels on YouTube,” Berg says. “In this live-music scenario, we’re really under the gun, and adding another channel on YouTube increased the amount of content we had to produce. We knew our legacy solution wouldn’t work again this year.” Arts+Labor evaluated its options for a new hardware and software setup that it could use for ACL and for Lollapalooza, which has similar demands.

Streamlined workflow saves time

Arts+Labor opted for an all-Dell solution. For this year’s ACL and Lollapalooza festivals, footage from the stage cameras was captured on Dell Precision M4600 mobile workstations with Intel Core processors. All of the raw footage was then transferred onto a Dell PowerVault NX200 network-attached storage (NAS) device. From there, video editors used two Dell Precision T5500 tower workstations with Intel Xeon processors to edit the raw video clips. Footage from the event’s other roaming cameras was transferred from the tower workstations onto the NAS via built-in SD and CF card readers.

Arts+Labor found that the Dell Precision workstations were much faster than the legacy MacBooks they replaced. “We were very impressed with the render times on our new workstations,” says Horn.

The Dell workstations run Windows® 7, and the two editing systems run Adobe Premiere Pro CS5.5 and After Effects CS5.5 editing software. This further simplifies the process. “With Final Cut Pro, we would have to transcode raw P2 MXF files before we could begin working with them,” Horn says. “Now, raw footage can run directly into the timeline without conversion. By eliminating these conversions, we shortened the length of time required to ingest a file by about 40 percent. Over the course of this year’s ACL festival, this saved us about 30 hours.”
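Those two figures let you back out the implied baseline; a quick check in Python on the numbers quoted:

    hours_saved = 30    # stated savings over the ACL festival
    reduction = 0.40    # stated per-file ingest reduction
    old_ingest = hours_saved / reduction
    print(f"implies ~{old_ingest:.0f} hours of ingest before, "
          f"~{old_ingest - hours_saved:.0f} after")

That is, roughly 75 hours of ingest work shrank to about 45, assuming the 40 percent reduction applied uniformly.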

Video production doubled

The combination of Dell Precision workstations and Adobe Premiere Pro streamlined the editing process as well. “Incorporating footage from different types of cameras was a daunting task with Final Cut Pro,” says Horn. “The editor would have to convert all the different file types into a format that would work with Final Cut Pro. In contrast, we now can easily combine native footage from different camera systems, and then convert the final exports into different compressions and dimensions easily. That saved our video editors a lot of time.” The firm was able to produce twice as much video content as in years past, without adding staff.

Another benefit of the improved workflow is that Arts+Labor requires approximately 50% less storage capacity. “Because we don’t have to convert files from their native formats, we’re saving on storage,” Horn says. “We were able to fit three days’ worth of footage from multiple shooters and the stage feeds on our Dell PowerVault NAS and had tons of storage to spare.”

Better yet, the Dell equipment is more flexible than the firm’s legacy Apple setup. “We don’t have to invest in a broadcast-specific setup because the Dell Precision workstations easily convert for our other work,” Horn says. “We’re currently using them in our work on feature films, short-form documentaries, branded Web content and a broadcast TV special. We see this as a trend in our industry: Other firms are also moving away from Final Cut Pro to a workflow that incorporates Adobe and Dell Precision workstations with 64-bit Intel processors and AMD FirePro graphics.”

Arts+Labor’s new hardware and software solution:
• Doubled amount of video produced
• Saved 30 hours over 3 days by eliminating transcoding
• Reduced video file processing by 40%
• Uses 50% less storage space

Watch the video to see Arts+Labor in action: http://dell.to/KmLgp1


Best Practices

Keeping mobile users happy and secure

Developing a strategy and using mobile management software can help ease BYOD complexity.

Yesterday, end-user computing meant cabling desktops and updating spam filters. Today, enlightened client management means dealing with tablets, smartphones and netbooks, along with applications running on multiple operating systems. Add to that employees’ desire to use their personal devices at work — called BYOD, for “bring your own device” — and management becomes a complex job.

Although BYOD increases complexity, it is becoming the norm rather than the exception. A recent study by Decisive Analytics prepared for Trend Micro suggests that permitting BYOD provides a competitive advantage, improves employee satisfaction, and boosts user productivity, without impacting expenses.

Significant security risks

The security risk, however, is significant. According to the study, nearly half of companies with BYOD programs experienced one or more security breaches. Interestingly, the security threat is different from what you might expect.

“Most IT executives think the main threat is lost or stolen devices that contain company information,” says Sean Wisdom, global leader of small and medium mobility solutions at Dell. “But cell phones and tablets have become like our keys; they are seldom lost or stolen. This causes IT staff to underrate the security threat.”

What, then, are the real dangers? The most popular mobile devices run either iOS or Android operating systems, which offer little inherent security. Most companies allow such devices to access the corporate network without deploying robust security. And as users gain more control over devices, downloading and managing applications from various sources, the threat of malware increases. Sensitive data can also be exposed by malicious means, or copied by users with innocent intentions to such venues as iTunes, Gmail or Facebook. And then it is out of IT’s control.

To address this challenge, experts advise analyzing your current situation and carefully creating a mobility strategy, perhaps in concert with a solution provider with mobility expertise.

Mobile action items

Action items include taking inventory of mobile devices currently in use, and devising a plan that includes a mix of company-owned devices and approved user-liable devices. It’s also important to institute policies on which apps can access the corporate network, and on how you will secure corporate information and manage mobile applications. Delivery of your apps and management tools requires some thought — will they be on-premise, from a cloud service, hosted, or some combination of these? Remember when devising your plan that BYOD changes the emphasis from managing the device itself to managing corporate assets.

Software solutions can help

As an example, imagine a company that needs to secure only email and related content. Mobile device management (MDM) software is available that offers full email control, as well as encryption of data in transit and on the device. You can control the endpoint itself via a container that separates all corporate computing from personal usage. The user cannot cut and paste corporate information or back up emails outside the container, and if need be the IT administrator can wipe the container clean. Such controls ensure there will be no leakage of sensitive data.
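As a toy model of that container idea, a minimal sketch in Python; this is an invented illustration, not any real MDM product’s interface:

    class Container:
        """Corporate workspace: data inside never crosses to personal apps."""
        def __init__(self):
            self.corporate_data = {"mail": ["q3-forecast.msg"]}

        def copy_out(self, item, destination):
            # Policy: block every transfer from the container to a personal app.
            raise PermissionError(f"policy: cannot copy {item} to {destination}")

        def wipe(self):
            # Remote wipe clears corporate data; personal data is untouched.
            self.corporate_data.clear()

    c = Container()
    try:
        c.copy_out("q3-forecast.msg", "personal-gmail")
    except PermissionError as err:
        print(err)
    c.wipe()
    print("after wipe:", c.corporate_data)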

Many companies will need to manage and secure multiple applications across multiple devices and operating systems. This puts a premium on mobile application management (MAM), which supports app distribution, provisioning, version control and policy management.

For company-owned devices, there’s also the issue of expense management — how to minimize costs from pricey services such as texting and downloading attachments when roaming. Some MDM solutions specialize in cost reduction by enabling you to set user policies, track behavior and receive alerts when policies are being violated.

By deploying the right policies and technology for mobile device and application management, you can have the security IT needs, combined with the productivity and morale gains that come when users are free to choose their own devices.

Use Dell’s online resources to build a mobile strategy that fits your workforce: http://dell.com/mobility


Case Study

Easou harnesses mobile evolution

Mobile search engine company Easou delivers higher performance and greater energy efficiency with data center upgrades.

Mobile search skyrockets
• Easou active mobile search users: 2.2 billion
• Daily active users: 25 million
• Page views per day: 11.5 billion
• Searches per day: 2.92 billion
• Easou annual revenue growth: 125%

Every day, more than 25 million people in China access the Easou mobile search engine using their wireless devices. These users consume 11.5 billion page views and perform 2.92 billion searches per day, with numbers growing by the minute. The organization knows that success depends on the performance of its search technology, with customers expecting quality, high-speed results.

To keep up with customer demands, Easou has been growing quickly. Annual revenue increased more than 125% in 2011. Infrastructure is not far behind. “At present, we need to purchase more than 1000 servers annually for expanding and upgrading our data center,” says Lyn Wang, senior systems manager.

Performance plus energy efficiency

When Easou began updating its four data centers last year, it looked for a server solution that delivered greater performance and better energy efficiency. The company understands that environmental responsibility is increasingly important to customers, who want companies to do as much as possible to tackle global warming. Frank Wang, chief executive at Easou, says, “In China, we have 2000 servers, but that may easily rise to around 10,000 in the future. Energy-efficient technology has a real impact on our bottom line and our environmental impact.”

Easou turned to Dell, whose solutions had gradually replaced those of other IT vendors across the business. Dell was selected based on a comparative test with other vendors, taking into account the energy efficiency, stability and cost effectiveness of each solution. Frank Wang says, “Dell met all of our selection criteria, particularly in terms of energy-efficient technology.”

Easou deployed Dell™ PowerEdge™ R710, R510 and R410 servers with Intel® Xeon® processors at each data center, consolidating the number of machines at each site. Each of the data centers now performs better while consuming less power, thanks to more efficient power supplies and effective heat-management systems. Lyn Wang says, “Data center operation is more stable, faults are greatly reduced, and we are able to increase single loads, providing users with more stable service.”

Frank Wang agrees. “Our aim is to provide the best possible service to our customers, and the stability of our solution makes this a reality for us.”

More investments in improvement

Easou has more resources to explore other IT improvements because it has reduced the total cost of ownership (TCO) for its IT environment. The company has been virtualizing servers and is in discussions about cloud services. “We use multiple PowerEdge R510 servers and Citrix™ XenServer to optimize use of virtual machines, and use virtualization to manage our special servers,” says Lyn Wang. The company has also deployed the Citrix™ XenDesktop™ desktop virtualization solution for its internal IT environment. Using this environment, operations staff can access the same software desktop to securely maintain company servers, no matter what the server location or employee location.

Most of all, increasing operational efficiency allows Easou to concentrate on developing its mobile search technology. Frank Wang explains, “Our new solution has definitely reduced our TCO, which enables us to invest more in improving the services we offer our customers.”


Read about Dell Fresh Air technology, the latest in data center energy efficiency: http://dell.to/K34yfx


Best Practices

Compliance made easier with systems management
By Ken Drachnik

Systems management helps enable PCI compliance with little cost or complexity.

We all have them — payment cards. Whether purchase, debit or credit, we use them to buy items in stores and online, make travel reservations, even pay bills. But the convenience of these cards comes with a price. Every time we make a purchase, our personal information is shared across a variety of networks, making it vulnerable to theft. So it’s no surprise that data theft has increased significantly in recent years.

The payment card industry has responded to this growing trend by developing a set of 12 requirements called the Payment Card Industry Data Security Standard (PCI DSS), designed to ensure that any organization that processes, stores or transmits credit card information maintains a secure environment. While noncompliance with these standards can have major consequences, many retailers and other organizations that process payment cards face significant IT challenges, which can make achieving PCI compliance difficult.

Leveraging systems-management technology can help. It can serve as a critical building block in helping retailers of all types accelerate PCI compliance by deploying, configuring and maintaining secure systems that access and handle cardholder information.

Addressing management pain points

From an IT perspective, retail organizations small and large are often challenged by pain points centering primarily on cost, structure and security. For example, some may not have in-house IT specialists to address compliance issues, or may need to manage multiple or remote locations. Others may have only minimal security measures in place around their point-of-sale (POS) systems, which are among the most vulnerable targets for fraud.

These concerns transcend organization size, but are often extra critical for small and medium organizations that operate with limited resources and staff. Consider City News, a Chicago-based newsstand. The small retailer with two locations recently experienced a devastating data breach that nearly brought down the business. Hackers were able to access the store’s POS system via a weak username and password, and installed software that captured and copied credit card information before it was sent to be processed. The software was discovered a year later and removed, but not before racking up a $22,000 bill for investigating the source of the violation and defining security improvements.

How systems management contributes to compliance

Systems management plays a vital role in PCI compliance as it unifies efforts across the landscape of PCI, in the data center as well as at the end point. It simplifies such tasks as configuration management and OS and application patching to improve IT efficiency, while enforcing compliance obligations.

Some of the key benefits that systems management provides for PCI compliance include (a small illustrative sketch follows the list):

• Saving IT time by automating the routine and repetitive systems-management tasks required to maintain compliance, such as software updates, approved configuration maintenance and systems patching with the latest approved versions

• Critical capabilities like automatic application of software security patches, enforcement of security policies such as password strength, remote software distribution and upgrades, streamlined IT inventory to help ensure that only approved devices are connected to your network, and compliance reporting

• Ensuring the protection of private information, boosting customer confidence and loyalty

• Enabling organizations to focus on core business and saving money by avoiding penalties for noncompliance
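The sketch promised above, in Python; the rules and thresholds are invented placeholders, not actual PCI DSS requirements or any real product’s checks:

    MIN_PASSWORD_LENGTH = 12            # assumed policy, not a PCI DSS value
    APPROVED_OS_BUILDS = {"10.2.1", "10.2.2"}

    def check_endpoint(ep):
        """Return a list of findings for one inventoried endpoint."""
        findings = []
        if ep["password_length"] < MIN_PASSWORD_LENGTH:
            findings.append("weak password policy")
        if ep["os_build"] not in APPROVED_OS_BUILDS:
            findings.append(f"unapproved OS build {ep['os_build']}")
        if not ep["av_enabled"]:
            findings.append("anti-virus disabled")
        return findings

    fleet = [
        {"name": "pos-01", "password_length": 8, "os_build": "10.1.9", "av_enabled": True},
        {"name": "pos-02", "password_length": 14, "os_build": "10.2.2", "av_enabled": True},
    ]
    for ep in fleet:
        print(ep["name"], "->", check_endpoint(ep) or "compliant")

A weak password on a POS endpoint, as in the City News breach described above, would surface immediately in a sweep like this.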

Attaining PCI compliance is a complex undertaking that often requires the use of a Qualified Security Auditor (such as Dell SecureWorks®), along with ongoing reporting applications. Organizations should evaluate solutions that are robust, affordable and easy to deploy and maintain. These can help reduce IT operations costs and improve management performance so that organizations can focus on their businesses and their customers.

Ken Drachnik is the director of marketing for Dell KACE. For more

information, visit www.dell.com/kace.

To learn more, download the white paper,

“Attaining PCI compliance: Building on an effective

systems management foundation”

http://bit.ly/IL4YKS


Products In-Depth

New server improves database and collaboration performance
Benchmarks show the Dell PowerEdge R720xd server excels in several areas.

Released earlier this year, the 12th generation of Dell™

PowerEdge™ servers provides increased power and manageability.

The server family includes rack, tower and blade servers to allow

your IT environment to accomplish more than ever before. Here

we look at test results focusing on the PowerEdge R720xd server,

a two-socket, 2U rack server designed with large storage capacity

and high memory density.

Consolidating with SQL Server 2012 and PowerEdge R720xd: Better performance, lower cost

Using virtualization to support more load and users with

less hardware and resources makes sense in theory. Principled

Technologies set out to test the idea by measuring cost and

database performance of two solutions: a three- to four-year-old

Dell PowerEdge 2950 III running Microsoft® SQL Server® 2008 R2,

and a Dell PowerEdge R720xd server running the same workload

in each of five virtual machines with SQL Server 2012.1

The virtualized solution on the PowerEdge R720xd server, supporting five instances of SQL Server 2012, delivered a total of 175,787 orders per minute (OPM). That is 462% greater performance than the 31,295 OPM the legacy server delivered. The first figure below summarizes the cost for the two solutions over three years, taking into account hardware acquisition and SQL Server software expenses.

Triple your users with Exchange 2010 on PowerEdge R720xd
IT organizations often follow refresh cycles to keep up to date on technology, especially when it comes to Microsoft products such as Exchange Server. Dell examined performance improvements in Microsoft® Exchange Server from the 2007 version to the 2010 version, and the benefits of using the latest generation of Dell™ PowerEdge™ servers to improve performance per server.2 The study used the Microsoft Jetstress tool to simulate Exchange Server workloads. It compared Exchange Server 2010 running on the new PowerEdge R720xd server against Exchange Server 2007 running on a PowerEdge R510 server. Results show that improvements in Exchange Server mailbox performance, combined with the increased capacity and throughput of the PowerEdge R720xd, can support up to three times the number of users, and each of those mailboxes is capable of handling up to twice the number of messages as the older configuration.
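As a quick sanity check on the published numbers, this short Python snippet (illustrative only; values copied from the studies cited below) reproduces the performance and savings arithmetic:

# Reproduce the headline arithmetic from the two studies.
legacy_opm = 31_295        # PowerEdge 2950 III, SQL Server 2008 R2
new_opm = 175_787          # PowerEdge R720xd, five SQL Server 2012 VMs

gain = (new_opm - legacy_opm) / legacy_opm * 100
print(f"Performance gain: {gain:.0f}% greater")   # ~462% greater

# Three-year cost comparison (acquisition plus operational costs).
legacy_cost = 273_997.80
new_cost = 122_249.86
print(f"Three-year savings: ${legacy_cost - new_cost:,.2f}")  # $151,747.94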

1 Data from the white paper, “Microsoft SQL Server consolidation and TCO: Dell PowerEdge R720xd and Dell PowerEdge 2950 III,” March 2012, http://bit.ly/LuRYI1.

2 Data from the white paper, “A comparative study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510,” March 2012, http://bit.ly/JuINoV.

Read the entire report for detailed results:

http://bit.ly/LuRYI1

Read the white paper for detailed results:

http://bit.ly/JuINoV


[Figure: Three-year cost comparison in US dollars, broken into acquisition cost and Year 1, Year 2 and Year 3 operational costs. Dell PowerEdge 2950 III solution: $273,997.80; Dell PowerEdge R720xd solution: $122,249.86. The Dell PowerEdge R720xd solution can deliver $151,747.94 in savings over three years.]

[Figure: Total database performance in orders per minute (OPM). Dell PowerEdge 2950 III: 31,295 OPM; Dell PowerEdge R720xd: 175,787 OPM across five SQL Server virtual machines (VM 1 through VM 5).]

Exchange performance metrics (derived values from the disk subsystem test)

Metric                            Exchange 2007 on R510    Exchange 2010 on R720xd
Message profile (messages/day)    100                      200
I/O profile (IOPS/user)           0.32                     0.2
Number of mailbox users           1,500                    5,000
Target transactional IOPS         480                      1,200
Achieved transactional IOPS       503                      1,345
DB read latency (ms)              17.4                     17.06
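Read straight off the table, a few lines of Python show where the "three times the users, twice the messages" claim comes from (numbers are taken from the Jetstress comparison above; the script itself is purely illustrative):

# Ratios derived from the Jetstress comparison table above.
users_2007, users_2010 = 1_500, 5_000
msgs_2007, msgs_2010 = 100, 200            # messages per mailbox per day
achieved_2007, achieved_2010 = 503, 1_345  # transactional IOPS

print(f"User scaling:    {users_2010 / users_2007:.1f}x")  # ~3.3x users
print(f"Message scaling: {msgs_2010 / msgs_2007:.1f}x")    # 2.0x messages/user
print(f"IOPS vs. target, 2007: {achieved_2007 / 480:.2f}")   # 1.05
print(f"IOPS vs. target, 2010: {achieved_2010 / 1_200:.2f}") # 1.12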


News

Solution Centers offer hands-on contact
Dell opened 12 centers around the world where customers can try out complex technologies.

Dell™ Solution Centers are a global network of state-of-the-art

technical labs constructed just for customers. Launched in 2011, they

are places where customers can explore and test solutions, enabling

them to select the best technologies that will truly work for them

and meet their business objectives.

Solution Centers are built and run on Dell platforms, representing

a “living lab” to showcase real-world deployment of Dell technologies

and capabilities. Dedicated experts work with customers in the

centers to share best practices and knowledge. This combination

allows customers to “try before they buy,” proving out and optimizing

their architectures on Dell infrastructure before committing to

a production environment or services contract.

Solution Centers also provide customers with technical briefings,

architectural design sessions and independent software vendor

certification. With 12 centers now open globally, the most recent

in Silicon Valley, Dell has connected with thousands of customers,

enabling them to get hands-on with Dell solutions.

Catalyst asked Lee Morgan, executive director of Dell Solution

Centers, to explain the value of the centers for Dell customers:

What type of customers use the Solution Centers?

We work with all of our enterprise customers, whether that

customer is a large enterprise company, a public sector organization

or a small to medium business. We also work directly with our

Premier Channel Partners to support their customers. Over the

past 12 months, we have engaged with customers from all types

of businesses, creating industry-specific use-cases to support

each customer appropriately.

What can customers do at a Solution Center?

We work with our customers in several ways. Technical briefings

can help customers to get a deep-dive view of a solution while

our architectural design sessions will explore a solution further,

in reference to the customer’s pain points. Through discussion and

white-boarding, we will talk through how the solution could work

for them. Proof-of-concept engagements are a true hands-on

experience; we set up the solution and our customer can come

in and really learn how to use it, conducting testing to prove how

their application or data management requirements will be met.

Typically, a customer will leave a proof-of-concept with a reference

architecture to implement, reducing risk around their

own deployment.

Knowledge and experience are vital in the decision-making

process for our customers. Through the deep-dive sessions and

hands-on experience that we provide, every customer can

benefit from a Solution Center engagement.

How can customers work with you?

Customers interested in engagement with us should reach out

to their account team to discuss their requirements. The account

team will submit a request to us and we will work with both our

customer and account team to scope out the engagement.

We focus on identifying the issue the customer is trying to resolve

to ensure the right resources are put in place to make this a really

valuable experience for them. We also work very closely with our

teams in the product and services groups at Dell and can leverage

their expertise to support our customer’s needs as well.

Can you work with customers who can’t travel to the center?

Absolutely! Every day we work with customers who cannot

travel to one of the centers through our dedicated remote network

capability. Completely secure, this connects all of our Solution

Centers, allowing us to share resources and expertise easily and ensuring we can connect with customers anywhere in the world. For customers with multiple locations, this also has the benefit of enabling global team members to connect and collaborate on an engagement.

Learn more about Dell Solution Centers and read customers' experiences:
www.dell.com/solutioncenters

Choose local support tailored to your company
Region: Asia Pacific

Notebook support
Enable your mobile workforce, safeguard your data and

protect your investments by leveraging Dell’s global team of

experts on mobile products. Each notebook comes with a basic

hardware warranty covering hardware repair and replacement

during local business hours. Upgrade to Dell™ ProSupport™ services

for enhanced problem resolution or subscribe to ProSupport

Value-Added Services for premium protection.

• Dell ProSupport provides round-the-clock phone-based problem resolution and next-day on-site service from our highly trained experts.
• Dell ProSupport Value-Added Services provides ProSupport services plus additional features including extended battery service, Accidental Damage Protection and hard drive data recovery.

Learn more or contact a services consultant:

https://marketing.dell.com/AU-support-mobility

Enterprise support
Tailor your support to align with your organization's data

center and IT landscape. Each piece of Dell equipment includes a

hardware warranty that covers repair and replacement during local

business hours. Upgrade to Dell ProSupport for enhanced problem

resolution or IT Advisory Services, a ProSupport value-added service,

for premium protection of your enterprise equipment.

• Dell ProSupport provides around-the-clock phone-based problem resolution and next-day on-site service from our highly trained experts.
• Dell ProSupport IT Advisory Services provides ProSupport plus services designed to optimize your IT environment, including an annual health check and assessment, as well as pre-emptive reporting and analytics. Choose from the Essential or Strategic packages, which provide differing levels of support to suit your needs.

Learn more or contact a services consultant:
https://marketing.dell.com/AU-support-enterprise

Strengthen your IT team. Dell Services offers:
• 24x7x365 support
• End-to-end support services available in 180 countries
• In-region, local call centers


Products

Find out about the latest products from Dell and the details that make them distinctive.

New Products

New workstations pump up performance
The new portfolio of Dell Precision™ tower workstations, including the Dell Precision T7600, T5600 and T3600, helps creative and design professionals deliver results faster with increased performance, a reorganized interior, an externally removable power supply and front-accessible hard drives. Patented Reliable Memory Technology virtually eliminates memory errors, increasing reliability and removing the need for extensive full-memory tests, IT support calls and memory DIMM replacement.

The workstations deliver high performance from new Intel® microarchitecture and eight-core

CPUs for multithreaded applications; third-generation PCIe I/O support for improved visualization

performance with next-generation graphics; and up to 512GB, quad-channel memory for running

large data sets in memory. They offer NVIDIA Maximus technology, which enables simultaneous

visualization and rendering or simulation, and also feature a broad range of professional class

graphics cards from AMD and NVIDIA.

Blade servers boast memory and density
IT departments looking to optimize server, networking and storage resources and create an

agile infrastructure will be interested in the new Dell™ PowerEdge™ M520 and M420 blade servers,

part of Dell’s recent launch including rack, tower and blade servers. The M520 blade is a half-height,

two-socket model designed for mainstream workloads such as entry-level virtualization and general

business applications. The device includes a large memory footprint, with up to 192GB in 12 DIMMs;

two mezzanine cards; four 1GbE Broadcom™ network interface cards; dual SD cards for redundant

hypervisors; up to two 2.5" hard disk drives; and advanced RAID options including three hardware

RAID options for improved performance.

The PowerEdge M420 is a two-socket, quarter-height blade server with up to 64 processors in a

10U chassis designed for dense computational environments where space is at a premium. It offers

enterprise-class features in individually-serviceable and highly-available server nodes. The device offers

192GB of memory in six DIMMs, embedded hardware RAID, dual SD cards, one mezzanine card, two

10GbE Broadcom converged network adapters, and up to two 1.8" solid-state drives.
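For context on that "up to 64 processors" figure, a couple of lines of arithmetic; note that the 32-blade chassis capacity below is our assumption (based on the 10U Dell M1000e enclosure), since the text states only "10U chassis":

# Density arithmetic for the quarter-height M420 blade.
blades_per_chassis = 32   # assumed capacity of a 10U M1000e enclosure
sockets_per_blade = 2     # the M420 is a two-socket blade
print(blades_per_chassis * sockets_per_blade)  # 64 processors in 10U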

Unify block and file storage
The Dell Compellent™ FS8600 device is a unified, scale-out storage solution bringing SAN and NAS together as one system. The FS8600 allows Compellent arrays to present a single, virtualized pool for all block and file data. The architecture provides non-disruptive performance

and capacity upgrades as needed, without forklift upgrades. Data Progression automatically migrates

structured and unstructured data to its appropriate tier based on usage and performance needs.

Find out more about this product: http://www.dell.com/precision

Find out more about this product: http://www.dell.com/compellent

Find out more about these products: http://www.dell.com/performance
