
E-Guide

An in-depth look at application benchmarking for QoE

In this exclusive E-Guide, enterprise management analyst Dennis Drogseth offers an in-depth look into application benchmarking for quality of experience (QoE). The guide explains the difference between QoE benchmarks and diagnostics for application performance, examines some touchstones for calculating true QoE and why technical benchmarking alone is not enough, and surveys the technologies available for enterprise use today.

Sponsored by: Fluke Networks
Table of Contents

QoE benchmarks or diagnostics for application performance: What's the difference?

Quality of experience: Why technical benchmarking is not enough

QoE benchmarking: unique approaches and unique environments

Related Resources from Fluke Networks


QoE benchmarks or diagnostics for application performance: What's the difference?

By Dennis Drogseth

The importance of managing (monitoring as well as actively managing) networked applications for QoE should be all but self-evident. EMA data from 2007 (Figure 1) showed that a healthy 72% of respondents from a wide range of enterprises and service providers had more than 20 remote branch offices, and in a parallel EMA survey, 34.1% had more than 100 remote locations. Not only are IT applications becoming the heart of many businesses and central to their competitiveness, but networked applications in all their flavors are reshaping how and where enterprises across virtually all verticals, government agencies, and even some service providers are doing business. Networked applications are in fact enabling new business models across verticals: true today with Web 2.0, and even truer tomorrow with unified communications and, in particular, the advent of globally dispersed service-oriented architectures (SOAs).

Figure 1: How Many Branch Offices Does Your Company Have?

Given this challenging situation, one that is nevertheless placing more, not less, attention on network operations, many networking planners, engineers, managers and architects are looking to leverage more holistic and cohesive views of network-to-application performance, including QoE. But beyond this almost constant drumbeat for "cohesiveness," there are generally two types of approaches, two schools of thought, and often two types of personalities most associated with the challenge of managing networked applications. Group A includes those within the network team who prefer to see their role as absolutely bounded by the networked infrastructure, without much (or any) interest in dialog with other constituencies. These constituencies might include application managers, data center managers, and even application developers, as well as service management teams or those groups that are directly customer-facing. To the degree that network operations has a defined set of customers (and in some cases this group is the first call for networked application problems), the constituencies can directly include the customers themselves. The second group, Group B, includes those within network operations who either grudgingly, or more proactively, recognize the need for these dialogs as an intelligent extension of their broader responsibilities.


As you may have assumed by now, this analyst, along with EMA's IT consulting team, favors Group B. And so, to answer the question of QoE, this e-Book will reflect a wholehearted affirmation of the idea that application QoE is a collective, collaborative endeavor that cannot be done with stovepiped metrics and stovepiped thinking. In this section, we'll look at the differences between QoE metrics and diagnostic metrics. While the two may overlap, they are fundamentally different.

Figure 2: How Would You Rate the Following in Terms of Diagnosing the Root Cause of a Problem?

As you can see in Figure 2, taken from early 2007 EMA research, network configuration, systems configuration and application configuration top the charts, followed by event-based information, when it comes to diagnosing problems with application performance over the network. Other technologies, such as dynamic packet analysis, session reconstruction, and flow-based volume information, follow. This data is striking in that it shows growing awareness of the role configuration information (changes made to the infrastructure) plays in diagnosing problems with application performance over the network. The population involved with this research was about 60% network operations and 40% application management and data center (respondents had to feel that they "owned" or at least "shared in owning" responsibility for managing application performance over the network). This data alone reflects the need for collaboration, and the report also showed that more than 60% of our respondents had actually made organizational changes to support better cooperation between network and application managers.

All this is well and good, and useful in its way. In other research, the importance of flow-based information as a continuum (application flows, packet analysis, route analytics, transaction analysis) showed more strongly, which is worth pointing out here. But the even more important point is that none of this is QoE.

In this same research, shown in Figure 3, we see some rather basic priorities relevant to QoE, with availability edging out response time 71% to 59%, followed by mean time to repair (MTTR), mean time between failures (MTBF), and round-trip latency across the network. In the following sections, we will look more closely at what QoE metrics can and should be, and how you can best instrument to gauge them. Suffice it to say here, though, that the first step in planning for effective QoE for networked applications is to recognize that it is a collaborative effort. The second step is to realize that QoE and diagnostics are typically two separate categories of metrics, even if there is some overlap. One set of metrics, QoE, begins to capture the unfathomable complexity of actual user "experience" in the dimensions, ideally, most relevant to the user.

Figure 3: How Would You Prioritize the Below for Assessing Quality of Experience?

Putting yourself in their shoes is the way to start. One hint: If you think that the network is a self-contained entity and that you should care only about the demarcations of your WAN, you're missing the point, even if that's how you've been trained organizationally. The network is part of a larger delivery system for QoE in which the only true metrics that matter ultimately reside within the flesh-and-blood experience of your consumers. Even if you're a WAN service provider, you should begin to think twice about the complacency of ignoring this core requirement. As many as eight years ago, one CIO threw out his brand-name telco for one that would partner more smartly on applications versus plumbing. And in the enterprise, network management will forever be relegated to blue-collar status, without even a seat at the table to defend itself, if it ignores this increasingly relevant lesson.

The good news is that technologies well tuned to network operations are available today that can address this very requirement. We'll look at those in section three. In the next section, we'll examine some touchstones for calculating true QoE.

About the Author: Dennis Drogseth is vice president of Enterprise Management Associates (EMA), an IT management research, analysis and consulting firm. Having joined EMA in 1998, Dennis currently manages the New Hampshire office and has been a driving force in establishing EMA's New England presence. He brings 24 years of experience in various aspects of marketing and business planning for systems and network solutions. He directs a team of analysts focused on the development of the Networked Services Management practice areas, which span performance, availability and service management across enterprise and telecommunication markets. His team also addresses accounting, billing, QoS, outsourcing and other disciplines related to these markets.


Discover the power of Cisco IOS technologies

As a certified Cisco Technology Developer Partner, Fluke Networks gives you valuable insight into Cisco infrastructure technologies and data sources. Fluke Networks helps Cisco customers discover the power of Cisco IOS® by utilizing embedded technologies such as NetFlow, IP SLA, and Performance Routing (PfR) for in-depth forensic analysis, VoIP and application pre-assessment testing, proactive performance management and more.

www.flukenetworks.com/cisco

NETWORK SUPERVISION. ©2008 Fluke Corporation. All rights reserved.


Quality of experience: Why technical benchmarking is not enough

By Dennis Drogseth

In the last section, we looked at some metrics prioritized by respondents in EMA research from last year. The rankings were topped by availability, then response time, then MTTR, MTBF and network round-trip latency. We also separated this class of metric from diagnostic metrics such as configuration information and forensic session reconstruction.

In this section, we're going to take the bull by the proverbial horns and look in more depth at metrics truly targeted at QoE. The first thing to say is that, in terms of customer priorities, our respondents got it wrong. Any number of studies have shown that end users care more about degraded response time than about intermittent availability issues. This has more to do with human psychology than network engineering. End users typically believe that a complete failure in availability will soon be remedied, whereas they feel isolated and unsure that any action will be taken if their response time is degraded. Moreover, degraded response time tends to persist far longer than most availability issues, especially in 2008. So, in this case, their perception really is reality.

And by the way, end-to-end network latency is of course only an approximation of response time. In the next section, we'll dive more deeply into the unique technologies involved, most notably synthetic transactions and observed transaction response. But for now, suffice it to say that while both apply, for true QoE it's important to capture observed response time from the end station out.

Yet response time can also be problematic in other ways. Averaged response time over a day or a week or a month may not be very meaningful in itself. Inconsistent response times, even with faster overall averages, can be far more troublesome to the rhythms of working and communicating than somewhat slower but more consistent service delivery. And those terrible spikes that alienate users can occur within a single minute; capturing them in your metrics not only helps you spot the users being alienated but also helps provide insight into where the problems lie.
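To make that concrete, here is a minimal sketch (in Python, with hypothetical sample data) of why an average can look healthy while the spikes described above go unnoticed; percentile metrics such as p95 and p99 surface them:

```python
from statistics import mean, quantiles

# Hypothetical per-request response times (seconds) over one hour.
# Most requests are fast, but a handful of spikes alienate users.
samples = [0.8] * 55 + [6.5, 7.2, 8.0, 9.1, 12.4]

cuts = quantiles(samples, n=100)  # the 99 percentile cut points
avg, p95, p99 = mean(samples), cuts[94], cuts[98]

print(f"average: {avg:.2f}s")  # looks acceptable
print(f"p95:     {p95:.2f}s")  # starts to expose the spikes
print(f"p99:     {p99:.2f}s")  # closer to the worst user experience
```

The same logic applies whatever tooling collects the samples: report distributions, or at least high percentiles, per short time window rather than a single long-run average.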

And response times, as central as they are, are just one metric for QoE. For instance, setting the appropriate response-time goal should be predicated not on pure-play speed but on appropriateness and cost. This, of course, is where SLAs can come in, and they should be based on what you know to be true about the needs of your customers, not what you presume to be true. For some applications, such as email, your customers may care more about flexibility in accessing their mail between wireless and tethered environments than about absolute response time.

Even availability can be a challenge. The availability of the "network" is in itself a far from obvious discussion. In Figure 1, you can see a number of components, ranging from servers, hubs and routers to database transactions, that can all affect availability and, of course, performance. The math is easier for availability, which tends to aggregate across components, as Figure 1 demonstrates. Performance metrics can be more complex, and, depending on the specifics of timing and parallel activities, they may or may not aggregate.


Figure 1: End-to-End Availability

Service Level Objective: 99.9% availability; downtime <= 50 minutes

Component             % Availability   Minutes of Downtime
LAN                   99.97%           13.14
Local Server          99.95%           21.9
Building Hub          99.96%           17.52
Intranet Router       99.88%           52.56
Remote Host           99.93%           30.66
Order Entry Applic.   99.90%           43.8
Customer Data Base    99.92%           35.04
Inventory Data Base   99.91%           39.42
Total                 99.42%           254.04
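The aggregation Figure 1 illustrates is easy to reproduce. Below is a minimal sketch, assuming the components depend on each other serially and a 730-hour month (43,800 minutes, which matches the downtime arithmetic in the figure); the figure itself uses the additive shortcut of summing unavailabilities, which for small outage rates is nearly identical to the exact product:

```python
# Components and availabilities taken from Figure 1
components = [
    ("LAN", 0.9997), ("Local Server", 0.9995),
    ("Building Hub", 0.9996), ("Intranet Router", 0.9988),
    ("Remote Host", 0.9993), ("Order Entry Applic.", 0.9990),
    ("Customer Data Base", 0.9992), ("Inventory Data Base", 0.9991),
]

MINUTES_PER_MONTH = 730 * 60  # 43,800

# For serially dependent components, end-to-end availability is the
# product of the individual availabilities.
end_to_end = 1.0
for _name, a in components:
    end_to_end *= a

downtime = (1 - end_to_end) * MINUTES_PER_MONTH
print(f"end-to-end availability: {end_to_end:.2%}")    # ~99.42%
print(f"expected downtime: {downtime:.1f} min/month")  # ~253.4 (vs. 254.04 additive)
```

Note the punch line: every component sits at or near 99.9%, yet the end-to-end total of 99.42% misses the 99.9% service level objective by roughly a factor of five in downtime.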

And yes, MTTR and MTBF do affect end-user experience; these are statistics that you need to understand whole-cloth through your service organization, if not directly through your own internal metrics. In other words, if you really care about QoE, you should understand MTTR and MTBF as they affect the service consumer, not simply as they are relevant to one of the components in the network.
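The author doesn't spell out the arithmetic, but the standard steady-state relationship connecting these two statistics to availability is worth keeping at hand; a minimal sketch:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time a service is up,
    given mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g., a component that fails about once a month (every 730 hours)
# and takes 30 minutes to restore:
print(f"{availability(730, 0.5):.4%}")  # ~99.9316%
```

The same ratio holds whether you compute it per component or, as the author urges, for the service as the consumer actually experiences it.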

But other metrics will come into play with various degrees of relevance. These include flexibility and choice of service, something in which network planning plays a role. Data security is another core value that people may not think of in QoE terms, but for certain applications and certain information it can be a prime customer concern, and one for which, when cost is a factor, your customers may be willing to pay more in exchange for more absolute guarantees. And speaking of cost: cost effectiveness, and even visibility into usage and cost justification, are of increasing interest to business clients who may themselves be expected to contribute to the value of their service. Mobility is another QoE attribute, more important for some applications than others, as I've already indicated. And frankly, the list goes on.

The main point to remember is that each application and each customer set may suggest different QoE parameters. This means dialog (either direct, or indirect through your service organization), and that dialog should be iterative as business demands and requirements change. You can save yourself a lot of time, money and grief just by making sure up front that you've invested in listening to your customers' top requirements, and then instrumenting to support those, rather than scattering your efforts in an introverted and uninformed manner. In this way, QoE is a little bit like being a good partner in a marriage: doing what's right for the two of you, not just doing what you believe to be the right thing without asking.



QoE benchmarking: unique approaches and unique environments

By Dennis Drogseth

Benchmarking for QoE immediately presents a challenge from any number of perspectives. There are many network technologies that, in most environments, function together. These support a variety of protocol types and, in the end, enable the delivery of multiple application types. I don't think EMA has worked with a single IT organization or service provider that had a simple, monolithic set of requirements for delivering application services over the network.

Figure 1: Which of the Following Technologies Does Your Network Currently Incorporate?

Let's look first at the network transport technologies present in most IT organizations today, as in Figure 1. Older technologies such as frame relay and ATM cohabit with wireless, VPN and MPLS networks, many of which function as virtual overlays to core Ethernet transports. WAN, LAN and VLAN technologies exist together, along with the many flavors of WiMAX as last-mile wireless complements. And while the answers for QoE benchmarking reside beyond all of these transport challenges, they are far from immune to them. If you care about QoE, you need to have real visibility into the design and performance of your network, ideally with technologies that can begin to effectively link the user "experience" with real network transport issues.

More to the point for QoE itself, there is a wide array of application technologies and types, as in Figure 2. As you can see from this data from last year (and EMA has revisited this discussion on several occasions with virtually identical results), Web applications clearly dominate when it comes to managing the performance of applications over the network. Many of these application types come with unique protocols and technologies, such as AJAX, Java or ActiveX, and SOAP, and are increasingly becoming componentized, so that more pieces of different applications reside in different locations, escalating the complexity of the interactions between the user's end station (laptop, desktop, mobile device) and virtually the rest of the outside world. This complexity will only become greater as service-oriented architectures (SOAs) become more and more pervasive. Data from mid-2007 shows that SOA is already beginning to challenge VoIP as an area of concern for operations managers.

Figure 2: Which Applications Are You Delivering or Planning to Deliver Over a Network?

Given the limits of space, I have chosen to focus on benchmarks targeted at this increasingly mainstream, Web-based application world, as opposed to VoIP or video, which do, of course, present unique challenges of their own and deserve their own, separate discussion. The first thing to stress is that QoE telescopes the problem beyond the network itself into the true end-user experience. You can't benchmark the network if you're blind to these transaction-driven details, which often require complementary and more application-oriented monitoring than most network management solutions offer. As a result, I am going to focus primarily on those technologies and solutions targeted at capturing transactional levels of awareness.

One of the more hotly debated topics over the last few years has been the difference between synthetic transactions and observed transactions in testing application response. Another part of this discussion involves where the transactions are monitored by location: in the data center, at the edge of the data center, or from end stations themselves. And if at the end stations, is it sufficient to have instrumentation on a per-branch-office basis, or do you need to strive for blanket coverage across a wide variety of end-user points?

As for synthetic versus observed, the truth is that both are valuable. Synthetic tests are proactive, can give you more consistent data suitable for SLA requirements, and can let you know when availability is lost, which observed transactions typically cannot. Many synthetic tests also offer diagnostic value, especially when the scripts are optimized to look at certain types of transactional behaviors that occur on an ad-hoc basis in the real world. On the other hand, synthetic tests occur at specified intervals and therefore may fail to capture any number of real problems that occur within finite time frames. They may create overly simplified pictures of the real-world experiences of the wide variety of customers you may care about most. Moreover, many observed capabilities have become increasingly rich in function and are beginning to offer much of the granularity of insight once available only in synthetic tests. So the truth is that both synthetic and observed should be in place, if you really care about QoE.
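To make the trade-off concrete, here is a minimal sketch of the synthetic side in Python, assuming the third-party requests library and a hypothetical target URL; a real product would script full multi-step transactions, run probes from many locations, and feed the results into SLA reporting:

```python
import time

import requests  # third-party HTTP client: pip install requests

TARGET = "https://example.com/login"  # hypothetical transaction endpoint
INTERVAL_SECONDS = 300                # synthetic tests run on a fixed schedule

def probe(url: str) -> dict:
    """Run one scripted request and record what a user would see."""
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=10)
        return {"ok": response.status_code == 200,    # availability signal
                "seconds": time.monotonic() - start}  # response-time signal
    except requests.RequestException:
        # Unlike passive (observed) monitoring, a synthetic probe still
        # produces a data point when the service is down entirely.
        return {"ok": False, "seconds": None}

while True:
    print(probe(TARGET))
    time.sleep(INTERVAL_SECONDS)  # anything between probes goes unseen
```

The fixed interval in the last line is exactly the blind spot described above: a problem that appears and resolves between probes never shows up in the data.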


Placement is also a choice that is more both/and than either/or, but one where you should apply common sense based on business needs and cost, versus a monomaniacal urge to achieve technical perfection. Data-centric transactional monitoring can provide back-office detail that is quite useful in diagnostics, but it can also provide rich insights into certain issues surrounding QoE, in some cases playing back actual transactions in cinematic fashion. These solutions can catch Web-application design issues, and even problems with layout and graphics, to which, from a network perspective, you may feel immune. But your immunity can be challenged when you're blamed for degraded network performance that's really caused by poor application design. However, there are also solutions that provide full-service playback from the user end station and capture a different set of dynamics more thoroughly, fully aligned with how the end user sees the world. And while many of these, especially those optimized for rich transactional detail, are optimized for branch-office rather than blanket deployments, both are available and, to be frank, offer largely complementary value.

Many of the more network-centric solutions for QoE benchmarking sit at the edge of the data center and calculate end-user experience, in some cases in conjunction with insights into the back-office transaction as well. Some solutions also offer probes, or are probe-based, with a focus on network segments, and are optimized to assess per-application bandwidth usage. While many of these function at Layers 3 and 4, almost all at least support HTTP monitoring or detailed capture of Session Layer (Layer 5) transactions, and some offer more granular transaction analysis through Layer 7. Though most of these are not the most "heavy hitting" in the true QoE sense, having their insights can allow you to execute far more quickly on actually diagnosing the cause of a problem, and some will suffice for proactively anticipating most performance degradations in remote locations.
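As a rough illustration of the per-application bandwidth view such probes provide, here is a minimal sketch that aggregates hypothetical flow records by well-known port; real probe-based products derive these records from NetFlow exports or packet capture rather than a hand-built list:

```python
from collections import defaultdict

# Hypothetical flow records observed on one segment: (dest port, bytes).
flows = [
    (80, 1_200_000), (443, 3_400_000), (80, 800_000),
    (5060, 90_000), (443, 2_100_000), (25, 40_000),
]

# Crude Layer 3/4 classification by well-known port.
PORT_TO_APP = {80: "HTTP", 443: "HTTPS", 5060: "SIP/VoIP", 25: "SMTP"}

usage = defaultdict(int)
for port, nbytes in flows:
    usage[PORT_TO_APP.get(port, f"port {port}")] += nbytes

# Per-application share of the segment's traffic.
total = sum(usage.values())
for app, nbytes in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{app:10s} {nbytes / total:6.1%}  ({nbytes:,} bytes)")
```

As the passage notes, this is diagnostic rather than true QoE data, but it narrows down where a degradation lives much faster than user reports alone.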

I'm going to close this e-Book with a partial list of vendors that offer strong insights into QoE from one or more of these perspectives. These include services such as Keynote and Gomez, which can offer valuable insights across the board but which have come out of the tradition of synthetic test suites. Data-center transaction-oriented monitoring tools such as Empirix (which combines Web and VoIP call-center interaction) and TeaLeaf (for granular transaction replay and analysis) can be complemented by strong end-station visibility through solutions such as AlertSite, Coradiant, Symphoniq and Xangati. Multi-purpose capabilities for QoE are available from larger vendors such as Compuware, Fluke Networks, OPNET and Quest. And finally, these complement some of the more network-centric approaches for QoE, such as those from Apparent Networks, NetScout, NetQoS and Shunra.

How you choose to approach this thorny problem is, once again, a matter of pragmatism, in which cost and overall resources should be balanced against the need to know all the gory details. But do realize that benchmarking for QoE is an operations-wide endeavor and should ideally touch application development and QA/test as well. You can't optimize the network in a vacuum when the experience you're trying to support lies beyond it.


Related Resources from Fluke Networks

Monitoring & Improving Application Performance across your Company (WebEx player required; download at www.webex.com/downloadplayer)

Correlating Granular Network and Application Visibility for Improved Performance

Leveraging Capabilities in Your Existing Cisco Network for Optimized Performance

VoIP and MPLS—Making it work for your company

About Fluke Networks

Our history of innovation, product quality and customer service began in 1948. Today, Fluke Networks is part of Danaher Corporation, a growing Fortune 500 company and leading manufacturer of professional instrumentation, industrial technologies, tools and components, with revenues of more than $11 billion (USD) annually.

Our technology offerings are used by major carriers, including AT&T, Global Crossing, Sprint, Verizon Business and others, to run their managed services. Our global reach of sales offices, laboratories, factories, and home and retail environments spans six continents and more than 50 countries, giving customers the peace of mind that they made the right choice in partnering with Fluke Networks for all of their Performance Management needs.

www.flukenetworks.com
