
Exploring ESX Server® 2.5.1 Virtual CPU Capabilities in an Internet Mail Protocol Environment

Enterprise Product Group (EPG)

Dell White Paper

By Amresh Singh, C S Prasanna Nanda, and Scott Stanford


Contents

1. Executive Summary

2. Introduction

3. Server and Storage Configuration

4. VMware ESX Server and Exchange Server 2003 Setup

5. Simulating and Measuring Internet Mail Workloads

6. Analyzing and Understanding Virtual CPU Capabilities

7. Virtual CPU Resource Management in ESX Server

8. Summary

9. Resources


Section 1

Executive Summary

One measure of computing scalability is the ability of a system to grow quickly and efficiently to meet increasing performance demands. VMware ESX Server® provides IT organizations with the flexibility to meet those demands by enabling users to create multiple virtual machines on the same physical host server.

On a physical machine dedicated to a single-purpose application, overall system resource utilization may not take full advantage of the server hardware; VMware ESX Server, in contrast, has the ability to scale virtualized hardware resources dynamically and evenly across virtual machines.

VMware ESX Server uses a proportional share mechanism to allocate CPU, memory, and disk resources when multiple virtual machines are contending for the same resource [1].

In this paper, Dell Scalable Enterprise engineers present how VMware ESX Server virtual CPU resources can be managed to help achieve flexibility and efficient resource utilization in a virtualized Internet Mail protocol computing environment.

[1] See www.vmware.com/pdf/esx2_performance_implications.pdf for more information.


Section 2

Introduction

This paper discusses the results of testing Microsoft® Exchange Server 2003 front-end services running inside VMware ESX Server 2.5.1 virtual machines (VMs), with a primary focus on exploring virtual CPU resource management options and the benefits that can be achieved by deploying those options in various Internet Mail protocol scenarios.

The Scalable Enterprise team at Dell conducted the VM sizing tests on ESX Server® 2.5.1 running on a Dell™ PowerEdge™ 2850 server. The Microsoft Exchange Server 2003 front-end components, PE2850 server, and Dell/EMC storage were configured to support the infrastructure required for running multiple VMs.

A front-end server is an Exchange server that accepts requests from clients and proxies them to the appropriate back-end server for processing. Microsoft Exchange Server 2003 provides support for a variety of messaging protocols; however, in this white paper, testing and analysis focused on the following front-end services:

• WebDAV (Outlook Web Access)

• Simple Mail Transfer Protocol (SMTP)

In the following sections, the server and storage hardware configurations and design are presented along with the ESX Server host and VM installation steps. The Exchange Server 2003 setup phases are covered along with how the Internet Mail workloads were simulated and measured. After understanding how the test environment is designed and configured, the remainder of the paper focuses on achieving flexibility and resource utilization through the use of ESX Server Virtual CPU management capabilities.


Section 3

Server and Storage Configuration

This section describes how Dell engineers designed and deployed ESX Server 2.5.1 on the Dell PowerEdge 2850 Server. The hardware components used are listed below in Table 1.

Host Operating System: VMware ESX Server 2.5.1 build 14182

CPU: 2 x 2.8 GHz Intel® Xeon™ processors with 1 MB L2 cache

Memory: 8 GB

NIC: 2 x dual-port Intel 8254NXX Gigabit adapters

Fibre Channel Host Bus Adapter: 2 x QLogic® 2340

Disk Controller: PERC 4i dual channel

Internal Disks: 3 x 36 GB

Table 1. ESX Server 2.5.1 host configuration

The Dell PowerEdge 2850 server was connected to a storage area network (SAN) via the QLogic 2340 Fibre Channel host bus adapters (HBAs), as shown in Figure 1. A Dell/EMC CX300 was attached to the SAN to provide storage. Twelve drives in the CX300 were assigned for use in the test environment. Hyper-Threading was enabled on the server and used in all test cases.


Figure 1. Microsoft Exchange Server 2003 front-end test configuration


Configuring the SAN:

Controller: Dell/EMC CX300

LUNs: 5 x RAID 1 LUNs (bound from 6 disks) for virtual machine boot and log drives; 4 x RAID 5 LUNs (bound from 6 disks) for the Exchange databases

Software: Navisphere® Manager, Access Logix

Table 2. Dell/EMC CX300 SAN configuration

Table 3 below shows the CX300 disk array configuration. For this series of tests, five RAID groups were created from the 12 disks in the CX300 enclosure, numbered 0-11.

• RAID group 0 was created from physical disks 0 and 1; LUNs 1 and 2 are bound out of this RAID group and configured as RAID 1.

• RAID group 1 was created from physical disks 2 and 3; LUNs 5 and 6 are bound out of this RAID group and configured as RAID 1.

• RAID group 2 was created from physical disks 4 and 5; LUN 0 is bound out of this RAID group and configured as RAID 1.

• RAID group 3 was created from physical disks 6, 7, and 8; LUNs 3 and 4 are bound out of this RAID group and configured as RAID 5.

• RAID group 4 was created from physical disks 9, 10, and 11; LUNs 7 and 8 are bound out of this RAID group and configured as RAID 5.


Disks 0-1, RAID group 0 (RAID 1): LUN 1 (VM0 log), LUN 2 (VM1 log)

Disks 2-3, RAID group 1 (RAID 1): LUN 5 (VM2 log), LUN 6 (VM3 log)

Disks 4-5, RAID group 2 (RAID 1): LUN 0 (VM system base disk)

Disks 6-8, RAID group 3 (RAID 5): LUN 3 (VM0 database), LUN 4 (VM1 database)

Disks 9-11, RAID group 4 (RAID 5): LUN 7 (VM2 database), LUN 8 (VM3 database)

Table 3. ESX Server VMFS, VMDK, and virtual disk LUN assignments

Most of the workload generated by Internet Mail protocol messaging traffic is not very I/O intensive, so in a virtual infrastructure it is even possible to have the front-end services VMs use the local ESX host RAID/SCSI controller, while back-end services VMs, such as those supporting Exchange Server 2003 mailbox servers, use SAN-attached storage. In these test cases, a mixture of RAID 1 and RAID 5 LUNs was used. For production messaging environments, Dell recommends using fault-tolerant RAID levels for all ESX Server and VM LUNs.


Section 4

VMware ESX Server and Exchange Server 2003 Setup

VMware ESX Server 2.5.1 [2] was installed to boot from the local RAID controller, following the Dell best practice guidelines. Please refer to the Dell VMware ESX Server 2.5.1 deployment guide for more information [3].

In this configuration, the onboard RAID controller was shared by the service console and the VMs, and a VMkernel swap file of 8 GB was created on the internal disks. The VMkernel swap file must be at least the size of the physical RAM to fully utilize the available physical RAM; the PE2850 used for the tests had 8 GB of RAM installed.

For networking, one onboard NIC was dedicated to the service console, while two PCI-based NICs were dedicated to the virtual machines.

VM Creation and Guest OS Installation

Two VMs with Windows Server 2003 Enterprise Edition as the guest operating system were created using the ESX Server MUI (VMware Web Management Interface). Each VM was configured with one virtual CPU (VCPU) and 3.6 GB of RAM; VMware ESX Server supports up to 3.6 GB of memory and five virtual PCI slots per VM. Each VM was configured with two virtual SCSI controllers and two virtual Ethernet cards. By default, a Windows Server 2003 VM uses the virtual LSI Logic controller; to use the virtual BusLogic controller, download and install the drivers available from the download section of the VMware website.
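For illustration, below is a hypothetical excerpt of the kind of .vmx configuration entries that describe this virtual hardware. These are standard VMware .vmx keys, but the excerpt is only a sketch; the files generated by the MUI contain many more entries and were not reproduced in the original tests.

memsize = "3600"
scsi0.present = "TRUE"
scsi1.present = "TRUE"
ethernet0.present = "TRUE"
ethernet1.present = "TRUE"

The memsize value is expressed in megabytes, so 3600 corresponds to the 3.6 GB per-VM ceiling noted above, and the two scsiN and two ethernetN entries correspond to the two virtual SCSI controllers and two virtual Ethernet cards configured in each VM.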

Each SCSI controller was configured to host one virtual disk to take advantage of the VMware ESX Server virtualization layer and to eliminate any extra overhead that could be incurred by hosting two virtual disks on one virtual SCSI controller. Virtual disks are saved on a VMFS volume. VMFS is a specialized, proprietary file system designed by VMware to store virtual machine disk files; VMFS-2 is the file system version used by ESX Server 2.5.1. The VMFS file system allows the block size to be tuned based on usage requirements, but for the tests conducted, the default block size was used. VMDK (virtual machine disk) files hold a VM's virtual disks and are noted by the .vmdk file extension.
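From the service console, the VMFS volumes are visible under /vmfs, which offers a quick way to confirm where each VM's .vmdk files reside. The commands below are a sketch run as root; the volume and file names depend on the local storage configuration and are not taken from the original tests.

# List the VMFS volumes and the virtual disk (.vmdk) files stored on them
ls /vmfs
ls -lh /vmfs/*/*.vmdk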

[2] VMware ESX Server 2.5.1 build 14182. Build 14182 is the equivalent of Upgrade Patch 1 for the ESX 2.5.1 release build 13057.

[3] See http://www.dell.com/downloads/global/solutions/vmware_251_deployment_guide.pdf for the Dell ESX Server 2.5.1 best practices guide.


VMware Tools Installation in the Guest OS

VMware Tools was installed inside both VMs. VMware Tools installs the drivers for the VMware SVGA video adapter and the virtual network adapter (also known as the vmxnet adapter), and it automatically installs the vmmemctl (VMware memory control) driver in the guest operating system. The vmmemctl driver is used to support dynamic memory resource management.

Installing Microsoft Exchange Server 2003

Microsoft Exchange Server 2003 was installed following the best practices for the front-end services by leveraging the tools and steps provided by the Exchange Server 2003 Installation Wizard. The Exchange executables were installed in the default path on the OS system disk, while the database and transaction logs were housed on different virtual disks. The Exchange System Manager tool was used to relocate the transaction logs and databases to the correct logical drives. A new Exchange installation was completed on both virtual machines so that the second Exchange server could properly join the Exchange organization.

In addition to the basic Exchange Server 2003 installation steps, the prerequisites, such as installing additional Windows Server components and running netdiag and dcdiag to validate the Active Directory and DNS infrastructure, were completed.

After the Exchange Server 2003 installation, Exchange Server 2003 SP1 was applied, followed by the SP1 update (06.05.06.7226).

To isolate the networks and simulate the multiple IP networks found in many front-end messaging server environments, the two Exchange servers were configured to run on separate virtual networks.


Section 5

Simulating and Measuring Internet Mail Workloads

To simulate Internet Mail protocol activity on the Exchange Server 2003 front-end systems, the Microsoft Exchange Stress and Performance (ESP) 2003 tool, also known as Medusa, was used.

ESP can be used to test a variety of workload scenarios, and provides support for most Internet Protocols.

ESP 2003 Test Mode

The ESP tool was run in native mode, or direct mode, in all of the test scenarios. Since the primary goal of these tests was to explore and understand ESX Server 2.5.1 and virtual resources like processor and memory while supporting multiple Exchange Server 2003 front-end systems and different Internet Mail protocols, ESP was run in direct mode on the front-end servers. Supporting tens of thousands of Internet Mail user accounts and driving heavy workloads against the front-end servers, while an interesting exercise, was beyond the scope of this paper. Since the ESP tool cannot create user accounts, LoadSim 2003 was used to create up to 500 mailboxes and users.

ESP Modules

As indicated earlier, ESP supports a variety of Internet Mail protocols. For more information on the ESP tool, and a description of the protocols and test modules that ESP supports, please refer to the Microsoft document entitled Exchange Stress and Performance Tool 2003 [4]. The protocols listed below were used for the tests conducted:

• WebDAV (Outlook Web Access)

• Simple Mail Transfer Protocol (SMTP)

Figure 2 shows a view of the protocols supported by ESP and a look at the tool’s graphical user interface. Figure 3 shows WebDAV protocol module statistics. For the protocols tested, all of the recommended variables were set as suggested by Microsoft in the ESP documentation.

[4] http://www.microsoft.com/downloads/details.aspx?FamilyId=773AE7FD-860F-4755-B04D-1972E38FA4DB&displaylang=en


Figure 2. ESP with available modules

Measuring and Analyzing ESX Server and VM Performance

To measure and understand how ESX Server architecture and virtualized hardware performs under an Internet Mail protocol workload driven by the ESP tool, the Dell team used the following performance tools:

Perfmon.msc – Perfmon, or Performance Monitor, is a performance logging and analysis tool that ships with most versions of the Microsoft Windows operating system. Perfmon and many applications provide performance objects and counters that can be polled and measured over specific time intervals and frequencies. Applications like Microsoft Exchange Server 2003 provide an extensive set of Perfmon objects and counters that systems engineers and Exchange administrators can use to track and measure Exchange Server performance.

Esxtop - The VMware esxtop tool provides a real-time view (updated every five seconds by default) of ESX Server worlds, sorted by CPU usage.

Vmkusage - The vmkusage tool displays historical graphs that show physical server and virtual machine system statistics. These graphs show the most recent data, as well as daily and weekly views. The tool generates the graphs as Web pages that one can view by going to http://<ESXservername>.<company>.com/vmkusage.


Navisphere Analyzer - collects performance information in graphical form to identify storage system component performance utilization [5].

Figure 3. WebDAV (OWA) module with statistics

[5] Navisphere Analyzer is a tool that comes with the Navisphere Management Suite. Navisphere is used to manage all Dell/EMC SANs. Refer to the following URL for more information: http://www.emc.com/products/storage_management/navisphere.jsp.


Section 6

Analyzing and Understanding Virtual CPU Capabilities

Using the ESP modules and the performance monitoring and analysis tools outlined in Section 5, the Dell team simulated various Internet Mail protocol scenarios to demonstrate and explore how ESX Server virtual CPU (VCPU) capabilities can be used to provide various levels of processor resources to VMs. Results and the analysis from those tests are examined in the following sections.

Baseline Performance for Outlook Web Access (OWA)

Scenario 1:

In this scenario, an OWA test was run on one of the VMs with 100 users/instances, while the other virtual machine was powered on but idle.

Front-end server: VM0

Inetinfo Private Bytes: 17 MB

Available Mbytes: 3163

% Processor Time: 24.28

Context Switches/sec: 2017

Process(Inetinfo)/IO Read Operations/Sec: 0.19

Table 4. Virtual machine Perfmon data for an OWA workload at 100 users

As can be seen in Table 4, some key performance counters are shown that can be used to help detect VM bottlenecks and track VM and Internet Mail services health. Processor time is the percentage of time a thread uses for execution; at 100 simulated OWA users, the VM uses less than 25 percent processor time on average, and the processor utilization in this case is that of the VM's virtual processor. Context switching is the combined average rate at which the processors are switched from one thread to another. Front-end services like OWA use relatively small amounts of system memory: for the tests in Scenario 1, only about 440 MB of the allocated memory is being used by the VM, and the Available Mbytes Perfmon counter shows that roughly 3163 MB of memory is free out of the 3600 MB allocated to the VM.

Inetinfo Private Bytes is the amount of memory the Inetinfo process is using, and monitoring this counter over time can help system administrators determine how much memory Inetinfo is using relative to the number of concurrent or active OWA users. Monitoring the above counters in addition to VM performance information that is available from a tool like vmkusage allows systems administrators to track Guest OS, application, and VM level resource utilization and performance trends.


Figure 4. vmkusage CPU utilization graph

Figure 4 above shows the processor graph generated by the vmkusage utility. The overall CPU utilization on the ESX Server host averaged 13 percent, as shown in the esxtop output below. In Figure 4, one can see a spike to 100 percent CPU utilization when ESP loaded the initial module for OWA; CPU utilization then stabilized once the test module was loaded completely and the test started to run. vmkusage and esxtop can be used in combination to track and monitor ESX Server host and VM-level processor utilization.

Esxtop output:

PCPU: 11.61%, 14.69% : 13.15% used total
LCPU: 11.36%, 0.25%, 7.41%, 7.28%

VCPU0 is the virtual CPU; in Scenario 1, each VM used a dedicated VCPU. Ready0 is the ready state of the VM: it shows the percentage of time the VM was ready to run but could not get scheduled on a physical CPU. As shown in Figure 4, average VM CPU utilization was approximately 20 percent, while the Ready time was less than five percent. A low level of Ready0 time indicates that the VM is not stalled waiting to complete pending instructions.

In this configuration, PCPU 0 was also shared by the service console and the VMs. On a two-way host, whenever the first VM is powered on, the VMkernel schedules it to run on PCPU 1. PCPU 0 and PCPU 1 are the two physical processors, and when Hyper-Threading is enabled two logical processors are visible on each: LCPU 0 and LCPU 1 are always bound to PCPU 0, while LCPU 2 and LCPU 3 are bound to PCPU 1.

After measuring and exploring how the VMkernel schedules and manages VCPUs at 100 simulated OWA users, the same test was run with 100 to 500 users in 100-user increments to see how VMkernel and VCPU utilization were impacted at the VM and ESX Server host levels. Figure 5 shows how both CPUs were utilized on the two-way server as the workload was increased. One can see that while both physical processors are actively used, the ESX Server scheduler stresses PCPU 1 more because the service console is running on PCPU 0.

Whenever multiple VMs are powered up on an ESX Server host, the VMkernel schedules the VMs to run on all the other processors except PCPU 0. PCPU 0 is mainly used by the service console to load drivers and the VMkernel. This trend of reserving PCPU 0 for as much of the service console related work as possible can be observed in Figure 6, which shows the average utilization for both of the physical processors. In this case, the scheduler maintained a state where active service console processes running on PCPU 0 did not have to wait for CPU cycles.

Figure 5. ESX Server physical processor utilization (chart: % ESX CPU for PCPU 0 and PCPU 1 at 100 to 500 users)

Looking closer, the data shown in Table 5 below indicate how the VMkernel scheduler distributed the workload across the physical and logical CPUs. One can notice that LCPU 0 is utilized more in the PCPU 0 subset because the service console runs on the physical core of that processor.

Logical processors (LCPU 2 & LCPU 3) on the second subset of PCPU 1 were relatively evenly utilized as compared to LCPU 0 and LCPU 1. It is important to point out that in Scenario 1; the LCPUs are logical processors that are available for use by the ESX Server kernel, and are not the VCPUs that are assigned to VMs.

OWA Users   PCPU 0   PCPU 1   LCPU 0   LCPU 1   LCPU 2   LCPU 3
100         12       14       11.36    0.45     7.41     7.28
200         19       23       19.22    0.72     14.83    9.05
300         25       39       24.05    1.49     21.3     17.5
400         37       44       36.8     0.2      26.75    19.47
500         33       65       31.75    1.42     39.48    25.21

Table 5. Processor utilization (%) across physical and logical CPUs

Scenario 2:

Thus far, the tests conducted concentrated on a single active VM running with one VCPU; the world, or process, representing that VM was scheduled to run on the available LCPUs. In Scenario 2, additional OWA tests were run on two active VMs, each with one VCPU. Figure 6 shows how two active VMs running under the OWA workload impact ESX Server CPU utilization. In addition, VCPU utilization per active VM is shown.

Figure 6. Two VMs with 300 OWA users (chart: % ESX CPU over time for VM0, VM1, PCPU 0, and PCPU 1)

This test shows the percentage of processing time inside both of the VMs compared to the physical processor utilization on the ESX Server host. By examining Figure 6 closely, one can see that the utilization of PCPU 1 was higher than that of PCPU 0; as explained earlier in this section, the VMkernel stresses PCPU 1 first whenever there is contention for resources. One can also observe from the chart in Figure 6 that the percentage CPU utilization for the VMkernel is not the same in relative terms as it is for a VM's VCPU: VMkernel utilization is the overall utilization of resources, which includes the service console and other driver threads running on the ESX Server host.


Section 7

Virtual CPU Resource Management in ESX Server

In the previous sections, concepts like PCPU, VCPU, and CPU resource utilization were introduced and explored within the context of how those resources are managed and scheduled by the ESX Server kernel. Several OWA test scenarios were run to stress physical and logical CPUs, and some details about how physical and virtual resource utilizations differ were examined. In this next section, the Dell team further explores how ESX Server manages and allocates the physical resources across multiple VMs, with a special focus on how the different virtual CPU resource allocation mechanisms can be utilized to distribute and manage ESX Server host processor resources [6].

Allocating Virtual CPU Resources

There are three basic parameters that control the allocation of CPU resources available to a VM:

• Shares allocation: CPU shares entitle a VM to a relative fraction of the available CPU resources.

• Min: the minimum percentage of a CPU the VM is guaranteed to receive.

• Max: the maximum percentage of a CPU the VM can use.

In the following test scenarios, the ESX Server dynamic resource allocation options were used to show how dynamically allocating resources can benefit the VMs by utilizing unused or unallocated processor reservations.

Limiting CPU Resources by Allocating Shares

One way to allocate or control CPU resources is by allocating shares. CPU resources are allocated to the VMs using a proportional-share processor scheduling model. In this model, each VM is allocated a specified number of shares, and the amount of CPU time a VM receives depends on the proportion of its shares relative to the total number of shares in the system and the current usage of CPU resources on the ESX Server host.

[6] The dynamic resource allocation settings described in this paper are per ESX Server host; settings configured on a VM are not migrated to another ESX host if you are using VMotion™ technology (http://www.vmware.com/products/vc/vmotion.html).

 


Shares, in essence, represent a guarantee of service. If two VMs are running and one has twice the CPU shares as the other, it is allocated twice as many CPU cycles as the other, provided both VMs are active.

By default, all single processor VMs receive an allocation of 1000 shares and have equal access to resources.
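The per-VM CPU settings are exposed under /proc/vmware/vm/<id>/cpu on the service console. As a quick sketch (not part of the original test procedure), the current share allocation of every registered VM can be listed with a short loop run as root, assuming the shares node is readable in the same way as the max node shown later in this section:

# Print the CPU share allocation for every VM registered on this ESX Server host
for vm in /proc/vmware/vm/*; do
    echo "VM $(basename $vm): $(cat $vm/cpu/shares 2>/dev/null) shares"
done

With the defaults, every uniprocessor VM reports 1000, so two active VMs contending for CPU split the contended cycles 1000/2000 each, or 50 percent apiece; doubling one VM's allocation to 2000 shifts that split to roughly 67 percent versus 33 percent.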

As an example, the status of VM0 with VM Id 151 during a test was:

%less -S /proc/vmware/vm/151/cpu/status

vcpu vm  type name         uptime      status costatus usedsec   syssec  wait
151  151 V    vmm0:Exchang 1364647.127 WAITB  NONE     74991.611 419.716 IDLE

waitsec     idlesec     readysec cpu affinity htsharing min max shares
1275933.220 1270388.050 3928.922 2   0,1,2,3  any       0   100 1000

Below is an explanation of what each column indicates (a small worked example using these fields follows the list):

vcpu -> virtual CPU identifier
vm -> virtual machine identifier
type -> type of VCPU; "V" indicates a virtual machine
name -> display name associated with the VM
uptime -> elapsed time since the VM was powered on
status -> current VCPU state: running (RUN), ready to run (READY), waiting on an event (WAIT or WAITB), or terminating (ZOMBIE)
costatus -> current SMP VM co-scheduling state; NONE for a uniprocessor VM
usedsec -> cumulative processor time consumed by the VCPU
syssec -> cumulative system time consumed by the VCPU
wait -> current VCPU wait event type
waitsec -> cumulative VCPU wait time
idlesec -> cumulative VCPU idle time
readysec -> cumulative time the VCPU spent ready to run but not scheduled
cpu -> current VCPU processor assignment
affinity -> processors the VCPU is allowed to run on
htsharing -> Hyper-Threading sharing setting
min -> minimum processor percentage reservation for the VM
max -> maximum processor percentage reservation for the VM
shares -> CPU shares allocated to the VM
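As a small worked example using the sample output above (hypothetical VM ID 151), the cumulative readysec and uptime fields give a rough ready-time percentage over the life of the VM, a cumulative counterpart to the instantaneous Ready0 value discussed in Section 6:

# Display the status node and estimate cumulative ready time for VM ID 151
cat /proc/vmware/vm/151/cpu/status
# ready % = readysec / uptime x 100 = 3928.922 / 1364647.127 x 100, or roughly 0.3%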

As stated earlier, VM0 had the default allocation of 1000 shares, and the status of the VM was waiting on an event. To demonstrate the impact of changing the CPU shares allocated to a VM, the number of shares was changed to 2000 by running the following command:

# echo 2000 > /proc/vmware/vm/151/cpu/shares
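To confirm the change took effect, the same node can be read back, following the pattern shown for the max node later in this section; it should now report 2000:

# Verify the new share allocation for VM ID 151
cat /proc/vmware/vm/151/cpu/shares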

The same test as discussed in Scenario 2 was run, and it demonstrated that VM0 had access to more CPU resources than VM1. In this case, a higher level of VCPU utilization was exhibited by VM1, because the resources available to VM0 were proportionally greater given its larger number of processor shares.

Figure 7. Impact of setting CPU shares on a VM (chart: % CPU over time for VM0 and VM1)

One can see from the graph shown in Figure 7 that VM0 did the same amount of work with less VCPU utilization than VM1. Dynamically allocating shares is beneficial when a mix of low-priority and high-priority VMs is present on the ESX Server host; as demonstrated, either increasing the shares given to a high-priority VM or reducing the shares of a low-priority VM can satisfy the resource constraints.

Specifying Minimum and Maximum CPU Resources

Another way to manage ESX Server CPU resources is by utilizing thresholds. When a minimum (Min) CPU reservation is set on a VM, the VM will receive at least that minimum percentage of a processor, regardless of changes in the total number of shares in the system. If the system does not have enough unreserved CPU time available to guarantee the Min requirement of a VM, that VM will not be able to power on. This feature helps in various types of workload environments to ensure that an application running in a VM can meet a required response time or throughput metric: in situations where an SLA or QoS level is in place, ESX Server can be configured to assure that a VM receives sufficient CPU resources.

In contrast to the Min setting, a Maximum (Max) reservation ensures that a VM will never receive more than the Max percentage of a processor, even if extra time is available in the system. In this way, ESX Server can limit the maximum amount of CPU resources a VM may consume.

The settings and concepts described above were used in the following test cases to further demonstrate how workloads running on VMs are impacted by Min and Max settings.


Scenario 3:

In Scenario 3, SMTP tests were run on one and then two active VMs, each with one VCPU. The data for one VM under an SMTP workload is shown in Table 8. Compared to the VM utilization data for the OWA tests run in Scenario 1 (Table 4), it is clear that the SMTP workload generates a higher level of VCPU utilization and, correspondingly, an increase in Context Switches/sec. The Inetinfo process is consuming most of the processor time, with some overhead associated with the ESP tool.

SMTP server performance: VM0

% Processor Time: 93.10

Context Switches/sec: 8702

Available Mbytes: 3142

SMTP Messages Sent/Sec: 28852

Process(Inetinfo) % Processor Time: 59.48

Table 8. VM data for an SMTP workload

The chart in Figure 8 below tracks the performance of two VMs running under the same SMTP workload. The VMs have equal access to processor resources under the default reservation settings.

Figure 8. Two VMs running under an SMTP workload with equal CPU resources (chart: % CPU over time for VM0 and VM1)


For the next test, Dell engineers limited the CPU resources on one of the VMs via dynamic CPU resource allocation. The maximum CPU usage was set to 50 percent for VM0; by default, Max is set to 100 and Min is set to 0. Below is the command which shows the Max setting in effect for VM0:

# cat /proc/vmware/vm/143/cpu/max
100

Below is the command which can be used to change the Maximum for a virtual machine:

# echo 50 > /proc/vmware/vm/143/cpu/max

# cat /proc/vmware/vm/143/cpu/max
50

Figure 9 below shows the amount of CPU cycles the VMs were utilizing on the ESX Server host. VM0 was limited so that it could not exceed the Max value that was set, and the result of this setting is visible with VM0 trending at 50 percent VCPU utilization. The test results show that the Max reservation feature can be used to provide a higher quality of service to one VM by restricting the resources available to another VM.

Figure 9. Two VMs running under an SMTP workload without equal CPU resources (chart: % CPU over time for VM0 and VM1)
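Once such a test is complete, the ceiling can be lifted again by writing the default value of 100 back to the same node (a sketch following the commands above, run as root on the service console):

# Restore the default Max CPU percentage for VM ID 143 and verify it
echo 100 > /proc/vmware/vm/143/cpu/max
cat /proc/vmware/vm/143/cpu/max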

Alternatively, ESX Server provides an option to guarantee resources via Minimums. One use case is to use Mins to limit the number of high-priority VMs that can be run by a server.
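A minimal sketch of that idea, assuming the cpu/min proc node can be written in the same way as the cpu/max node used above (the path is inferred from the min column in the status output rather than taken from the original tests): reserving half a processor for each high-priority VM means that once the host's unreserved CPU time is exhausted, no further VM with that reservation can be powered on.

# Guarantee VM ID 143 at least 50 percent of a processor; with limited unreserved
# CPU on the host, only a few VMs with such a reservation can be powered on
echo 50 > /proc/vmware/vm/143/cpu/min
cat /proc/vmware/vm/143/cpu/min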


Section 8

Summary

In this paper, the server and storage infrastructure was outlined along with the steps taken to install and configure ESX Server, the VMs, and Exchange Server 2003. A brief overview of ESP and the performance monitoring and analysis tools used for the tests was provided.

Dell engineers then explored the VMware ESX Server architecture with a particular focus on how virtual CPU capabilities can be used to refine and dynamically allocate virtual resources. Tests were run using several Internet Mail protocols under different scenarios to demonstrate these architectural concepts and to show how Exchange Server 2003 front-end VMs are impacted in a simulated messaging environment.

By analyzing and explaining the results, the Dell team showed that, by leveraging the proportional share mechanism and the Maximum and Minimum CPU reservation features in ESX Server, IT requirements such as SLA and QoS levels can be met by restricting lower-priority VMs and workloads in favor of higher-priority ones.


Section 9

Resources

1. Exchange Stress and Performance Tool 2003 (ESP) documentation: http://www.microsoft.com/downloads/details.aspx?FamilyId=773AE7FD-860F-4755-B04D-1972E38FA4DB&displaylang=en

2. Navisphere Analyzer: http://www.emc.com/products/storage_management/navisphere.jsp

3. VMware ESX Server 2.5.x documentation: http://www.vmware.com/support/pubs/esx_pubs.html

4. Man pages for esxtop, vmkusage, and cpu(8)

5. http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/perfscalguide.mspx

6. http://www.vmware.com/vcommunity/technology/performance/index.html

7. https://blogs.technet.com/exchange/default.aspx

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Dell, OpenManage, PowerEdge, and PowerVault are trademarks of Dell Inc.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell disclaims proprietary interest in the marks and names of others.

©Copyright 2005 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Dell cannot be responsible for errors in typography or photography.

 
