VMware FAQ


Upload: nishant-kumar

Post on 12-Sep-2015


DESCRIPTION: VMware Interview Questions



    1. What is Cloud Computing?

    Cloud computing is a style of computing that enables on-demand network access to a shared

    pool of scalable and elastic infrastructure resources. The term cloud computing originates

    from the standard network diagram where a cloud is used to represent the abstraction of a

    complex networked system such as the Internet.

    Cloud computing is a type of computing that relies on sharing computing resources rather

    than having local servers or personal devices to handle applications.

    In cloud computing, the word cloud (also phrased as "the cloud") is used as a metaphor for

    "the Internet," so the phrase cloud computing means "a type of Internet-based computing,"

    where different services -- such as servers, storage and applications -- are delivered to an

    organization's computers and devices through the Internet.

    Cloud computing builds on virtualization to create a service-oriented computing model. This

    is done through the addition of resource abstractions and controls to create dynamic pools of

    resources that can be consumed through the network. Benefits include economies of scale,

    elastic resources, self-service provisioning, and cost transparency.

    2. What is Virtualization?

    Virtualization is the creation of a virtual (rather than actual) version of IT resources, such as

    a hardware platform, operating system (OS), storage device, or network resources.

    3. What is the difference between Virtualization and Cloud Computing?

    Virtualization is the creation of a virtual (rather than actual) version of something, such as a

    hardware platform, an operating system, a storage device or a network resource. Simply

    stated, Virtualization is a technique that allows you to run more than one server (or another

    infrastructure component) on the same hardware. For example, one server is the host server

    and controls access to the physical server's resources. One or more virtual servers then

    run within containers provided by the host server.

    The hypervisor software (which controls access to the physical hardware) may run on "bare

    metal," allowing a user to run multiple operating systems on the same physical hardware, or

    the hypervisor may run on top of a host operating system, allowing other operating systems

    to run within this host OS on the same physical hardware. The latter inherently gives

    lower performance, since it has to go through more layers of software to access the physical

    resources.

    Cloud Computing is the delivery of computing as a service rather than a product, whereby

    shared resources, software, and information are provided to computers and other devices as a

    metered service over a network (typically the Internet). Cloud Computing may look like

    Virtualization because it appears that your application is running on a virtual server detached

    from any reliance or connection to a single physical host. However, Cloud Computing can be

    better described as a service where Virtualization is part of a physical infrastructure.

    Cloud Computing builds on top of a virtualized infrastructure (compute, storage, network) by

    using standardization and automated delivery to provide service management. This makes

    monitoring the virtualized resources and the responsible deployment of these resources

    possible.

    Virtualization is simply a preparation for the delivery of IT in a very powerful way within an

    organization. It removes a level of complexity for end users, one that should never have been

    there in the first place, while primarily cutting costs for the organization. Cloud Computing

    ties directly to the way an organization uses its IT resources and enables a quantum change in

    the experience throughout the organization, easing the administrative burden of deploying,

    managing, and delivering IT resources, and providing the ability for end users to request and use

    virtualized IT resources (or perhaps even an application or a business process where the end

    user does not have to be aware of the underlying IT resources being used).

    4. Why Virtualize?

    There seems to be a bit of confusion about the benefits of server virtualization, with many

    tending to focus on cost savings. As a district that has been running a virtual infrastructure

    for some time, I can honestly say that virtualization is not so much about saving money

    (although you certainly will) as it is about better resource utilization, more reliability, and

    greater flexibility.

    Better resource utilization

    There is no question that most of our servers are doing nothing about 90% of the time. This

    becomes quite obvious with even a cursory glance at historical utilization data for any given

    server. It would seem that the obvious solution for this would be to simply run more

    applications on each one, but the reality of this is that the more apps you install on one OS,

    the more unreliable it becomes (especially if it's a Microsoft product). So, what we all do

    instead is buy a new machine every time we want a new app that we think is "critical,"

    because we want to be sure it has its own sandbox to play in.

    So, we find ourselves with racks and racks of servers consuming more and more space (at a

    cost), all generating heat which we must cool (at a cost), all pulling more and more power (at

    a cost), all requiring more and more time to manage (at a cost).

    Virtualization offers a way to safely put more than one operating system (or virtual server)

    on one piece of hardware by isolating each operating system from any others running on the

    box. Essentially, you are establishing a bunch of sandboxes on one piece of hardware. If one

    of the virtual servers crashes, hard or soft, it will have no impact on any of the others on the

    box. Hardware resources are better used since, rather than having 10 independent servers

    running at 10 percent utilization each, you can have 2 running 5 virtual servers each for a

    total of 50 percent utilization per box. Better still, if designed properly (more on that later),

    should a virtual server require more resources, it can easily and instantly be moved to a

    machine that offers more, often live and transparently to its end users.
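
    The consolidation arithmetic above (ten servers at 10 percent each versus two hosts at 50 percent each) can be sketched in a few lines of Python; the numbers and the `consolidate` helper are illustrative, not a sizing tool.

```python
def consolidate(n_servers, util_per_server, vms_per_host):
    """Return (hosts_needed, utilization_per_host) after packing
    n_servers standalone workloads onto virtualization hosts."""
    hosts = n_servers // vms_per_host          # 10 servers / 5 VMs per host -> 2 hosts
    util = util_per_server * vms_per_host      # 10% per VM x 5 VMs -> 50% per host
    return hosts, util

hosts, util = consolidate(10, 0.10, 5)
print(hosts, util)  # 2 0.5
```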

    More reliability

    It's important to note, before any discussion on reliability comes into play, that a virtualized

    operating system is, by nature, relatively hardware agnostic. This means that it (its image,

    that is) can easily be moved from one piece of hardware to another, even if that hardware

    is of completely different design, without modification and often without shutting the system

    down (i.e., live migration). This can dramatically reduce the time required to bring a failed

    system back up, as the typical 2-4 hour OS reinstall phase can be eliminated.

    However, virtualization, by its very design, dramatically increases the impact of a single

    system failure, as a variety of services will be impacted when multiple virtual servers go

    down simultaneously. This is where the "designed properly" comes into play.

    Properly designed, virtualized infrastructure can provide far greater reliability and less down

    time than an infrastructure of individual machines could ever achieve. The keys to the design

    are redundancy and shared storage. All individual pieces of server hardware must be

    redundantly linked to a properly designed SAN or other shared storage device, where all

    virtual machine images are stored for a user to realize the true reliability benefits of server

    virtualization.

    Greater flexibility

    Finally, and perhaps most importantly, virtualization provides flexibility, or what I like to

    call, an agile infrastructure. I've already described some of that flexibility in the reliability

    section - moving virtual machines live from box to box. Imagine, for example that one of

    your virtual machines is consuming too many resources on the box it's on, let's say processor

    time. People are complaining that things are slowing down. You say, "no problem," and

    move the virtual machine to a box with a free processor. Or, you take advantage of virtual

    SMP, and simply pin another processor to the virtual machine. Ever needed to add more RAM

    to a server because a process has outgrown its allocation? No problem - simply allocate more

    RAM to the process. No pulling the server, no extended periods of down time.

    Deployments are equally easy. Once you have one image of an OS, you know that it will

    work on any hardware, so you never have to sit and watch an installer run, followed by

    endless online updates again. Simply copy the image and fire it up - you're ready to install

    that new app in less than 5 minutes. How much are you paying people to do this sort of thing,

    when they could be working on more important things, like innovating!

    5. What is Hypervisor?

    A hypervisor, also called a virtual machine manager, is a program that allows multiple

    operating systems to share a single hardware host. Each operating system appears to have

    the host's processor, memory, and other resources all to itself. However, the hypervisor is

    actually controlling the host processor and resources, allocating what is needed to each

    operating system in turn and making sure that the guest operating systems (called virtual

    machines) cannot disrupt each other.

    6. What is VMware HA?

    VMware vSphere High Availability (HA) provides easy-to-use, cost effective high

    availability for applications running in virtual machines. In the event of physical server

    failure, affected virtual machines are automatically restarted on other production servers with

    spare capacity. In the case of operating system failure, vSphere HA restarts the affected

    virtual machine on the same physical server.

    With 2 ESX Servers, a SAN for shared storage, Virtual Center, and a VMHA license, if a

    single ESX Server fails, the virtual guests on that server will move over to the other server

    and restart, within seconds. This feature works regardless of the operating system used or

    whether the applications support it.

    7. How does VMware HA work?

    VMware HA continuously monitors all virtualized servers in a resource pool and detects

    physical server and operating system failures. To monitor physical servers, an agent on each

    server maintains a heartbeat with the other servers in the resource pool such that a loss of

    heartbeat automatically initiates the restart of all affected virtual machines on other servers in

    the cluster.

    VMware HA leverages shared storage and, for FibreChannel and iSCSI SAN storage, the

    VMware vStorage Virtual Machine File System (VMFS) to enable the other servers in the

    cluster to safely access the virtual machine for failover. When used with VMware Distributed

    Resource Scheduler (DRS), VMware HA automates the optimal placement of virtual

    machines on other servers in the cluster after server failure.

    To monitor operating system failures, VMware HA monitors heartbeat information provided

    by the VMware Tools package installed in each virtual machine in the VMware HA cluster.

    Failures are detected when no heartbeat is received from a given virtual machine within a

    user-specified time interval.
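
    The heartbeat-timeout detection described above can be sketched as follows; the host names, timestamps, and `detect_failures` helper are hypothetical and only mirror the idea of declaring a failure when no heartbeat arrives within the user-specified interval.

```python
def detect_failures(last_heartbeat, now, timeout):
    """Return names whose last heartbeat is older than `timeout` seconds.

    last_heartbeat: dict mapping a host or VM name to the timestamp of
    its most recent heartbeat; timeout: user-specified interval.
    """
    return sorted(name for name, ts in last_heartbeat.items()
                  if now - ts > timeout)

beats = {"esx01": 100.0, "esx02": 70.0, "vm-web": 95.0}
print(detect_failures(beats, now=101.0, timeout=30))  # ['esx02']
```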

    VMware HA ensures that sufficient resources are available in the resource pool at all times to

    be able to restart virtual machines on different physical servers in the event of server failure.

    VMware HA is easily configured for a cluster through VMware vCenter Server.

    8. How to get the HA Status?

    Go to the summary tab of your cluster and click Cluster Status.

    Because vCenter Server 5.0 uses Fault Domain Manager (FDM) agents for High Availability

    (HA), rather than Automated Availability Manager (AAM) agents, the troubleshooting process

    has changed.

    There are other architectural and feature differences that also affect the troubleshooting process:

    One main log file (/var/log/fdm.log) and syslog integration

    Datastore Heartbeat

    Reduced cluster configuration time (approximately 1 minute total, as opposed to 1 minute per host)

    FDM does not require that DNS be configured on the hosts, nor does FDM rely on other

    Layer 3 to 7 network services

    9. What is a Slot?

    A slot is a logical representation of the memory and CPU resources that satisfy the

    requirements for any powered-on virtual machine in the cluster. In other words, a slot size is

    the worst case CPU and Memory reservation scenario in a cluster.
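
    As a rough sketch (assuming CPU and memory reservations are tracked per powered-on VM), the worst-case slot size can be computed like this; the reservation figures are made up.

```python
def slot_size(vm_reservations):
    """Slot = worst-case CPU and worst-case memory reservation among
    the powered-on VMs in the cluster (computed independently)."""
    cpu = max(cpu for cpu, _ in vm_reservations)
    mem = max(mem for _, mem in vm_reservations)
    return cpu, mem

# (cpu_mhz, mem_mb) reservations for three powered-on VMs
vms = [(500, 1024), (2000, 512), (1000, 4096)]
print(slot_size(vms))  # (2000, 4096)
```

Note that the CPU and memory maxima may come from different VMs, which is why the slot can be larger than any single VM's reservation pair.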

    10. Where are the HA configuration and log files in vSphere 4.1?

    1. To check the currently installed version of the HA agent, run:

    rpm -qa | grep aam

    2. The HA agent is installed under:

    /opt/vmware/aam

    3. To check the HA nodes log, run:

    less /var/log/vmware/aam/aam_config_util_listnodes.log

    4. To check the HA agent log, run:

    less /var/log/vmware/aam/agent/run.log

    5. To check the HA install and current configuration log, run:

    less /var/log/vmware/aam/aam_config_util_install.log

    11. Where are the HA configuration and log files in vSphere 5.0?

    /var/log/fdm.log

    12. What is AAM in HA?

    AAM is Legato's Automated Availability Manager. Prior to vSphere 5.0, VMware HA

    was built on Legato's Automated Availability Manager (AAM) software, re-engineered to work

    with VMs. VMware's vCenter agent (vpxa) interfaces with the VMware HA agent, which acts

    as an intermediary to the AAM software. From vSphere 5.0, HA uses an agent called FDM

    (Fault Domain Manager).

    13. What is Fault Domain Manager?

    Among its many new features, vSphere 5.0 introduces a complete rewrite of vSphere HA

    clustering. Replacing its earlier technology for vSphere HA (AAM) is a new host agent

    called the Fault Domain Manager, or FDM. This agent is responsible for monitoring host

    availability and the power state of protected VMs, with the mission of restarting

    protected VMs when a host or VM fails.

    14. What are prerequisites for VMware HA to work?

    Shared storage for the VMs running in HA cluster

    Essentials plus, Standard, Advanced, Enterprise and Enterprise Plus Licensing

    VMHA enabled Cluster

    At least two shared heartbeat data stores between the hosts in VMware HA cluster

    Management network redundancy, to avoid frequent isolation responses in case of temporary network issues (preferred, not a requirement)

    15. What is the maximum number of hosts supported per HA cluster?

    The maximum number of hosts in an HA/DRS cluster is 32.

    16. What is VMware DRS?

    VMware DRS (Distributed Resource Scheduler) is a utility that balances computing

    workloads with available resources in a virtualized environment. VMware DRS

    dynamically balances computing capacity across a collection of hardware resources

    aggregated into logical resource pools, continuously monitoring utilization across resource

    pools and intelligently allocating available resources among the virtual machines based on

    pre-defined rules that reflect business needs and changing priorities. When a virtual machine

    experiences an increased load, VMware DRS automatically allocates additional resources by

    redistributing virtual machines among the physical servers in the resource pool.

    With VMware DRS, users define the rules for allocation of physical resources among virtual

    machines. The utility can be configured for manual or automatic control. Resource pools can

    be easily added, removed or reorganized. If desired, resource pools can be isolated between

    different business units. If the workload on one or more virtual machines drastically changes,

    VMware DRS redistributes the virtual machines among the physical servers. If the overall

    workload decreases, some of the physical servers can be temporarily powered-down and the

    workload consolidated.

    Other features of VMware DRS include:

    Dedicated infrastructures for individual business units

    Centralized control of hardware parameters

    Continuous monitoring of hardware utilization

    Optimization of the use of hardware resources as conditions change

    Prioritization of resources according to application importance

    Downtime-free server maintenance

    Optimization of energy efficiency

    Reduction of cooling costs.

    17. What is VMware DPM?

    VMware Distributed Power Management (DPM) is a pioneering new feature of VMware

    DRS that continuously monitors resource requirements in a VMware DRS cluster. When

    resource requirements of the cluster decrease during periods of low usage, VMware DPM

    consolidates workloads to reduce power consumption by the cluster. When resource

    requirements of workloads increase during periods of higher usage, VMware DPM brings

    powered-down hosts back online to ensure service levels are met.

    VMware DPM allows IT organizations to:

    Cut power and cooling costs in the datacenter

    Automate management of energy efficiency in the datacenter
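
    Conceptually, the consolidation decision can be sketched as a sizing function: keep only as many hosts powered on as current demand requires, with headroom. This is a toy model, not VMware DPM's actual algorithm; the threshold and capacities are invented.

```python
import math

def dpm_hosts_needed(total_demand, host_capacity, high_water=0.8):
    """Hosts to keep powered on so average utilization stays below the
    high_water mark; the rest can be powered down until demand rises."""
    return max(1, math.ceil(total_demand / (host_capacity * high_water)))

print(dpm_hosts_needed(100, 50))  # 3 hosts stay on during normal load
print(dpm_hosts_needed(20, 50))   # 1 host on during a quiet period
```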

    18. How Does VMware DRS Work?

    VMware DRS allocates and balances resources in a DRS cluster. It does this dynamically,

    continuously monitoring for changes in utilization.

    Resource pools are used to allocate resources to a set of virtual machines in a DRS cluster.

    When load increases in a VM, DRS will redistribute VMs to other physical servers if

    required to ensure all VMs get their correct share of resources.

    When a VM is powered on, DRS is used to decide which server it is best placed on.

    If a VM is running and DRS decides that it needs to be placed on another physical server to

    ensure its requirements are met, vMotion is used.

    This allows the VM to be moved without powering it off or losing service, allowing

    resources to be balanced.
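
    The rebalancing idea can be illustrated with a toy greedy loop. This is not VMware's actual DRS algorithm (which weighs reservations, shares, and migration cost), just a sketch of moving VMs off the busiest host while each move narrows the load gap.

```python
def rebalance(hosts):
    """hosts: dict mapping host name -> list of per-VM loads.
    Greedily move the smallest VM off the most loaded host while
    the move narrows the gap between the hottest and coldest host."""
    def load(h):
        return sum(hosts[h])
    while True:
        hot = max(hosts, key=load)
        cold = min(hosts, key=load)
        gap = load(hot) - load(cold)
        if not hosts[hot]:
            break
        vm = min(hosts[hot])
        if vm >= gap:              # moving it would not narrow the gap
            break
        hosts[hot].remove(vm)
        hosts[cold].append(vm)
    return hosts

h = rebalance({"esx01": [40, 30, 20], "esx02": [10]})
print({k: sum(v) for k, v in h.items()})  # {'esx01': 50, 'esx02': 50}
```

In a real cluster each "move" would be a vMotion, so migration cost and placement rules matter; the loop only illustrates the balancing goal.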

    19. What are the requirements for FT?

    Here are the requirements for the host.

    The vLockstep technology used by FT requires the physical processor extensions added

    to the latest processors from Intel and AMD. In order to run FT, a host must have an FT-

    capable processor, and both hosts running an FT VM pair must be in the same processor

    family.

    CPU clock speeds between the two hosts must be within 400MHz of each other to ensure

    that the hosts can stay in sync.

    All hosts must be running the same build of ESX or ESXi and be licensed for FT,

    which is only included in the Advanced, Enterprise, and Enterprise Plus editions of

    vSphere.

    Hosts used together as an FT cluster must share storage for the protected VMs (FC,

    iSCSI, or NAS).

    Hosts must be in an HA-enabled cluster.

    Network and storage redundancy is recommended to improve reliability; use NIC

    teaming and storage multipathing for maximum reliability.

    Each host must have a dedicated NIC for FT logging and one for VMotion with speeds

    of at least 1Gbps. Each NIC must also be on the same network.

    Host certificate checking must be enabled in vCenter Server (configured in vCenter

    Server Settings > SSL Settings).

    Here are the requirements for the VMs.

    The VMs must be single-processor (no vSMPs).

    All VM disks must be "thick" (fully allocated) and not "thin." If a VM has a thin disk, it

    will be converted to thick when FT is enabled.

    There can be no nonreplayable devices (USB devices, serial/parallel ports, sound cards, a

    physical CD-ROM, a physical floppy drive, physical RDMs) on the VM.

    Most guest OSs are supported, with the following exceptions that apply only to hosts

    with third-generation AMD Opteron processors (i.e., Barcelona, Budapest, Shanghai):

    Windows XP (32-bit), Windows 2000, and Solaris 10 (32-bit).

    In addition to these requirements, there are also many limitations when using FT, and they

    are as follows.

    Snapshots must be removed before FT can be enabled on a VM. In addition, it is not

    possible to take snapshots of VMs on which FT is enabled.

    N_Port ID Virtualization (NPIV) is not supported with FT. To use FT with a VM you

    must disable the NPIV configuration.

    Paravirtualized adapters are not supported with FT.

    Physical RDM is not supported with FT. You may only use virtual RDMs.

    FT is not supported with VMs that have CD-ROM or floppy virtual devices connected to

    a physical or remote device. To use FT with a VM with this issue, remove the CD-ROM

    or floppy virtual device or reconfigure the backing with an ISO installed on shared

    storage.

    The hot-plug feature is automatically disabled for fault tolerant VMs. To hot-plug devices

    (when either adding or removing them), you must momentarily turn off FT, perform the

    hot plug, and then turn FT back on.

    EPT/RVI is automatically disabled for VMs with FT turned on.

    IPv6 is not supported; you must use IPv4 addresses with FT.

    VMotion is supported on FT-enabled VMs, but you cannot VMotion both the primary

    and secondary VMs at the same time. SVMotion is not supported on FT-enabled VMs.

    In vSphere 4.0, FT was compatible with DRS, but the automation level was disabled for

    FT-enabled VMs. Starting in vSphere 4.1, you can use FT with DRS when the EVC

    feature is enabled. DRS will perform initial placement on FT-enabled VMs and also will

    include them in the cluster's load-balancing calculations. If EVC in the cluster is disabled,

    the FT-enabled VMs are given a DRS automation level of "disabled". When a primary

    VM is powered on, its secondary VM is automatically placed, and neither VM is moved

    for load-balancing purposes.
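
    A few of the host-side requirements above lend themselves to a pre-flight check. The sketch below is hypothetical (the field names are not a VMware API) and covers only the build, processor-family, and 400MHz clock rules.

```python
def ft_compatible(a, b):
    """Return a list of problems for a pair of hosts; an empty list means
    the pair passes these particular FT host checks."""
    problems = []
    if a["build"] != b["build"]:
        problems.append("hosts run different ESX/ESXi builds")
    if a["cpu_family"] != b["cpu_family"]:
        problems.append("processor families differ")
    if abs(a["clock_mhz"] - b["clock_mhz"]) > 400:
        problems.append("CPU clocks differ by more than 400 MHz")
    return problems

h1 = {"build": "623860", "cpu_family": "Intel-Xeon-5600", "clock_mhz": 2930}
print(ft_compatible(h1, dict(h1, clock_mhz=2660)))  # [] -> within 400 MHz
print(ft_compatible(h1, dict(h1, clock_mhz=2000)))  # clock rule violated
```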

    20. What are prerequisites for VMware DRS to work?

    Hosts that are added to a DRS cluster must meet certain requirements to use cluster features

    successfully.

    Shared Storage

    Ensure that the managed hosts use shared storage. Shared storage is typically on a SAN, but

    can also be implemented using NAS shared storage.

    Shared VMFS Volume

    Configure all managed hosts to use shared VMFS volumes.

    Place the disks of all virtual machines on VMFS volumes that are accessible by source and

    destination hosts.

    Processor Compatibility

    To avoid limiting the capabilities of DRS, you should maximize the processor compatibility

    of source and destination hosts in the cluster.

    vMotion transfers the running architectural state of a virtual machine between underlying

    ESX/ESXi hosts. vMotion compatibility means that the processors of the destination host

    must be able to resume execution using the equivalent instructions where the processors of

    the source host were suspended. Processor clock speeds and cache sizes might vary, but

    processors must come from the same vendor class (Intel versus AMD) and the same

    processor family to be compatible for migration with vMotion.

    vCenter Server provides features that help ensure that virtual machines migrated with

    vMotion meet processor compatibility requirements. These features include:

    Enhanced vMotion Compatibility (EVC): You can use EVC to help ensure vMotion compatibility for the hosts in a cluster. EVC ensures that all hosts in a cluster present

    the same CPU feature set to virtual machines, even if the actual CPUs on the hosts

    differ. This prevents migrations with vMotion from failing due to incompatible CPUs.

    CPU compatibility masks: vCenter Server compares the CPU features available to a virtual machine with the CPU features of the destination host to determine whether to

    allow or disallow migrations with vMotion. By applying CPU compatibility masks to

    individual virtual machines, you can hide certain CPU features from the virtual

    machine and potentially prevent migrations with vMotion from failing due to

    incompatible CPUs.
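
    Conceptually, EVC's baseline is the intersection of the hosts' CPU feature sets: VMs only ever see features every host can provide, so vMotion cannot strand a VM on a missing instruction. The feature names below are illustrative.

```python
def evc_baseline(host_features):
    """host_features: one set of CPU feature flags per host in the cluster.
    The baseline presented to VMs is the common subset."""
    return set.intersection(*host_features)

hosts = [
    {"sse2", "sse3", "ssse3", "sse4_1", "aes"},   # newer host
    {"sse2", "sse3", "ssse3", "sse4_1"},          # older host without AES-NI
]
print(sorted(evc_baseline(hosts)))  # ['sse2', 'sse3', 'sse4_1', 'ssse3']
```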

    vMotion Requirements

    To enable the use of DRS migration recommendations, the hosts in your cluster must be part

    of a vMotion network. If the hosts are not in the vMotion network, DRS can still make initial

    placement recommendations.

    To be configured for vMotion, each host in the cluster must meet the following requirements:

    The virtual machine configuration file for ESX/ESXi hosts must reside on a VMware Virtual Machine File System (VMFS).

    vMotion does not support raw disks or migration of applications clustered using Microsoft Cluster Service (MSCS).

    vMotion requires a private Gigabit Ethernet migration network between all of the vMotion-enabled managed hosts. When vMotion is enabled on a managed host,

    configure a unique network identity object for the managed host and connect it to the

    private migration network.

    21. How does vMotion work?

    There are three underlying actions happening in vMotion.

    First:-

    The entire state of a virtual machine is encapsulated by a set of files stored on shared storage

    such as a Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage

    (NAS).

    VMware vStorage VMFS allows multiple ESX hosts to access the same virtual machine files

    concurrently.

    Second:-

    The active memory and precise execution state of the virtual machine is rapidly transferred

    over a high speed network, allowing the virtual machine to instantaneously switch from

    running on the source ESX host to the destination ESX host.

    VMotion keeps the transfer period imperceptible to users by keeping track of on-going

    memory transactions in a bitmap.

    Once the entire memory and system state has been copied over to the target ESX host,

    VMotion suspends the source virtual machine, copies the bitmap to the target ESX host, and

    resumes the virtual machine on the target ESX host.

    This entire process takes less than two seconds on a Gigabit Ethernet network.

    Third:-

    The networks being used by the virtual machine are also virtualized by the underlying ESX

    host, ensuring that even after the migration, the virtual machine network identity and network

    connections are preserved.

    VMotion manages the virtual MAC address as part of the process. Once the destination

    machine is activated, VMotion pings the network router to ensure that it is aware of the new

    physical location of the virtual MAC address.

    Since the migration of a virtual machine with VMotion preserves the precise execution state,

    the network identity, and the active network connections, the result is zero downtime and no

    disruption to users.
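
    The iterative pre-copy described above can be simulated in a few lines. The dirty-page fraction and stop threshold below are invented parameters; the point is only that repeated passes shrink the remaining dirty set until the brief suspend-and-switch becomes cheap.

```python
def vmotion_precopy(num_pages, dirty_fraction=0.1, stop_threshold=8):
    """Simulate pre-copy rounds. Returns (rounds, pages left to copy
    during the final suspend) before resuming on the target host."""
    to_copy, rounds = num_pages, 0
    while to_copy > stop_threshold:
        rounds += 1
        # each full pass copies to_copy pages; meanwhile the guest
        # redirties a fraction of them, tracked in the bitmap
        to_copy = int(to_copy * dirty_fraction)
    return rounds, to_copy

print(vmotion_precopy(1024))  # (3, 1): three passes, then 1 page at switchover
```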

    22. What are vSphere Standard Switches?

    vSphere standard switches are abstracted network devices. A standard switch can bridge

    traffic internally between virtual machines in the same port group and link to external

    networks.

    You can use standard switches to combine the bandwidth of multiple network adapters and

    balance communications traffic among them. You can also configure a standard switch to

    handle physical NIC failover.

    A vSphere standard switch models a physical Ethernet switch. The default number of logical

    ports for a standard switch is 120. You can connect one network adapter of a virtual machine

    to each port. Each uplink adapter associated with a standard switch uses one port. Each

    logical port on the standard switch is a member of a single port group. Each standard switch

    can also have one or more port groups assigned to it.

    When two or more virtual machines are connected to the same standard switch, network

    traffic between them is routed locally. If an uplink adapter is attached to the standard switch,

    each virtual machine can access the external network that the adapter is connected to.
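
    The facts above (a default of 120 logical ports, each connected port belonging to exactly one port group, and local traffic between VMs on the same switch) can be modeled with a small class; `StandardSwitch` and its methods are illustrative, not a VMware API.

```python
class StandardSwitch:
    def __init__(self, ports=120):      # 120 is the stated default port count
        self.free_ports = ports
        self.port_groups = {}           # port group name -> attached VM names

    def connect(self, vm, group):
        """Attach one VM network adapter; each connection uses one port
        and places that port in exactly one port group."""
        if self.free_ports == 0:
            raise RuntimeError("no logical ports left on this switch")
        self.free_ports -= 1
        self.port_groups.setdefault(group, []).append(vm)

    def same_switch(self, vm_a, vm_b):
        """Traffic between two VMs on the same switch is routed locally."""
        attached = {v for vms in self.port_groups.values() for v in vms}
        return vm_a in attached and vm_b in attached

vs = StandardSwitch()
vs.connect("web01", "Production")
vs.connect("db01", "Production")
print(vs.same_switch("web01", "db01"), vs.free_ports)  # True 118
```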

    23. What is vSphere Distributed Switch?

    The vSphere Distributed Switch (VDS) simplifies virtual machine networking by enabling

    you to set up virtual machine access switching for your entire datacenter from a centralized

    interface. VDS provides:

    Simplify Virtual Machine Network Configuration

    Simplify provisioning, administration and monitoring of virtual networking across multiple

    hosts and clusters from a centralized interface.

    Central control of virtual switch port configuration, portgroup naming, filter settings, etc.

    Link Aggregation Control Protocol (LACP) - Negotiates and automatically configures

    link aggregation between vSphere hosts and access layer physical switch

    Network health check capabilities to verify vSphere to physical network configuration

    Enhanced network monitoring and troubleshooting capabilities

    The vSphere Distributed Switch provides rich monitoring and troubleshooting capabilities to

    your networking staff

    Support for RSPAN and ERSPAN protocols for remote network analysis

    IPFIX Netflow version 10

    SNMPv3 support

    Rollback and Recovery for Patching and Updating the Network Configuration

    Templates to enable backup and restore for virtual networking configuration

    Network based coredump (Netdump) to debug hosts without local storage

    Support advanced vSphere Networking Features

    The vSphere Distributed Switch provides the building blocks for many advanced networking

    features in a vSphere environment.

    Core building block for Network I/O Control (NIOC)

    Maintains network runtime state for virtual machines as they move across multiple hosts,

    enabling inline monitoring and centralized firewall services

    Supports third-party virtual switch extensions such as the Cisco Nexus 1000V and IBM

    5000v virtual switches

    Support for SR-IOV (Single Root I/O Virtualization) to enable low latency and high I/O

    workloads

    BPDU filter to prevent virtual machines from sending BPDUs to the physical switch

    24. What is the difference between standard switch (vSwitch) and distributed switch (dvSwitch)?

    Both types of switches provide the following:
    - Forward L2 frames
    - Segment traffic into VLANs
    - Use and understand 802.1Q VLAN encapsulation
    - Support more than one uplink (NIC teaming)
    - Shape outbound (TX) traffic

    These features are available only with the Distributed Switch:
    - Shaping of inbound (RX) traffic
    - A central, unified management interface through vCenter Server
    - Support for Private VLANs (PVLANs)
    - Potential customization of data and control planes

    vSphere 5.0 provides these improvements to Distributed Switch functionality:
    - Increased visibility of inter-virtual-machine traffic through NetFlow
    - Improved monitoring through port mirroring (dvMirror)
    - Support for LLDP (Link Layer Discovery Protocol), a vendor-neutral protocol

    25. What is Port Group?

    You can think of port groups as templates for creating virtual ports with particular sets

    of specifications. You can create a maximum of 512 port groups on a single host.

    Port groups are important particularly for VMotion. To understand why, consider what

    happens as virtual machines migrate to new hosts using VMotion.

    Port groups make it possible to specify that a given virtual machine should have a particular

    type of connectivity on every host on which it might run.

    Port groups are user-named objects that contain configuration information to provide

    persistent and consistent network access for virtual Ethernet adapters:

    - Virtual switch name
    - VLAN IDs and policies for tagging and filtering
    - Teaming policy
    - Layer 2 security options
    - Traffic shaping parameters

    In short, port group definitions capture all the settings for a switch port. Then, when you

    want to connect a virtual machine to a particular kind of port, you simply specify the name of

    a port group with an appropriate definition.

    Port groups may specify different host-level parameters on different hosts (teaming
    configurations, for example). But the key element is that the result is a consistent view of the
    network for a virtual machine connected to that port group, whichever host is running it.

    Note: Port groups do not necessarily correspond one-to-one to VLAN groups. It is possible,

    and even reasonable, to assign the same VLAN ID to multiple port groups. This would be

    useful if, for example, you wanted to give different groups of virtual machines different

    physical Ethernet adapters in a NIC team for active use and for standby use, while all the

    adapters are on the same VLAN.
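The "template" idea above can be made concrete with a small sketch. This is a hypothetical Python illustration (the dictionary layout and names are ours, not vSphere objects): a VM connected to the same port group gets identical network-facing settings no matter which host it lands on after vMotion.

```python
# Hypothetical sketch: a port group as a named template of port settings
# that every host applies identically (names and fields are illustrative).
PORT_GROUPS = {
    "Production": {"vswitch": "vSwitch0", "vlan": 10,
                   "teaming": "portid", "tx_shaping": None},
}

def connect(vm, portgroup, host):
    # Whichever host runs the VM, its port is stamped from the same
    # port group definition; only the host-local detail differs.
    settings = dict(PORT_GROUPS[portgroup])
    settings["host"] = host
    return settings

before = connect("vm1", "Production", host="esx01")
after = connect("vm1", "Production", host="esx02")   # after vMotion
# All network-facing settings are identical on both hosts:
assert {k: v for k, v in before.items() if k != "host"} == \
       {k: v for k, v in after.items() if k != "host"}
```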

    26. How many ports can we have on a vSwitch?

    ESX 2.x allowed only 32 virtual machines per vSwitch. ESX 3.x raised the maximum
    number of ports to 1016. In ESX 4.x, you can change the number of ports to 24, 56, 120, 248,
    504, 1016, 2040, or 4088.

    What might seem odd about these numbers is that each is exactly eight less than what you
    might expect (32, 64, 128, 256, 512, 1024, 2048, and 4096). So what happened to the other
    eight ports? Those eight ports are there, but they are reserved by the VMkernel for
    background monitoring processes.
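The pattern is easy to verify with a one-liner; this is just illustrative arithmetic, not anything from a VMware tool:

```python
# Each configurable vSwitch port count is a power of two minus the
# 8 ports the VMkernel reserves for background monitoring.
reserved = 8
raw_sizes = [32, 64, 128, 256, 512, 1024, 2048, 4096]
usable = [n - reserved for n in raw_sizes]
print(usable)  # [24, 56, 120, 248, 504, 1016, 2040, 4088]
```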

    27. What are the three port groups present in ESX4 server networking ?

    1) Virtual Machine Port Group - Used for Virtual Machine Network

    2) Service Console Port Group - Used for Service Console Communications

    3) VMKernel Port Group - Used for VMotion, iSCSI, NFS Communications

    28. VMware vSphere 5 License types?

    Standard: HA, FT, vMotion, Storage vMotion, vShield Endpoint, Replication

    Enterprise: Standard + DRS, DPM, Storage APIs, Virtual Serial Port Concentrator

    Enterprise Plus: Enterprise + Distributed Switch, Storage DRS and Profile-Driven Storage, Host Profiles and Auto Deploy, Storage I/O Control and Network I/O

    Control

    29. How to add license?

    You can add any number of licenses to the vSphere 5.x inventory. When assigning licenses
    in 5.x products, you create a relationship between an asset and a license key. Each asset
    can be licensed by one and only one license key, or it can remain unlicensed in Evaluation
    Mode.

    Note: To perform these steps, your vSphere Client needs to be connected to the vCenter

    Server.

    Adding License Keys

    To add licenses:

    1. Log in to the vSphere Client.
    2. Click Home.
    3. Under the Administration section, click the Licensing icon.
    4. Click Manage vSphere Licenses.
    5. Enter the license keys in the Enter new vSphere license keys field (one per line).
    6. Include labels for new license keys as necessary.
    7. Click Add License Keys.

    After clicking Add License Keys, you can review the license keys you added, capacity

    counts, expiration dates, and labels associated with the license keys.

    8. Click Next to assign the license keys.

    Assigning License Keys

    To assign licenses to the vCenter Server or the ESXi host:

    1. Log in to the vSphere Client.
    2. Click Home.
    3. Under the Administration section, click the Licensing icon.
    4. Choose Evaluation Mode and expand the list. Find the product you want to license.
    5. Right-click the product and click Change License Key.
    6. Assign a key from the list entered previously in the Manage License window.
    7. Click OK.
    8. Verify that the product is now licensed.

    30. VMware path selection policies?

    These pathing policies can be used with VMware ESXi 5.x and ESXi/ESX 4.x:

    Most Recently Used (MRU): Selects the first working path, discovered at system boot

    time. If this path becomes unavailable, the ESXi/ESX host switches to an alternative path

    and continues to use the new path while it is available. This is the default policy for

    Logical Unit Numbers (LUNs) presented from an Active/Passive array. ESXi/ESX does
    not revert to the previous path when that path returns; it remains on the working path until
    that path itself fails.

    Note: The preferred flag, while sometimes visible, is not applicable to the MRU pathing

    policy and can be disregarded.

    Fixed (Fixed): Uses the designated preferred path flag, if it has been configured.

    Otherwise, it uses the first working path discovered at system boot time. If the ESXi/ESX

    host cannot use the preferred path or it becomes unavailable, the ESXi/ESX host selects

    an alternative available path. The host automatically returns to the previously-defined

    preferred path as soon as it becomes available again. This is the default policy for LUNs

    presented from an Active/Active storage array.

    Round Robin (RR): Uses an automatic path selection rotating through all available

    paths, enabling the distribution of load across the configured paths.

    - For Active/Passive storage arrays, only the paths to the active controller are used in
      the Round Robin policy.
    - For Active/Active storage arrays, all paths are used in the Round Robin policy.

    Note: This policy is not currently supported for Logical Units that are part of a Microsoft

    Cluster Service (MSCS) virtual machine.

    Fixed path with Array Preference: The VMW_PSP_FIXED_AP policy was introduced in

    ESXi/ESX 4.1. It works for both Active/Active and Active/Passive storage arrays that

    support Asymmetric Logical Unit Access (ALUA). This policy queries the storage array for

    the preferred path based on the array's preference. If no preferred path is specified by the

    user, the storage array selects the preferred path based on specific criteria.

    Note: The VMW_PSP_FIXED_AP policy has been removed from ESXi 5.0. For ALUA

    arrays in ESXi 5.0, the MRU Path Selection Policy (PSP) is normally selected but some

    storage arrays need to use Fixed.
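The key behavioral difference between MRU and Fixed (whether the host fails back when a recovered path reappears) can be sketched as a toy simulation. This is illustrative Python, not VMware's pathing code; the path names mimic the vmhba naming convention:

```python
# Illustrative simulation (not VMware code) of how MRU and Fixed differ
# when a failed path comes back.
def select_path(policy, paths, current, preferred=None):
    """paths: dict of path name -> up (True) / down (False)."""
    if policy == "Fixed" and preferred and paths.get(preferred):
        return preferred                     # Fixed always fails back
    if current and paths.get(current):
        return current                       # MRU: stay on the working path
    return next(p for p, up in paths.items() if up)  # otherwise, any live path

paths = {"vmhba1:C0:T0:L0": True, "vmhba2:C0:T0:L0": True}
# The original path fails: both policies move to the alternate.
paths["vmhba1:C0:T0:L0"] = False
mru = select_path("MRU", paths, current="vmhba1:C0:T0:L0")
fixed = select_path("Fixed", paths, current="vmhba1:C0:T0:L0",
                    preferred="vmhba1:C0:T0:L0")
# The original path recovers: Fixed fails back, MRU stays put.
paths["vmhba1:C0:T0:L0"] = True
assert select_path("MRU", paths, current=mru) == "vmhba2:C0:T0:L0"
assert select_path("Fixed", paths, current=fixed,
                   preferred="vmhba1:C0:T0:L0") == "vmhba1:C0:T0:L0"
```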

    31. What is swap size?

    When you power on a VM, a memory swap file is created that can be used in lieu of physical

    host memory if an ESX host exhausts all of its physical memory because it is overcommitted.

    These files are created equal in size to the amount of memory assigned to a VM, minus any

    memory reservations (default is 0) that a VM may have set on it (i.e., a 4 GB VM with a 1

    GB reservation will have a 3 GB vswp file created).
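The arithmetic from the example can be written as a tiny helper; this is a hypothetical function for illustration, not a VMware API:

```python
# Swap file size = configured memory minus the memory reservation
# (sizes in MB; a hypothetical helper, not a VMware API).
def vswp_size_mb(configured_mb, reservation_mb=0):
    return max(configured_mb - reservation_mb, 0)

print(vswp_size_mb(4096, 1024))  # 3072: the 3 GB .vswp from the example
print(vswp_size_mb(4096))        # 4096: no reservation, full-size swap file
```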

    32. What is ballooning?

    Ideally, a VM from which memory has been reclaimed should perform as if it had been

    configured with less memory. ESX Server uses a ballooning technique to achieve such

    predictable performance by coaxing the guest OS into cooperating with it when possible.

    When the ESX host's machine memory is scarce, or when a VM hits a limit, the kernel needs
    to reclaim memory and prefers ballooning over swapping. The balloon driver is
    installed inside the guest OS as part of the VMware Tools installation and is also known as
    the vmmemctl driver.

    When the ESX kernel wants to reclaim memory, it instructs the balloon driver to inflate. The

    balloon driver then requests memory from the guest OS. When there is enough memory

    available, the guest OS will return memory from its free list. When there isn't enough
    memory, the guest OS has to use its own memory-management techniques to decide
    which pages to reclaim and, if necessary, page them out to its swap or page file.

    In the background, the ESX kernel frees up the machine memory page that corresponds to the

    physical machine memory page allocated to the balloon driver. When there is enough

    memory reclaimed, the balloon driver will deflate after some time returning physical memory

    pages to the guest OS again.

    This process also decreases the Host Memory Usage parameter.

    Ballooning is effective only if the guest has available space in its swap or page file, because
    used memory pages need to be swapped out in order to allocate the page to the balloon
    driver. Ballooning can therefore lead to high guest memory swapping.
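The guest-side decision described above (serve the balloon from the free list first, page out only for the remainder) can be reduced to a toy calculation. This is a simulation for illustration, not the vmmemctl driver:

```python
# Toy simulation (not the vmmemctl driver) of balloon inflation: the guest
# satisfies the balloon from its free list first and only then pages out.
def inflate_balloon(target_pages, free_pages):
    from_free = min(target_pages, free_pages)
    paged_out = target_pages - from_free   # must go to the guest's swap
    return from_free, paged_out

# Plenty of free memory: no guest-level swapping needed.
print(inflate_balloon(100, free_pages=500))  # (100, 0)
# Free list too small: the guest must page out the remainder.
print(inflate_balloon(100, free_pages=30))   # (30, 70)
```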

    33. What is thin provisioning?

    When creating a virtual disk file, by default VMware ESXi/ESX uses a thick type of virtual

    disk. The thick disk pre-allocates all of the space specified during the creation of the disk.

    For example, if you create a 10 megabyte disk, all 10 megabytes are pre-allocated for that

    virtual disk.

    In contrast, a thin virtual disk does not pre-allocate all of the space. Blocks in the VMDK file

    are not allocated and backed by physical storage until they are written during the normal

    course of operation. A read to an unallocated block returns zeroes, but the block is not

    backed with physical storage until it is written.
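The same idea can be demonstrated with an ordinary sparse file, which is a reasonable stand-in for a thin-provisioned disk (assuming a filesystem with sparse-file support; the file name is purely illustrative):

```python
# A thin-provisioned disk behaves like a sparse file: the logical size is
# fixed up front, but storage is allocated only for blocks actually written.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "thin.vmdk")  # illustrative name
size = 10 * 1024 * 1024                    # a "10 MB disk"
with open(path, "wb") as f:
    f.truncate(size)                       # set logical size, allocate nothing
    f.seek(4096)
    f.write(b"guest data")                 # the first real write allocates a block

st = os.stat(path)
print(st.st_size)                          # 10485760: the logical (thin) size
print(st.st_blocks * 512)                  # far smaller: actual allocation
```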

    34. What is FT?

    Fault Tolerance (FT) is a new feature in vSphere that takes VMware's High Availability technology to the next level by providing continuous protection for a virtual machine (VM)

    in case of a host failure. It is based on the Record and Replay technology that was introduced

    with VMware Workstation, which lets you record a VM's activity and later play it back.

    The feature works by creating a secondary VM on another ESX host that shares the same

    virtual disk file as the primary VM and then transferring the CPU and virtual device inputs

    from the primary VM (record) to the secondary VM (replay) via a FT logging NIC so it is in

    sync with the primary and ready to take over in case of a failure. While both the primary and

    secondary VMs receive the same inputs, only the primary VM produces output such as disk

    writes and network transmits. The secondary VM's output is suppressed by the hypervisor and is not on the network until it becomes the primary VM, so essentially both VMs function

    as a single VM.

    35. What is difference between HA and FT?

    VMware Fault Tolerance is a high-availability feature that can be used within a VMware

    High Availability cluster. However, high availability is not synonymous with fault tolerance;

    there are meaningful differences between the two terms. Each setup requires different

    available resources and will affect virtual machines differently.

    The key difference between VMware's Fault Tolerance (FT) and High Availability (HA)

    products is interruption to virtual machine (VM) operation in the event of an ESX/ESXi host

    failure. Fault-tolerant systems instantly transition to a new host, whereas high-availability

    systems will see the VMs fail with the host before restarting on another host.

    VMware High Availability

    VMware High Availability should be used to maintain uptime on important but non-mission-

    critical VMs. While HA cannot prevent VM failure, it will get VMs back up and running

    with very little disturbance to the virtual infrastructure. Consider the value of HA for host

    failures that occur in the early hours of the morning, when IT is not immediately available to

    resolve the problem.

    In addition to tending to VMs during ESX/ESXi host failure, VMware High Availability can

    monitor and restart a VM, ensuring the machine is capable of restarting on a new host with

    enough resources.

    VMware Fault Tolerance

    VMware FT instantly moves VMs to a new host via vLockstep, which keeps a secondary

    VM in sync with the primary, ready to take over at any second, like a Broadway understudy.

    The VM's instructions and instruction sequence are the actor's lines, which pass to the

    understudy on a dedicated server backbone network. Heartbeats ping between the star and

    understudy on this backbone as well, for instantaneous detection of a failure.

    36. Difference between HA and vMotion?

    vMotion and HA are not related and do not depend on each other. DRS has a
    dependency on vMotion, but HA does not. HA is used so that, in the event a host fails, your
    virtual machines restart on another host in the cluster. vMotion allows you to move a

    virtual machine from one host to another while it is running without service

    interruption. Ideally you will utilize vMotion, HA and DRS within your cluster to achieve a

    well-balanced VI environment.

    37. What is the difference between ESX and ESXi?

    VMware ESX Architecture. In the original ESX architecture, the virtualization kernel

    (referred to as the vmkernel) is augmented with a management partition known as the

    console operating system (also known as COS or service console). The primary purpose of

    the Console OS is to provide a management interface into the host. Various VMware

    management agents are deployed in the Console OS, along with other infrastructure service

    agents (e.g. name service, time service, logging, etc). In this architecture, many customers

    deploy other agents from 3rd parties to provide particular functionality, such as hardware

    monitoring and system management. Furthermore, individual admin users log into the

    Console OS to run configuration and diagnostic commands and scripts.

    VMware ESXi Architecture. In the ESXi architecture, the Console OS has been

    removed and all of the VMware agents run directly on the vmkernel. Infrastructure

    services are provided natively through modules included with the vmkernel. Other authorized

    3rd-party modules, such as hardware drivers and hardware monitoring components, can run

    in vmkernel as well. Only modules that have been digitally signed by VMware are allowed

    on the system, creating a tightly locked-down architecture. Preventing arbitrary code from

    running on the ESXi host greatly improves the security of the system.

    Capability: Service Console
      VMware ESX: The Service Console is a standard Linux environment through which a user
      has privileged access to the VMware ESX kernel. This Linux-based privileged access allows
      you to manage your environment by installing agents and drivers and executing scripts and
      other Linux-environment code.
      VMware ESXi: VMware ESXi is designed to make the server a computing appliance.
      Accordingly, VMware ESXi behaves more like firmware than traditional software. To provide
      hardware-like security and reliability, VMware ESXi does not support a privileged-access
      environment like the Service Console for management of VMware ESXi.

    Capability: CLI-Based Configuration
      VMware ESX: The VMware ESX Service Console has a host CLI through which VMware
      ESX can be configured. VMware ESX can also be configured using the vSphere CLI (vCLI).
      VMware ESXi: The vSphere CLI (vCLI) is a remote scripting environment that interacts with
      VMware ESXi hosts to enable host configuration through scripts or specific commands. It
      replicates nearly all the equivalent COS commands for configuring ESX.

    Capability: Scriptable Installation
      VMware ESX: VMware ESX supports scriptable installations through utilities like KickStart.
      VMware ESXi: VMware ESXi Installable does not support scriptable installations in the
      manner ESX does, at this time. VMware ESXi does support post-installation configuration
      scripts using vCLI-based configuration scripts.

    Capability: Boot from SAN
      VMware ESX: VMware ESX supports boot from SAN. Booting from SAN requires one
      dedicated LUN per server.
      VMware ESXi: VMware ESXi may be deployed as an embedded hypervisor or installed on a
      hard disk. In most enterprise settings, VMware ESXi is deployed as an embedded hypervisor
      directly on the server. This operational model does not require any local storage, and no SAN
      booting is required because the hypervisor image is directly on the server. The installable
      version of VMware ESXi does not support booting from SAN.

    Capability: Serial Cable Connectivity
      VMware ESX: VMware ESX supports interaction through a direct-attached serial cable to the
      VMware ESX host.
      VMware ESXi: VMware ESXi does not support interaction through a direct-attached serial
      cable to the VMware ESXi host at this time.

    Capability: SNMP
      VMware ESX: VMware ESX supports SNMP.
      VMware ESXi: VMware ESXi supports SNMP when licensed with vSphere Essentials,
      vSphere Essentials Plus, vSphere Standard, vSphere Advanced, vSphere Enterprise, or
      vSphere Enterprise Plus. The free version of VMware ESXi does not support SNMP.

    Capability: Active Directory Integration
      VMware ESX: VMware ESX supports Active Directory integration through third-party agents
      installed on the Service Console.
      VMware ESXi: VMware ESXi does not support Active Directory authentication of local users
      at this time.

    Capability: HW Instrumentation
      VMware ESX: Service Console agents provide a range of HW instrumentation on VMware
      ESX.
      VMware ESXi: VMware ESXi provides HW instrumentation through CIM Providers.
      Standards-based CIM Providers are distributed with all versions of VMware ESXi. VMware
      partners include their own proprietary CIM Providers in customized versions of VMware
      ESXi, available from either VMware's web site or the partner's web site, depending on the
      partner. Remote console applications like Dell DRAC, HP iLO, IBM RSA, and FSC iRMC S2
      are supported with ESXi.

    Capability: Software Patches and Updates
      VMware ESX: VMware ESX software patches and upgrades behave like traditional
      Linux-based patches and upgrades. The installation of a software patch or upgrade may
      require multiple system boots, as the patch or upgrade may have dependencies on previous
      patches or upgrades.
      VMware ESXi: VMware ESXi patches and updates behave like firmware patches and
      updates. Any given patch or update is all-inclusive of previous patches and updates: installing
      patch version n includes all updates included in patch versions n-1, n-2, and so forth.
      Furthermore, third-party components such as OEM CIM providers can be updated
      independently of the base ESXi component, and vice versa.

    Capability: VI Web Access
      VMware ESX: VMware ESX supports managing your virtual machines through VI Web
      Access. You can use VI Web Access to connect directly to the ESX host or to the VMware
      Infrastructure Client.
      VMware ESXi: VMware ESXi does not support web access at this time.

    Capability: Diagnostics and Troubleshooting
      VMware ESX: The VMware ESX Service Console can be used to issue commands that help
      diagnose and repair support issues with the server.
      VMware ESXi: VMware ESXi has several ways to enable support of the product: remote
      command sets such as the vCLI include diagnostic commands such as vmkfstools, resxtop,
      and vmware-cmd; the console interface of VMware ESXi (known as the DCUI, or Direct
      Console User Interface) has functionality to help repair the system, including restarting all
      management agents; and Tech Support Mode allows low-level access to the system so that
      advanced diagnostic commands can be issued.

    38. Difference between vSphere 4.1 and vSphere 5?

    Feature | vSphere 4.1 | vSphere 5.0
    Hypervisor | ESX & ESXi | ESXi only
    vMA | Yes (vMA 4.1) | Yes (vMA 5)
    HA agent | AAM (Automated Availability Manager) | FDM (Fault Domain Manager)
    HA host approach | Primary & secondary | Master & slave
    HA failure detection | Management network | Management network and storage communication
    HA log file | /etc/opt/vmware/AAM | /etc/opt/vmware/FDM
    DNS dependency | Yes | No
    Host UEFI boot support | No | Yes (boot from hard drives, CD/DVD drives, or USB media)
    Storage DRS | Not available | Yes
    VM affinity & anti-affinity | Available | Available
    VMDK affinity & anti-affinity | Not available | Available
    Profile-driven storage | Not available | Available
    VMFS version | VMFS-3 | VMFS-5
    vSphere Storage Appliance | Not available | Available
    iSCSI port binding GUI | CLI only (via ESXCLI) | Dependent hardware iSCSI and software iSCSI adapters, network configuration, and port binding configurable in a single dialog box in the vSphere Client
    Storage I/O Control | Fibre Channel | Fibre Channel & NFS
    Storage vMotion snapshot support | VMs with snapshots cannot be migrated using Storage vMotion | VMs with snapshots can be migrated using Storage vMotion
    Swap to SSD | No | Yes
    Network I/O Control | Yes | Yes, with enhancements
    ESXi firewall | Not available | Yes
    vCenter on Linux | Not available | vCenter Virtual Appliance
    vSphere full client | Yes | Yes
    vSphere Web Client | Yes | Yes, with many improvements
    VM hardware version | 7 | 8
    Virtual CPUs per VM | 8 vCPUs | 32 vCPUs
    Virtual machine RAM | 255 GB | 1 TB of vRAM
    VM swap file size | 255 GB | 1 TB
    Client-connected USB support | Not available | Yes
    Non-hardware-accelerated 3D graphics support | Not available | Yes
    UEFI virtual BIOS | Not available | Yes
    VMware Tools version | 4.1 | 5
    Multicore vCPUs | Not available | Yes (configured in VM settings)
    Mac OS guest support | Not available | Apple Mac OS X Server 10.6
    Smart card reader support for VMs | Not available | Yes
    Auto Deploy | Not available | Yes
    Image Builder | Not available | Yes
    VMs per host | 320 | 512
    Max logical CPUs per host | 160 | 160
    RAM per host | 1 TB | 2 TB
    Max RAM for Service Console | 800 MB | Not applicable (no Service Console)
    LUNs per server | 256 | 256
    Metro vMotion | Round-trip latencies of up to 5 milliseconds | Round-trip latencies of up to 10 milliseconds, providing better performance over long-latency networks
    Storage vMotion mechanism | Moves VM files using dirty-block tracking | Moves VM files using I/O mirroring, with better enhancements
    vSphere Distributed Switch | Yes | Yes, with enhancements such as deeper visibility into virtual machine traffic through NetFlow and improved monitoring and troubleshooting through SPAN and LLDP
    USB 3.0 support | No | Yes
    Hosts per vCenter | 1000 | 1000
    Powered-on VMs per vCenter Server | 10000 | 10000
    VMkernel | 64-bit | 64-bit
    Service Console | 64-bit | Not applicable (no Service Console)
    Licensing editions | Essentials, Essentials Plus, Standard, Advanced, Enterprise, Enterprise Plus | Essentials, Essentials Plus, Standard, Enterprise, Enterprise Plus

    39. Describe vSphere 5 Licensing?

    vSphere 5 changed licensing entitlements around CPU cores and memory use. It also
    introduced a small change to the entitlement process around what is known as virtual
    RAM, or vRAM.

    40. What is vRAM pool?

    vRAM, or virtual RAM, is the total memory configured to a virtual machine.
    Available pooled vRAM is equal to the sum of the vRAM entitlements for all VMware
    vSphere licenses of a single edition, managed by a single instance of VMware vCenter
    Server or by multiple instances of VMware vCenter Server in Linked Mode.
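The pooling rule is just a sum over the licenses of one edition. The sketch below is a hypothetical helper; the 96 GB per-license entitlement is an illustrative figure, not something stated in this FAQ:

```python
# Hypothetical helper: pooled vRAM is the sum of per-license entitlements
# for all keys of one edition managed by one vCenter instance.
def vram_pool_gb(license_keys, entitlement_gb):
    return len(license_keys) * entitlement_gb

# e.g. four CPU licenses at an assumed 96 GB vRAM entitlement each:
print(vram_pool_gb(["key1", "key2", "key3", "key4"], entitlement_gb=96))  # 384
```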

    41. What difference between VMFS 3 and VMFS 5?

    VMFS-3
    - Volume size: 64 TB
    - Raw Device Mapping size (virtual compatibility): 2 TB minus 512 bytes
    - Raw Device Mapping size (physical compatibility): 2 TB minus 512 bytes
    - Block size: up to 8 MB
    - File size (1 MB block size): 256 GB
    - File size (2 MB block size): 512 GB
    - File size (4 MB block size): 1 TB
    - File size (8 MB block size): 2 TB minus 512 bytes
    - Files per volume: approximately 30,720

    VMFS-5
    - Volume size: 64 TB
    - Raw Device Mapping size (virtual compatibility): 2 TB minus 512 bytes
    - Raw Device Mapping size (physical compatibility): 64 TB
    - Block size: 1 MB
    - File size: 2 TB minus 512 bytes
    - Files per volume: approximately 130,690
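The VMFS-3 file-size ceiling scales linearly with the block size, which the numbers above follow. A quick illustrative check (ignoring the "minus 512 bytes" at the 8 MB top end):

```python
# On VMFS-3 the maximum file size scales linearly with the block size:
# 1 MB block -> 256 GB, 2 MB -> 512 GB, 4 MB -> 1 TB, 8 MB -> ~2 TB.
GB = 1024 ** 3

def vmfs3_max_file_bytes(block_size_mb):
    # Approximation: the real 8 MB limit is 2 TB minus 512 bytes.
    return 256 * GB * block_size_mb

for bs in (1, 2, 4, 8):
    print(bs, "MB block ->", vmfs3_max_file_bytes(bs) // GB, "GB max file")
```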

    42. How to enable Tech Support mode in ESXi 3.5?

    ESXi 3.5 ships with the ability to run SSH, but this is disabled by default (and is not
    supported). If you just need to access the console of ESXi, you only need to perform
    steps 1 - 3.

    1. At the console of the ESXi host, press Alt-F1 to access the console window.
    2. Enter unsupported in the console and then press Enter. You will not see the text you
       type in.
    3. If you typed unsupported correctly, you will see the Tech Support Mode warning and a
       password prompt. Enter the password for the root login.

    You should then see the prompt ~ #.

    43. How to enable Tech Support mode in ESXi 4.1?

    Tech Support Mode (TSM) provides a command-line interface that can be used by the

    administrator to troubleshoot and correct abnormal conditions on VMware ESXi hosts.

    TSM can be accessed in two ways:
    - Logging in directly on the console of the ESXi server.
    - Logging in remotely via SSH.

    44. How to enable Tech Support mode using CLI?

    To enable/disable and start/stop the local ESXi Shell or local TSM from the local

    command line on the ESXi host:

    To start the ESXi Shell or local TSM, run the command:

    ESXi 5.x: vim-cmd hostsvc/start_esx_shell
    ESXi 4.1: vim-cmd hostsvc/start_local_tsm

    To disable the ESXi Shell or local TSM, run the command:

    ESXi 5.x: vim-cmd hostsvc/disable_esx_shell
    ESXi 4.1: vim-cmd hostsvc/disable_local_tsm

    To stop the ESXi Shell or local TSM, run the command:

    ESXi 5.x: vim-cmd hostsvc/stop_esx_shell
    ESXi 4.1: vim-cmd hostsvc/stop_local_tsm
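The version split above lends itself to a small dispatch helper. This is a hedged sketch of our own (the wrapper function is hypothetical; only the two vim-cmd subcommand families come from the commands listed above):

```python
# Hedged sketch: pick the right vim-cmd subcommand for the host version.
# The subcommand names come from the FAQ; the wrapper itself is hypothetical.
def tsm_command(esxi_major, action):
    """action is one of: start, disable, stop."""
    name = "esx_shell" if esxi_major >= 5 else "local_tsm"
    return ["vim-cmd", f"hostsvc/{action}_{name}"]

print(tsm_command(5, "start"))   # ['vim-cmd', 'hostsvc/start_esx_shell']
print(tsm_command(4, "stop"))    # ['vim-cmd', 'hostsvc/stop_local_tsm']
# On an actual host you would pass this list to subprocess.run().
```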

    45. What are the files that make a Virtual Machine?

    .vmx - Virtual Machine Configuration File
    .nvram - Virtual Machine BIOS
    .vmdk - Virtual Machine Disk File
    .vswp - Virtual Machine Swap File
    .vmsd - Virtual Machine Snapshot Database
    .vmsn - Virtual Machine Snapshot File
    .vmss - Virtual Machine Suspended State File
    vmware.log - Current Log File
    vmware-#.log - Old Log File

    The .vswp file is present only when the VM is powered on, and the .vmss file is present
    only when a VM is suspended.

    46. What is the .nvram file?

    This small file contains the Phoenix BIOS that is used as part of the boot process of the

    virtual machine. It is similar to a physical server that has a BIOS chip that lets you set

    hardware configuration options. A VM also has a virtual BIOS that is contained in the

    NVRAM file. The BIOS can be accessed when a VM first starts up by pressing the F2

    key. Whatever changes are made to the hardware configuration of the VM are then saved

    in the NVRAM file. This file is in binary format, and if deleted it will be automatically
    re-created when the VM is powered on.

    47. What is the .vmx file?

    This file contains all of the configuration information and hardware settings of the virtual

    machine. Whenever you edit the settings of a virtual machine, all of that information is

    stored in text format in this file. This file can contain a wide variety of information about

    the VM, including its specific hardware configuration (i.e., RAM size, network interface

    card info, hard drive info and serial/parallel port info), advanced power and resource

    settings, VMware tools options, and power management options. While you can edit this

    file directly to make changes to a VM's configuration it is not recommended that you do

    so unless you know what you are doing. If you do make changes directly to this file, it's a

    very good idea to make a backup copy of this file first.

    48. What are VMDK files?

    All virtual disks are made up of two files, a large data file equal to the size of the virtual

    disk and a small text disk descriptor file, which describes the size and geometry of the

    virtual disk file. The descriptor file also contains a pointer to the large data file as well as

    information on the virtual disk's drive sectors, heads, cylinders, and disk adapter type. In

    most cases these files will have the same name as the data file that it is associated with

    (i.e., myvm_1.vmdk and myvm_1-flat.vmdk). You can match the descriptor file to the
    data file by checking the Extent Description field to see which -flat, -rdm, or -delta file is
    linked to it.

    The three different types of virtual disk data files that can be used with virtual machines

    are covered below:

    The flat.vmdk file

    This is the default large virtual disk data file that is created when you add a

    virtual hard drive to your VM that is not an RDM. When using thick disks, this

    file will be approximately the same size as what you specify when you create your

    virtual hard drive. One of these files is created for each virtual hard drive that a VM

    has configured.

    The -delta.vmdk file

    These virtual disk data files are only used when snapshots are created of a virtual

    machine. When a snapshot is created, all writes to the original flat.vmdk are halted and it becomes read-only; changes to the virtual disk are then written to these delta files instead. The initial size of these files is 16 MB and they are grown as needed in 16

    MB increments as changes are made to the VM's virtual hard disk. Because these files are

    a bitmap of the changes made to a virtual disk, a single delta.vmdk file cannot exceed the size of the original flat.vmdk file. A delta file is created for each snapshot that you take of a VM, and their file names are incremented numerically (e.g., myvm-

    000001-delta.vmdk, myvm-000002-delta.vmdk). These files are automatically deleted

    when the snapshot is deleted, after their contents are merged back into the original flat.vmdk file.

    The -rdm.vmdk file

    This is the mapping file for the RDM that manages mapping data for the RDM

    device. The mapping file is presented to the ESX host as an ordinary disk file,

    available for the usual file system operations. However, to the virtual machine the

    storage virtualization layer presents the mapped device as a virtual SCSI device. The

    metadata in the mapping file includes the location of the mapped device (name

    resolution) and the locking state of the mapped device. In a directory listing these files

    appear to take up the same amount of disk space on the VMFS volume as the actual size

    of the LUN they are mapped to, but in reality their size is very small. One of these files

    is created for each RDM that is created on a VM.

    49. What is the .vswp file?

    When you power on a VM, a memory swap file is created that can be used in lieu of

    physical host memory if an ESX host exhausts all of its physical memory because it

    is overcommitted. These files are created equal in size to the amount of memory

    assigned to a VM, minus any memory reservation (default is 0) that the VM may have set

    on it (e.g., a 4 GB VM with a 1 GB reservation will have a 3 GB .vswp file created). These

    files are always created for virtual machines but only used if a host exhausts all of its

    physical memory. Because reading and writing VM memory to disk is much slower than

    using physical host RAM, your VMs will suffer degraded performance if they start using

    this file. These files can take up quite a large amount of disk space on your VMFS volumes,

    so ensure that you have adequate space available for them, as a VM will not power on if

    there is not enough room to create this file. These files are deleted when a VM is powered

    off or suspended.
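    The sizing rule above can be sketched as simple arithmetic; the values here are the hypothetical 4 GB / 1 GB example from the text:

```shell
# Expected .vswp size = configured VM memory minus the memory reservation.
# Hypothetical values matching the example above: 4 GB VM, 1 GB reservation.
mem_mb=4096
reservation_mb=1024
vswp_mb=$(( mem_mb - reservation_mb ))
echo "Expected .vswp size: ${vswp_mb} MB"   # prints: Expected .vswp size: 3072 MB
```

    A VM with a full reservation (reservation equal to configured memory) therefore gets a 0-byte swap file.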

    50. What is the .vmss file?

    This file is used when virtual machines are suspended and is used to preserve the

    memory contents of the VM so it can start up again where it left off. This file will be

    approximately the same size as the amount of RAM that is assigned to a VM (even empty

    memory contents are written). When a VM is brought out of a suspended state, the contents

    of this file are written back into the physical memory of the host server; however, the file

    is not automatically deleted until the VM is powered off (an OS reboot is not enough). If a

    previous suspend file exists when a VM is suspended again, this file is re-used instead of

    deleted and re-created. If this file is deleted while the VM is suspended, then the VM will

    start normally and not from a suspended state.

    51. What is the .vmsd file?

    This file is used with snapshots to store metadata and other information about each

    snapshot that is active on a VM. This text file is initially 0 bytes in size until a snapshot

    is created and is updated with information every time snapshots are created or deleted.

    Only one of these files exists regardless of the number of snapshots running, as they all

    update this single file. The snapshot information in this file consists of the name of the

    VMDK and vmsn file used by each snapshot, the display name and description, and the

    UID of the snapshot. Once your snapshots are all deleted this file retains old snapshot

    information but increments the snapshot UID to be used with new snapshots. It also

    renames the first snapshot to "Consolidate Helper," presumably to be used with

    consolidated backups.

    52. What is the .vmsn file?

    This file is used with snapshots to store the state of a virtual machine when a

    snapshot is taken. A separate .vmsn file is created for every snapshot that is created on a

    VM and is automatically deleted when the snapshot is deleted. The size of this file will

    vary based on whether or not you choose to include the VM's memory state with your

    snapshot. If you do choose to store the memory state, this file will be slightly larger than

    the amount of RAM that has been assigned to the VM, as the entire memory contents,

    including empty memory, is copied to this file. If you do not choose to store the

    memory state of the snapshot then this file will be fairly small (under 32 KB). This

    file is similar in nature to the .vmss that is used when VMs are suspended.

    53. What is the .log file?

    These are the files that are created to log information about the virtual machine and

    are often used for troubleshooting purposes. There will be a number of these

    files present in a VM's directory. The current log file is always named vmware.log, and up

    to six older log files are also retained with a number appended to their names (e.g.,

    vmware-2.log). A new log file is created either when a VM is powered off and back on or

    when the log file reaches its maximum defined size limit. The number of log files that are

    retained and the maximum size limit are both defined as VM advanced configuration

    parameters (log.keepOld and log.rotateSize).
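    Both parameters are set in the VM's .vmx file; the values below are illustrative (log.rotateSize is a size in bytes, and log.keepOld is the number of old logs to retain):

```
log.rotateSize = "100000"
log.keepOld = "6"
```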

    54. What is the .vmxf file?

    This file is a supplemental configuration file that is not used with ESX but is

    retained for compatibility purposes with Workstation. It is in text format and is used

    by Workstation for VM teaming where multiple VMs can be assigned to a team so they

    can be powered on or off, or suspended and resumed as a single object.

    55. How do you list the VMs registered on a host using the CLI?

    List All VMs on the Host

    # vim-cmd vmsvc/getallvms

    Get Information for a Specific VM

    # vim-cmd vmsvc/get.guest 30

    Get Configuration for a Specific VM

    # vim-cmd vmsvc/get.config 30

    Get Summary for a Specific VM

    # vim-cmd vmsvc/get.summary 30

    Get Current Power State of a Specific VM

    # vim-cmd vmsvc/power.getstate 30

    Power On a Specific VM

    # vim-cmd vmsvc/power.on 30

    Power Off a Specific VM (Hard)

    # vim-cmd vmsvc/power.off 30

    Shutdown a Specific VM

    # vim-cmd vmsvc/power.shutdown 30

    Reboot a Specific VM

    # vim-cmd vmsvc/power.reset 30

    List a Specific VM's Snapshots

    # vim-cmd vmsvc/get.snapshot 30

    Unregister a VM from an ESX Host

    # vim-cmd vmsvc/unregister 30

    Register a VM on an ESX Host

    # vim-cmd solo/registervm path/to/.vmx
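    The numeric argument in these commands (30) is the Vmid shown in the first column of getallvms output. As a sketch that runs only on an ESX/ESXi host, the two commands can be combined to report the power state of every registered VM:

```shell
# List every registered VM's ID and power state (ESX/ESXi host only).
# Vmid is the first column of getallvms output; NR>1 skips the header line.
for id in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
    echo -n "VM ${id}: "
    # power.getstate prints a header line, then the state; keep the last line.
    vim-cmd vmsvc/power.getstate "${id}" | tail -1
done
```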

    56. How many Hosts can we have in a Cluster in vCenter 5?

    vSphere 5 supports up to 32 hosts per cluster. HA itself does not limit the number of

    hosts in a cluster, and using more hosts results in proportionally less failover overhead

    (reserving N+1 capacity across 32 hosts costs less per host than across 8).

    Big clusters are good for DRS: more hosts means more scheduling opportunities.

    The maximum number of hosts accessing a single file is 8, which is a constraint in

    environments using linked clones, such as VMware View and vCloud Director.

    Other maximums in general: 256 LUNs and 1024 paths per host, 512 VMs per host, 3000 VMs per cluster.

    57. What are Resource Pool, Allocation and Priority?

    A VMware ESX Resource pool is a pool of CPU and memory resources. Inside the pool,

    resources are allocated based on the CPU and memory shares that are defined. This pool

    can have associated access control and permissions.

    58. What is PVLAN?

    A private VLAN is a technique in computer networking where a VLAN contains

    switch ports that are restricted, such that they can only communicate with a given

    "uplink". The restricted ports are called "private ports". Each private VLAN typically

    contains many private ports, and a single uplink. The uplink will typically be a port (or

    link aggregation group) connected to a router, firewall, server, provider network, or

    similar central resource.

    The switch forwards all frames received on a private port out the uplink port, regardless

    of VLAN ID or destination MAC address. Frames received on an uplink port are

    forwarded in the normal way (i.e., to the port hosting the destination MAC address, or to

    all VLAN ports for unknown destinations or broadcast frames). "Peer-to-peer" traffic is

    blocked. Note that while private VLANs provide isolation at the data link layer,

    communication at higher layers may still be possible.

    59. Which file is created during vMotion?

    A second vswp file gets created during a VMotion.

    60. Which versions of Microsoft SQL Server are supported for Windows vCenter?

    Microsoft SQL Server 2008 Standard (R2) - 64-bit

    Microsoft SQL Server 2008 Standard (R2) - 32-bit

    Microsoft SQL Server 2008 Express (R2) - 64-bit

    Microsoft SQL Server 2008 Express (R2 SP1) - 64-bit

    Microsoft SQL Server 2008 Enterprise (R2) - 64-bit

    Microsoft SQL Server 2008 Enterprise (R2) - 32-bit

    Microsoft SQL Server 2008 Datacenter Edition (SP2) -64-bit

    Microsoft SQL Server 2008 Standard Edition (SP2) -32-bit

    61. What do you understand by appliances?

    A virtual appliance is a virtual machine image designed to run on a virtualization

    platform (e.g., VirtualBox, Xen, VMware Workstation, Parallels Workstation).

    Virtual appliances are a subset of the broader class of software appliances. Installation of

    a software appliance on a virtual machine creates a virtual appliance. Like software

    appliances, virtual appliances are intended to eliminate the installation, configuration and

    maintenance costs associated with running complex stacks of software.

    62. What are VMware vCloud Director 1.5 Config Maximums?

    Virtual machines per vCloud Director: 20,000 (maximum number of VMs that may be

    resident in a vCloud instance)

    Powered-on VMs per vCloud Director: 10,000 (number of concurrently powered-on VMs

    permitted per vCloud instance)

    Virtual machines per vApp: 128 (maximum number of VMs that can reside in a single vApp)

    Hosts per vCloud Director: 2,000 (number of hosts that can be managed by a single

    vCloud instance)

    vCenter Servers per vCloud Director: 25 (number of vCenter servers that can be managed

    by a single vCloud instance)

    Users per vCloud Director: 10,000 (maximum number of users that can be managed by a

    single vCloud instance)

    Organizations per vCloud Director: 10,000 (maximum number of organizations that can be

    created in a single vCloud instance)

    vApps per organization: 500 (maximum number of vApps that can be deployed in a single

    organization)

    Virtual datacenters per vCloud Director: 5,000 (maximum number of virtual datacenters

    that can be created in a single vCloud instance)

    Datastores per vCloud Director: 1,024 (number of datastores that can be managed by a

    single vCloud instance)

    Networks per vCloud Director: 7,500 (maximum number of logical networks that can be

    deployed in a single vCloud instance)

    Catalogs per vCloud Director: 1,000 (maximum number of catalogs that can be created in

    a single vCloud instance)

    Media items per vCloud Director: 1,000 (maximum number of media items that can be

    created in a single vCloud instance)

    63. Explain the entire process of P2V conversion?

    Below are the steps you should take to prepare your server for conversion.

    1) Install the Converter application on the server being migrated. If you are using the Enterprise version you can do this remotely, but my preference is to install Converter

    directly onto the server, avoiding the potential complication of introducing another PC

    into the conversion process. If you have many machines to convert this is not always

    practical. The Converter application consists of two parts, the Agent component

    (Windows service) and the Manager component (front end GUI). If you are running

    this on the server directly you need both components; otherwise, if you are running it

    remotely only the Agent component is needed.

    2) Once you install the application on the server a reboot will be required if the server OS is Windows NT 4.0 or 2000. This is because a special driver is installed for the

    cloning process on those OSes; Windows XP and 2003 use the Volume Shadow

    Copy service instead. Also, it's best to use a local administrator account when logging

    into the server to install the application.

    3) The following Windows services must be running for Converter to work properly: Workstation, Server, TCP/IP Netbios Helper and Volume Shadow Copy (Windows XP/2003, can be set to manual, just not disabled). Also, disable Windows

    Simple File Sharing if your source server is running Windows XP.

    4) Make sure the VMware Converter Windows service is running.

    5) Ensure you have at least 200 MB free on your source server's C drive. Mirrored or striped volumes across multiple disks should be broken; hardware RAID is OK since

    it is transparent to the operating system. Converter sometimes has issues converting

    dynamic disks; if you experience problems with them, cold clone instead.

    6) Disable any antivirus software running on the source server.

    7) Shutdown any applications that are not needed on the server.

    8) Run chkdsk and defragment your source server's hard disks.

    9) Clean-up any temporary and unnecessary files on the source server. The less data that needs to be copied the better. This only applies when utilizing file level cloning (more

    on that later).

    10) Keep users off the server while cloning. Disable remote desktop and any shares if possible.

    11) Ensure required TCP/UDP ports are opened between the source server and VirtualCenter (VC) and VMware ESX. Even if you select VirtualCenter as your

    destination, the ports still need to be opened to the ESX server you choose. The

    source server first contacts VC to create the VM and then ESX to transfer the data to.

    Required ports are 443 and 902 (source to ESX/VC) and 445 and 139 (converter to

    source and source to Workstation/Server). These ports need to be opened on both OS

    firewalls and any network firewalls sitting between your source and destination

    servers.

    12) Ensure your network adapter speed/duplex matches your physical switch setting. This can have a dramatic effect on your conversion speed. When cold cloning it's best to

    set your physical switch port to Auto/Auto since this is what the Windows PE ISO

    will default to.

    13) If importing a VM or physical image the Windows version of the server running Converter must be equal to or greater than the source. So, if your source is Windows

    2003, the server running Converter cannot be Windows 2000.

    14) For cold cloning, the minimum memory requirement is 264 MB (it will not work with less); the recommended memory is 364 MB. Converter also utilizes

    a RAM disk if you have at least 296 MB of memory available.
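    Before starting a conversion, the port requirements in step 11 can be sanity-checked from the source server. This is a rough sketch using nc; the hostnames are placeholders you would replace with your own VC and ESX servers:

```shell
# Check that the required Converter ports are reachable from the source server.
# vc.example.com and esx.example.com are hypothetical placeholder names.
for target in vc.example.com:443 esx.example.com:443 esx.example.com:902; do
    host=${target%%:*}   # everything before the colon
    port=${target##*:}   # everything after the colon
    if nc -z -w 2 "$host" "$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port BLOCKED"
    fi
done
```

    Remember that 445 and 139 must also be open from the machine running Converter back to the source server.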

    Making the conversion

    With these steps complete, we're ready to get started. Start the Converter Manager

    application and click the Import Machine button to start the Converter Wizard. Select

    your Source server, in this example we will choose Physical Computer. Select This Local

    Machine if running Converter on the source server, otherwise enter the hostname/IP and

    login information of the server to be converted. At the Source Data screen you have the

    option to select your disk volumes and re-size them larger or smaller if needed. Make sure

    you do not select any small utility partitions created by your hardware installation. What

    you decide here will determine which disk cloning method is used to copy your source

    data. If you do not change your drive sizes or increase them, then a block-level clone will

    be performed. If you decrease the size of your drives by any amount then a file-level

    clone will be performed instead.

    When a block-level clone is performed, data is transferred from the source server disk to

    the destination server disk block-by-block. This method is faster but results in more data

    being copied (even empty disk blocks are copied). When a file-level clone is performed,

    data is instead transferred file-by-file, which is slower but results in less data being

    copied. So if you only have 5 GB of data on a 40 GB drive, then only the 5 GB is copied.

    The trade-off between the two methods, faster transfer speed versus reduced data size,

    often means the copy takes about the same time either way. One potential caveat

    with the file-level copy is if you have a server with a huge amount of small files, it can

    take days to copy the data, and will sometimes fail. I experienced a server with 200,000+

    2 K files in one directory which brought the conversion to a crawl. Once I removed these

    files it completed in a few hours.

    Next choose your destination server which is typically VirtualCenter (VC)/ESX. If you

    have a VC server managing a destination ESX server, it is best to choose the VC server

    first. Continue entering a VM name, host and datastore; at the Networks screen you can

    select one or more NICs and networks to connect to.

    My preference is to first connect the VM to an Internal Only vSwitch so it is isolated

    from the source server and I can power it on while the source server is still up. Once I

    verify that the newly created VM is functioning properly and I go through the post-clone

    procedures, I shut down the source server and move the VM to the same network that the

    source server was on.

    Finally select whether or not to install VMware Tools, enter any OS customization if

    necessary, select whether or not to power on the VM right after the conversion completes

    and click the Finish button to start the conversion process. Once the conversion starts you

    can monitor the progress in the task progress window.

    64. Difference between hot clone and cold clone?

    Hot cloning: Convert physical machines while they are still running

    Cold cloning: Convert physical machines using a boot CD (downloaded from the VMware site)

    65. What is VMware VMotion & Storage VMotion (SVMotion)?

    With VMotion, VM guests are able to move from one ESX Server to another with no

    downtime for the users. What is required is a shared SAN storage system between the

    ESX Servers and a VMotion license.

    Storage VMotion (or SVMotion) is similar to VMotion in the sense that it moves VM

    guests without any downtime. However, what SVMotion also offers is the capability to

    move the storage for that guest at the same time that it moves the guest. Thus, you could

    move a VM guest from one ESX server's local storage to another ESX server's local storage with no downtime for the end users of that VM guest.

    66. What are the three port groups present in ESX4 server networking?

    1. Virtual Machine Port Group - Used for Virtual Machine Network

    2. Service Console Port Group - Used for Service Console Communications

    3. VMKernel Port Group - Used for VMotion, iSCSI, NFS Communications

    67. What is the use of a Port Group?

    The port group segregates the type of communication.

    68. What is the type of communications which requires an IP address for sure?

    Service Console and VMKernel (VMotion and iSCSI); these communications do not

    happen without an IP address (whether shared or dedicated).

    69. In the ESX Server licensing features VMotion License is showing as Not used, why?

    Even though the license box is selected, it shows as "License Not Used" until you enable

    the VMotion option on a specific vSwitch.

    70. How the Virtual Machine Port group communication works?

    All the VMs configured in a VM port group are able to connect to the physical

    machines on the network. This port group enables communication between the vSwitch

    and the physical switch, connecting VMs to physical machines.

    71. What is a VLAN?

    A VLAN is a logical configuration on the switch port to segment the IP Traffic. For this

    to happen, the port must be trunked with the correct VLAN ID.

    72. Does the vSwitches support VLAN Tagging? Why?

    Yes, vSwitches support VLAN tagging. Otherwise, if the virtual machines in an ESX

    host were connected to different VLANs, we would need to install a separate physical NIC

    (and vSwitch) for every VLAN; that is the reason VMware included VLAN tagging for

    vSwitches. Every vSwitch supports up to 1016 ports, so in principle it could carry 1016

    VLANs if needed, though an ESX server does not support that many VMs.
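    For example, tagging a port group with a VLAN ID is done per port group with esxcfg-vswitch on the service console (the port-group name and VLAN ID here are illustrative):

```
# Tag the "VM Network" port group on vSwitch0 with VLAN 100 (illustrative values).
esxcfg-vswitch -v 100 -p "VM Network" vSwitch0

# Verify: lists vSwitches, port groups and their VLAN IDs.
esxcfg-vswitch -l
```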

    73. What is Promiscuous Mode on vSwitch ? What happens if it sets to accept?

    Accept: Placing a guest adapter in promiscuous mode causes it to detect all frames passed

    on the vSphere standard switch that are allowed under the VLAN policy for the port group

    that the adapter is connected to.

    Reject: Placing a guest adapter in promiscuous mode has no effect on which frames are

    received by the adapter.

    74. What is MAC address Changes? What happens if it is set to accept?

    Accept: Changing the MAC address from the Guest OS has the intended effect: frames to the

    new MAC address are received.

    Reject: If you set the MAC Address Changes to Reject and the guest operating system

    changes the MAC address of the adapter to anything other than what is in the .vmx

    configuration file, all inbound frames are dropped.

    If the Guest OS changes the MAC address back to match the MAC address in the .vmx

    configuration file, inbound frames are passed again.

    75. What is Forged Transmits? What happens if it is set to accept?

    Accept: No filtering is performed and all outbound frames are passed.

    Reject: Any outbound frame with a source MAC address that is different from the one currently set on the adapter is dropped.
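    On hosts running ESXi 5.x, these three security policies can also be set from the command line with esxcli; a sketch with an illustrative vSwitch name:

```
# Set all three Layer 2 security policies on a standard vSwitch (illustrative name).
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=false \
    --allow-mac-change=false \
    --allow-forged-transmits=false

# Show the resulting policy.
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```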

    76. What are the core services of VC?

    VM provisioning, Task Scheduling and Event Logging

    77. Can we do vMotion between two datacenters? If possible how it will be?

    Yes, we can do vMotion between two datacenters, but the mandatory requirement is that

    the VM be powered off.

    78. What is VC agent? And what service it is corresponded to? What are the minimum requirements for VC agent installation?

    VC agent is an agent installed on ESX server which enables communication between VC

    and ESX server. The daemon associated with it is called vmware-hostd , and the service

    which corresponds to it is called mgmt-vmware. In the event of VC agent failure, just

    restart the service by typing the following command at the service console:

    # service mgmt-vmware restart

    80. How can you edit VI Client Settings and VC Server Settings?

    Click Edit Menu on VC and Select Client Settings to change VI settings

    Click Administration Menu on VC and Select VC Management Server Configuration to

    Change VC Settings

    81. What are the files that make a Virtual Machine?

    .vmx - Virtual Machine Configuration File

    .nvram - Virtual Machine BIOS

    .vmdk - Virtual Machine Disk file

    .vswp - Virtual Machine Swap File

    .vmsd - Virtual Machine Sna