01-VM configuration (SR6602)

Contents

Configuring virtual machines
  About virtual machine hosting
    Architecture
    Supported vNICs
    Communication mechanisms
  Restrictions: Hardware compatibility with VMs
  Restrictions and guidelines: Disk requirements and Linux command usage
  VM configuration tasks at a glance
  Enabling VM capability
  Allocating CPU cores to the VM plane
  Allocating memory to the VM plane
  Installing a VM
  Starting a VM
  Resizing a VM
    Attaching a drive to a VM
    Detaching a drive from a VM
    Attaching an SR-IOV VF NIC to a VM
    Assigning an SR-IOV VF NIC to a VLAN
    Detaching an SR-IOV VF NIC from a VM
    Attaching a MACVtap NIC to a VM
    Detaching a MACVtap NIC from a VM
    Allocating CPU cores to a VM
    Binding vCPU cores to physical CPU cores
    Setting the maximum amount of memory that can be allocated to a VM
    Allocating memory to a VM
    Assigning an IP address to a physical NIC and setting the NIC mode
    Editing the XML file of a VM
  Maintaining a VM
    Enabling VM auto-startup
    Stopping a VM
    Suspending a VM
    Resuming a suspended VM
    Creating a snapshot for a VM
    Deleting VM snapshots
    Rolling back a VM
    Exporting a VM
    Uninstalling a VM
  Verifying and maintaining VMs


Configuring virtual machines

About virtual machine hosting

A virtual machine (VM) is a complete computer system that runs in an isolated environment, with full hardware functions simulated by software. A VM can provide the same services as a physical computer. When you create a VM, you assign it a portion of the CPU, memory, and hard disk resources of the physical device that hosts it. Like a physical computer, each VM can run its own operating system for service processing.

The device can act as an ICT converged gateway that integrates IT and CT. Through x86 virtualization technology, it supports VM deployment so that enterprise users can run services and exchange service data on the device.

Architecture

The device can host one or multiple VMs. The maximum number of VMs that the device can host depends on the device's CPU and memory usage. Figure 1 shows the logical architecture of the device.

Figure 1 Architecture of the device

As shown in Figure 1, the device architecture contains the following components:
• VMs—Deployed as needed on the device to offer new applications or services. The operating system of a VM is called the guest OS.
• Router—The routing module.


• Linux kernel.
• MACVLAN+Bridge—MACVLAN and Bridge modules of the kernel, which offer MACVtap interfaces and data forwarding services to VMs.
• MACVtap—A virtual interface provided by the MACVLAN module. MACVtap interfaces by default belong to VLAN 1.
• SR-IOV NIC—A NIC built with single root I/O virtualization (SR-IOV) technology to support hardware-based virtualization. The device has two SR-IOV NICs.
• PF—A physical function (PF) on an SR-IOV NIC. Typically, a physical NIC is a PF. PFs by default belong to VLAN 1.
• VF—SR-IOV technology virtualizes a PF into multiple virtual functions (VFs). VMs use the VFs virtualized from the PF as their virtual NICs. By default, the VFs do not belong to any VLAN.
• Hardware switch—The hardware switching module.
• LAN—Layer 2 Ethernet ports. By default, the ports are access ports.
• LAN (inner)—Virtual LAN interfaces that exist only when the device is in ICT mode. By default, the interfaces are XGE 0/0/32 and XGE 0/0/33. To use VMs, you must set the link type of these interfaces to trunk.
• MGE—The management Ethernet interface.
• WAN—Layer 3 Ethernet port.
• VLAN interface.

Supported vNICs

The VMs on the device can use MACVtap NICs and SR-IOV VF NICs for network connectivity.

MACVtap NIC

A MACVtap NIC provides software-based forwarding. It is slower than an SR-IOV VF NIC. However, you do not need to install any driver for MACVtap NICs.

The MACVLAN module of the Linux kernel offers eight MACVtap interfaces. You can assign them to VMs as MACVtap NICs. The Bridge module forwards service data of VMs to physical NICs through the MACVtap interfaces. The system reserves eight MAC addresses for the MACVtap NICs. When you attach a MACVtap NIC to a VM, you must assign one of these MAC addresses to the NIC.

SR-IOV VF NIC

An SR-IOV NIC provides hardware-based forwarding. It forwards traffic at higher speeds than MACVtap NICs. The device provides two SR-IOV NICs.

An SR-IOV VF NIC is a VF interface created on a physical SR-IOV NIC. You can create multiple SR-IOV VF NICs on an SR-IOV NIC and assign them to VMs. With an SR-IOV VF NIC, a VM can send service data directly to the physical NIC, and the physical NIC will forward the data.

NOTE: To use SR-IOV VF NICs for VM connectivity, you must install an SR-IOV VF NIC driver in the guest OS of the VMs. If the guest OS is not compatible with the driver, you can use only MACVtap NICs for VM connectivity.


Communication mechanisms

Communication scenarios

Table 1 Communication scenarios and used NICs

Scenario                     Ports or interfaces
VM-VM traffic forwarding     • Between MACVtap NICs
                             • Between a MACVtap NIC and an SR-IOV VF NIC
                             • Between SR-IOV VF NICs
VM-WAN traffic forwarding    • Between a MACVtap NIC and a WAN port
                             • Between an SR-IOV VF NIC and a WAN port
VM-LAN traffic forwarding    • Between a MACVtap NIC and a LAN port
                             • Between an SR-IOV VF NIC and a LAN port

VM-VM traffic forwarding

Figure 2 illustrates the forwarding path for the traffic between the MACVtap NICs of two VMs. The MACVLAN and Bridge modules forward the traffic.

Figure 2 MACVtap-MACVtap traffic forwarding

Figure 3 illustrates the forwarding path for the traffic from a MACVtap NIC on one VM to an SR-IOV VF NIC on another VM.
1. The source MACVtap NIC forwards the traffic to the MACVLAN and Bridge modules.
2. The MACVLAN and Bridge modules forward the traffic to the SR-IOV NIC through a PF interface.
3. The SR-IOV NIC forwards the traffic to the destination VF interface.


The path of SR-IOV VF-to-MACVtap traffic forwarding is the reverse of the MACVtap-to-SR-IOV VF traffic forwarding path.

Figure 3 MACVtap-SR-IOV VF traffic forwarding

Figure 4 illustrates the forwarding path for the traffic between the SR-IOV VF NICs of two VMs. The SR-IOV NIC forwards the traffic through VF interfaces.


Figure 4 SR-IOV VF-SR-IOV VF traffic forwarding

VM-WAN traffic forwarding

Figure 5 illustrates the forwarding path for the traffic from a MACVtap NIC on a VM to the WAN port.
1. The MACVtap NIC forwards the traffic to the MACVLAN and Bridge modules.
2. The MACVLAN and Bridge modules forward the traffic to the SR-IOV NIC through a PF.
3. The SR-IOV NIC forwards the traffic to the hardware switch through the internal LAN interface.
4. The hardware switch forwards the traffic to the routing module.
5. The routing module forwards the traffic to the WAN port through the VLAN interface.

The path of WAN-to-MACVtap traffic forwarding is the reverse of the MACVtap-to-WAN traffic forwarding path.


Figure 5 MACVtap-WAN traffic forwarding

Figure 6 illustrates the forwarding path for the traffic from an SR-IOV VF NIC on a VM to the WAN port.
1. The VM forwards the traffic to the SR-IOV NIC through a VF interface.
2. The SR-IOV NIC forwards the traffic to the hardware switch through the internal LAN interface.
3. The hardware switch forwards the traffic to the routing module.
4. The routing module forwards the traffic to the WAN port through the VLAN interface.

The path of WAN-to-SR-IOV VF traffic forwarding is the reverse of the SR-IOV VF-to-WAN traffic forwarding path.


Figure 6 SR-IOV VF-WAN traffic forwarding

VM-LAN traffic forwarding

Figure 7 illustrates the forwarding path for the traffic from a MACVtap NIC on a VM to a LAN port on the device.
1. The MACVtap NIC forwards the traffic to the MACVLAN and Bridge modules.
2. The MACVLAN and Bridge modules forward the traffic to the SR-IOV NIC through a PF.
3. The SR-IOV NIC forwards the traffic to the hardware switch through the internal LAN interface.
4. The hardware switch forwards the traffic to the LAN port.

The path of LAN-to-MACVtap traffic forwarding is the reverse of the MACVtap-to-LAN traffic forwarding path.


Figure 7 MACVtap-LAN traffic forwarding

Figure 8 illustrates the forwarding path for the traffic from an SR-IOV VF NIC on a VM to a LAN port on the device.
1. The VM forwards the traffic to the SR-IOV NIC through a VF interface.
2. The SR-IOV NIC forwards the traffic to the hardware switch through the internal LAN interface.
3. The hardware switch forwards the traffic to the LAN port.

The path of LAN-to-SR-IOV VF traffic forwarding is the reverse of the SR-IOV VF-to-LAN traffic forwarding path.


Figure 8 SR-IOV VF-LAN traffic forwarding

Restrictions: Hardware compatibility with VMs

VM commands are supported only by the SR6602-I and SR6602-IE routers. By default, VM capability is disabled.

Restrictions and guidelines: Disk requirements and Linux command usage

Make sure the disks used to deploy VMs have sufficient storage space and their file system format is EXT4.

To obtain help information for standard Linux commands, use the --help keyword. For example, to obtain help information for the virsh commands, enter virsh --help.

The virsh, ip, and ifconfig commands are standard Linux commands. In command syntax, required parameters are enclosed in angle brackets (< >), and optional parameters are enclosed in square brackets ([ ]).

VM configuration tasks at a glance

To configure and manage VMs, perform the following tasks:
1. Enabling VM capability
2. Allocating CPU cores to the VM plane


3. Allocating memory to the VM plane
4. Installing a VM
5. Starting a VM
6. Resizing a VM
   Attaching a drive to a VM
   Detaching a drive from a VM
   Attaching an SR-IOV VF NIC to a VM
   Assigning an SR-IOV VF NIC to a VLAN
   Detaching an SR-IOV VF NIC from a VM
   Attaching a MACVtap NIC to a VM
   Detaching a MACVtap NIC from a VM
   Allocating CPU cores to a VM
   Binding vCPU cores to physical CPU cores
   Setting the maximum amount of memory that can be allocated to a VM
   Allocating memory to a VM
   Assigning an IP address to a physical NIC
   Editing the XML file of a VM
7. Maintaining a VM
   Enabling VM auto-startup
   Stopping a VM
   Suspending a VM
   Resuming a suspended VM
   Creating a snapshot for a VM
   Deleting VM snapshots
   Rolling back a VM
   Exporting a VM
   Uninstalling a VM

Enabling VM capability

About this task

Perform this task to enable VM capability and allocate CPU and memory resources to the VM plane.

Restrictions and guidelines

Reboot the device for this task to take effect.

With VM capability, the device supports both routing and VM virtualization.

If you do not specify the vcpu-pool cpu-count or vmem-pool memory-size option, the default setting of that option applies.

Procedure
1. Enter system view.
   system-view
2. Enable VM capability.
   ict mode enable [ vcpu-pool cpu-count vmem-pool memory-size ]
   By default, VM capability is disabled.


   The default settings of the vcpu-pool cpu-count and vmem-pool memory-size options are as follows:
   The number of CPU cores allocated to the VM plane is the total number of CPU cores on the device minus 2.
   The amount of memory allocated to the VM plane is the total amount of memory on the device minus 3 GB.
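For example, the following hedged sketch enables VM capability and reserves four CPU cores and 8 GB of memory for the VM plane. The device name, prompts, core count, and memory size are illustrative, and the change requires a reboot:

   <Sysname> system-view
   [Sysname] ict mode enable vcpu-pool 4 vmem-pool 8
   [Sysname] quit
   <Sysname> reboot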

Allocating CPU cores to the VM plane

About this task

Perform this task to adjust the total number of CPU cores allocated to the VM plane.

Restrictions and guidelines

Among all CPU cores, the device allocates CPU cores to the VM plane in descending order of CPU core numbers. For example, suppose the device has eight CPU cores numbered 0 to 7. If you allocate five CPU cores to the VM plane, the device allocates the cores numbered 3 to 7 to the VM plane. When binding vCPU cores to physical CPU cores, you must specify the correct physical CPU core numbers.

The value range for the number of CPU cores allocated to the VM plane is 0 to the total number of CPU cores on the device minus 2.

Reboot the device for this task to take effect. If you allocate zero CPU cores to the VM plane, the VM plane is not accessible after the device reboots.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Allocate CPU cores to the VM plane.
   set vcpu-pool cpu-count
   By default, the number of CPU cores allocated to the VM plane is the total number of CPU cores on the device minus 2.
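For example, the following sketch allocates five CPU cores to the VM plane on a hypothetical eight-core device. The device name and prompts are illustrative, and a reboot is required for the change to take effect:

   <Sysname> system-view
   [Sysname] vmm
   [Sysname-vmm] set vcpu-pool 5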

Allocating memory to the VM plane

About this task

Perform this task to adjust the amount of memory allocated to the VM plane.

Restrictions and guidelines

Reboot the device for this task to take effect. If you allocate zero GB of memory to the VM plane, the VM plane is not accessible after the device reboots.

The maximum amount of memory you can allocate to the VM plane is the total amount of memory on the device minus 3 GB.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Set the amount of memory allocated to the VM plane.


set vmem-pool memory-size

By default, the amount of memory allocated to the VM plane is the total amount of memory on the device minus 3 GB.
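For example, the following sketch allocates 8 GB of memory to the VM plane, assuming the memory-size argument is specified in GB. The device name and prompts are illustrative, and the change takes effect after a reboot:

   <Sysname> system-view
   [Sysname] vmm
   [Sysname-vmm] set vmem-pool 8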

Installing a VM

About this task

The following VM installation methods are available:
• Install a VM by using a .pkg or .xml file—You can bulk-install VMs by using a .pkg or .xml VM file. The file contains all files that make up a VM. All VMs created from the file have the same parameter settings as the source VM, including their guest OS, CPU, and memory settings. After a VM is installed, you can tune its settings as needed.
• Install a VM by manually specifying its parameters—You can install VMs by specifying VM parameters at the CLI of the device. Each VM has its own parameter settings, including the guest OS, CPU, and memory settings.

Prerequisites

Use the qemu-img create command to create a disk image for a VM before you install the VM.

Restrictions and guidelines

If you are using a .pkg or .xml file to install a VM, first export an existing .pkg or .xml file as described in "Exporting a VM."

You can install a VM from a USB flash drive. If a USB flash drive is used, make sure the following requirements are met:
• The file system format of the USB flash drive is EXT4.
• You can use only a .pkg file, and you must store the file in the VmImages folder in the root directory of the USB flash drive. The folder name is case-sensitive.

The memory size required by a VM depends on the number of CPU cores allocated to it. For example, if you are using a VM to provide H3C vFW services, use the following guidelines when you specify memory for the VM:
• If the vFW requires only one CPU core, allocate 2 GB of memory to the VM.
• If the vFW requires two CPU cores, allocate 4 GB of memory to the VM.
• If the vFW requires four CPU cores, allocate 8 GB of memory to the VM.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Create a disk image.
   qemu-img create [ -f fmt ] filename [ size ]
4. Install a VM. Choose one of the following methods:
   Install a VM from a USB flash drive:
     Install the USB flash drive and reboot the device.
   Install a VM from the CLI. Choose one of the following methods:
     − Save a .pkg file on the device and execute the following command.
       import pkg-path
     − Save a .xml file on the device and execute the following command.


       virsh define < file >
     − Install a VM based on the specified parameters.

virsh define-by-cmd < domname > < vcpucount > < memsize > < vncport > < disksource > < disksubdriver > < disktargetbus > [ --cdromsource < string > ]
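As an illustration only, the following sketch creates a 20 GB qcow2 disk image and then defines a VM named vm1 with two vCPUs, 4096 MB of memory, and VNC port 5901 from that image. The VM name, file names, sizes, the memory unit, the qcow2 subdriver, and the virtio target bus are all assumptions, not values taken from this guide:

   [Sysname-vmm] qemu-img create -f qcow2 vm1.img 20G
   [Sysname-vmm] virsh define-by-cmd vm1 2 4096 5901 vm1.img qcow2 virtio --cdromsource centos7.iso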

Starting a VM

Restrictions and guidelines

To start a VM successfully, you must make sure the VM has sufficient memory. In addition to the memory for VMs, you must also make sure sufficient memory is available for the system to run.

If the system does not have sufficient memory, it automatically stops the VM that uses the most memory.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Start a VM.
   virsh start < domain >
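For example, to start a VM named vm1 (a hypothetical VM name; the prompt is illustrative):

   [Sysname-vmm] virsh start vm1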

Resizing a VM

After you install a VM, you can resize it as needed by adding or removing resources such as disks and memory.

Attaching a drive to a VM

About this task

Perform this task to attach a disk or CD-ROM drive to a VM.

Restrictions and guidelines

After you add a disk to a VM, you must partition and format that disk on the VM before you can use it.

You can mount only one disk to a vFW VM.

To change the startup disk or operating system for a VM, use the virsh edit command to change the boot order of the disks or the operating system image. The lower the boot order number, the higher the boot priority. The boot order number range is 0 to 99, and a value of 0 means the drive is not used for booting. By default, the boot order number of a disk is 1, and the boot order number of the operating system image is 8.

Some earlier guest operating systems do not support the --live keyword, so you cannot attach drives to a VM running such an OS. CentOS 7.4 and later support the --live keyword, so you can attach drives to a running VM.

Procedure
1. Enter system view.
   system-view


2. Enter VMM view.
   vmm
3. Create a disk image.
   qemu-img create [ -f fmt ] filename [ size ]
4. Attach a drive to a VM.
   virsh attach-disk < domain > < source > < target > [ --targetbus < string > ] [ --subdriver < string > ] [ --cache < string > ] [ --type < string > ] [ --config ] [ --live ]
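A hedged example that creates a 10 GB qcow2 image and attaches it to VM vm1 as target vdb over the virtio bus while the VM is running. All names and values are illustrative, and the --live keyword requires a guest OS that supports it:

   [Sysname-vmm] qemu-img create -f qcow2 vm1-data.img 10G
   [Sysname-vmm] virsh attach-disk vm1 vm1-data.img vdb --targetbus virtio --subdriver qcow2 --config --live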

Detaching a drive from a VM

Restrictions and guidelines

This task does not delete the image file of a detached drive. To delete the image file, execute the delete command in user view. For more information about this command, see file system management commands in Fundamentals Command Reference.

Some earlier guest operating systems do not support the --live keyword, so you cannot detach drives from a VM running such an OS. CentOS 7.4 and later support the --live keyword, so you can detach drives from a running VM.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Detach a drive from a VM.
   virsh detach-disk < domain > < target > [ --config ] [ --live ]

Attaching an SR-IOV VF NIC to a VM

About this task

An SR-IOV VF NIC is a VF interface created on a physical SR-IOV NIC. You can create multiple SR-IOV VF NICs on an SR-IOV NIC and assign them to VMs. With an SR-IOV VF NIC, a VM can send service data directly to the physical NIC, and the physical NIC will forward the data.

Restrictions and guidelines

When you attach an SR-IOV VF NIC to a VM, you must specify the PCIe address of the NIC. To view the PCIe addresses of SR-IOV VF NICs, execute the display sriov-vf-pciaddr command.

For an SR-IOV VF NIC to operate on a VM, you must install an SR-IOV VF NIC driver on the VM. If a VM does not have a driver integrated in its operating system, install one according to Linux requirements.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Attach an SR-IOV VF NIC to a VM.


virsh attach-sriov < domain > < pciaddr >
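For example, the following sketch looks up the available VF PCIe addresses and attaches one of them to VM vm1. The VM name and PCIe address are hypothetical:

   [Sysname-vmm] display sriov-vf-pciaddr
   [Sysname-vmm] virsh attach-sriov vm1 0000:03:10.0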

Assigning an SR-IOV VF NIC to a VLAN

About this task

By default, a VF interface is not in any VLANs. For a VM to communicate with external networks, you must assign its SR-IOV VF NIC to a VLAN and configure the VLAN interface as the gateway.

Restrictions and guidelines

If you set the VLAN ID of an SR-IOV VF NIC to 0, the SR-IOV VF NIC does not belong to any VLAN.

To change the VLAN of an SR-IOV VF NIC, you must first remove the SR-IOV VF NIC from the original VLAN.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Assign an SR-IOV VF NIC to a VLAN.
   ip link set DEVICE vf NUM vlan VLANID
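For example, to assign VF 0 of a physical NIC named eth1 to VLAN 10 (the NIC name, VF number, and VLAN ID are illustrative; use the ip link show command to identify the PF on your device):

   [Sysname-vmm] ip link set eth1 vf 0 vlan 10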

Detaching an SR-IOV VF NIC from a VM

Restrictions and guidelines

If you perform this task on a running VM, you must restart the VM for the configuration to take effect. If you perform this task on a stopped VM, the configuration takes effect after you start the VM.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Detach an SR-IOV VF NIC from a VM.
   virsh detach-sriov < domain > < pciaddr >

Attaching a MACVtap NIC to a VM

About this task

The MACVLAN module of the Linux kernel offers eight MACVtap interfaces. You can assign them as MACVtap NICs to VMs. The Bridge module forwards service data of VMs to physical NICs through the MACVtap interfaces.

The system reserves eight MAC addresses for the MACVtap NICs. When you attach a MACVtap NIC to a VM, you must assign a MAC address to the NIC. By default, a MACVtap interface belongs to VLAN 1. To use MACVtap NICs, you must configure VLAN-interface 1 as the gateway.

Restrictions and guidelines

A MACVtap NIC supports the following network modes:
• VEPA—In this mode, the MACVtap NICs of the same physical NIC can communicate with each other through an external switch.


• Bridge—In this mode, the MACVtap NICs of the same physical NIC can directly communicate with each other.

• Private—In this mode, the MACVtap NICs of the same physical NIC cannot communicate with each other.

By default, a MACVtap NIC operates in VEPA mode. However, the device supports only the bridge mode in the current software version. After you attach a MACVtap NIC to a VM, you must use the virsh edit command to change the NIC network mode to bridge.

When you attach a MACVtap NIC to a VM, you must assign a MAC address to the NIC. To view the MAC addresses reserved for MACVtap NICs, execute the display mac-for-vmminterface command.

Multiple MACVtap NICs cannot share a MAC address.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Attach a MACVtap NIC to a VM.
   virsh attach-interface < domain > < type > < source > [ --mac < string > ] [ --model < string > ] [ --config ] [ --live ]
   By default, a MACVtap NIC is in VEPA network mode.
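A hedged sketch that attaches a MACVtap NIC to VM vm1 by using one of the reserved MAC addresses, and then opens the VM's XML file to change the NIC network mode to bridge. The interface type direct, the source name macvtap1, the virtio model, and the MAC address are assumptions for illustration only; use display mac-for-vmminterface to find the MAC addresses that your device actually reserves:

   [Sysname-vmm] display mac-for-vmminterface
   [Sysname-vmm] virsh attach-interface vm1 direct macvtap1 --mac 0c:da:41:1d:10:01 --model virtio --config
   [Sysname-vmm] virsh edit vm1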

Detaching a MACVtap NIC from a VM

Restrictions and guidelines

When you detach a MACVtap NIC from a VM, you must specify the MAC address of the NIC. To view the MAC addresses of MACVtap NICs, execute the virsh domiflist command.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Detach a MACVtap NIC from a VM.
   virsh detach-interface < domain > < type > [ --mac < string > ] [ --config ] [ --live ]

Allocating CPU cores to a VM

About this task

This task sets the number of CPU cores allocated to a VM.

Restrictions and guidelines

Some earlier guest operating systems do not support the --live keyword, so you cannot allocate more CPU cores to a VM running such an OS. CentOS 7.4 and later support the --live keyword, so you can allocate more CPU cores to a running VM.

You cannot reduce the number of CPU cores allocated to a VM when the VM is running.


Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Set the number of CPU cores allocated to a VM.
   virsh setvcpus < domain > < count > [ --maximum ] [ --config ] [ --live ]
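For example, to allocate four CPU cores to VM vm1 both in its saved configuration and at runtime (the VM name and count are illustrative; --live requires guest OS support):

   [Sysname-vmm] virsh setvcpus vm1 4 --config --live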

Binding vCPU cores to physical CPU cores

About this task

Perform this task to bind the vCPU cores of a VM to physical CPU cores on the device.

Restrictions and guidelines

If you bind multiple vCPU cores of a VM to only one physical CPU core, the VM might fail to start up because of CPU resource conflict. As a best practice to ensure correct VM startup, bind the vCPU cores to different physical CPU cores.

To view the bindings between vCPU cores and physical CPU cores, execute the virsh vcpuinfo command.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Bind a vCPU core to a physical CPU core for a VM.
   virsh vcpupin < domain > [ --vcpu < number > ] [ --cpulist < string > ] [ --config ] [ --live ]
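For example, the following sketch binds vCPU 0 of VM vm1 to physical CPU core 6 and vCPU 1 to core 7. The core numbers are hypothetical; make sure the cores you specify belong to the VM plane:

   [Sysname-vmm] virsh vcpupin vm1 --vcpu 0 --cpulist 6 --config
   [Sysname-vmm] virsh vcpupin vm1 --vcpu 1 --cpulist 7 --config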

Setting the maximum amount of memory that can be allocated to a VM

Restrictions and guidelines

If you perform this task on a running VM, you must restart the VM for the configuration to take effect. If you perform this task on a stopped VM, the configuration takes effect after you start the VM.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Set the maximum amount of memory that can be allocated to a VM.
   virsh setmaxmem < domain > < size > [ --config ]

Allocating memory to a VM

1. Enter system view.


system-view

2. Enter VMM view.
   vmm
3. Allocate memory to a VM.
   virsh setmem < domain > < size > [ --config ] [ --live ]
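For example, the following sketch raises the maximum memory of VM vm1 to 4 GB and then allocates 2 GB to it. The values are illustrative and assume the size argument is interpreted in KiB, which is the usual virsh default; confirm the unit on your device before using these numbers:

   [Sysname-vmm] virsh setmaxmem vm1 4194304 --config
   [Sysname-vmm] virsh setmem vm1 2097152 --config --live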

Assigning an IP address to a physical NIC and setting the NIC mode

About this task

You can use VNC remote terminal software to log in to the GUI of a VM after the VM is installed. For this purpose, specify the VNC server address and VNC port number. The IP address specified in this task is used as the VNC server address.

Enable promiscuous mode in a scenario that requires network analysis.
• In promiscuous mode, the SR-IOV VF NICs of the physical NIC can receive all packets that pass through the physical NIC, regardless of the destination address of the packets.
• If promiscuous mode is disabled, the SR-IOV VF NICs can receive only broadcast IP packets and packets whose destination IP address is an IP address of the SR-IOV VF NICs.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Assign an IP address to a physical NIC and set the NIC mode.
   ifconfig < interface > [ < address > ] [ netmask < address > ] [ [ - ] promisc ]
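For example, to assign 192.168.10.1/24 to a physical NIC named eth1 and enable promiscuous mode (the interface name and address are illustrative):

   [Sysname-vmm] ifconfig eth1 192.168.10.1 netmask 255.255.255.0 promisc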

Editing the XML file of a VM

About this task

Perform this task to change VM parameters, including the boot order of drives, the VNC port number, and the network mode of MACVtap NICs.

Restrictions and guidelines

When editing the XML file:
• Enter /parameter to search for the specified parameter, and enter n to search for the next match.
• Enter i to enter the parameter editing view, and press Esc to quit the parameter editing view.
• Enter :wq to save the changes and quit, or enter :q! to quit without saving the changes.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Edit the XML file of a VM.
   virsh edit < domain > [ --skip-validate ]
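The fragment below is a hedged sketch of the XML elements you typically adjust, not the exact XML generated by the device: a disk given boot order 1, and a MACVtap NIC whose network mode is set to bridge. Element values such as eth1 are placeholders, and omitted elements are indicated by ellipses:

   <disk type='file' device='disk'>
     ...
     <boot order='1'/>
   </disk>
   <interface type='direct'>
     <source dev='eth1' mode='bridge'/>
     ...
   </interface>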


Maintaining a VM

Enabling VM auto-startup

About this task

Perform this task to enable a VM to start up automatically when the device starts.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Enable VM auto-startup.
   virsh autostart < domain >
4. (Optional.) Disable VM auto-startup.
   virsh autostart < domain > --disable

Stopping a VM

Restrictions and guidelines

CAUTION: A force stop might cause data loss. Do not force a VM down unless necessary.

If you shut down the device, the system will attempt to stop each VM within two minutes one by one. If the system fails to stop a VM within two minutes, it will force that VM down.

When you stop a VM, the system attempts to stop it within two minutes. If the system fails to do so because of an abnormal process, access the VM, manually shut down the process, and then stop the VM again. If the stop operation still fails, force the VM down.

If a VM does not have an operating system, you must execute the virsh destroy command to force it down.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Stop a VM.
   virsh shutdown < domain >
4. (Optional.) Force a VM down.
   virsh destroy < domain >

Suspending a VM

About this task

Perform this task to place a VM in Paused state.


Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Suspend a VM.
   virsh suspend < domain >

Resuming a suspended VM

1. Enter system view.

system-view

2. Enter VMM view.
   vmm
3. Resume a suspended VM.
   virsh resume < domain >

Creating a snapshot for a VM

About this task

This task creates a copy of the running configuration of a VM at the time you perform the task. If you have created a snapshot for a VM, the subsequent snapshots you create for the VM are child snapshots of the first snapshot. The first snapshot is also called the current snapshot.

Restrictions and guidelines

Perform this task only on VMs in shutdown state.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Create a snapshot for a VM.
   virsh snapshot-create-as < domain > [ --name < string > ] [ --description < string > ]
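For example, to create a snapshot named snap1 for the shut-down VM vm1 (the VM name, snapshot name, and description are illustrative):

   [Sysname-vmm] virsh snapshot-create-as vm1 --name snap1 --description before-upgrade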

Deleting VM snapshots

1. Enter system view.

system-view

2. Enter VMM view.
   vmm
3. Delete VM snapshots.
   virsh snapshot-delete < domain > [ --snapshotname < string > ] [ --current ] [ --children ] [ --children-only ]


Rolling back a VM

About this task

This task rolls back a VM from a snapshot to restore the VM to the state when the snapshot was created.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Roll back a VM.
   virsh snapshot-revert < domain > [ --snapshotname < string > ]
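For example, to roll VM vm1 back to a snapshot named snap1 (hypothetical names):

   [Sysname-vmm] virsh snapshot-revert vm1 --snapshotname snap1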

Exporting a VM

About this task

Perform this task to export a VM to a .pkg file in the specified path.

Prerequisites

You must stop a VM by using the virsh shutdown command before you can export it.

Restrictions and guidelines

Make sure you have access permissions to the target path and the target path has sufficient storage space. If the .pkg file is saved on a USB flash drive, make sure the file system format of the USB flash drive is EXT4.

Procedure
1. Enter system view.
   system-view
2. Enter VMM view.
   vmm
3. Export a VM.
   export domain-name pkg-path

Uninstalling a VM

Prerequisites

You must stop a VM by using the virsh shutdown command before you can uninstall it.

Restrictions and guidelines

After you uninstall a VM, the hard disks allocated to the VM still retain the files created for the VM. To release storage space, you must use the delete command to manually delete the VM's disk image files. For more information about deleting files, see file system management in Fundamentals Configuration Guide.

Procedure
1. Enter system view.
   system-view


2. Enter VMM view.
   vmm
3. Uninstall a VM.
   virsh undefine < domain >

Verifying and maintaining VMs

Perform display tasks in any view, and perform the other tasks in VMM view.
• Display VM SR-IOV VF NIC information.

display domain-sriov-vf domain-name

• Display VM capability information.
   display ict mode
• Display the MAC addresses reserved for MACVtap NICs.
   display mac-for-vmminterface
• Display the PCIe addresses of SR-IOV VF NICs.
   display sriov-vf-pciaddr
• Display the number of CPU cores allocated to the VM plane.
   display vcpu-pool
• Display the amount of memory allocated to the VM plane.
   display vmem-pool
• Display physical NIC information.
   ifconfig [ -a ] [ -v ] [ -s ] < interface >
• Display detailed physical NIC information.
   ip link show [ DEVICE ]
• Display detailed information about a disk image.
   qemu-img info filename
• Display detailed information about a VM.
   virsh dominfo < domain >
• Display disk image information for a VM.
   virsh domblklist < domain > [ --inactive ] [ --details ]
• Display NIC information for a VM.
   virsh domiflist < domain > [ --inactive ]
• Display VMs.
   virsh list [ --all ] [ --autostart ] [ --inactive ] [ --no-autostart ]
• Display the current snapshot of a VM.
   virsh snapshot-current < domain > [ --name ]
• Display all snapshots for a VM.
   virsh snapshot-list < domain >
• Display the number of CPU cores allocated to a VM.
   virsh vcpucount < domain > [ --maximum ] [ --active ] [ --live ] [ --config ]
• Display detailed information about vCPU cores for a VM.
   virsh vcpuinfo < domain > [ --pretty ]


• Display the bindings between vCPU cores and physical CPU cores for a VM.
   virsh vcpupin < domain > [ --vcpu < number > ] [ --cpulist < string > ] [ --config ] [ --live ]
• Display the VNC port number of a VM.
   virsh vncdisplay < domain >