
CHAPTER 2

Deploying V2PC

This chapter provides information on deploying V2PC. It includes the following topics:

• V2PC Deployment Requirements, page 2-1

• V2PC Deployment Sizing, page 2-1

• Deployment Testing Requirements, page 2-4

• Deploying the V2PC System, page 2-4

• Sample JSON File, page 2-22

V2PC Deployment Requirements

• Hardware: UCS B200-M3/M4

• Provider: VMware

Note • All ESXi host UCS blade servers should have the same hardware specifications.

• VMware ESXi version and patch level requirements may vary by release. Be sure to check the release notes for your release for the specific ESXi version and patch level requirements.

• The V2PC master repository node should be deployed as 2X-Large (8 CPU, 32 GB RAM, 40 GB disk storage).

• The ELK node should be deployed as 2X-Large with 500 GB disk space (8 CPU, 32 GB RAM, 500 GB disk storage).

Note Cisco Media Origination System (MOS) provides important guidelines for configuring UCS server network and interface policies to optimize the traffic flow through the MCE workers for Live, VoD, and cDVR applications. For details, see UCS Configuration in the Cisco Media Origination System User Guide – Software Version 2.5.1.

V2PC Deployment Sizing

The following table provides V2PC sizing requirements for each of the components in the deployment.


Table 2-1 V2PC Sizing Requirements

Component             Flavor Name  vCPUs  RAM    Hard Drive   Hard Drive   Network
                                                 Partition 1  Partition 2  Interfaces
V2PC Masters (3)      2X-Large     8      32 GB  40 GB        —            1 X 10 GE
MCE                   2X-Large     8      32 GB  40 GB        —            3 X 10 GE
MPE                   2X-Large     8      32 GB  40 GB        —            3 X 10 GE
Repository            2X-Large     8      32 GB  40 GB        —            1 X 10 GE
IPVS Nodes (2) *      X-Large      8      16 GB  40 GB        —            2 X 10 GE
REDIS Nodes (2) **    X-Large      8      16 GB  40 GB        —            2 X 10 GE
HAProxy Nodes (2) **  X-Large      8      16 GB  40 GB        —            2 X 10 GE
AM (2)                X-Large      8      16 GB  40 GB        —            1 X 10 GE
ELK Node              2X-Large     8      32 GB  40 GB        512 GB       1 X 10 GE

* Two required per service: one pair for MCE and another pair for MPE.

** Two required per device type: one set for SCE (StateCacheEndpoint) and one set for the VOD service.

Note Legacy deployments using Cisco Media Origination System (MOS) do not require V2PC Masters or an ELK node, but instead, require VMs for the Platform and Application Manager (PAM) and Centralized Logging Server (CLS). See the User Guide for your MOS release for complete deployment information for these nodes.

Open Port Assignments

By default, each component of the V2PC deployment has an associated set of assigned ports. Before deploying V2PC, review the port assignments listed in Ports Opened in V2PC Release 3.3, page A-1 to confirm that there are no conflicts with any existing ports in your network. As noted in the appendix, some port assignments can be modified.
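A quick way to confirm that a given port is not already in use on an existing host in the management network is a simple TCP probe from any Linux machine; this is only a sketch, and the address and port below are placeholders rather than values from this guide:

timeout 2 bash -c '</dev/tcp/192.0.2.10/8443' && echo "port already in use" || echo "port free or host unreachable"

A successful connection means another service is already listening on that port, and the assignment should be reviewed or modified as described in the appendix.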

Virtual Disk Provisioning

When a VM is created and started, VMware first checks the storage and memory reserved for the VM. If VMware cannot verify that sufficient storage and memory are available, VM creation fails, and the user must manually clear the VM from the V2PC GUI and from the vCenter infrastructure.

To help avoid VM creation failure, VMware provides the option of deploying V2PC and its worker nodes with virtual disks in one of two provisioning modes:

• Thick provisioning reserves VM storage (vmdk) and memory. This avoids over-committing host resources.

• Thin provisioning allocates only the amount of VMware storage and memory space needed to store the data on a virtual disk, allowing for over-committing of host resources.

During V2PC installation, the provisioning mode is defined by a setting in the V2PC base image (described below); all of the nodes created from a given base image use the same mode, either thick or thin provisioning.



Note V2PC Release 3.3 does not support CPU reservation.

V2PC itself does not support converting VMs from one type of provisioning to another after installation. VMware does, however, provide instructions for manually converting VM hard disks from thin to thick provisioning. For details, see the following VMware knowledgebase article:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2014832
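As a minimal sketch of that manual conversion (run on the ESXi host with the VM powered off; the datastore and VM names below are placeholders, and the authoritative procedure is the one in the VMware article):

vmkfstools --inflatedisk /vmfs/volumes/datastore01/v2p-worker01/v2p-worker01.vmdk

Inflating a disk fills the thin vmdk out to its full provisioned size; it does not change the memory reservation behavior described above.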

Setting the Provisioning Mode

The image template section of the V2PC cluster JSON file includes a vdiskProvisionMode parameter that controls the selection of thick or thin provisioning. During installation, the base image is imported according to the specified provisioning mode.

By default, vdiskProvisionMode is set to thick in the base image template. All nodes created from this base image have thick-provisioned disks, and their RAM is reserved from the ESXi host memory pool.

If desired, resources can be over-committed by modifying the V2PC image template and uploading the image with vdiskProvisionMode set to thin, as shown in the following example:

{"imageDFormat": "vmdk","vmSourcePath": "/sw/v2p/images","vm_type": "template","hostname": "na","imageCFormat": "bare","imgTag": "cisco-centos-7.0","deployment": "pod1","datastore": "datastore2","vmName": "v2p-btest","vmSourceName": "centos7.ovf","vdiskProvisionMode": "thin"},

This parameter is settable during installation through the V2PC cluster JSON wizard that generates the v2p-cluster.json file. For details, see Install the Launcher VM from the OVAs, page 2-6.

Adding Thick Provisioning During Upgrade

During an upgrade to V2PC Release 3.3, thick provisioning can be added when importing the image and package from the repository VM. Thick provisioning is selected by default, and the software includes documentation and a manifest template to simplify the process of importing a V2PC base image in thick mode.

Note See V2PC Upgrade, page 5-1 for upgrade instructions.

The image tag for the imported base image is separate from the image tag associated with the master, ELK, and repository nodes. The upgrade utility is enhanced to upgrade system packages on the worker nodes that carry a particular image tag. When using this utility to upgrade the V2PC system nodes (master, ELK, and repository), you must manually upgrade the worker nodes created from the imported base image.


Note Only new nodes created from the newly imported base image will be thick provisioned during upgrade. Existing deployed V2PC nodes with hard disks configured for thin provisioning are not converted to thick provisioning.

Separate Logging Volume

Beginning with V2PC Release 3.3, the V2PC base image has two logical volumes:

• The first logical volume is mounted at “/” and holds the root file system.

• The second logical volume is mounted at “/var/log/” and holds the log files.

Using a separate logical volume for log files caps the size of the logging directory. If excess logging by an application fills that directory, the root file system still has free space, so the virtual machine remains stable.
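On a deployed node, the split can be confirmed from the shell (a minimal check; exact device and volume group names vary by image):

df -h / /var/log
lsblk

Each mount point should map to its own logical volume, and log growth under /var/log should not reduce the free space reported for /.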

Deployment Testing Requirements

Minimal lab deployments for functionality testing require the following:

• Launcher (can be turned off after deployment)

• Repository x 1

• Master x 1 (supported production environments require 3 master nodes)

• ELK x 1

• Template x 1

• MCE x (scale required for capture)

• MPE x (scale required for playback or delivery to CDN or client)

• AM x 1 minimum, AM x 2 recommended for redundancy

• IPVS, HAProxy, Redis (2 nodes each required for each Live, VOD, or cDVR workflow)

Deploying the V2PC System

To fully deploy the V2PC, perform the following tasks in order:

• Configure VMware, page 2-4

• Install the Launcher VM from the OVAs, page 2-6

• Run the Launcher Bootstrap Script, page 2-12

• Verify the Deployment, page 2-20

Configure VMware

Configuring VMware is required before deploying V2PC. Follow the instructions below to configure VMware.


Before You Begin

Confirm that you have the following:

• Recommended Hardware: UCS Chassis with B200-M3 or B200-M4 Blade Servers

• VMware ESXi hypervisor and vCenter

Note VMware ESXi version and patch level requirements may vary by release. Be sure to check the release notes for your release for the specific ESXi version and patch level requirements.

Configure VMware

Step 1 Install VMware vCenter as follows:

Note The installation procedure shown here is for a Windows host. A compatible VMware vCenter Server ISO image is required for the installation. See the Release Notes for details.

a. Download the ISO image for the appropriate VMware vCenter Server version and patch level.

b. Mount the VMware vCenter ISO image by copying the .iso image file to the physical server (Linux box) and mounting the image. For example:

mkdir /tmp/mnt
mount -o loop /downloads/VMware-VCSA-all-6.0.0-2656757.iso /tmp/mnt
cd /tmp/mnt

c. Copy the extracted files to the Windows host. Alternatively, you can mount the ISO image directly on the Windows host, using a tool like WinISO.

d. Open the vcsa folder and double-click the VMware-ClientIntegrationPlugin-<version> file.

e. Run vcsa-setup to install vCenter Server.

Step 2 Configure vCenter as follows:

a. Make a note in advance of the data center, folder, resource pool, cluster, and host name(s) to be used when configuring vCenter.

b. Log in to vCenter using a web browser or vSphere client.

c. Create a data center on vCenter by navigating to Home > Inventory > Datastores and Datastore Clusters, right-clicking vCenter, selecting New Datacenter, and entering a unique name for the data center (for example, v2pc-c3b12-datacenter).

d. Create a new VM folder by navigating to Home > Inventory > VMs and Templates, right-clicking Datacenter, selecting New Folder, and entering a unique name for the folder (for example, v2pc-folder).

e. Add a new cluster to the new data center by navigating to Home > Inventory > Hosts and Clusters, right-clicking Datacenter, and selecting New Cluster.

f. Edit the new cluster to enable DRS by navigating to Cluster Settings > Cluster Features and checking Turn On vSphere DRS. Click Next repeatedly until finished.

g. Add a host to the new cluster by navigating to Home > Inventory > Hosts and Clusters, right-clicking the cluster, and selecting Add Host. Repeat as needed to add additional hosts.

h. Add a resource pool to the cluster by navigating to Home > Inventory > Hosts and Clusters, right-clicking the cluster, and selecting New Resource Pool.


Step 3 Configure NTP client in ESXi host as follows:

a. Log in to the vCenter client and select the ESXi host.

b. Navigate to Configuration > Time Configuration > Properties > Options and configure NTP settings.

c. Start the NTP service, using the Start and stop with host option under Startup Policy for the NTP Daemon.

Note NTP must be in sync across all ESXi hosts.
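A quick reachability check of the configured NTP server from any Linux VM on the same network (a sketch; it assumes the ntpdate utility is installed and uses the NTP IP from the wizard example later in this chapter):

ntpdate -q 2.2.2.2

The -q flag queries the server and reports the offset without changing the local clock.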

Install the Launcher VM from the OVAs

The Launcher has the OVF properties required to configure the networking for the VM. To install the Launcher VM, first copy the repo.iso, coreOS.ova, and centos.zip files to the Launcher, and then run the V2P Wizard as described below to generate the JSON file.

Note See the Release Notes for V2PC Release 3.3 for information on image downloads and retrieving the latest files.

To download and deploy the Launcher OVA, follow the instructions below.

Step 1 Download the Launcher OVA file launcher-3.2.0-8971.ova to the VM to be configured.

The Launcher OVA is available from the Cisco V2PC Software Downloads page at:

http://www.cisco.com/c/en/us/support/video/virtualized-video-processing-controller/tsd-products-support-series-home.html

Note The Launcher VM has only one network interface. This VM should have access to the network where V2P components are deployed. You can add another network interface to this VM, but doing so requires manual configuration.

Step 2 Choose the vCenter IP address from the navigation menu at left, and then from the main menu, choose File > Deploy OVF Template to open the V2P Wizard.


Figure 2-1 Deploying the OVA Template

Step 3 Browse to the launcher OVA file location, then click Next.

Figure 2-2 Deploy OVA Template - Source

Step 4 Confirm default OVF settings, then click Next.


Figure 2-3 Deploy OVA Template - OVF Template Details

Step 5 Accept the End User License Agreement (EULA), then click Next.

Figure 2-4 Deploy OVA Template - EULA

Step 6 Provide the VM name, then click Next.


Figure 2-5 Deploy OVA Template - Name and Location

Step 7 Select the Host or Cluster on which to run the template, then click Next.

Figure 2-6 Deploy OVA Template - Host/Cluster

Step 8 Select a Resource Pool for the template, then click Next.


Figure 2-7 Deploy OVA Template - Resource Pool

Step 9 Select the Datastore, then click Next.

Figure 2-8 Deploy OVA Template - Storage

Step 10 Accept Thick Provision as the disk format, then click Next.

Note Additional steps are needed when upgrading from V2PC 3.2.3. See V2PC Upgrade, page 5-1 for details.


Figure 2-9 Deploy OVA Template - Disk Format

Step 11 Select the Network, then click Next.

Figure 2-10 Deploy OVA Template - Network Mapping

Step 12 On the Properties screen, specify Launcher Details.


Figure 2-11 Deploy OVA Template - Properties

Run the Launcher Bootstrap Script

The Launcher bootstrap script deploys the repository, master, and ELK servers, as well as the VM templates, based on the JSON file provided.

After powering on the Launcher VM, execute the script as follows:

Step 1 Access the Launcher via SSH using the appropriate login credentials.

• Default username: root

• Default password: cisco

Step 2 Transfer the Launcher-Dockers, CentOS 7, CoreOS, and repo ISO files to the Launcher VM using secure copy (scp) or directly via wget (an scp example follows the download link below).

The Launcher software files are available from the Cisco V2PC Software Downloads page, accessible from:

http://www.cisco.com/c/en/us/support/video/virtualized-video-processing-controller/tsd-products-support-series-home.html
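For example, the files can be pushed to the Launcher with scp from the machine holding the downloads (a sketch; the file names match the examples in the following steps, and <launcher_ip> is a placeholder for your Launcher address):

scp v2p-launcher-docker-b620.tar centos7-2016-09-14_21-35.zip \
    coreos_production_vmware_ova.ova v2p-repo-3.3.3-br_v2pc_3.3.3-16843.iso \
    root@<launcher_ip>:/root/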

Step 3 Unpack the Launcher.tar file as shown in the following example:

[root@localhost]# tar -xvf v2p-launcher-docker-b620.tar
v2p-launcher-docker-b620/
v2p-launcher-docker-b620/docker_run.sh
v2p-launcher-docker-b620/README
v2p-launcher-docker-b620/setup.sh
v2p-launcher-docker-b620/v2p-launcher-3.3.3-b620.tar

Note The .tar file name should match the name of the file copied to the Launcher VM in Step 2.

Step 4 Change to the v2p-launcher-docker directory as shown in the following example:


[root@localhost]# cd v2p-launcher-docker-b620/

Step 5 Confirm that the docker daemon is running as shown in the following example:

#systemctl status docker.service

If the daemon is not running, start it as shown in the following example:

#systemctl start docker.service

Step 6 Run setup.sh to create the docker container as shown in the following example:

# ./setup.sh /root/centos7-2016-09-14_21-35.zip /root/coreos_production_vmware_ova.ova /root/v2p-repo-3.3.3-br_v2pc_3.3.3-16843.iso

Note All file names should match the names of the files copied to the Launcher VM in Step 2.

Step 7 Use the following Docker commands to confirm that the /root/data folder in the Docker container includes a volume mapped from the host's /root/v2p-launcher-docker/data directory:

docker ps                          # get the container ID

docker exec -it <docker_id> bash

The following V2PC image files will be available automatically in /root/data/:

[root@a6010902077d data]# ls -l

total 4551992

-rw-r--r-- 1 root root  772931518 Apr 27 17:58 centos7-2016-09-14_21-35-16843.zip
-rw-r--r-- 1 root root  794658304 Mar  9 20:22 centos7-disk1.vmdk
-rw-r--r-- 1 root root        127 Mar  9 20:21 centos7.mf
-rw-r--r-- 1 root root      33546 Mar  9 20:21 centos7.ovf
-rw-r--r-- 1 mos  mos         169 Jun 28  2016 coreos_production_vmware_ova.mf
-rw-r--r-- 1 mos  mos       11795 Jun 28  2016 coreos_production_vmware_ova.ovf
-rw-r--r-- 1 mos  mos   280276992 Jun 28  2016 coreos_production_vmware_ova_image.vmdk
-rw-r--r-- 1 root root 2813313024 Apr 27 18:01 v2p-repo-3.3.3-br_master-16843.iso

[root@a6010902077d data]#

Note Files may be copied to this directory on the host to make them available in /root/data inside the container, as in the example below.
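For instance, an additional image placed on the Launcher host shows up inside the container without restarting it (the file name here is purely illustrative):

cp /root/extra-base-image.ova /root/v2p-launcher-docker/data/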

Step 8 Execute the provisioning script from the /root/ directory as shown in the following example:

[root@267e3834697d ~]# cd /root
[root@267e3834697d ~]# ./v2p-provision.sh

Step 9 Execute the v2p-wizard to define the deployment environment.

./v2p-wizard

====== Infra Provider for V2PC Platform ======

Enter Infrastructure Provider:

[choices: [(0, 'vmware'), (1, 'openstack')]]

[default: vmware]: vmware


====== VMWare Compute Infrastructure ======

Enter vCenter IP

[default: None]: 200.2.0.166

Enter vCenter Port

[default: 443]: 443

Enter user name

[default: [email protected]]: [email protected]@ssov2pc.local

====== Password Wizard ======

Enter Password

password: mypassword

Re-enter password: mypassword

====== VMWare Cluster Infrastructure ======

Enter Region Name

[default: region-0]: region-0

Enter VMware datacenter

[choices: [(0, 'v2pc_Datacenter')]]

[default: None]: v2pc_Datacenter

Enter Datacenter cluster name

[choices: [(0, 'v2pc_cluster')]]

[default: None]: v2pc_cluster

Enter Datacenter resource pool name

[choices: [(0, 'Resources'), (1, 'Resources'), (2, 'v2pc_resource_pool')]]

[default: None]: v2pc_resource_pool

====== VMWare Storage Infrastructure ======

Enter Datacenter VM Folder

[choices: [(0, 'Discovered virtual machine'), (1, 'v2pc_folder'), (2, 'cel-aavella'), (3, 'win-comcast-devops'), (4, 'vcenter6-2-comcast-devops'), (5, 'ext-dns-sanity'), (6, 'http-proxy'), (7, 'mos-ut-cicd-pam'), (8, 'ut-cicd-ext-dns'), (9, 'vle-solutions-vmware'), (10, 'centos-solutions-openstack'), (11, 'v2pc-launcher-200.2.0.192'), (12, 'vle-aavella'), (13, 'nas-sanity'), (14, 'vle-solutions-openstack')]]


[default: None]: v2pc_folder

Enter template storage host for template (select one host)

[choices: [(0, '200.2.0.159'), (1, '200.2.0.160')]]

[default: None]: 200.2.0.160

Enter datastore to use on template storage host (select one datastore)

[choices: [(0, 'datastore11')]]

[default: None]: datastore11

====== VMWare Network Infrastructure ======

Enter management network label

[choices: [(0, 'VLAN2002'), (1, 'VLAN2003'), (2, 'VLAN2004'), (3, 'VLAN30')]]

[default: VM Network]: VLAN2002

Enter network subnet mask:

[default: 255.255.255.0]: 255.255.255.0

Enter network gateway IP:

[default: ]: 200.2.0.1

====== V2PC Domain, NTP and DNS configuration ======

Enter DNS IP (This is not MOS DNS. This DNS should resolve internet hostnames.):

[default: 1.1.1.1]: 1.1.1.1

Enter V2P domain name:

[default: v2pc.com]: v2pcexample1.com

Enter NTP IP:

[default: 2.2.2.2]: 2.2.2.2

====== V2PC External DNS Configuration ======

Enter External DNS IP (This is same as MOS External DNS IP):

[default: 192.168.0.25]: 200.2.0.167

Enter External DNS domain (This is same as MOS External domain.):


[default: v2p-external.com]: comcast.com

Enter External DNS algorithm:

[default: hmac-md5]: hmac-md5

Enter External DNS key:

[default: invalidkey]: 8TFXt/13/VOR/tPRID8eYg==

Do you want to validate External DNS configuration

[choices: [(0, 'NO'), (1, 'YES')]]

[default: NO]: NO

====== Master HA ======

Master Deployment:

[choices: [(0, 'single'), (1, 'ha')]]

[default: single]: ha

====== V2PC Nodes with Master HA ======

Enter first master node IP:

[default: ]: 200.2.0.170

Enter master node name prefix e.g. v2p-master-

[default: v2p-master-]: v2p-master-

Enter datastore for first V2P Master node

[choices: [(0, 'datastore11'), (1, 'datastore1')]]

[default: None]: datastore11

Enter Second Master IP:

[default: ]: 200.2.0.171

Enter datastore for second V2P Master node

[choices: [(0, 'datastore11'), (1, 'datastore1')]]

[default: None]: datastore11

Enter Third Master IP:

[default: ]: 200.2.0.172


Enter datastore for third V2P Master node

[choices: [(0, 'datastore11'), (1, 'datastore1')]]

[default: None]: datastore11

====== V2P Repository VM configuration ======

Enter V2P Repository VM IP:

[default: ]: 200.2.0.173

Enter repo node name prefix e.g. v2p-repo

[default: v2p-repo]: v2p-repo

Enter datastore for V2P Repo node

[choices: [(0, 'datastore11'), (1, 'datastore1')]]

[default: None]: datastore11

Enter NPM uplink server. Standard NPM registry is : "https://registry.npmjs.org/"

[default: None]: None

Enter outgoing http/https Proxy for repository VM. Standard out going proxy is : "http://proxy.esl.cisco.com:8080"

[default: None]: None

====== V2P ELK VM configuration ======

Enter V2P ELK VM IP:

[default: ]: 200.2.0.174

Enter elk node name prefix e.g. v2p-elk-

[default: v2p-elk]: v2p-elk

Enter datastore for V2P ELK node

[choices: [(0, 'datastore11'), (1, 'datastore1')]]

[default: None]: datastore1

====== V2PC CentOS Image configuration ======

Enter V2P base image OVF file location:

[default: /root/data/centos7.ovf]: /root/data/centos7.ovf


Enter V2P base image tag:

[default: cisco-centos-7.0]: cisco-centos-7.0

Enter V2P base image/template name:

[default: v2p-base-image]: v2p-base-image

Enter V2P ISO file location:

[default: None]: /root/data/v2p-repo-3.3.3-br_master-16843.iso

====== CoreOS image V2PC Platform ======

Do you want to add CoreOS image:

[choices: [(0, 'NO'), (1, 'YES')]]

[default: NO]: YES

====== CoreOS image details ======

Enter CoreOS image OVF file location:

[default: /root/data/coreos_production_vmware_ova.ovf]: /root/data/coreos_production_vmware_ova.ovf

Enter CoreOS image tag:

[default: cisco-coreos-3.0]: cisco-coreos-3.0

Enter CoreOS image/template VM name:

[default: v2p-coreos-image]: v2p-coreos-image

Generating V2P Cluster file for Vmware.

SUCCESS: Generated V2P Cluster config file.

Location: /opt/cisco/v2p/v2pc/python/vm_manager/wizard/v2p-cluster.json

Execute the below command to start the bootstrap process.

>>>cd /opt/cisco/v2p/v2pc/python/vm_manager/bootstrap/

>>>python bootStrapv2pMulti.py -c /opt/cisco/v2p/v2pc/python/vm_manager/wizard/v2p-cluster.json

V2P cluster JSON file is in directory /opt/cisco/v2p/v2pc/python/vm_manager/wizard/

Step 10 Answer the remaining questions with specifics for vCenter.

Step 11 Run the bootstrap using the .json file created by the wizard, as shown in the following example:

[root@267e3834697d ~]# cd /opt/cisco/v2p/v2pc/python/vm_manager/bootstrap/


[root@267e3834697d ~]# python bootStrapv2pMulti.py -c /opt/cisco/v2p/v2pc/python/vm_manager/wizard/v2p-cluster.json
config file /opt/cisco/v2p/v2pc/python/vm_manager/wizard/v2p-cluster.json
Log file /var/log/opt/cisco/v2pc/v2p-bootstrap.log

This process takes 30-40 minutes depending on how many nodes must be deployed in VMware.

Note vCenter also displays each node as it is provisioned.

Step 12 To observe the progress of the cluster deployment, start another bash session with the container using the following command(s):

(host)#docker ps

Step 13 While this command is executing, perform the following steps to monitor the log file:

a. Open a separate SSH session to the Launcher VM, locate the Docker container, access the container shell, and then tail the install log as shown in the following example:

[root@v2p-launcher ~]# docker ps
CONTAINER ID   IMAGE                                                 COMMAND       CREATED          STATUS          PORTS   NAMES
267e3834697d   engci-docker.cisco.com:5007/v2p-launcher:3.3.3-b620   "/bin/bash"   26 minutes ago   Up 26 minutes           distracted_saha
[root@v2p-launcher ~]# docker exec -it 267e3834697d /bin/bash
[root@267e3834697d ~]# tail -f /var/log/opt/cisco/v2pc/v2p-bootstrap.log
2016-09-22 23:17:51,864 bootStrapv2pMulti.py L155(108)[INFO]:image section for this pod
{u'datastore': u'FreeNas-iScsi',
 u'imgTag': u'cisco-centos-7.0',
 u'name': u'v2p-base-template10233',
 u'packages': [{u'src': {u'format': u'iso',
                         u'local_file': u'/root/data/v2p-repo-3.3.3-br_v2pc_3.3.3-16843.iso',
                         u'remote_file': u'/home/v2pc/repo-3.3.3-16843.iso'},

b. Monitor the vCenter resource pool as each VM is created.

The following output will appear when bootstrap is complete:

==== ATTENTION PLEASE SAVE THE BELOW V2PC FILES ====
SSH private key: /root/.ssh/v2pcssh.key
V2PC Service Manager(SM) token file: /etc/opt/cisco/mos/public/token.json
***output omitted***
SUCCESS: V2PC Cluster creation completed.
Login to the V2PC GUI https://<master_ip>:8443 for further configuration.
To configure a V2PC Provider using default values, execute the below command.
>>>cd /opt/cisco/v2p/v2pc/python/vm_manager/bootstrap
>>>python infra_generator.py create_default -m <SM leader node IP>
[root@267e3834697d bootstrap]#

Step 14 Save the files.

Note Recommended practice is to save these files on a management station. The container may not remain active, so retrieving them later can require manually starting or resuming this specific container ID on the Docker or Launcher host.
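One way to do this, assuming both files still reside in the bootstrap container (the container ID matches the earlier examples; the management station address and destination path are placeholders):

docker cp 267e3834697d:/root/.ssh/v2pcssh.key /root/
docker cp 267e3834697d:/etc/opt/cisco/mos/public/token.json /root/
scp /root/v2pcssh.key /root/token.json user@<management_station>:/path/to/backup/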


Verify the Deployment

After successfully executing the bootstrap install script, the following nodes are created in vCenter:

• Repo x 1

• Master x 1 (x3 for HA)

• ELK x 1

• CentOS Template x 1

• CoreOS Template x 1

To verify that you can log into V2PC, perform the following steps:

Step 1 Open a web browser and enter https://<IP_or_name_of_Master>:8443. The V2PC login screen appears.
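If the login screen does not load, a quick check that the GUI port is reachable can be run from any machine with network access to the master (a sketch; -k skips certificate validation, which is assumed to be needed because the GUI typically presents a self-signed certificate):

curl -k -s -o /dev/null -w "%{http_code}\n" https://<IP_or_name_of_Master>:8443/

Any HTTP status code in the output, rather than a connection error, indicates the GUI service is up.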

Step 2 Enter the appropriate login credentials to log in to V2PC:

• Default username: admin

• Default password: default

The V2PC Dashboard should appear as shown in the following illustration.

Figure 2-12 V2PC Dashboard

Note Any intermittent alarms generated during bootstrapping should now be cleared. If any alarms appear at this point, they are probably actual alarms, and should be handled accordingly.


Deployment Limitations

The following deployment limitations apply for the V2PC:

• Not tested in deployments where the datastore is an external store (such as NFS). In this scenario, the datastore names could be the same on all the hosts.

• Not tested in deployments where networking is set up using a distributed vSwitch and port groups.

• Hosts in maintenance mode are not ignored.

• Region name is currently hard-coded to region-0.

• Only one zone/POD information is collected.

• If an ESX host has multiple datastores, the script picks up only one of the datastores. Users can manually edit the JSON file to address this issue.

• The VM folder is not verified to be of the VM and Template type.

Install the VMP Bundle

V2PC Release 3.3 uses components of Cisco Virtualized Media Packager (VMP) that are installed from a separate compressed image (tarball). This tarball, called the VMP bundle, is available from the V2PC product support page at:

http://www.cisco.com/c/en/us/support/video/virtualized-video-processing-controller/tsd-products-support-series-home.html

The VMP bundle contains the following components, each itself provided as a compressed tarball:

• cisco-ce (Application Instance Controller) + RPMs

• cisco-pe (AIC) + RPMs

• cisco-am (AIC) + RPMs

• cisco-sce (AIC)

• cisco-mfc (Live Media Flow Controller)

• cisco-vod-mfc (VOD MFC)

To access the VMP bundle, go to the V2PC product support page and click Download Software.

After installing V2PC, import the VMP bundle into the V2PC repository VM as follows:

Step 1 Log in to the V2PC repository VM via SSH as user v2pc.

Step 2 Download the appropriate VMP bundle from the V2PC product support page to a temporary folder on the repository VM.
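Optionally, before importing, the bundle contents can be listed to confirm that the component tarballs described above are present (the file name matches the example in the next step):

tar -tf /tmp/vmpBundle/vmp-2.11.1-v2p-bundle-3.3.3-16826.tar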

Step 3 Import the VMP bundle to the repository as shown in the following example:

$ /opt/cisco/v2p/v2pc/python/v2pPkgMgr.py --import --sourcebundle /tmp/vmpBundle/vmp-2.11.1-v2p-bundle-3.3.3-16826.tar
Processing application upgrade bundle /tmp/vmpBundle/vmp-2.11.1-v2p-bundle-3.3.3-16826.tar
.........

VMP Unbundling

Successfully processed application upgrade bundle /tmp/vmpBundle/vmp-2.11.1-v2p-bundle-3.3.3-16826.tar


Next Steps

Once deployed, further configuration is performed via the V2PC GUI. See the Cisco Virtualized Video Processing Controller (V2PC) User Guide for details.

Sample JSON File{ "regions": ["region-0"], "region-0": { "name": "region-0", "type": "primary", # for secondary region use "worker" "description": "", "address": "", "city": "", "state": "", "country": "" }, "pods": ["pod1"], # POD is equivalent to a Zone "pod1": { "provider": "vmware", # [ vmware ] "controllerIP": "172.20.x.c", # VMware: IP/hostname of vCenter server (version 5.5 u3a, > 6)

"protocol": "http|https", # Vmware: not applicable

"apiVersion": "v2.0", # Vmware: not applicable

"port": 443, # Vmware: vCenter port (default 443)

"tenant": "admin", # Vmware: not applicable

"user": "admin", # VMware: user name for vCenter server

"password": "abcd", # VMware: password for vCenter server

"datacenter": "Datacenter_SJC", # VMware: Datacenter name of vCenter

"cluster": "campus_1", # VMware: cluster name on vCenter vCenter --> Datacenter -- > Cluster

"resourcePool": "rp1", # VMware: resource pool name on vCenter

2-22Cisco Virtualized Video Processing Controller Deployment Guide

Page 23: Deploying V2PC - Cisco · 2-2 Cisco Virtualized Video Processing Controller Deployment Guide Chapter 2 Deploying V2PC V2PC Deployment Sizing * Two required per service: one pair for

Chapter 2 Deploying V2PCSample JSON File

"templateFolder": "v2p_folder", # VMware: vCenter --> Inventory --> VMs and Template (View) Create a New Folder under your datacenter. Give the name of this new folder here.

"vmFolder": "v2p_folder", # Same as templateFolder. Don't leave it blank for VMware

"datastore_host": "", # Optional ENG use only

"datastores": [{"name": "datastore01", "folder": "v2p_folder"}], # VMware: Create one entry for each host in the cluster.

The name should be the datastore name on that host.

The folder name is same as vmFolder above.

"domain": "v2pc.com", # V2PC cluster Domain name (This is not the external DNS domain name)

"net_mgmt": "vlan-1722", # VMware: The vswitch label for management network

"ntp": ["171.68.38.65"], # List of NTP servers "region": "region-0", # Region this zone belongs to

"images": [ # List of base images/templates { "vendor": "cisco", # vendor name "name": "v2p-centos-7", # VMWare: template name

This should be same as vmName in template_node "imgTag": "cisco-centos-7.0.0", # Unique tag name format <vendor>-<os>-<version> The image tag should be same for all nodes in this JSON file "storeName": "datastore01", # name from datastores list above The image would be uploaded to that datastore "provider": "pod1", # Zone information for this image "repoIP": "172.20.x.r", # V2P Repo IP "repoPort": "5001", # Do not change "packages": [ { "type": "system", # Do not change "version": "3.2.0", # V2P software version "src": { "format": "iso", "local_file": "/sw/v2p/images/v2p-repo-3.3.3-16843.iso", # Location of V2P software ISO file "remote_file": "/home/v2pc/v2p-repo-3.3.3-16843.iso"

# use /home/v2pc/<repo name> as above

repo file name should be the same } } ], "_systemRepoListComment": "Repo list will be auto populated"

2-23Cisco Virtualized Video Processing Controller Deployment Guide

Page 24: Deploying V2PC - Cisco · 2-2 Cisco Virtualized Video Processing Controller Deployment Guide Chapter 2 Deploying V2PC V2PC Deployment Sizing * Two required per service: one pair for

Chapter 2 Deploying V2PCSample JSON File

# Do not change } ] }, "compute_nodes": [ { "vm_type": "template", # Do not edit "deployment": "pod1", # Zone to deploy the image/template "vmName": "v2p-centos-7", # Name of template/image. Should be same as image->name "hostname": "template", # Do not edit "vmSourcePath": "/sw/v2p/images/", # Directory where V2P base image OVA=(mf, ovf, vmdk) "vmSourceName": "centos7-disk1.vmdk", # For VMware: specify the ovf file

"datastore": "datastore01", # name of store in datastores from POD section "imgTag": "cisco-centos-7.0.0", # Same as in image->imgTag "imageDFormat":"vmdk", # Do not edit "imageCFormat":"bare", # Do not edit "vdiskProvisionMode": "thick" # Vmware: default thick. }, Reserve storage and memory on host.

"thin": thin provision of storage(vmdk)will not reserve memory

{ "vm_type": "master", "deployment": "pod1", "dns": [ # DNS IP List "127.0.0.1", The first IP should always be 127.0.0.1 "171.70.168.183" ], "firstMasterHost": "172.20.207.82", # IP of the first master node address. Used in master HA. "templateName": "v2p-centos7", # should match the node->template->vmName "imgTag": "cisco-centos-7.0.0", # should match the image->imgTag "vmName": "master01.node.datacenter.consul", # Could replace the datacenter with your datacenter name not required to change it. "hostname": "master1", # Any valid hostname. not required to change this name.

"fqdnName": "172.20.x.m1", # IP of the master. In case of NAT this is an external IP "privateFqdnName": "172.22.x.m1", # IP of the master. In case of NAT this in an internal IP In NAT the internal IP is the one assigned to the VM If non NAT'ed environment these two IPs are the same

"isMaster": true, # Do not edit "deploy_enabled": true, # Whether or not to deploy this master node "flavor":"m1.medium", # VMware: not used

"numCPU": 4, # CPU cores. Do not edit. Only used by Vmware "memory": 8192, # Memory in MB. Currently set to 8GB.

Do not edit. Only used by Vmware. "disk": 0, # Storage in GB.

# VMware: The primary disk is 40GB

2-24Cisco Virtualized Video Processing Controller Deployment Guide

Page 25: Deploying V2PC - Cisco · 2-2 Cisco Virtualized Video Processing Controller Deployment Guide Chapter 2 Deploying V2PC V2PC Deployment Sizing * Two required per service: one pair for

Chapter 2 Deploying V2PCSample JSON File

fixed. The second disk block deviceof size GB is attached to the VM.User needs to make use of this blockdevice.(/dev/sdb or /dev/vdb)

"datastore": "datastore01", # datastore name inPOD->datastores->name

"mosDNS": [ # This is only used by MOS Application { This is an external DNS server which is not managed by V2PC. "ip": "192.2.0.25", # IP reachable from master node "hostname": "extdns", # not used. Keep it "domain": "mosdomain.com", # This domain name should be different

from V2PC domain POD->domain"key": "EA07/q61zERW7tziZupaUw==", # edit the key as per your deployment "algo": "hmac-md5" # do not edit. } ], "networkInterfaces": [ { "dhcpEnabled": false, # Do not edit. VMware : We use static IP

"subnet": "vlan-1722-subnet", # VMware: subnet mask eg: 255.255.0.0

"net_type": "net_mgmt", # Do not edit "ip": "172.20.x.m1", # IP address to be assigned to this node "gateway": "172.20.x.g1", # VMware: Specify the gateway IP

"hasDefaultGW": true, "label": "vlan-1722" # VMWare: vswitch label name

} ], "artifactoryBaseUrl": "", # Do not edit. "repositoryVersion": "" # Do not edit. }, { # V2PC Software repository node Same instructions as master node. Only highlighting repo specific things "vm_type": "repo", # V2PC Software repository node "deployment": "pod1", # node to be deployed in this zone/pod "dns": [ "127.0.0.1", "171.70.168.183" ], "firstMasterHost": "172.20.207.82", # IP of the first master node "templateName": "v2p-centos7", # same as node->template->vmName "imgTag": "cisco-centos-7.0.0", # same as image->imgTag "vmName": "v2p-repo", # VM Name in VMware "hostname": "v2p-repo", "fqdnName": "172.20.x.r1", # IP of repo "privateFqdnName": "172.22.x.r1", "isMaster": false, # do not edit "deploy_enabled": true, # do not edit "flavor":"m1.medium", "numCPU": 4, # 4 cores "memory": 8196, # 8GB "datastore": "datastore01", # should match pod->datastores[]->name

2-25Cisco Virtualized Video Processing Controller Deployment Guide

Page 26: Deploying V2PC - Cisco · 2-2 Cisco Virtualized Video Processing Controller Deployment Guide Chapter 2 Deploying V2PC V2PC Deployment Sizing * Two required per service: one pair for

Chapter 2 Deploying V2PCSample JSON File

"networkInterfaces": [ # Same as networkInterfaces section in master node { "dhcpEnabled": false, "subnet": "vlan-1722-subnet", "net_type": "net_mgmt", "ip": "172.20.x.r1", # IP address of repo VM "gateway": "172.20.x.gr", "hasDefaultGW": true, "label": "vlan-1722" } ], "artifactoryBaseUrl": "", # do not edit "repositoryVersion": "", # do not edit "repo_iso": "/sw/v2p/images/v2p-repo-3.3.3-16843.iso", # location of V2P ISO file "outgoing_https_proxy": "http://proxy.esl.cisco.com:8080/", # Outgoing proxy if required from Repo VM. only required if you need to download pkgs from internet on Repo VM. "npm_uplink": "https://registry.npmjs.org/" # npm uplink URL (OPTIONAL). # If "npm_uplink" is set to "", # then public npm registry access is disabled. # If "npm_uplink" is omitted, # then public npm registry access is enabled # (default URL = "https://registry.npmjs.org/") }, { "vm_type": "elk", # V2P ELK (Elastic-Logstash-Kibana) node # Same instructions as above. Only highlight ELK specific configs "deployment": "pod1", "dns": [ "127.0.0.1", "171.70.168.183" ], "firstMasterHost": "172.20.x.m1", # IP of the first master node "templateName": "v2p-centos7", "imgTag": "cisco-centos-7.0.0", "vmName": "v2p-elk", "hostname": "v2p-elk", "fqdnName": "172.20.x.e1", # IP address of ELK node. Should match the networkInterfaces IP "privateFqdnName": "172.20.x.e1", "isMaster": false, "deploy_enabled": true, # Whether or not to deploy this ELK node "flavor":"m1.medium", "numCPU": 4, "memory": 8096, "datastore": "datastore01", "networkInterfaces": [ { "subnet": "vlan-1722-subnet", "net_type": "net_mgmt", "ip": "172.20.x.e1", "gateway": "172.20.207.65", "hasDefaultGW": true,

2-26Cisco Virtualized Video Processing Controller Deployment Guide

Page 27: Deploying V2PC - Cisco · 2-2 Cisco Virtualized Video Processing Controller Deployment Guide Chapter 2 Deploying V2PC V2PC Deployment Sizing * Two required per service: one pair for

Chapter 2 Deploying V2PCSample JSON File

"label": "vlan-1722" } ], "repositoryVersion": "", # Do not edit "npm_repository": "", # Do not edit "outgoing_https_proxy": "" # Do not edit } ]}
