

Integrating Juniper Networks QFX5100 Switches and Junos Space into VMware NSX Environments Implementing an NSX vSphere Version 6.3 Overlay with a QFX5100 Underlay

Implementation Guide

July 2017


Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net

Copyright © 2017, Juniper Networks, Inc. All rights reserved.


Contents

Introduction
VMware NSX vSphere Version 6.3 Enhancements
    Security
    Automation
    Application Continuity
Key Concepts
    Network Virtualization
    VMware NSX
        Data Plane Component
        Control Plane Component
        Management Plane Components
    VXLAN Tunnel Endpoint (VTEP)
    QFX5100 Switch (Underlay Switch)
Requirements
    Hardware Requirements
    NSX Requirements
    Interoperability Matrix
Topology
Configuration Overview
    Physical Devices
    Logical Devices
Configuration Procedure
    Install VMware ESXi Software on Intel NUCs
    Install vCenter
    Create New Data Center and Clusters
    Create a Virtual Distributed Switch (VDS)
    Configure the QFX5100 Switch
    Deploy NSX Manager
        Before You Begin Installing NSX Manager
    Register VMware NSX Manager with vCenter
    Deploy the NSX Controller Cluster
    Prepare Hosts and Clusters
    Configure VXLAN
    Define the Segment ID Pool
    Create Transport Zones
    Deploy Logical Switches
        Test VTEP IP Interface Connectivity
    Deploy the Distributed Logical Router and Edge Services Gateway
        Edge Services Gateway
        Distributed Logical Router
    Create the Static Route
Junos Space Network Director Integration
Understanding Data Flows
    Intra-VNI Traffic Flow
    Inter-VNI Traffic Flow
Integration Validation
    Phase 1 – vCenter and NSX Manager
    Phase 2 – NSX Manager CLI
    Phase 3 – NSX Control Plane
    Phase 4 – NSX Data Plane
    Phase 5 – QFX5100 Switch
Appendix – Solution Components


Introduction

This implementation guide provides the step-by-step instructions required to integrate and manage Juniper Networks QFX5100 devices with VMware NSX. Junos OS supports VMware NSX interaction on QFX Series switches, which enables an NSX-based network administrator to dynamically provision and manage QFX Series devices from the NSX console. Additionally, this guide includes comprehensive information for planning, designing, executing, and validating the integration, and provides recommendations. It also provides useful information about Junos Space integration, including installation guidelines and monitoring capabilities that deliver a single view of the virtual layer and the Juniper Networks underlay.

This implementation guide assumes the reader has basic knowledge of and experience with Juniper products and VMware's server virtualization products, and an understanding of the enterprise data center software-defined networking (SDN) market.

VMware NSX vSphere Version 6.3 Enhancements

On February 2, 2017, VMware released NSX for vSphere version 6.3. This version adds capabilities and enhancements focused on the following three functional areas:

• Security
• Automation
• Application Continuity

Security

• Security tagging enables you to assign and clear multiple tags for a given virtual machine (VM) through API calls.
• Role-based access control helps create a clear demarcation between the two administrator roles: Enterprise Administrator and NSX Administrator.
• Application Rule Manager simplifies the process of creating security groups and whitelist firewall rules for existing applications.
• Service Composer publish status is available to check whether a policy is synchronized, providing increased visibility into how security policies translate into DFW rules on the host.
• Improved interoperability between vCloud Director 8.20 and NSX 6.3.0 helps service providers offer advanced networking and security services to their customers. vCloud Director 8.20 with NSX 6.3.0 exposes native NSX capabilities supporting multiple customers and customer self-service.

Automation

• An enhanced auto-recovery mechanism for the netcpa process ensures continuous data path communication. The automatic netcpa monitoring process also auto-restarts in case of problems and provides alerts through the syslog server.

Application Continuity

• Multi-data center environment enhancements.
• Operational enhancements for improved availability.
• L2VPN performance enhancements for cross data center/cloud connectivity.


Key Concepts

Network Virtualization

Network virtualization defines the data center network entirely in software, much as compute and storage are virtualized. With server virtualization, the hypervisor (a software abstraction layer) abstracts the physical hardware of the host machine and enables you to create guest machines, or VMs, that use the host machine's resources transparently. Similarly, network virtualization enables you to create network services (consisting of switches, routers, firewalls, and load balancers) over an underlay comprised of multiple physical devices. This allows you to create unique networks in seconds. Additionally, because these networks are created in software and are independent of the underlying physical infrastructure, you can move them along with the virtual machines to which they connect.

VMware NSX

NSX is VMware's network virtualization platform, as shown in Figure 1. Its network hypervisor provides software abstraction for various network services, such as logical switches, logical routers, logical firewalls, logical load balancers, and logical VPNs. Besides the ability to create overlay networks, the most popular NSX use case is micro-segmentation for security, which provides firewall controls over East-West traffic inside a data center. Additionally, NSX (like any network virtualization platform) also provides automation and disaster recovery.

Figure 1: VMware NSX Platform Components


Data Plane Component

NSX vSwitch–The NSX vSwitch is based on the vSphere Distributed Switch (VDS) and contains additional components to enable services. It is the software that operates in the server hypervisor and creates a layer of abstraction between the servers and the physical network. The NSX vSwitch provides the flexibility to place virtual workloads on any available ESXi server, independent of the physical network infrastructure.

Control Plane Component

NSX Controller cluster–The NSX control plane runs in the NSX Controller cluster. The cluster manages the distributed switching and routing modules in the hypervisors, and maintains information about the VMs, hosts, logical switches, and VXLANs. It is the central control plane for all logical switches within the network. Additionally, it offloads the task of handling broadcast, unknown unicast, and multicast (BUM) traffic for VXLANs from the underlying physical network (in unicast and hybrid modes).

Management Plane Components

NSX Manager–The centralized management component of NSX. It is installed as a virtual appliance in the vCenter server environment.

NSX Edge–Can be installed and function as an edge services gateway (ESG) or as a distributed logical router (DLR). An NSX Edge also provides services such as DHCP, NAT, and load balancing to a virtual network. It enables East-West traffic between the VMs on the same host in different subnets without accessing the physical network, and enables North-South traffic for VMs to access the public networks.

VXLAN Tunnel Endpoint (VTEP)

VXLAN (Virtual Extensible LAN) is an overlay technology that tunnels Layer 2 (L2) packets over a Layer 3 (L3) network. vSwitches use VXLAN tunnels to communicate between VMs on different ESXi hosts. VTEPs are the tunnel endpoints responsible for encapsulating and de-encapsulating the L2 packets. You enable VTEP functionality on the ESXi host in software by installing a vmkernel module (VIB). When the VTEP functionality is performed by an external hardware device (such as the QFX5100 switch), that device is referred to as a hardware VTEP.
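On an ESXi host prepared for NSX, the software VTEP can be spot-checked from the host shell. This is an illustrative sketch: the VIB names shown (esx-vxlan, esx-vsip) are those commonly used by NSX for vSphere 6.x, but treat the exact names as release-dependent.

```
# List the NSX kernel modules (VIBs) pushed during host preparation
esxcli software vib list | grep -E 'esx-vxlan|esx-vsip'

# Show vmkernel interfaces; the VTEP appears as an additional vmk NIC
esxcli network ip interface ipv4 get
```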

QFX5100 Switch (Underlay Switch)

Not all workloads are virtualized in a typical data center. There may also be several physical servers, or bare-metal servers (BMS), which do not contain a vSwitch and cannot perform the VTEP functionality. To enable these non-virtualized workloads to communicate with virtualized workloads, you can select from the following options:

• Option 1 (for small deployments): Configure a software appliance running an instance of Open vSwitch. The NSX controller can then map the VLANs on the physical port to the logical network.

• Option 2 (for large deployments): For a large number of BMSs that need to communicate with virtualized workloads, or that need to pass large amounts of data, configure a high-throughput hardware device where the top-of-rack (TOR) switch provides the VTEP functionality to the BMS. For this option, the TOR switch is called a hardware VTEP. To implement the control plane functionality, the hardware VTEP communicates with the NSX controller using the OVSDB protocol. It receives remote MAC addresses from the NSX controller and also sends its locally learned MAC addresses to the controller, which are then distributed to the other VTEPs in the network. In the case of the QFX5100 switch, the controller also automatically pushes the access port configuration to the switch based on the logical switch parameters.

This guide describes the first option for small deployments.
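Although this guide follows option 1, enabling the QFX5100 as an OVSDB-managed hardware VTEP (option 2) is a short Junos configuration. The sketch below is representative only: the loopback and controller addresses are hypothetical, SSL certificates must also be provisioned, and the exact statements should be verified against the Junos release in use.

```
# Hypothetical addresses; 6632 is the SSL port NSX uses for OVSDB
set interfaces lo0 unit 0 family inet address 10.0.0.100/32
set switch-options vtep-source-interface lo0.0
set switch-options ovsdb-managed
set protocols ovsdb controller 10.0.0.10 protocol ssl port 6632
```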

Requirements

Hardware Requirements

• One QFX5100 switch (this guide uses the QFX5100-48S)
• Three Intel NUC6i5SYH units (Intel Core i5 6th Gen/1.8 GHz) with 32 GB RAM as ESXi servers
• A Windows Jump station to access the setup, because all servers are in an isolated network
• Three Samsung 512 GB SSDs

NSX Requirements

For the system requirements to install NSX, see the VMware NSX 6.3 Installation Guide. Use it as a reference to set up the testbed (see Figure 2).

Figure 2: VMware vSphere ESXi Reference

Interoperability Matrix

To verify compatibility, see the VMware Interoperability Matrices.


Topology

See Figure 3 for the solution topology.

Figure 3: Topology

Configuration Overview

Physical Devices

To set up the physical devices, connect the Windows Jump station to the QFX5100 device, and connect the ESXi servers (1, 2, and 3) to the QFX Series device (see Figure 3).

1. Connect the ESXi hosts to the QFX5100 device. The ESXi hosts are connected to the ge-0/0/0 (ESXi 1), ge-0/0/2 (ESXi 2), and ge-0/0/4 (ESXi 3) ports.

2. Connect the Windows Jump station to the QFX5100 device. Use a Windows VM to connect to QFX5100 device port xe-0/0/47. For this example, the Windows VM was assigned a VLAN and was hosted on a separate switch. Note: While configuring the xe-0/0/47 port (the port connected to Windows VM eth1), ensure that you configure the port as a trunk port.

3. To test the connections, ping from the Windows Jump station to the three servers. Because the link between the Windows VM and the QFX5100 device is in a VLAN, place the links connecting the QFX5100 device to the three servers in the same VLAN. Additionally, remember to configure the VLAN.

4. Configure the interface mode trunk on xe-0/0/47.


5. Configure the interface mode access on the interfaces connecting to the servers: ge-0/0/0, ge-0/0/2, and ge-0/0/4.

6. Configure the VLAN with an l3-interface (irb). Note: Make sure all the devices are in the same subnet.

The complete configuration for the QFX5100 switch is described in the following sections.

Logical Devices

The logical devices are configured in vCenter (see Figure 4). To replicate a real-world data center, this implementation guide uses two clusters:

• Management and Edge Cluster−This cluster has one host, and contains and provides resources to the following management components: vCenter Server Appliance, NSX Manager, NSX Controllers, Edge Services Gateway, and DLR Control VMs.

• Compute Cluster−This cluster has two ESXi hosts. For this example, the compute cluster uses virtual machines to demonstrate the NSX functionality, and provides resources to the Application and Web VMs.

A Virtual Distributed Switch (VDS) is also created. A VDS is a vCenter feature and a prerequisite for NSX installation. The NSX vSwitch is based on the VDS, which provides uplinks for host connectivity to the top-of-rack (ToR) physical switches.

To simplify the deployment, each cluster of hosts is associated with only one VDS, even though a VDS can span multiple clusters. In this implementation guide topology, a single Management VDS is used (see Figure 4).

Figure 4: Single Management VDS


Configuration Procedure

Use the following procedure to integrate and manage a Juniper Networks QFX5100 device with VMware NSX:

1. Ensure physical connectivity as per the topology diagram (see Figure 3).
2. Install VMware ESXi software on the Intel NUC servers.
3. Install vCenter.
4. Create a new data center and clusters in vCenter.
5. Create a Virtual Distributed Switch (VDS).
6. Configure the QFX5100 switch.
7. Deploy NSX Manager.
8. Register VMware NSX Manager with vCenter.
9. Deploy the NSX Controller cluster.
10. Prepare hosts and clusters.
11. Configure VXLAN.
12. Define the segment ID pool.
13. Create transport zones.
14. Deploy logical switches.
15. Deploy the Distributed Logical Router and Edge Services Gateway.
16. Create the static route.

Install VMware ESXi Software on Intel NUCs

Note: Before installing the ESXi hypervisor on the Intel NUC, you must update the BIOS. Retrieve the BIOS update from Intel; this example uses the recovery BIOS update [SY0059.BIO], which requires a USB flash drive.

1. Create the ESXi 6.0 Update 2 ISO using a live USB. To turn an NTFS file system-based USB drive into a live USB using a Mac, you must convert it to a FAT32 file system.

2. Using a Mac, open Disk Utility and select the Erase option. From the drop-down list, select the MS-DOS (FAT) format, and then click Erase. Wait until the Erase procedure completes, then click Done.


3. Download and open UNetBootin from http://unetbootin.github.io/.

Note: This App does not open if you double-click it. To open, you must right-click and select Open. Enter your password at the prompt.

4. From the UNetBootin dialog box, select Disk Image Option. 5. From the Diskimage drop-down list, select ISO and browse to select the destination where you have stored

the VMware ESXi Hypervisor 6.0 U2.iso image. Note: Ensure that you plug the USB flash drive stick into the MAC.


6. Click OK to start downloading, extracting, and copying the files, and installing the bootloader. After the installation completes, the bootable USB is ready to use. Click Exit.

7. Install the ESXi 6.0 Update 2 ISO that is shipped out-of-the-box from VMware. No additional custom drivers are required.

8. Select the Samsung SSD as the disk to install to.

9. Configure the username and password, the management network, and then restart the management network.

10. From vSphere Web Client, select Hosts and Clusters->Host->Manage->Settings->Security Profile->Services. This screen shows which services (daemons) are running on the ESXi host. Verify that Direct Console UI, ESXi Shell, and SSH are running.

Install vCenter

To install vCenter:

1. Deploy the vCenter 6.0 OVF and name it appropriately.
2. Assign an IP address and gateway.
3. Allot 8 GB memory and 2 vCPUs.


Note: A potential issue may exist with using Mozilla Firefox where the following error displays: “Windows Session login authentication has failed as a result of an error caused by VMware Client Integration Plugin.” Refer to VMware KB 2125623.

Create New Data Center and Clusters

To create a new data center and clusters in vCenter:

1. Open vCenter and select Hosts and Clusters.
2. Click Create datacenter, enter "My_Datacenter" as the name, and click OK.
3. Click Create cluster and create the first of two clusters using the name "Management & Edge Cluster". Leave both vSphere HA and DRS turned off.
4. Create the second cluster by right-clicking the data center, and name it "Compute Cluster".


5. After creating both clusters, add the ESX hosts to the appropriate cluster. Repeat this for the three hosts. After completing, your environment should display as follows:

Create a Virtual Distributed Switch (VDS)

Best Practice: VMware recommends that you plan and prepare your vSphere distributed switches before installing NSX for vSphere.

1. Create a Virtual Distributed Switch (VDS). A VDS is a vCenter feature and a prerequisite for NSX installation. The NSX vSwitch is based on VDS, which provides uplinks for host connectivity to the top-of-rack (ToR) physical switches.

2. You can attach a single host to multiple VDSs. A single VDS can span multiple hosts across multiple clusters. For each host cluster that will participate in NSX, you must attach all hosts within the cluster to a common VDS. To simplify the deployment, associate each cluster of hosts with only one VDS (although some of the VDSs span multiple clusters). For this implementation guide topology, a single VDS-Mgmt is shown:


Note: If the ESXi servers have only one NIC, an error may be generated in vCenter->Network Adapter Settings, shown as follows:

This error indicates that the Mgmt_VDS does not have an active uplink. To correct the error, you must either set up a physical link for those adaptors or migrate the vSwitch VMKs over to the virtual distributed switch. For more information on how to fix the issue, refer to this blog. In summary, you should:

a. Disable the feature (config.vpxd.network.rollback) in vCenter Server Advanced Settings to bypass the single-NIC issue.

b. Change the Distributed Port Group of VDS-Mgmt; for example, change Static Binding to Ephemeral Binding, enabling you to change the VM network of the vCenter Server.


Configure the QFX5100 Switch

To configure the QFX5100 switch:

1. Enter the CLI configuration commands on the switch.

Note: Because the link between the Windows VM and the QFX5100 is in VLAN 117, place the links connecting the QFX5100 switch to the three servers in the same VLAN.

2. Configure the VLAN.
3. Configure the interface mode trunk on xe-0/0/47, and the interface mode access on the interfaces connecting to the servers: ge-0/0/0, ge-0/0/2, and ge-0/0/4.
4. Configure the VLAN with an l3-interface (irb), and ensure that all of the devices are in the same subnet.
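Steps 2 through 4 correspond to Junos configuration along the following lines. This is an illustrative sketch only: the VLAN name nsx-underlay and the IRB address are assumptions, since the guide specifies only the VLAN ID (117) and the port roles.

```
# Assumed names/addresses: VLAN "nsx-underlay", IRB address 192.168.117.1/24
set vlans nsx-underlay vlan-id 117
set vlans nsx-underlay l3-interface irb.117
set interfaces irb unit 117 family inet address 192.168.117.1/24
# Trunk port toward the Windows Jump station VM (eth1)
set interfaces xe-0/0/47 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/47 unit 0 family ethernet-switching vlan members nsx-underlay
# Access ports toward the three ESXi hosts
set interfaces ge-0/0/0 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members nsx-underlay
set interfaces ge-0/0/2 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/2 unit 0 family ethernet-switching vlan members nsx-underlay
set interfaces ge-0/0/4 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/4 unit 0 family ethernet-switching vlan members nsx-underlay
```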

Deploy NSX Manager

The following are the hardware requirements for NSX Manager:

• vRAM: 16 GB
• Disk size: 60 GB
• vCPU: 4

Before You Begin Installing NSX Manager

1. Ensure that the required ports are open. See the list of NSX required ports and protocols. NSX Manager is installed as a virtual appliance in the vCenter server environment.

2. Verify the IP address, gateway, DNS server IP address, and NTP server IP address for the NSX Manager to use:


3. Install the Client Integration Plug-in.

Register VMware NSX Manager with vCenter

1. After you install NSX Manager and ensure that the NSX management service is running, register a vCenter Server with NSX Manager. For every instance of NSX Manager, there is one vCenter Server.

Settings for the Lookup Service URL and vCenter Server display. If configured correctly, a green dot and 'Connected' display next to Status.


2. Click the ‘Summary’ tab to verify that the following services are running: vPostgres, RabbitMQ, NSX Management Service, and SSH Service:

3. Once deployed, access the NSX Manager through an Internet browser. If deployed correctly, the Networking & Security plug-in displays in the vSphere Web Client.

Deploy the NSX Controller Cluster

Best Practice: VMware recommends that you deploy NSX Controllers in odd numbers only, three or greater. Deploying fewer than three is not recommended because there is no redundancy for the NSX Controller.

Once deployed, the NSX Controllers display as VMs. IP addresses assigned to the controller pool originate from the same subnet as the NSX Manager:


Once deployed, a list of the NSX Controller nodes displays. Each controller has an IP address (most likely derived from an IP Pool) and a status of ‘Connected’ with a green check mark:

Each controller also lists two peers and its software version.


Prepare Hosts and Clusters

To prepare the environment for network virtualization, you must install network infrastructure components on all clusters for each vCenter Server, where needed. The host preparation process deploys the required software on all hosts in the cluster.

Host preparation is when NSX Manager:

• Installs NSX kernel modules on ESXi hosts that are members of vCenter clusters. These kernel modules provide services such as: VXLAN bridging, distributed routing, and distributed firewall.

• Builds the NSX control plane and management plane fabric:


Configure VXLAN

Each host (ESXi server) functions as a VXLAN tunnel endpoint (VTEP) and has a VMkernel (vmk) interface with a VTEP IP address assigned to it.

1. Select the Logical Network Preparation tab. The VXLAN Transport option is highlighted by default.
2. Expand the Clusters & Hosts column. Each of the hosts should display a Configuration Status of Ready and a VTEP VMkernel (likely acquired from a VTEP IP Pool configured on the NSX Manager).

Note: If the host preparation installation status continues to display "Not Ready" even after clicking Resolve, installing or uninstalling the VXLAN agent has failed with the error: "Agent VIB module not installed".

To troubleshoot the error, determine whether the VIBs are installed on the ESXi hosts. If no VIB modules exist, then the Agent VIB modules were not installed. To manually install the VIB modules, refer to the NSX for vSphere 6.3.0 Release Notes, and search for "Upgrade Notes related to NSX and vSphere - Stateless environments" for more detail.


Select the VXLAN.zip file that matches your ESXi version.

You may need to upload the VXLAN.zip file onto the ESXi datastore and install it by entering the command: esxcli software vib install -d /vmfs/volumes/<datastore>/vxlan.zip.

If the error still persists, refer to this Knowledge Base article.


Define the Segment ID Pool

1. Select the Segment ID option. The VXLAN Segment ID pools (VNI segments) are listed.
2. For this implementation guide example, select ID pool 5000-5999.

Note: NSX has the ability to create a massive number of segments for your environment.

Note: No multicast addresses are used in this example because the transport zone is configured for unicast. Multicast addresses are only required for Hybrid or Multicast transport zones.

Create Transport Zones

1. Select the Transport Zones option. A list of the transport zones displays.
2. For this implementation guide example, select the only zone configured for unicast.

Note: Any vSphere Clusters that you want to include in this NSX deployment must be part of the transport zone.


Deploy Logical Switches

Logical switches create logical broadcast domains or segments to which you can logically wire participating end hosts. Each logical switch is created as a distributed port group and is mapped to a unique VNI. The segment ID is assigned based on a round-robin selection from the segment ID pool. Because logical switches are distributed port groups, they are available across all hosts that belong to the distributed virtual switch.

Best Practice: It is recommended that the transport zone associated with the logical switch also spans across the same hosts.

For this implementation guide example, three logical switches are created: App_LS (VNI 5000), Web_LS (VNI 5001), and Transit_LS (VNI 5002).

When you attach a VM to the logical switch, it displays as a distributed port group assigned to the attached VM network adapter.


The App_LS logical switch and the Web_LS logical switch both have two VMs connected, as shown below:

Test VTEP IP Interface Connectivity

Verify that the MTU has been increased to support VXLAN encapsulation.

Ping the vmknic interface IP address, which is listed on the host's Manage->Networking->Virtual switches page in the vCenter Web Client.
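One way to test this is from the ESXi shell, pinging a peer host's VTEP address through the VXLAN network stack with a payload sized to exercise the larger MTU. This is a hedged sketch: the peer VTEP address 192.168.0.22 is a hypothetical placeholder, and the payload size should be adjusted to your underlay MTU (1572 bytes of payload assumes an MTU of at least 1600).

```
# -d sets the don't-fragment bit so an undersized MTU fails loudly;
# ++netstack=vxlan sends the ping from the VXLAN VMkernel stack.
vmkping ++netstack=vxlan -d -s 1572 192.168.0.22
```

If the ping fails with the don't-fragment bit set but succeeds at a small size, the underlay MTU has not been increased end to end.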


Deploy the Distributed Logical Router and Edge Services Gateway

An NSX Edge appliance can have two types of interfaces: internal and uplink.

The Edge Services Gateway (ESG) provides routing for both East-West and North-South traffic. The ESG’s uplink interface is connected to the vSphere Distributed Switch, and its internal interface is connected to the Edge Transit Switch.

The Distributed Logical Router (DLR) provides East-West distributed routing with tenant IP address space and data path isolation. The DLR’s uplink interface is connected to the Edge Transit Switch, and its internal interfaces are connected to the App_LS and Web_LS. For more information, see Figure 5.

Edge Services Gateway


Distributed Logical Router


For more details about how the interfaces connect to the logical switches, see Figure 5.

Figure 5: Logical Switch Interfaces Connections

Create the Static Route

1. For this implementation guide example, manually add a static route on all of the host VMs. This enables you to use the Edge Services Gateway as the Layer 3 gateway.
2. For hosts connected to the App_LS logical switch, add a static route to the Web_LS logical switch's network so that the gateway becomes the internal interface of the Distributed Logical Router connected to the App_LS logical switch.
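On a Linux (Ubuntu) VM attached to App_LS, step 2 can be sketched as follows. The subnets and gateway address are hypothetical placeholders: 172.16.20.0/24 stands in for the Web_LS network and 172.16.10.1 for the DLR's internal LIF address on App_LS; substitute the addresses from your deployment.

```
# Route traffic for the Web_LS network via the DLR LIF on App_LS.
sudo ip route add 172.16.20.0/24 via 172.16.10.1

# Verify the route was installed.
ip route show 172.16.20.0/24
```

Repeat the equivalent step on the Web_LS hosts, pointing at the DLR LIF on that segment.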


Junos Space Network Director Integration

Junos Space Network Director integration provides a single view of the virtual layer and the Juniper Networks underlay. You can use this as a valuable tool for troubleshooting both overlay components and physical network devices from the GUI. You can customize the dashboards, and add widgets such as ranking VMs by bandwidth utilization, and top overlay networks.

Use this guide to install Junos Space. During the installation, remember the following key items:

• Junos Space requires two IP addresses: one CLI IP (eth0) and one Web GUI IP (eth3) (as described in the installation guide).

• Configure a Junos Space virtual appliance as the first node. When prompted for “Will this Junos Space system be added to an existing cluster? [y/n]”, enter “n”.

• After the installation completes, apply changes and enter quit.

Use this guide to install Junos Space Network Director. During the installation, remember you need to add an appropriate DMI schema for your device model. For device model schemas, click here and here.

The following are examples of Network Director GUI views that may be useful for troubleshooting:

• Complete Datacenter View: Shows detailed information of physical and virtual infrastructure and provides visibility into overlay and underlay networks.


• Also provides the ability to monitor the datacenter status, and review top hosts and top VMs by bandwidth utilization.


• Top Overlay Networks View:

• Top VMs by Bandwidth Utilization View:


• Hypervisor Server Connectivity View for all three ESXi hosts:

Understanding Data Flows

Intra-VNI Traffic Flow

Intra-VNI traffic is traffic between VMs connected to the same logical switch (see Figure 6).

1. The source VM (App1 on Host 2) sends IP packets towards the source VTEP (VTEP-2). At this time, the packets do not have a VXLAN header.
2. VTEP-2 encapsulates these packets using VNI 5000.
3. Traffic is then sent over the QFX5100 device.
4. VTEP-3 receives the traffic and de-encapsulates the packets from VTEP-2, thereby removing the VXLAN header.
5. VTEP-3 then sends the packets towards the destination VM (App2 on Host 3).


Figure 6: Intra-VNI Traffic Flow

Inter-VNI Traffic Flow

Inter-VNI traffic is traffic between VMs connected to different logical switches (see Figure 7).

Layer 3 gateway functionality is required for inter-VNI communication, which can be provided by either the NSX ESG or the DLR (as it is for this implementation guide example).

Note: Layer 3 gateway functionality can also be deployed with a hardware gateway or by Juniper’s MX Series router.

Using the MX Series device as the Layer 3 gateway provides the ability to integrate with an EVPN (Ethernet VPN) core network for Datacenter Interconnect (DCI). This functionality is not available with NSX.

To use the ESG or MX Series device as the Layer 3 gateway, you must manually add a default/static route to all end hosts (VMs).

1. The source VM (App1 on Host 2) sends IP packets towards its default gateway.
2. At VTEP-2, the packets are encapsulated using VNI 5000.
3. The packets are then forwarded towards the DLR.
4. The DLR kernel module routes the traffic between LIF1 and LIF2 directly on the vSphere host where the source VM resides.
5. The DLR then de-encapsulates the packets and re-encapsulates them using VNI 5001. It then sends the packets towards VTEP-3.
6. VTEP-3 de-encapsulates the packets.
7. The packets are then sent to the destination VM (Web 1 on Host 3).


Figure 7: Inter-VNI Traffic Flow

Integration Validation

To validate the integration between the Juniper Networks QFX5100 device and VMware NSX, use the phases and steps shown below.

Phase 1 – vCenter and NSX Manager

1. Use the Remote Desktop Protocol (RDP) from your machine to connect to the Windows jump station.
2. Click "Continue" to verify the certificate warning.
3. Verify that NSX Manager is communicating directly with the vCenter Server. This procedure ensures proper NSX management plane functionality. Log into the NSX Manager appliance through a Web browser:


4. Select the Manage vCenter Registration option.

5. Settings for the Lookup Service URL and vCenter Server display. If configured correctly, a green dot and 'Connected' display as Status.

6. Click the 'Summary' tab and verify that these services are running: vPostgres, RabbitMQ, NSX Management Service, and SSH Service.

7. Log into the vSphere Web Client and select Licensing. Select the 'Assets' tab, and then select 'Solutions.' Verify that your NSX environment is licensed.


8. Open Networking & Security from the vSphere Web Client.
9. Select Installation and then the Management tab. The NSX Manager displays and includes the IP address, associated vCenter Server, and version. Listed below the NSX Manager area are the NSX Controller nodes. Each node should contain the following:
• An IP address (most likely derived from an IP pool)
• 'Connected' Status with a green check mark
• Two peers
• Software version

10. Select the Host Preparation tab and expand each of the clusters and hosts. Each cluster should have a green check mark for the following: Installation Status, Firewall, and VXLAN.


11. Select the Logical Network Preparation tab. The VXLAN Transport option is the default selection.
12. Expand the clusters and hosts. Each host should have a Configuration Status of Ready and a VTEP VMkernel (likely derived from a VTEP IP pool configured using NSX Manager).
13. Select the Segment ID option. The VXLAN Segment ID pool (VNI segments) displays. Select ID pool 5000-5999.

Note: NSX has the ability to create a massive number of segments for your environment.


Note: No multicast addresses are used in this example because the transport zone is configured for unicast. Multicast addresses are only required for Hybrid or Multicast transport zones.

14. Select the Transport Zones option. A list of the transport zones displays.
15. For this implementation guide example, select the only zone configured for unicast.

Note: Any vSphere Clusters that you want to include in this NSX deployment must be part of the transport zone.

Phase 2 – NSX Manager CLI

Before executing CLI commands on the NSX Manager appliance through an SSH session, verify that the SSH service is running on the NSX Manager.

1. Access the console of the NSX Manager appliance.
2. Review the management interface of the appliance, and verify that this IP address matches the IP address displayed in the vSphere Web Client.

3. Review the capacity of the local drives (there should be two drives). Verify that there is free space available.


4. Review the list of NSX Controllers, including the IP addresses and status. Each of the three controllers should have a State of RUNNING. The IP addresses should match the IP addresses displayed in the vSphere Web Client.

5. Review the clusters managed by NSX. Enter the show cluster all command. Use this command to locate the Cluster ID of a particular cluster.

Note: The ‘Cluster Id’ value is used in other CLI commands instead of the actual Cluster Name.

The ‘Firewall Status’ column indicates that the distributed firewall is operating correctly. Micro-Segmentation does not function correctly without a distributed firewall.

6. Review 'Compute-Cluster-A'. For this implementation guide example, use 'domain-c7' instead of the actual name of the cluster.
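As a sketch, the exchange on the NSX Manager central CLI looks like the following. The cluster ID domain-c7 comes from the text; the prompt string is a hypothetical placeholder, and output is omitted because it varies by deployment.

```
nsxmgr> show cluster all
nsxmgr> show cluster domain-c7
```

The second command lists the ESXi hosts in the cluster along with their Host Ids and installation status, as described in the next step.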


A list of ESXi hosts associated with this particular cluster displays, as well as each host's installation status. Each host should have a status of 'Enabled'. Use the 'Host Id' information to verify particular information about a host; you do not use the actual FQDN or IP address of the host in the CLI.

Note: You must understand how to find the Host Id for the hosts.

7. Enter the show logical-switch list all command to review information about the logical switches: Name, UUID, VNI, and Transport Zone Name and ID.

8. Enter the show logical-router list all command to review information about the logical router(s): Edge Id, Vdr Name, Vdr Id, and the number of LIFs (logical interfaces) on the router. Only one logical router (Distributed Logical Router) displays for this example.

9. Enter the show edge all command to review information about the Edge (perimeter) router instance: Edge ID, Name, Size of the control VM, Version, and Status. Both the Edge Gateway and the DLR display.


10. To review more detailed information about the router settings, include the specific Edge ID of the router instance in the CLI command:

Phase 3 – NSX Control Plane

To validate the control plane functionality, you execute CLI commands for each NSX Controller. To note the differences between the CLI command output, execute each CLI command on each controller individually.

1. Open a console session (SSH) on each of the three NSX Controllers. After connecting to and authenticating each controller, the NSX Controller version and build number display.

2. Enter the show status CLI command to display the disk usage, memory usage, and uptime of the controller.


3. Enter the show control-cluster status CLI command to display the local controller status and the five roles performed by the cluster. All five roles should display a status of ACTIVATED. The cluster 'Join status' should indicate complete. Additionally, the Cluster ID and Node UUID display. The Cluster ID contains the Node UUID of the first controller deployed.

Status for NSX_Controller-1


Status for NSX_Controller-2

Status for NSX_Controller-3

4. Enter the show control-cluster connections CLI command to display whether a controller is listening for a particular role.

Note: The 'persistence_server' role (server/2878) is listening on only one of the three controller nodes. You can use this command to locate which of the three controllers currently owns this role.

5. Enter the show control-cluster startup-nodes CLI command to display all three controllers by their IP addresses. Each controller IP address (likely acquired from a Controller IP Pool) displays.

6. Enter the show control-cluster logical-switches vni xxxx CLI command to display which controller is responsible for a specific VNI. Enter this command on each controller. The controller IP address listed to the right of the VNI ID is the responsible controller.


7. Based on the output of the previous CLI command, Controller-1 (192.168.0.110) is responsible for VNI 5001. As a result, enter the show control-cluster logical-switches vtep-table 5001 command on Controller-1.

The IP address and the MAC address are associated with the VTEP VMkernel adapter on that particular ESXi host.

8. From the vSphere Web Client, select Networking & Security->Installation->Logical Network Preparation->VXLAN Transport. Expand each cluster to display the ESXi hosts and their VTEP IP addresses. From the table, verify that IP address 192.168.0.21 is associated with the ESXi host ‘esxi-1’ in the Management-Cluster. You can verify the information by locating the VMkernel on that particular host in the vSphere Web Client.

9. Enter the show control-cluster logical-switches mac-table xxxx CLI command to review the MAC table information. If no output is returned, then the VM associated with that VNI (logical switch) is not powered on.

The output from the command displays the MAC address of the VM(s) attached to VNI 5001 (NSX logical switch) and the IP address of the ESXi host’s VTEP IP address. This is the host running the VM and providing the connection.

Phase 4 – NSX Data Plane

To validate the NSX data plane functionality, you execute CLI commands on the ESXi hosts. To note the differences between the CLI command output, execute each CLI command on each ESXi host individually. You execute some of the CLI commands to verify data plane functionality, and other CLI commands to verify communication between the data plane and the control/management planes.


1. Verify that the VIB packages installed correctly, particularly the 'esx-vsip' and 'esx-vxlan' VIBs. Enter the following commands to verify data plane functionality:

2. Verify that the modules are loaded on the ESXi host. These modules provide VXLAN, firewall, DLR, and bridging functionality. Enter the following commands to verify data plane functionality:

3. Ensure that the 'vsfwd' agent is running. This service connects to the NSX Manager to retrieve configuration details, which are then proxied to the 'netcpa' UWA (User World Agent). Enter the following command to verify data plane communication with the management plane:

4. Verify the ESXi host communication with the NSX Manager. The output from this command should match the IP address of the NSX Manager. Enter the following command to verify data plane communication with the management plane.

5. Verify that 'vsfwd' is communicating with the NSX Manager (from which it derives configuration parameters). The 'vsfwd' agent displays several connections to the NSX Manager on port 5671. The first IP address listed is the local ESXi host address; the next IP address is the NSX Manager IP address and port 5671. The connections are 'ESTABLISHED' and display the 'vsfwd' agent. Enter the following command to verify communication between the data plane and the management plane.


6. Ensure that the 'netcpa' agent is running. This agent connects to the Controller nodes (cluster) to acquire logical routing and switching information. When communicating with the NSX Manager, this is proxied through the 'vsfwd' UWA. Enter the following command to verify communication between the data plane and control plane.

7. Ensure that the 'netcpa' agent is connected to the correct controller to retrieve logical routing and switching information. Three connections display in the command output. The source is the local IP address of the ESXi host, followed by the IP address of a controller on port 1234. This connection is used by the 'netcpa-worker'. Enter the following command to verify communication between the data plane and control plane:

8. Display information about the controllers. This file contains the IP addresses and thumbprints provided by the NSX Manager to the ESXi host during the installation phase. If this file becomes corrupt or is accidentally edited, place the ESXi host in Maintenance Mode, delete the file, and then reboot. The file regenerates after a successful host reboot.


Phase 5 – QFX5100 Switch

1. Enter the show configuration | display set command to display the QFX5100 switch configuration:

2. Enter the show ethernet-switching table command to display the name of the routing instance, the name of the VLAN, the MAC addresses learned on logical interfaces, the MAC flags (which display the status of MAC addresses learned on a logical interface; D indicates a dynamic MAC address), and the name of the logical interface. The Age field is not supported.


3. Enter the show ethernet-switching table interface ge-0/0/0 command to display the MAC addresses learned on logical interface ge-0/0/0.0:


4. Enter the show ethernet-switching table interface ge-0/0/2 command to display MAC addresses learned on logical interface ge-0/0/2.0:

5. Enter the show ethernet-switching table interface ge-0/0/4 command to display the MAC addresses learned on logical interface ge-0/0/4.0:


Appendix – Solution Components

Table 1 lists detailed information about the components used in this guide, such as IP addresses, physical/logical interfaces, MAC addresses, port groups, state, and VLAN ID.

The components used are: three ESXi hosts, NSX Edge (DLR and ESG), Ubuntu VMs (App and Web), Junos Space, vCenter Server, NSX Manager, and three NSX Controllers.

Table 1: Components Used in this Guide