
Integration of Hypervisors and L4-7 Services into an ACI Fabric

Azeem Suleman, Principal Engineer, Insieme Business Unit

Agenda

• Introduction to ACI

• Review of ACI Policy Model

• Hypervisor Integration

• Layer 4-7 Services Integration

• Conclusion

Introduction to ACI

ACI Fabric: Scale-Out, Penalty-Free Overlay

(Slide diagram: a spine-leaf fabric managed by the Application Policy Infrastructure Controller (APIC). Service consumers and producers sit in EPGs such as “Users”, “Files”, and “Internet” (via AVS), and QoS, filter, and service policies connect the Web, App, and DB tiers to the outside (tenant VRF).)

Cisco ACI: Logical Network Provisioning of Stateless Hardware

ACI Nomenclature

Review of the ACI Policy Model

Bridge Domain (BD)

• Unique layer 2 (L2) or layer 3 (L3) forwarding domain

• Can contain one or more subnets (if unicast routing is enabled)

• Each bridge domain must be linked to a context (VRF)

Equivalent Network Construct:

• If a BD is configured as an L2 forwarding domain
• It will have one or more associated VLANs
• Each VLAN maps to an EPG
• If a BD is configured as an L3 forwarding domain
• This is equivalent to an SVI with one or more subnets per BD

NOTE: BD can span across multiple switches
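The containment model above maps directly onto APIC's REST API. Below is a minimal sketch (controller address, credentials, and all object names are placeholders) that creates a tenant, a VRF (fvCtx), and a bridge domain with one subnet, linked to the VRF as required:

```python
# Minimal sketch: create a tenant, VRF, and bridge domain on APIC.
# Controller address and credentials are placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()

# Log in; APIC returns a token as a session cookie
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# Tenant -> Context (VRF) -> Bridge Domain with one subnet.
# fvRsCtx is the mandatory link from the BD to its VRF.
payload = {
    "fvTenant": {
        "attributes": {"name": "Tenant1"},
        "children": [
            {"fvCtx": {"attributes": {"name": "VRF1"}}},
            {"fvBD": {
                "attributes": {"name": "BD1"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF1"}}},
                    {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24"}}},
                ],
            }},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=payload, verify=False)
print(resp.status_code)
```

The same session object is reused in the later sketches.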

Object Relationship

(Slide diagram: a Tenant contains one or more Contexts (VRFs); each Context contains one or more BDs; each BD contains one or more subnets. The same subnet ranges can be reused under different Contexts.)

End Point Group (EPG)

• Set of hosts that behave the same

• Behavior is defined by the hosts representing an application or application component, independent of other network constructs

(Slide diagram: in the policy model, multiple HTTP and HTTPS service instances are grouped into a single EPG, “Web”.)

Application Network Profile (ANP)

• An Application Network Profile (ANP) is a group of EPGs and the policies that define the communication between them

(Slide diagram: in the policy model, an Application Network Profile connects EPG Web, EPG App, and EPG DB with inbound/outbound policies between them.)
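As an illustration of the ANP construct, the following sketch (same placeholder session as before; all names are placeholders) defines a contract and an application profile in which EPG Web provides the contract and EPG App consumes it:

```python
# Minimal sketch (same placeholder session as above): a contract plus an
# application profile where EPG "Web" provides it and EPG "App" consumes it.
payload = {
    "fvTenant": {
        "attributes": {"name": "Tenant1"},
        "children": [
            {"vzBrCP": {  # the contract
                "attributes": {"name": "web-to-app"},
                "children": [{"vzSubj": {"attributes": {"name": "http"}}}],
            }},
            {"fvAp": {    # the Application Network Profile
                "attributes": {"name": "ecommerce"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "Web"},
                        "children": [
                            {"fvRsBd": {"attributes": {"tnFvBDName": "BD1"}}},
                            {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
                        ],
                    }},
                    {"fvAEPg": {
                        "attributes": {"name": "App"},
                        "children": [
                            {"fvRsBd": {"attributes": {"tnFvBDName": "BD1"}}},
                            {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload, verify=False)
```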

Integration with Multiple Hypervisors

Hypervisor Integration Agenda

• Hypervisor Integration Overview

• VMware vCenter Integration

• Microsoft SCVMM & Azure Pack Integration

• OpenStack Integration

Hypervisor Interaction with ACI: Two Modes of Operation

Non-Integrated Mode

• ACI Fabric as an IP-Ethernet transport
• Encapsulations manually allocated (e.g. VLAN 10, VXLAN 10000)
• Separate policy domains for physical and virtual

Integrated Mode

• ACI Fabric as a policy authority
• Encapsulations normalised and dynamically provisioned
• Integrated policy domains across physical and virtual

Hypervisor Integration with ACI: Control Channel - VMM Domains

• A relationship is formed between APIC and a Virtual Machine Manager (VMM)
• Multiple VMMs are likely on a single ACI Fabric
• Each VMM and its associated virtual hosts are grouped within APIC; this grouping is called a VMM Domain
• There is a 1:1 relationship between a virtual switch and a VMM Domain

(Slide diagram: three VMM domains on one fabric, e.g. VMM Domain 1 for a vCenter DVS, VMM Domain 2 for vCenter with AVS, VMM Domain 3 for SCVMM.)

Hypervisor Integration Agenda

• Hypervisor Integration Overview

• VMware vCenter Integration

• Microsoft SCVMM & Azure Pack Integration

• OpenStack Integration

VMware Integration: Three Different Options

Distributed Virtual Switch (DVS)

• Encapsulations: VLAN
• Installation: Native
• VM discovery: LLDP
• Software/Licenses: vCenter with Enterprise+ License

vCenter + vShield

• Encapsulations: VLAN, VXLAN
• Installation: Native
• VM discovery: LLDP
• Software/Licenses: vCenter with Enterprise+ License, vShield Manager with vShield License

Application Virtual Switch (AVS)

• Encapsulations: VLAN, VXLAN
• Installation: VIB through VUM or Console
• VM discovery: OpFlex
• Software/Licenses: vCenter with Enterprise+ License

ACI Basics: APIC EPG to vSphere Port Group

Each EPG defined on APIC is pushed to the virtual distributed switch as a port group with its own encapsulation:

• EPG Web → Port Group “Web” (VXLAN 5001)
• EPG App → Port Group “App” (VXLAN 5002)
• EPG DB → Port Group “DB” (VXLAN 5003)

ACI Hypervisor Integration – VMware

Hypervisor Integration with ACI: Endpoint Discovery

Virtual endpoints are discovered for reachability and policy purposes via two methods:

Control plane learning:

• Out-of-band handshake: vCenter APIs (the control channel between APIC and the VMM, used for DVS hosts)
• Inband handshake: OpFlex-enabled host (AVS, Hyper-V, etc.)

Data path learning: distributed switch learning.

LLDP is used to resolve the virtual host ID to the attached port on the leaf node (non-OpFlex hosts).

ACI Hypervisor Integration – VMware DVS/vShield

(Slide diagram: APIC, vCenter Server/vShield, and the ACI fabric, with hypervisors attached to a virtual distributed switch carrying Web, App, and DB port groups. The numbered workflow:)

1. Cisco APIC and VMware vCenter initial handshake
2. APIC creates the VDS in vCenter
3. The VI/server admin attaches hypervisors to the VDS
4. APIC learns the location of each ESX host through LLDP
5. The APIC admin creates the application policy (an ANP with EPG WEB, EPG APP, EPG DB and F/W, L/B services)
6. APIC automatically maps EPGs to port groups
7. vCenter creates the port groups
8. The VI/server admin instantiates VMs and assigns them to port groups
9. APIC pushes policy to the fabric

Application Virtual Switch (AVS): Integration Overview

• Hypervisor-resident virtual switch based on the N1KV VEM; a VEM extension to the fabric
• Managed by APIC through the southbound OpFlex API
• The OpFlex control protocol provides the control channel: VM attach/detach and link state notifications
• Supported on vSphere 5.0 and above
• Features: BPDU Filter/BPDU Guard, SPAN/ERSPAN, port-level stats collection

ACI Hypervisor Integration – AVS

(Slide diagram: the same workflow as the DVS case, but with the Application Virtual Switch (AVS) and OpFlex agents on the hypervisors:)

1. Cisco APIC and VMware vCenter initial handshake
2. APIC creates the AVS
3. The VI/server admin attaches hypervisors to the AVS
4. APIC learns the location of each ESX host through OpFlex
5. The APIC admin creates the application policy
6. APIC automatically maps EPGs to port groups
7. vCenter creates the port groups
8. The VI/server admin instantiates VMs and assigns them to port groups
9. APIC pushes policy to the fabric

ACI Hypervisor Integration – VMware AVS

Creating the VMM domain requires:

• Name of VMM Domain
• Type of vSwitch (DVS or AVS)
• Associated Attachable Entity Profile (AEP)
• VXLAN Pool
• Multicast Pool
• vCenter Administrator Credentials
• vCenter server information
• Switching mode (FEX or Normal)
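Once a VMM domain exists, attaching an EPG to it is a one-object operation. A minimal sketch, assuming the VMM domain “vcenter-dc1” was already created with the parameters above (the name is a placeholder):

```python
# Minimal sketch: attach EPG "Web" to an existing VMware VMM domain.
# "vcenter-dc1" is a placeholder; APIC then creates the matching
# port group on the DVS/AVS automatically.
payload = {"fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-vcenter-dc1"}}}
session.post(
    f"{APIC}/api/mo/uni/tn-Tenant1/ap-ecommerce/epg-Web.json",
    json=payload, verify=False,
)
```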

Micro-segmentation: VM Attribute-based Grouping

• Flexible attribute-based grouping for VMs
• Enables micro-segmentation based on VM attributes
• Supported on vSphere with AVS

Supported VM attributes: Guest OS, VM Name, VM (id), VNIC (id), DVS, DVS Port-group, Data centre, MAC, IP Address Prefix

Example EPG rule: VM name contains “web” (see the sketch below)
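A hedged sketch of such an attribute-based (uSeg) EPG follows; the class and attribute names (isAttrBasedEPg, fvCrtrn, fvVmAttr) are given from memory and should be verified against your APIC version:

```python
# Hedged sketch: an attribute-based (uSeg) EPG matching VMs whose name
# contains "web". Class/attribute names (isAttrBasedEPg, fvCrtrn,
# fvVmAttr) are assumptions to verify against your APIC version.
payload = {
    "fvAEPg": {
        "attributes": {"name": "uSeg-Web", "isAttrBasedEPg": "yes"},
        "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "BD1"}}},
            {"fvCrtrn": {  # the matching criterion
                "attributes": {"name": "default"},
                "children": [{
                    "fvVmAttr": {"attributes": {
                        "name": "vm-name-web",
                        "type": "vm-name",       # match on the VM name
                        "operator": "contains",
                        "value": "web",
                    }}
                }],
            }},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Tenant1/ap-ecommerce.json",
             json=payload, verify=False)
```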

Hypervisor Integration Agenda

• Hypervisor Integration Overview

• VMware vCenter Integration

• Microsoft SCVMM & Azure Pack Integration

• OpenStack Integration

Microsoft Interaction with ACI: Two Modes of Operation

Integration with SCVMM

• Policy Management: through APIC
• Software/License: Windows Server with Hyper-V, SCVMM
• VM Discovery: OpFlex
• Encapsulations: VLAN
• Plugin Installation: Manual

Integration with Azure Pack

• Superset of SCVMM
• Policy Management: through APIC or through Azure Pack
• Software/License: Windows Server with Hyper-V, SCVMM, Azure Pack (free)
• VM Discovery: OpFlex
• Encapsulations: VLAN
• Plugin Installation: Integrated

ACI Azure Pack Integration

(Slide diagram: Azure Pack \ SPF with the SCVMM and APIC plugins, OpFlex agents on each hypervisor, APIC, and the ACI fabric. The numbered workflow:)

1. The APIC admin sets up the basic infrastructure
2. The Azure Pack tenant creates the application policy
3. Network profiles are pushed to APIC
4. VM networks are created in SCVMM (the plugin gets the VLANs allocated for each EPG)
5. VMs (Web, App, DB) are instantiated
6. EP attach is indicated to the attached leaf when the VM starts
7. Policy is pulled on the leaf where the EP attaches

Summary

• Micro-segmentation in Microsoft Hyper-V

• Static IP pool automation through SCVMM and Azure Pack

• SCVMM integration

• WAP integration

• Multiple BDs in the same VRF (for WAP virtual private plan)

• Layer 3 out in the user tenant (for WAP virtual private plan)

Hypervisor Integration Agenda

• Hypervisor Integration Overview

• VMware vCenter Integration

• Microsoft SCVMM & Azure Pack Integration

• OpenStack Integration

OpenStack Integration: Initial Focus on Networking (Neutron)

OpenStack Components (Neutron)

(Slide diagram of the Neutron model: a Tenant owns Networks (including external networks), Subnets, Ports, and Routers via the Core API and the L3 + External Net extension; Security Groups and Security Group Rules come from the Sec Grp extension.)

OpenStack Neutron Networking Model vs. Cisco ACI Model

(Slide diagram mapping the two models: Neutron's Tenant, Network, Subnet, Router, and Security Group constructs correspond to ACI's Tenant, Bridge Domain, Context (VRF), Subnet, Endpoint Group, App Profile, Outside Network, Contract, and Subject.)

OpenStack Driver Options

Two drivers are available on the OpenStack controller:

• Neutron API and Modular Layer 2 (ML2): the APIC ML2 plug-in performs conversion from Neutron constructs (Network, Router, Security Group) to the Cisco® APIC policy model
• Group-Based Policy (GBP): the GBP APIC driver interfaces directly with the APIC policy model (Policy Groups, Rule Sets, and service chains with FW/ADC)
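To see what the ML2 path looks like from the OpenStack side, here is a minimal openstacksdk sketch (the cloud name is a placeholder; it assumes the APIC ML2 plug-in is installed, so each call is transparently reflected into ACI objects):

```python
# Minimal sketch: ordinary Neutron calls via openstacksdk. With the APIC
# ML2 plug-in installed, each call is reflected into ACI (network -> EPG
# and BD, router -> routing in the tenant VRF). Cloud name is a placeholder.
import openstack

conn = openstack.connect(cloud="mycloud")

net = conn.network.create_network(name="web-net")
subnet = conn.network.create_subnet(
    network_id=net.id, ip_version=4,
    cidr="10.10.10.0/24", name="web-subnet",
)
router = conn.network.create_router(name="web-router")
# Attaching the subnet is what triggers the plug-in to set up routing
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```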

OpFlex Extends Cisco ACI to the Hypervisor

Pre-OpFlex implementation: the native Neutron approach using the OVS agent (APIC driver and OVS driver on the OpenStack controller; Open vSwitch with the OVS agent on each hypervisor)

• VLAN per network and group to the ToR
• VXLAN within Cisco ACI™
• Physical domain in Cisco ACI
• No Cisco® APIC GUI integration
• Supports unmodified OVS and OVS agent

OpFlex and OVS

The OpFlex agent directly manages OVS and integrates with APIC (OpFlex proxy on the leaf, OpFlex agent on each hypervisor; VXLAN and VLAN to the fabric)

• VLAN or VXLAN per network and policy group to the ToR
• The OpFlex proxy runs in the leaf, and the OpFlex agent manages OVS
• Hypervisor-local traffic has policy applied; switching and routing are handled locally
• VMM domain and GUI integration with APIC
• Distributed support for NAT, metadata server proxies, and DHCP

Summary: OpenStack

• Multiple OpenStack driver options:

‐ Cisco® APIC native group-based policy
‐ Neutron ML2

• Operations, troubleshooting, and visibility for physical and virtual

‐ Endpoint statistics, health, and faults in APIC

• Hypervisor-local enforcement of security policies

‐ Security groups (ML2 driver) through IP address tables
‐ Group-based policies through OpenFlow in Open vSwitch

• Distributed NAT support on each computing node

‐ Floating IP address
‐ Source NAT (sNAT) (through the hypervisor host IP address)

• Distributed Neutron services per computing node

‐ Layer 3 and anycast gateway, metadata, and Dynamic Host Configuration Protocol (DHCP)

• Multiple Virtual Routing and Forwarding (VRF) instance support

• Support for VLAN and VXLAN to the Cisco ACI™ fabric

• Solution high availability: support for virtual port channel (vPC) and multiple APICs


Layer 4-7 Services Integration

Challenges with Network Service Insertion

Service insertion in traditional networks (routers, switches, firewalls, load balancers) requires many manual steps:

• Configure firewall network parameters
• Configure the network to insert the firewall
• Configure firewall rules as required by the application
• Configure the router to steer traffic to/from the load balancer
• Configure load balancer network parameters
• Configure the load balancer as required by the application

The result: service insertion takes days, network configuration is time consuming and error prone, and it is difficult to track configuration on services.

L4-7 Integration Options

• No integration (same as today)
• Unmanaged (network-only automation)
• Managed (full automation)

Network Service Insertion

(Slide diagram: an EXTERNAL EPG consumes the Web contract; the contract's service graph, e.g. "HTTP: Accept", inserts a FW and LB between consumer and provider.)

• A contract provides a mechanism to add network services by associating a service graph
• A service graph identifies the set of network service functions required by an application
• APIC configures the network service functions on devices such as firewalls and load balancers through device packages
• A device package can be uploaded to APIC at run time
• Adding support for a new network service through a device package does not require an APIC reboot

The Advantages of the Service Graph

• By using the service graph you can install a service, such as a firewall, once and deploy it multiple times in different logical topologies

• The benefits of the service graph are:

• A configuration template that can be reused multiple times
• Automatic management of VLAN assignments
• Collecting health scores from the device
• Collecting statistics from the device
• Updating ACLs and pools automatically with endpoint discovery

Layer 4-7 Services Integration

Do I Really Need a Service Graph? A Different Operational Model

Without a service graph:

• The network admin configures the ports and VLANs to connect the FW or the LB
• FW admin, day 0: configures ports and VLANs
• FW admin, day 1: configures ACLs and so on
• The three configurations are spread over multiple phases/days

With a service graph, the ACI admin:

• Configures the ports and VLANs to connect the FW or the LB
• Configures the FW day-0 settings (ports and VLANs)
• Configures the FW day-1 settings (ACLs and so on)
• All configurations are performed in a single step

Configurations with Service Graph

• All configurations are performed in a single operation on APIC:

• Fabric configuration: bridge domains, VLANs, routing, EPGs
• Firewall configuration: VLANs, interfaces
• ACLs

Network-only Stitching

With network-only stitching, ACI only configures the fabric, not the L4-L7 device:

• Create tenants, VRFs, BDs, EPGs
• Associate the vNIC or physical port
• Create contracts
• The device itself is not managed by ACI

Network stitching with an unmanaged L4-L7 device: uncheck “Managed” and fill in the info:

• Name: concrete device name
• Service Type: Firewall, ADC, IPS, etc.
• Device Type: physical or virtual
• Domain
• Mode

• Some customers require that APIC only completes network automation for service devices (for example, the customer has an existing orchestrator or tool for configuring L4-L7 service appliances, or a device package is not available for the L4-L7 device)

• The network-only stitching feature adds the flexibility to use only network automation for the service appliance. The configuration of the L4-L7 device is completed by the L4-L7 admin, so a device package is not required:

• Step 1: configure the ACI fabric for the L4-L7 service appliance
• Step 2: the L4-L7 admin configures the L4-L7 service appliance
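A hedged sketch of registering such an unmanaged device via the REST API; the vnsLDevVip attribute names and values are assumptions to verify against your APIC version:

```python
# Hedged sketch: register an unmanaged logical device so APIC performs
# network-only stitching. The vnsLDevVip attribute names/values are
# assumptions to verify against your APIC version.
payload = {
    "vnsLDevVip": {
        "attributes": {
            "name": "ASA-unmanaged",
            "managed": "no",        # network-only stitching: no device package
            "svcType": "FW",        # service type: firewall
            "devtype": "PHYSICAL",  # or "VIRTUAL"
        }
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Tenant1.json", json=payload, verify=False)
```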

Service Graph: APIC-to-L4-L7 Communication via Device Package

APIC talks to the L4-L7 device through the device's existing API, in the L4-L7 device's own language; there is no requirement for new protocols. APIC requires a device package, which contains:

• A configuration model (XML file)
• Python scripts, run by the APIC script engine, that talk to the device interface (REST/CLI)

• Service functions are added to APIC through the device package
• The device model defines the service function and its configuration
• The device scripts translate APIC API callouts into device-specific callouts
• The script can interface with the device using REST, SSH, or any mechanism
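The device-package script interface itself is vendor-facing, so the following is only an illustrative sketch of the idea: the function name, signature, and device endpoint are hypothetical, not the actual device-package API. It shows APIC handing the script parameters resolved from the service graph, which the script translates into device-native calls:

```python
# Illustrative sketch only: the function name, signature, and device URL
# below are hypothetical, not the actual device-package API.
import requests

def service_modify(device, config):
    """Translate an APIC callout into a device-native REST call.

    device: dict with the appliance's management address and credentials
    config: dict of L4-L7 parameters resolved from the service graph
    """
    url = f"https://{device['host']}/api/loadbalancer/vip"  # hypothetical endpoint
    body = {
        "address": config["vip"],
        "port": config["port"],
        "lb_algorithm": config.get("lb_algorithm", "round-robin"),
    }
    return requests.post(url, json=body,
                         auth=(device["user"], device["password"]),
                         verify=False)
```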

Device Package Example

Device information extracted out of the device package:

• Vendor info, software version info, and model info of the service device
• How many interface types the appliance has (e.g. inside, outside, and mgmt)
• The functions (or services) provided by the service device: SLB, SSL, responder

These functions can then be configured through APIC. The only configuration needed on the L4-L7 device itself is management access:

• Enable SSH
• Enable HTTP access
• Configure credentials

Terminology

The guiding principle of the service graph is to connect “functions”, not boxes. E.g. a load balancer can provide various functions:

• Load balancing
• SSL offloading, etc.

This may be academic, but this is the abstraction that ACI provides.

Key Concepts in Service Insertion

• Concrete Device: it represents a service device, e.g. one load balancer, or one firewall

• Logical Device: represents a cluster of 2 devices that operate in active/standby mode for instance.

• Service Graph: defines a sequence of “functions” connected: e.g. a firewall from Checkpoint followed by a load balancing from “F5”.

• Logical Device Context: specifies upon which criteria a specific device in the inventory should be used to render a service graph

• Device Package:

• defines things such as how to label “connectors” for a function, and how to translate “names” from ACI to the specific device.

• E.g. a load balancer “function” has predefined connectors called:• “external”

• “internal”

• “management”.

ACI Service Graph Definitions

ACI Rendering a Service Graph

(Slide diagram: the service graph “web-application” chains Function Firewall → Function SSL Offload → Function Load Balancer between the consumer and provider terminals. The functions are joined by connectors (VLANs) and carry “L4-L7 parameters”, e.g. firewall: permit ip tcp * dest-ip <vip> dest-port 80, deny ip udp *; load balancer: virtual-ip <vip> port 80, lb-algorithm: round-robin.)

L4-L7 Service Graph Template

(Slide diagram: the same chain of functions, joined by connectors (VLANs), rendered between EPG outside and EPG web through the contract webtoapp.)

• A “generic” representation of the expected traffic flow

• Defines:

• Connection points (connections and terminals)
• Nodes

The Service Graph Template

• The service graph template defines the sequence of nodes/functions
• Example: a load balancer, or a load balancer followed by a firewall
• A template must be “applied” for it to be “rendered”

Concrete and Logical Devices

• Concrete Device: represents a single service device, e.g. one load balancer or one firewall; can be physical or virtual
• Logical Device: represents a cluster, for instance two devices that operate in active/standby mode

(Slide diagram: a service graph function node, e.g. SLB, maps to a logical device, which is backed by two concrete devices.)

Device Selection Policies (or Logical Device Context)

• Selects the right device cluster and interfaces based on selectors:

• Service graph template name
• Contract name
• Node name

Deployed Graph Instances

(Slide diagram: the graph template (Function Firewall → Function Load Balancer), the logical devices, and the L4-L7 parameters together produce the rendered/deployed graph between EPG outside and EPG web under a contract.)

L4-L7 Parameters and Function Profiles

• The L4-L7 parameters (e.g. the external interface IP address) are passed through the API to the L4-L7 device in its own language
• Entering the L4-L7 parameters by hand is tedious and error prone; the Function Profile solves this problem
• Each Function Profile is a reusable collection of L4-L7 parameters

Deployment Steps and Data Plane Considerations

• Preparation:• Create the necessary Physical and Virtual Domains

• Configure the Basic Management access on the L4L7 Device

• Import Device Package

• Create the necessary Bridge Domains/ VRFs

• Create EPGs and Contracts

• Configure Logical and Concrete Device

• Create or import a function profile

• Create a Graph Template (and use a function profile)

OR

• Create a Graph Template and enter L4 L7 parameters by hand

• Deploy the Graph Template• Create the Device Selection Policy

• Associate to a contract

Service Insertion Deployment Steps
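The final association step can be done with one REST object. A minimal sketch, reusing the placeholder names from the earlier sketches (vzRsSubjGraphAtt attaches a graph template to a contract subject):

```python
# Minimal sketch: attach the graph template to the contract subject
# created earlier (names are placeholders from the previous sketches).
payload = {"vzRsSubjGraphAtt": {"attributes": {"tnVnsAbsGraphName": "web-application"}}}
session.post(
    f"{APIC}/api/mo/uni/tn-Tenant1/brc-web-to-app/subj-http.json",
    json=payload, verify=False,
)
```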

Basics of ACI Forwarding: How to Create an L2 Domain

• Create a bridge domain
• Keep unicast routing enabled
• Associate the bridge domain with a VRF
• The association with the VRF is required by the object model: you still need to create bridge domains and VRFs
• The hardware won't program any VRF if the bridge domain is configured only as L2

(Slide diagram: two bridge domains, one for the consumer side and one for the provider side, both associated with the same VRF, each created under a tenant with its BD and EPG.)

Three Main Deployment Modes

• Go-to: the L4-L7 device is the default gateway for the servers
• Go-through: the L4-L7 device is a transparent/L2 device; the next hop or the outside BD provides the default gateway
• One-arm: the BD of the servers is the default gateway

Except for one-arm mode, you need to start with two bridge domains.

Go-to Mode: Bridge Domain Settings

(Slide diagram: Bridge Domain Outside with the client EPG, e.g. EPG outside on 10.10.10.x, and Bridge Domain Inside with the server EPG, e.g. EPG web on 20.20.20.x, in one VRF; the consumer side and provider side are joined by the service graph on the contract.)

For consistency with the ACI policy model, both bridge domains are configured with:

• ARP flooding
• Unknown unicast flooding
• No IP routing

In go-to mode the L4-L7 device, not the bridge domain, is the default gateway for the servers. A configuration sketch follows.
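A minimal sketch of the two service bridge domains with those settings, using the standard fvBD attributes (names are placeholders carried over from the earlier sketches):

```python
# Minimal sketch: the two service bridge domains with the flood settings
# listed above. Names are placeholders carried over from earlier sketches.
for bd_name in ("BD-Outside", "BD-Inside"):
    payload = {
        "fvBD": {
            "attributes": {
                "name": bd_name,
                "arpFlood": "yes",          # ARP flooding
                "unkMacUcastAct": "flood",  # unknown unicast flooding
                "unicastRoute": "no",       # no IP routing
            },
            "children": [{"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF1"}}}],
        }
    }
    session.post(f"{APIC}/api/mo/uni/tn-Tenant1.json", json=payload, verify=False)
```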

ACI Behind the Scenes

(Slide diagram: around the contract defined by the user between EPG outside and EPG web, ACI automatically creates shadow EPGs for the service device interfaces, linked by internal contracts.)

VLAN Assignment: Physical Appliance

• VLANs are automatically created on the ACI interfaces
• VLANs are also automatically created on the L4-L7 device: one VLAN for each BD it is attached to

VLAN Assignment: Virtual Appliance

• In the case of virtual appliances:

• vNICs are automatically assigned to the shadow port groups
• VLANs are automatically created on the ACI interfaces
• VLANs are also automatically created on the L4-L7 device

• Because there is no trunking on vNICs, YOU CANNOT REUSE THE SAME GRAPH ON DIFFERENT BDs

Create the L4-L7 Device and the Service Graph Template

(Slide diagram: an ASA between EPG client, 192.168.1.0/24 with gateway 192.168.1.1, in BD1, and EPG web, 192.168.2.0/24 with gateway 192.168.2.1, in BD2, attached to the ACI fabric on leaf port E1/9.)

• Device type: physical
• Select the path; in this case the ASA uses one physical interface for both consumer and provider
• Select the VLAN encap for each interface (VLAN 110 for the consumer side, VLAN 111 for the provider side)
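A hedged sketch of registering the concrete device and its fabric path; the vns class names are given from memory and the path DN is a placeholder. The per-interface VLAN encaps from the slide (vlan-110/111) would be set on the logical interfaces (vnsLIf), omitted here:

```python
# Hedged sketch: a concrete device (vnsCDev) under the unmanaged logical
# device, with one interface bound to leaf port eth1/9. The vns class
# names are assumptions to verify; the path DN is a placeholder.
payload = {
    "vnsCDev": {
        "attributes": {"name": "ASA-1"},
        "children": [{
            "vnsCIf": {
                "attributes": {"name": "port-e1-9"},
                "children": [{
                    "vnsRsCIfPathAtt": {"attributes": {
                        "tDn": "topology/pod-1/paths-101/pathep-[eth1/9]"
                    }}
                }],
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Tenant1/lDevVip-ASA-unmanaged.json",
             json=payload, verify=False)
```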

Dynamic Endpoint Attach with Load Balancers

• APIC dynamically detects a new endpoint; the endpoint is then automatically added to the pool members behind the VIP

(Slide diagram: a load balancer with VIP 10.10.10.200 between EPG Consumer, 10.10.10.100/24 with gateway 10.10.10.1, and EPG Provider, 20.20.20.100/24 with gateway 20.20.20.1; new endpoints 20.20.20.2 and 20.20.20.3 are added to the Web-Pool automatically.)

• You can enable endpoint attachment notification in the graph
• F5: endpoints are automatically added to the pool

Multi-context

• When you select multi-context, the same appliance can be exported to multiple tenants
• This only works with PHYSICAL APPLIANCES
• A virtual appliance may also let you create multiple partitions, but it cannot be shared across tenants: there cannot be a trunk with multiple VLANs on the same vNIC, so the vNICs cannot be shared when the appliance serves multiple tenants

Multi-context Support in ASA and in F5

• A single physical ASA can be partitioned into multiple virtual firewalls, known as security/virtual contexts. Each context acts as an independent device, with its own security policy, interfaces, and management IP. ACI doesn't create the ASA contexts; they must be predefined.
• With F5, partitions are automatically created, and ACI tenants are automatically mapped to an F5 partition.

Data Plane Separation

ASA: ACI configures sub-interfaces automatically. APIC creates sub-interfaces based on VLANs dynamically allocated from a pool (e.g. VLANs 1006, 1040, 1073, 1074), and in the system context it assigns the port-channel sub-interfaces to the appropriate user contexts (Contexts A, B, and C).

F5: ACI configures the interfaces as trunks, with the allocated VLANs split across partitions (e.g. VLANs 1006 and 1040 for Partition 1, VLANs 1073 and 1074 for Partition 2).

Sharing Service Devices: ACI Shared Services

• ACI lets you configure objects in tenant common that can be used by other tenants, e.g. filters, BDs, VRFs, and also logical and concrete devices
• Tenants can, for instance, attach EPGs to these objects

ACI Shared Services – Tenant Level

• You can define logical and concrete devices in tenant common and use them from other tenants (e.g. tenant Sales and tenant Sales2)

Sharing Devices with Multi-Context L4-L7 Devices

• With multi-context devices, you can share a device defined in tenant common and use it from more than one tenant (e.g. Partition 1 serving tenant Sales, Partition 2 serving tenant Sales2)

How To Undo a Service Graph

How to undo a configuration?

• If you delete the template, the graph is removed, but there may be stale objects; you then need to remove some of the objects created by the service graph by hand

OR

• There is a wizard to delete all objects created by the Apply wizard: right-click on a graph (one created with the template) and select "Remove Related Objects Of Graph Template"

Conclusion

• ACI is a highly flexible, programmable, and integrated data centre network fabric

• ACI eases the connectivity of physical and virtual devices via policy

• ACI allows the automation of tedious tasks such as L4 to L7 Integration

• ACI has advanced troubleshooting capability for the network fabric and connected services


Q & A

Complete Your Online Session Evaluation

Give us your feedback and receive a Cisco 2016 T-Shirt by completing the Overall Event Survey and 5 Session Evaluations:

– Directly from your mobile device on the Cisco Live Mobile App
– By visiting the Cisco Live Mobile Site http://showcase.genie-connect.com/ciscolivemelbourne2016/
– Visit any Cisco Live Internet Station located throughout the venue

T-Shirts can be collected Friday 11 March at Registration.

Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com

Thank you
