
Upload: trinhcong

Post on 16-Feb-2018


Video 1 - Course Introduction - 640-916 - DCICT

90 mins 65-75 questions

Exam Topics - cisco.com

http://www.cisco.com/web/learning/exams/docs/640-916_dcict.pdf

- Architecture
- Nexus/MDS Family
- Fabric Interconnect/Storage
- Key Monitoring for Nexus
- Virtual Port Channels
- OTV - Overlay Transport Virtualisation/Layer 2 Loop Prevention
- Virtualisation
- Nexus 1000v
- SAN
- Fibre Channel
- FCoE
- Data Center Bridging
- FEX (Nexus 2000)
- UCS C Series/B Series + UCS Manager

Video 2 - The Cisco Data Center Architecture

LAN/SAN Segregation / Layered Arch / Data Center 3.0

LAN/SAN Segregation - why is this the case?
- Security
- Bandwidth - LAN normally inferior to SAN
- Flow Control - more to follow...
- Performance - SAN would achieve more than LAN

SAN cannot tolerate Ethernet style Flow Control

With Fibre Channel, the flow control method is slightly different... it's like a credit system: the transmitter will only send when it holds a 'credit' granted by the receiver.

10Gbps Ethernet + the FC of the SAN = FCoE. The FC traffic is inserted into an Ethernet frame.

The Layered/Modular Approach

- Scalability
- Resiliency

- Access Layer - C Series/B Series UCS
- Aggregation Layer

CCNA Data Center DCICT20 October 2014 09:22

CCNA Data Center DCICT - 640-916 Page 1

- Core Layer - 10Gbps fast bandwidth environment/high speed DC technologies

Traditional SAN - switches connect directly into storage

- 'Collapsed core' - core/aggregation + access
- Layer 3 throughout - routed to the access layer
- Layer 2 and Layer 3 combination

Cisco Data Center 3.0
- Virtualisation - Hypervisor/Nexus 1000V/VN-Link - multiple VMs on the same pipe; listen to traffic on individual VMs

Unified Fabric - FCoE/DCB/iSCSI - LAN/SAN Agg

Unified Computing - UCS - Servers into LAN/SAN infrastructure

Video 3 - Meet The Nexus Family

7K / 5K / 2K

The Nexus 7000 Series
- 7009 - 7 slots for I/O and 2 Sup slots
- 7010 - 8 + 2
- 7018 - 16 + 2
- Layer 3 and Layer 2 capabilities
- DCB/FCoE
- ISSU - In Service Software Upgrades - upgrade NX-OS in a stateful manner
- VDC - multiple virtual Nexus devices out of 1 machine
- Modular!
- Resiliency
- Control and data plane separation
- RBAC
- EEM - Embedded Event Manager
- Call Home - contacts TAC on your behalf!
- 2 Supervisors, 2 CMPs, 2 arbiters, 2 fan modules

7009 - 3D view from Cisco.com
- Lockable door
- System LEDs
- Integrated cable management
- Cross ventilation
- Common equipment accessible from the rear
- Hot swappable fan trays - handles and locks
- Dual AC inputs

Nexus 7710

Lots of HA and resiliency!

Nexus 7718

- 70Tbps throughput!
- Layer 3/Layer 2
- FCoE
- DCB
- Modular and hot swappable
- Remarkable device!

Nexus 7000 Series Supervisor Module
- Sup 1 - original Supervisor - EOL announced Aug 2013
- CMP - Connectivity Management Processor - kind of like an iLO/DRAC
- CMP Ethernet port

Cisco Nexus 7700 Supervisor 2E = replacement for Sup 1
- 32GB of RAM! Dual quad-core CPU
- Very small piece of kit

The Nexus 7000 Series comes with the BASE license. Additional license packs:

- Enterprise LAN - dynamic routing/multicast routing
- Adv Enterprise LAN - Virtual Device Contexts/Cisco TrustSec
- MPLS License - MPLS
- Transport Services - OTV/LISP
- Enhanced Layer 2 Services - Cisco FabricPath

A digitally signed text file that you install onto the Nexus device with the 'install license' command.

120 day period for feature testing! Don't run any production technologies on this license :)

F2E - 48 Ports 10GE I/O Module

Fabric Module - the ingredient between the Sup and the I/O modules. Up to 5 fabric modules active together, 230-550Gbps.

Rear of the Nexus

Supports VOQ (Virtual Output Queuing) - more to follow... ensures we don't overwhelm any port on the Nexus system.

Power Redundancy - example: 7010
- Combined - no redundancy
- Power supply redundancy (N+1)
- Input source redundancy (grid resiliency)
- Power supply plus input source (complete redundancy)
- Cisco Power Calculator - tools.cisco.com/cpc

The Nexus 5000 Series
- 5010 - 520Gbps
- 5020
- 5548 - Layer 3 capable - add Layer 3 switching with the N55-D160L3 daughter card (hot swappable)
- 5596 - Layer 3 capable - 1.92Tbps - 96-port 1Gb port density - hot swappable N55-M160L3 Layer 3 daughter card

- DCB + FCoE
- GEMs - Generic Expansion Module slots - add a number of additional Ethernet or FCoE ports

Flexible!

The Nexus 2000 Series
- Top of Rack switch on top of a UCS C Series stack
- 2000s are managed via the 5000 series parent switch at the end of the rows
- 2000 = FEX device/child/ToR

Redundancy? 2 x 2000 ToR devices per rack; EoR - 2 x 5000 devices, with dual uplinks to each piece of kit. Servers can have redundant connections to the 2Ks in each rack.

2148 - 4 x 10Gbps fabric ports to parent/1 port channel - 4 ports per bundle/48 ports @ 1Gb/no FCoE/no host port channels

2224 - 2 x 10Gbps Fabric Ports/1 Port channel/24 ports/24 port channels - 8 ports per bundle

2248 - 4 x 10Gbps Fabric Ports to parent/1 Port channel/48 1Gb ports/24 different host port channels

2232 - FCoE/DCB support! 8 x 10Gbps Fabric Ports/1 port channel to parent/32 host ports/16 host port channels

In the exam environment, Cisco would expect you to know this information! Frustrating! Create some flashcards to assist with this for the exam!

Video 4 - Meet the MDS Family

Multilayer Director Switches - SAN switch

9500 / 9124 / 9148 / 9222

The 9500 Series

- 9506 - 192 FC ports
- 9509 - 336 FC ports
- 9513 - 528 FC ports

- Non-blocking - VOQ
- High bandwidth - 2.2Tbps / 160Gbps (16 ISLs)
- Low latency - less than 20 microseconds per hop
- Multiprotocol - FC/FCoE/FCIP/iSCSI
- Scalable - VSANs/VLANs
- Secure - port security
- HA - loads of HA! Dual sups/dual clocks/fans/PSUs etc...

9500 Sup Modules

- Sup 2 and Sup 2-A (FCoE support)
- Crossbar fabric - the traffic cop!

- Feature based (Enterprise Security / SAN over IP - FCIP / Mainframe licenses)
- Module based
- For the complete list, visit cisco.com

Licensing

Cisco MDS 9124
- 8 default licensed ports
- 16 on-demand ports

NPV - scale the SAN without consuming domain IDs. More to follow on SAN!

Cisco MDS 9148
- 48 line-rate 8Gbps FC ports
- 16, 32 or 48 ports enabled depending on license; 8-port incremental licensing
- NPV support - assists with scaling the SAN

Cisco MDS 9222i
- Expansion slot for a wide variety of modules
- 18 FC ports at 4Gbps
- 4 Gigabit Ethernet ports - FCIP/iSCSI
- Flexible!

Video 5 - Monitoring The Nexus

Mgmt / Setup Script / ISSU / CoPP / Important Commands

The Nexus Switch Console Port
- RJ-45 serial on the Supervisor Engine

Setup script on a Nexus:
- Enforce secure password standard?
- Create login ID
- Secure password complexity requirements
- Role - operator or admin?
- SNMP string
- OOB mgmt
The setup script is long and detailed!

The CMP (Conn Mgmt Proc)
- 7000 - Sup Engine 1

- Dedicated OS
- OOB
- LEDs
- Local auth
- CLI - how to connect to the CMP? The 'attach' command

Remote Access
- SSH enabled by default
- Client/server
- v4/v6

'ssh server enable'
'telnet server enable'

The Management VRF

A set of routing structures that can be applied to interfaces to virtualise them - essentially, the device can hold multiple independent routing tables.

Default VRF and Management VRF
'ping 4.2.2.2 vrf management' - example
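A toy model of the idea above: each VRF is its own routing table, and a ping in the management VRF consults only that table. The table names and routes are illustrative, not from a real device.

```python
import ipaddress

# Each VRF is an independent routing table (illustrative routes only).
vrfs = {
    "default":    {"10.0.0.0/8": "Ethernet1/1"},
    "management": {"0.0.0.0/0": "mgmt0"},
}

def lookup(vrf, dest):
    """Longest-prefix match within a single VRF's table."""
    addr = ipaddress.ip_address(dest)
    best = None
    for prefix, iface in vrfs[vrf].items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, iface)
    return best[1] if best else None

# 'ping 4.2.2.2 vrf management' consults only the management table:
print(lookup("management", "4.2.2.2"))  # mgmt0
print(lookup("default", "4.2.2.2"))     # None - no route in the default VRF
```

The same destination resolves differently (or not at all) per VRF, which is exactly why the 'vrf management' keyword matters on the ping.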

- Upgrade with no disruption
- Available since 4.2.1
- Covers kickstart, BIOS, system, FEX and I/O modules

ISSU - In Service Software Upgrade

Previously the Supervisors would fail over Active to Standby... The 5000 series has a single Supervisor: during an ISSU the control plane goes offline momentarily while the data plane continues to forward traffic.

5000 - download software from cisco.com:
1. TFTP copy the kickstart and system images to bootflash on the 5K
2. 'show incompatibility'
3. 'show install all impact'
4. 'install all'
5. 'show install all status'

ISSU

Control Plane Policing

- Data plane - packets flowing through the DC
- Mgmt plane - SNMP/CLI
- Control plane - all the control functions - L2: STP/LACP, L3: OSPF/BGP, for example

CoPP - allows us to set limits on traffic ingress to and egress from the control plane.

Default CoPP profiles - Strict, Moderate, Lenient or None!

Where is CoPP applied?
- 'show run ipqos' (all)
- 'show run interface all'
- 'show module'
- 'show logging ?'

Key CLI Commands

Video 6 - Exploring vPCs and their verification

Why? / How? / Verify?

Virtual Port Channels

Etherchannels normally run from one switch to a single destination switch.

vPCs can be built to different upstream devices

5k to 2 x 7k switches

How do they work?


7ks are vPC peers connected by peer link

Primary and secondary vPC peers
Synchronise MAC address tables/IGMP entries

Orphan port - a port connected to a vPC switch but not participating in the vPC infrastructure

CFS = Cisco Fabric Services - umbrella term

Peer Link can carry unicast traffic under failover scenarios

Peer keepalives are sent on a separate dedicated logical link via OOB network

The 5K is set up like a normal Etherchannel
The magic occurs on the 7K...

Member ports make up the bundle on the upstream switches; everything else is an orphan port

Overall picture is a vPC domain with a domain ID (Numeric Identifier)

Peer link has to be 10Gbps (2 recommended)

Portchannels are per VDC

vPC - Layer 2

From 5k to 2k vPC:


Dual sided Portchannel setup

Port channels from top to bottom in the DC :)

CLI

- 'show vpc brief' (cf. 'show etherchannel summary')
- 'show vpc peer-keepalive'
- 'show vpc role' (lower priority value = better)
- 'show vpc consistency-parameters'

Video 7 - Exploring FabricPath (TRILL)

Why FabricPath?

TRILL = replacement for STP. TRansparent Interconnection of Lots of Links!

Layer 3 routing intelligence brought to Layer 2

Layer 2 routing table - shortest path from source to destination

Switch IDs in routing table

- Intelligence? IS-IS borrowed! Layer 2 IS-IS
- Very flexible...
- IPv6 compatible
- Not reliant on IP at Layer 2
- Lending routing capabilities to the L2 world
- A fabric of L2 switched devices
- Best of both worlds: simplicity, high bandwidth, routing intelligence
- Elimination of STP!!
- Equal cost multipathing (ECMP) - 16-way equal cost multipath
- IS-IS is working its way back into the Cisco curriculum
- The IS-IS configuration is hidden; it's more of a concept...
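The 16-way ECMP idea can be sketched in a few lines: a hash of the flow picks one of up to 16 equal-cost next hops, so every packet of a flow stays on one path. This is an illustration of the concept, not the actual FabricPath hash.

```python
import hashlib

def pick_path(flow, paths):
    """Hash a flow tuple to deterministically pick one equal-cost path."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return paths[digest[0] % len(paths)]

# Up to 16 equal-cost paths, identified here by made-up switch IDs.
paths = [f"switch-id-{n}" for n in range(1, 17)]
flow = ("10.1.1.10", "10.2.2.20", 49152, 445)  # src, dst, sport, dport

# The same flow always hashes to the same path (no reordering):
assert pick_path(flow, paths) == pick_path(flow, paths)
```

Different flows spread across the 16 paths, while any one flow is pinned to one, which is why ECMP doesn't reorder packets within a flow.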

How do we implement?

STP interacts with FabricPath

- Classical Ethernet ports - edge ports
- Within the FabricPath cloud - FabricPath ports

- Cisco enhanced TRILL with FabricPath
- Conversational learning of MAC addresses - learn the MAC addresses that are actively being used instead of the traditional approach
- Simplifies!! Allows scalability to increase dramatically!
- Enabled by default

Verification

Show mac address-table vlan 10 dynamic

Typically the interface is reported on the right... here a switch ID will be shown instead of an interface, so the FabricPath device will stick out!

'show fabricpath route'Great command to show the routing table at Layer 2!!

Video 8 - You down with OTV?

Why? / How? / Verify?

Overlay Transport Virtualisation

- Spread DC infrastructure across the globe!
- Localise applications to users
- Mobility
- Pooling
- Disaster recovery

Examples of similar tools: EoMPLS, VPLS, dark fiber... OTV is the way forward!!

Transport Layer 2 information over an IP infrastructure - encapsulation


Edge devices are the workhorses of OTV

2 Edge Devices = multihomed

Layer 2 WAN domain

- 'Internal' interfaces do not join OTV
- 'Join' interfaces are on the outside of the edge devices
- Virtual overlay interfaces exist where the OTV configuration exists

MAC addresses are learned across the sites between each OTV edge..

MAC to IP binding

Layer 2 frame to Layer 3 packet for transmission over the OTV link

IS-IS routing intelligence makes this possible

OTV is for connecting separate geographical sites; FabricPath is used internally to eliminate STP.

OTV does not engage in MAC flooding. The edge devices educate each other using the control plane logic

Each edge device caches ARP for each other

We can add a second edge device to the OTV site, making it multihomed. The edge devices are hosted in the OTV Site VLAN. An edge device is elected as an AED - Authoritative Edge Device - one for even VLANs and one for odd VLANs. The AED advertises MAC addresses and forwards traffic for its own set of VLANs.

Verification

'show otv adjacency detail' - shows OTV peers and how long they have been established

'Show otv overlay 1'

Video 9 - Virtualizing The Network Device


- Nexus 7K/5K
- VDCs
- NIV - Network Interface Virtualisation
- Admin/operator - separated
- Physical port (re)allocations
- Example - dual core migration

VDCs

VDC name: 'test'
We can assign ports to the VDC 'test'

- 4 VDCs
- VDC1 - default on a 7K (like VLAN 1) - handles NTP/licensing
- VLANs/VRFs per VDC
- VDC1 used for management

VDCs - 7K

Fault Domains
- VDCA and VDCB
- A unique BGP process runs in each VDC - if it crashes it will not affect the other VDC

Interface Allocation
- I/O modules - ports can be assigned to a VDC
- Virtual interfaces
- Some ports have to be grouped to a VDC - only on certain I/O modules

RBAC
- network-admin - VDC1
- network-operator - VDC1
- vdc-admin (inherit the role as you enter...)
- vdc-operator

Advanced Services licensing required; 120-day grace period to experiment with VDCs.

Commands
- 'switchto vdc TEST'
- 'switchback'
- 'show vdc'
- 'show vdc detail'
- 'show vdc membership'
- 'show run vdc-all'
- 'copy run start vdc-all'

NIV - Network Interface Virtualisation

Fabric Extension Technology

Nexus 5K - EoR
Nexus 2K - ToR

Logical port on the 5K
Host port on the 2K

VN-Tag - IEEE 802.1Qbh

Tag placed on at the 5K and stripped off at the 2K - and vice versa on the return path!

Fabric port / FP channel / FEX uplink

Video 10 - Virtualizing Storage

Storage Virtualisation - Physical and Logical Disks

Logical Disk = LUN - Logical Unit Number

LUN Masking - Ensures the logical disk is only available to certain servers using the PWWN (Port World Wide Name)

LUN Mapping - FC environments - Selectively maps LUNS to host bus adapters into servers

LUN Zoning - Cisco proprietary on the MDS - selectively maps a logical disk to a host port; not dependent on the storage vendor.
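The masking idea above is simple to sketch: the array keeps a per-LUN list of allowed initiator pWWNs and hides the LUN from everyone else. The pWWN values and LUN names below are made up for illustration.

```python
# Illustrative LUN masking: a logical disk (LUN) is only exposed to
# initiators whose pWWN is on its mask list. All identifiers are made up.
masks = {
    "lun-7": {"20:00:00:25:b5:00:00:0a"},  # only this server's HBA may see lun-7
    "lun-9": {"20:00:00:25:b5:00:00:0a",
              "20:00:00:25:b5:00:00:0c"},  # shared between two hosts
}

def visible_luns(initiator_pwwn):
    """Return the LUNs this initiator is allowed to see."""
    return sorted(lun for lun, allowed in masks.items()
                  if initiator_pwwn in allowed)

print(visible_luns("20:00:00:25:b5:00:00:0a"))  # ['lun-7', 'lun-9']
print(visible_luns("20:00:00:25:b5:00:00:0b"))  # [] - masked out
```

Mapping and zoning apply the same allow-list idea at different points in the path (HBA side and switch side respectively).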

What: block / disk / tape / filesystem / file-record
Where: host / network / array
How: in-band / OOB

Block = Provide a logical volume to a user that is actually stored physically

Disk = Presenting to the user a disk out of a large array of disks

Tape - Logical tape drive out of a tape library

Filesystem - allowing a user to access a file in a logical presentation when the file may be in a remote physical location

File and record - Logical volume to a particular user independent of the physical location

OOB is the preferred approach for the control traffic

Network based storage
- Independent of all components

Video 11 - Virtualizing Servers

Benefits
- Partitioning
- Encapsulation
- Isolation
- Hardware abstraction (migrate VMs to another host)
- CapEx/OpEx savings

Techniques
- Full (host OS / bare metal)
- Partial
- Paravirtualisation (no simulation of hardware)

Host OS based - Type 2
- Bare metal server
- OS installed on top
- VM software on top; apps and guest OSs side by side

OS = attack surface

VM Workstation/VM Fusion = Type 2 - Host OS Based

Bare metal - Type 1
Type 1 -> e.g. running the VMware ESXi hypervisor

ESX - legacy - Linux-based kernel
ESXi = embedded/integrated

The hypervisor sits on top of the bare metal server
Apps/OSs run on top of the hypervisor

vSphere 5.x - an umbrella suite of software:
- ESXi
- vCenter Server - managing hosts
- vSphere Client
- vSphere View - manage desktop OSs for clients

Microsoft Hyper-V Server 2012
- Type 1 - bare metal
- Microsoft OS exclusive

Citrix XenServer - server, desktop and cloud virtualisation

Video 12 - Introducing the Nexus 1000V

Introducing the 1000v

The History / The 1000v / How It Works

Port Groups are ways to group virtual switch ports for purposes of configuration. (VLAN assignment)


vNetwork Standard Switch

- Host separation
- Layer 2 device only - MAC addresses/unicast flooding
- VM traffic is switched at Layer 2
- 802.1Q supported
- NIC teaming/port channels
- CDP supported

- The ESX host contains a Service Console port for CLI access
- A VMkernel port for services like vMotion and NFS comms

2nd generation of VMware switch:
vNetwork Distributed Switch - vDS

Allows a single switch across multiple ESXi hosts

- Introduction of Receive Rate Limiting - RRL
- PVLANs - Private VLANs

3rd Party vDS to be built between Cisco and VMWare based on NX-OS. Hence the Nexus 1000v!

LACP / Port Security / QoS / SPAN / NetFlow etc...

vMotion also considered!

The Cisco Nexus 1000v doesn't lose any VMWare features it only ADDS to them

VSM - Virtual Supervisor Module
VEM - Virtual Ethernet Module

The VSM can be installed to a rack mounted Nexus 1010

Port Groups - Port Profile

port-profile VMPP
  switchport mode access
  switchport access vlan 100
  vmware port-group VMPP
  no shut

Video 13 - Verifying the Nexus 1000V

Review / Installation / Verification

VSM can be installed into Nexus 1010 or a dedicated VM

How to install the Cisco Nexus 1000v... 6 steps

OVF/OVA file

If you use the OVF, the first 2 steps are provided; the OVA guides the first 4 steps!

Verify the Nexus with normal Nexus commands!

Verify connectivity between the Nexus and vCenter: 'show svs connections'

- Config status: Enabled
- Oper status: Connected

'connect' / 'no connect' - the command sequence to connect/disconnect the Nexus from vCenter

'show svs domain'
- Domain ID: the VSM-to-VEM 'glue'
- Status: 'Config push to VC successful'

'show module'
- Slots 1 and 2 = VSMs (active and standby)
- Slot 3 = Virtual Ethernet Module

Control VLAN - Heartbeat Messages between VSM and VEM

Packet VLAN - CDP/IGMP messaging/LACP

Video 14 - Storage Options

File based vs block based / NFS, FC, SCSI, iSCSI / DAS to SAN

File Based Protocols
- Common Internet File System (CIFS)
- Network File System (NFS)
- High latency!
- TCP/IP
- Chatty...
- Microsoft Office/SharePoint
- CUPS in *nix setups

Block Based Protocols
- SCSI - Small Computer System Interface
- IOPS
- SCSI can be transported via parallel cable
- Messages between computer and storage
- Low latency
- Limited by distance

Hence iSCSI!!
- Transport of SCSI over TCP/IP
- Higher latency and lower bandwidth than native FC

FC cable
- Low latency
- High bandwidth - FC at 8Gbps

FCoE - 10Gbps
- FC encapsulated into the Ethernet protocol

NFS
- Client to server
- mountd service on the server (the client uses the mount command)
- The client can access the server's vol0
- portmap/rpcbind - service on the server
- RPC is used to communicate
- The automounter will mount a filesystem on demand as the client requires access, and unmount it after a certain period of time
- 'ls' - list contents of a directory - the request is sent to nfsd

3 major versions of NFS:
- NFSv2 - RFC 1094 - 32-bit file sizes, stateless
- NFSv3 - RFC 1813 - 64-bit file sizes, stateless
- NFSv4 - RFC 3530 - 64-bit file sizes, stateful, security updates

SCSI
- 16 devices max can be interconnected
- Total bus length is 25 meters
- Shared channel bandwidth of 320MBps

Fibre Channel
- Carries the SCSI payload in the SAN environment
- Overcomes SCSI limitations - mainly distance!
- 16 million nodes can be addressed
- Loop topology (FC-AL)
- Fabric transport/switched
- 800MBps (8Gbps) in a switched structure
- 6.2 miles with FC!!
- Multi-protocol support

- A small piece of data is a 'word' - 32 bits
- Words are packaged into frames
- A frame is equivalent to an IP packet
- A sequence is a group of frames (unidirectional)
- A collection of sequences is an exchange (like a TCP session)

iSCSI
- Transport of SCSI over IP
- TCP port 3260; TCP provides congestion control and in-order delivery of error-free data in an iSCSI environment

MDS 9222i / MDS 9000 - transparent SCSI routing thanks to iSCSI

DAS to SAN


Video 15 - Fibre Channel and More Fibre Channel

Topologies / Port Types / Addressing / Behind the scenes

An 'initiator' and a 'target' - SCSI commands are sent between them

P2P Topology
- The initiator has a direct connection to the target
- Server to client
- Zero scalability
- Similar to DAS

FC-AL
- 127 devices can be connected in the loop in theory...
- Only ~12 though, due to performance concerns
- 1 data path
- Latency concerns
- Shared bandwidth
- Token Ring!! :/

FC Switch
- 'Fabric' - the concept of a SAN with FC
- Scalable by attaching other FC switches
- The MDS line of storage area equipment

Various port names..

Standard Ports
- N-Port - Node Port - host to switch/disk
- Connects to an F-Port - Fabric Port
- 1-to-1 connections
- FC switch to FC switch - E-Ports - Expansion Ports - interswitch links (ISL)
- FL-Port - FC switch to hub/FC-AL
- Host/storage to FC-AL = NL-Ports - Node Loop Ports


MDS to MDS switch = TE Ports - Trunking Expansion
- Cisco invention - expands on E-Ports
- VSAN enabled
- QoS
- EISL frame format carries the VSAN details

Host device
- NP Port - Node Proxy Port - allows other end ports to connect THROUGH it
- TF Port = Trunking Fabric Port
- TN Port = Trunking Node Port

Protocol analyzer = SPAN; SD Port - SPAN Destination Port

FX Port = variable... operates as an F-Port or an FL-Port

Device from another SAN that is offsite...
B Port - Bridge Port - an FC standard supported by Cisco

Auto Mode = F Port, FL Port, E Port, TF Port etc… similar to DTP!

Flashcards would help here!!

WWN - World Wide Names
- Similar concept to MAC addresses
- 64 bits, or more commonly 128 bits, in length
- Hex format

Node WWN - unique address for a node in an FC setup
Port WWN - unique name for a port

If a device has only 1 port the n and p WWNs are the same

FCID = dynamically acquired addresses that are routable in the switch fabric

FC-AL 127 devices limitation due to addressing scheme

FCIDs in switch fabric infrastructure

24 bits - 3 x 8-bit fields:
- Domain ID (8 bits) - identifies the switch
- Area ID (8 bits) - identifies a group of ports
- Port (8 bits) - identifies the individual port
*239 domain IDs available to the fabric
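The three 8-bit fields fall straight out of simple bit shifts; a quick sketch (the example FCID value is made up):

```python
# A 24-bit FCID splits into three 8-bit fields: Domain.Area.Port.
def split_fcid(fcid):
    domain = (fcid >> 16) & 0xFF   # which switch
    area = (fcid >> 8) & 0xFF      # group of ports on that switch
    port = fcid & 0xFF             # individual port
    return domain, area, port

domain, area, port = split_fcid(0x650102)  # illustrative FCID
print(domain, area, port)  # 101 1 2 (i.e. domain 0x65, area 0x01, port 0x02)
```

This is why the fabric routes on the domain byte first: every switch owns one domain ID, so a frame only needs the top 8 bits to find the right switch.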

- FC-4 - upper layer protocol mapping
- FC-3 - generic services for fabric mgmt - BIG AREA: name service/login service/address manager/alias server/fabric controller/mgmt server/key distribution server/time server
- FC-2 - framing and flow control
- FC-1 - line coding of signals
- FC-0 - physical

FCNS and FLOGI

Fibre Channel Name Server - a distributed database that the MDS switches implement

Stores:
- FCIDs
- pWWNs/nWWNs
- Operating parameters - soft zoning

RSCN - Register State Change Notification

FLOGI - Fabric Login Process

Highest 16 addresses in 24 bit fabric address space are reserved

Broadcast alias is all Fs as per LAN concept

Before exchange of data can take place…

- N-Ports must be logged in to an F-Port using FLOGI
- The N-Port must then log in to the target N-Port - PLOGI
- Then info about upper layer protocols is exchanged - PRLI

Commands:
- Discover address (ADISC)
- Discover fabric (FDISC)
- Discover port (PDISC)

F.C - Flow Control

Critical concept!


Fibre Channel Flow Control

Credit-based system - TX to RX

The transmitter won't send a frame until the receiver tells it that it can accept one. The goal is to never lose a frame!
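The credit exchange above can be sketched in a few lines of Python - an illustration of buffer-to-buffer credits, not NX-OS behaviour:

```python
# Toy buffer-to-buffer credit exchange: the transmitter spends one credit
# per frame and pauses when credits run out; the receiver returns a credit
# (an R_RDY) each time it frees a buffer. No frame is sent without a credit,
# so the receiver is never overrun and no frame is dropped.
class Transmitter:
    def __init__(self, credits):
        self.credits = credits
        self.sent = 0

    def send_frame(self):
        if self.credits == 0:
            return False          # must wait for an R_RDY
        self.credits -= 1
        self.sent += 1
        return True

    def receive_r_rdy(self):
        self.credits += 1         # receiver freed a buffer

tx = Transmitter(credits=2)
assert tx.send_frame() and tx.send_frame()
assert not tx.send_frame()        # out of credits - transmitter pauses
tx.receive_r_rdy()
assert tx.send_frame()            # credit returned, sending resumes
```

Contrast this with Ethernet, which historically just dropped frames under congestion - exactly why a SAN can't tolerate Ethernet-style flow control.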

FSPF - Fabric Shortest Path First - deals with loops! Cost based.

Security in the SAN

- Zoning
- VSANs as an alternative to zoning
- How do we protect our LUNs? LUN masking and LUN mapping

Video 16 - Verifying Fibre Channel

Fibre Channel + MDS

Boot sequence:
1. BIOS - POST
2. Loader
3. Kickstart - Linux kernel and drivers
4. System boot - loads the system image

'install all' will launch a script on the MDS to simplify the boot sequence and will check for errors

Licensing model for the MDS:
1. Feature based - standard licence, plus e.g. an FCIP license for SAN extension over IP (120-day grace period)
2. Module based (the 9000 series doesn't use this)

Purchase a license file from Cisco and install it on the MDS via the 'install license' command

Commands
- 'show vsan'
- 'show vsan 30'
- 'show vsan membership'
- 'show flogi database'
- 'show fcns database vsan 30'
- 'device-alias database'
- the device-alias feature aids you in verification and troubleshooting
- 'show device-alias database'
- 'show zoneset vsan 30'
- 'show tech-support'

Video 17 - FCoE and DCB

- FCoE
- PFC - Priority Flow Control
- ETS - Enhanced Transmission Selection
- DCBX - Data Center Bridging Exchange

Benefits of FCoE
- Reduction in the number of server adapters
- LAN - NIC; SAN - HBA; LAN/SAN - CNA (Converged Network Adapter)
- One cable for 10Gbps Ethernet
- Unifying traffic takes advantage of Layer 2 domains (multipath - vPC+FabricPath)
- TCO - 1 set of operators
- Centralisation of mgmt

- The entire FC frame needs to be carried inside the Ethernet frame... hence jumbo frames
- pWWNs are mapped to MAC addresses in the Ethernet world

- Additional protocol - FIP - FCoE Initialisation Protocol
- Lossless delivery
- 10Gbps

- EtherType value = 0x8906
- 4-bit version field in the FCoE header
- Everything else is a normal Ethernet frame!

FC frame = 36 bytes of headers + 2112 bytes of data = 2148 bytes max size

Default MTU = 2240 bytes
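A quick sanity check on those numbers - a maximum-size FC frame fits inside the 2240-byte FCoE MTU, which is why ordinary 1500-byte Ethernet won't do:

```python
# Size arithmetic from the notes above: max FC frame vs the FCoE MTU.
FC_HEADERS = 36        # bytes of FC headers
FC_MAX_PAYLOAD = 2112  # bytes of data
FC_MAX_FRAME = FC_HEADERS + FC_MAX_PAYLOAD
FCOE_MTU = 2240
STANDARD_MTU = 1500

print(FC_MAX_FRAME)                    # 2148
print(FC_MAX_FRAME <= FCOE_MTU)        # True  - fits, with room for FCoE overhead
print(FC_MAX_FRAME <= STANDARD_MTU)    # False - hence the need for jumbo frames
```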

FCoE = Logical End Point - FC0 and FC1

- Initiator with a CNA = 'ENode'
- VN_Port = the N-Port of the FCoE link
- The port on the MDS side is a VF_Port

FCoE switch = FCF = Fibre Channel Forwarder

VE_Ports are used to connect FCF devices

Protocol = FIP for associations between FCFs

DCB Standards

Video 18 - Connecting FCoE

SFP - Small Form-Factor Pluggable / VIC - Virtual Interface Card / Nexus switches (5500UP)

- The UCS chassis connects to the Fabric Interconnect
- The Fabric Interconnect connects 'northbound' to the 2K and 5K

SFP

10Gbps SFP


FET-10G from the Nexus 2K to the parent 5K - must be paired with an identical SFP

UCS P81E VIC

Hardware or Software based approach

5500UP


Overview of LAN and SAN with C Series UCS

'Unified I/O' in Cisco biased environment

Video 19 - The Nexus 2232 Fabric Extender

10Gbps Fabric Extender

Benefits / Connectivity to the 5500 / Adapter FEX

Benefits
- Scalability
- 32 physical servers on a single FEX
- Each physical server hosting multiple VMs
- 8-24 FEXs attached to a 5000 series Nexus
- Less cabling! Short inter-rack runs of copper from FEX to servers
- For 5000 to 2000, longer runs of fiber are used

FEX Models

The 7K can only do option 2; the 5K can do option 3

Adapter FEX

Parent 5K / child 2K / C Series


VNTag added between 7k and 5k

Can be extended all the way down to the C Series with a VNIC

IEEE 802.1Qbh

Within the frame:
- Dest MAC
- Src MAC
- VNTag - direction bit = 0 for host-to-network, 1 for network-to-host; pointer bit for multicast EGRESS replication; looped bit set if sending back to a source Nexus 2K FEX; src and dst VIF (Virtual Interface Field)
- 802.1q value

Verification
- 'show int brief' - mode column - e.g. access; if you see the mode as Fabric, that is your verification that you are dealing with a FEX interface
- 'show fex'
- 'show fex detail'

Video 20 - The UCS B-Series Family

The Fabric Interconnect / The Chassis / Blade Servers / Mezzanine Cards

IOMS on the Blades - Modular


IOMS connect to Fabric Interconnect

Fabric Interconnect connects to 2K switch

'Northbound' connectivity

Fabric Interconnect - Eth, FC, FCoE etc..

Flexibility and Scalability

Commonly installed in a HA pair

UCS Management

6248UP - 2nd generation FI
- 1U
- 48 ports - 32 fixed plus an expansion slot

6296UP - 2nd generation FI
- 2U
- 48 fixed ports
- Expansion slots

5108 Blade Chassis
- 6U
- 19-inch rackable
- 8 half-width blade servers


- Full-width blades in the bottom 2 slots
- 4 single-phase hot-swappable power supplies
- Redundant hot-swappable fans

1Gbps or 10Gbps on certain number of ports

1-16 converted from 1Gb to 10Gb on the 6140

UCS B200 M3 Blade Server
- VIC 1240 modular LOM
- Expander
- 24 DIMM slots; 384GB of RAM using 16GB DIMMs

UCS B420 M3 Blade Server
- Full width
- 3 mezzanine connectors: 1 x VIC 1240, 2 x VIC 1280
- VIC port expander
- Intel Xeon E5-4600 - 32 cores
- 48 DIMM slots; 1.5TB of main memory using 32GB DIMMs

UCS VICs: VIC 1280 / VIC 1240

M81KR Mezz Card for B Series Devices - VNIC

Non-virtualized options are also available: the UCS M51KR-B mezzanine card - no virtualization of adapters.

Video 21 - The UCS C-Series Family

UCS Manager can now manage B and C series side by side

C22 M3 / C24 M3 / C220 M3 / C240 M3

You can buy the C Series preconfigured in several options

Certification Preparation - Cisco do not test on C Series models!!!

Video 22 - Connecting B-Series Servers

Options / Internal components / CIMC / 10Gbps / FSM

B Series to Fabric Interconnect
- 1, 2, 4 or 8 cables between the I/O module and the FI


Must set up the interconnect between the 2 FI devices; the B Series chassis gets 2 connections to each FI.

Oversubscription Rates/Bottlenecks

B Series servers with multiple 10Gbps VNICs behind 1 single link to the FI and you have a bottleneck.
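The oversubscription maths is simple: total possible server bandwidth divided by uplink bandwidth. The figures below are illustrative (8 blades at 10Gbps each behind a varying number of 10Gbps IOM-to-FI links), not from a specific design.

```python
# Oversubscription ratio = total server bandwidth / total uplink bandwidth.
def oversubscription(server_gbps, uplink_count, uplink_gbps=10):
    return server_gbps / (uplink_count * uplink_gbps)

total_server = 8 * 10  # 8 blades x 10Gbps each (illustrative)
for links in (1, 2, 4, 8):
    print(f"{links} uplink(s): {oversubscription(total_server, links)}:1")
# 1 uplink  -> 8.0:1 (severe bottleneck)
# 8 uplinks -> 1.0:1 (no oversubscription)
```

Adding IOM-to-FI cables (1, 2, 4 or 8, as above) is how you drive that ratio down.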

I/O Multiplexer for FI connectivity to Server

CMS - Chassis Management Switch
CMC - Chassis Management Controller

Component discovery within chassis

CIMC - Cisco Integrated Management Controller
Provides mgmt functions:
- Keyboard/Video/Mouse over IP
- Serial over LAN
- IPMI - Intelligent Platform Mgmt Interface
- Referred to as the BMC in older documentation

UCS Manager - runs on the FI
For B Series and C Series

Unconfigured ports by default

3 port states: Unconfigured, Server (Down to IO module on chassis), Uplink (Northbound)

FSM - Finite State Machine - tracks the major occurring processes in UCS
- Ack Chassis - check everything in the chassis! Inventory

Video 23 - UCS Management

UCS Manager Tour / Options / UCSM

UCS Manager lives on the FIs (active and subordinate)

1. GUI
2. CLI
3. 3rd party tools (XML - eXtensible Markup Language)

SNMP / KVM over IP / IPMI / SMASH CLP / CIM XML / Call Home / SoL

UCS Database stored in XML


Discovery process - automated or manually triggered

FSM examples:
- Server discovery
- Service profiles
- Firmware downloads
- Upgrade of components
- Backup and restore

Video 24 - Pools, Policies and Profiles

Preview of CCNP material…

Hardware Abstraction / Pools / Profiles / Templates / Policies

Hardware Abstraction

Scenario of moving a VM via vMotion from 1 blade to another

Virtual MAC/WWN gives us flexibility

'Stateless Computing'

This identity is called a 'Service Profile' - Each node will have a unique service profile.

UCS Manager - What is available to make Hardware Abstraction a reality

Names can be grabbed from a pool... such as a UUID. We can assign a pool of unique UUIDs!

Servers/Blades can also be setup in Pools.

There is also a MAC address pool that can be setup manually
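The pool idea can be sketched as a generator that hands each new service profile the next unique identity from a configured block. The OUI-style prefix below is made up for illustration, not a real UCS default.

```python
# Illustrative identity pool, like a UCS MAC or UUID pool: service
# profiles draw unique addresses from a configured block.
def mac_pool(prefix="00:25:b5:00:00", start=0, size=8):
    """Yield `size` sequential MAC addresses under a fixed prefix."""
    for n in range(start, start + size):
        yield f"{prefix}:{n:02x}"

pool = mac_pool()
profile_a_mac = next(pool)  # first profile gets the first identity
profile_b_mac = next(pool)  # next profile gets the next one
print(profile_a_mac, profile_b_mac)  # 00:25:b5:00:00:00 00:25:b5:00:00:01
```

Because the identity lives in the pool rather than in the hardware, a service profile carrying it can move to another blade - the 'stateless computing' idea above.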

Policies - BIOS Defaults/Host Firmware Packages etc…They are dictating how the particular servers we deploy are going to react. Example of boot ordering.

A Service Profile brings this all together!!A Service Profile template can be created also.

Video 25 - Cisco Load Balancing

Cisco ACE / Cisco GSS

Cisco want out of the application business!!

ACE is now EOL
- Application Control Engine
- Availability/performance/security
- Service module for the Cat 6500 or 7600, or the 4710 appliance

- Layer 4 load balancing - TCP
- Layer 7 content switching
- Health probing
- Integrates with GSS

Global Site Selector - Most available DC in the world!

Application Performance
- HW-based compression
- Delta encoding
- Caching / TCP and SSL offloading

Goals
- ART - Application Response Time
- Reduce BW consumption
- Improve efficiency of TCP/SSL based protocols

Security
- Last line of defence
- DPI - Deep Packet Inspection
- Protocol security
- Scalable access controls - ACLs etc...

ACE Features
- Virtual devices
- RBAC
- 64Gbps service module within the 6500
- 1 million NAT entries / 64k ACLs
