Cisco UCS and UCS Director Mike Griffin

Day 1 – UCS architecture and overview
   Data Center trends
   UCS overview
   UCS hardware architecture
   UCS B series server overview
   C series server overview
   UCS firmware management
   UCS HA

Day 2 – Service profile overview and lab
   Pools, Policies and templates
   UCS Lab

Day 3 – UCS Director
   UCS Director overview
   UCS Director hands-on lab

Day 4 – Advanced UCS topics
   UCS Networking and connectivity
   Implementing QoS within UCS
   UCS Central

3

Module 1

Scale Up – the 90's
   Monolithic servers, large numbers of CPUs
   Proprietary platform, proprietary OS
   Many apps per server
   High cost / proprietary, large failure domain

Scale Out – early 2000's
   Commoditized servers, x86 platform, commoditized OS
   1 app / 1 physical server
   Servers under-utilized; power & cooling pressure

Scale In – now
   Bladed and rack servers, multi-socket / multi-core CPUs
   x64 platforms (Intel / AMD), commoditized OS
   Virtual machine density
   Management complexity; cloud computing / dynamic resourcing

Console, power, networking, and storage connectivity to each blade

Console, power, networking, and storage connectivity shared in chassis 7

[Diagram: a single socket holding a single-core CPU vs. a single socket holding one CPU with 4 processing cores]

Terminology
   Socket – slot on the system board for the processor chip
   CPU – the processor chip
   Core – an individual processing unit inside the CPU

Server Impact
   More cores in a CPU = more processing; critical for applications that become processing-bound
   Core densities are increasing: 2/4/6/8/12/16 cores
   Server CPUs are x64 based

DIMM Slots and Memory

   DIMM – Dual Inline Memory Module: a series of dynamic random-access memory integrated circuits mounted on a printed circuit board
   Ranking – memory modules with two or more sets of DRAM chips connected to the same address and data buses; each such set is called a rank. Single-, dual-, and quad-rank modules exist
   Speed – measured in MHz; most server memory is DDR3, e.g. PC3-10600 = 1333 MHz
   As installed server memory increases, clock speed will sometimes drop in order to support such large memory amounts

PCIe BUS In virtually all server compute platforms PCIe bus serves as the primary motherboard-level interconnect to hardware



Interconnect: A connection or link between 2 PCIe ports, can consist of 1 or more lanes



Lanes: A lane is composed of a transmit and receive pair of differential lines. PCIe slots can have 1 to 32 lanes. Data transmits bi-directionally over lane pairs. PCIe x16 where x16 represents the # lanes the card can use



Form Factor: A PCIe card fits into a slot of its physical size or larger (maximum ×16), but may not fit into a smaller PCIe slot (×16 in a ×8 slot)

Platform Virtualization
   Physical servers host multiple virtual servers
   Better physical server utilization (using more of existing resources)
   Virtual servers are managed like physical servers
   Access to physical resources on the server is shared
   Access to resources is controlled by the hypervisor on the physical host

Key technology for:
   VDI / VXI
   Server consolidation
   Cloud services
   DR

Challenges:
   Pushing complexity into virtualization
   Who manages what when everything is virtual
   Need for integrated and virtualization-aware products

[Diagram: a server orchestrator ("manager of managers") sitting above separate chassis, network, and server managers for each of Vendor A, Vendor B, and Vendor C]

[Timeline: mainframes (1960s) → minicomputers (1970s) → client/server (1980s-1990s) → web (2000s) → virtualization and cloud (2010s)]

[Cloud stack diagram: a Service Catalog (VDI, CRM, Web Store) on top of an Orchestration / Management / Monitoring layer (Tidal, newScale, Altiris, Cloupia; UCSM eco-partner integration with MS, IBM, EMC, HP; UCSM XML API), running on Compute (UCS B-Series, UCS C-Series), Network (FCoE; Nexus 7K, 5K, 4K, 3K, 2K), and Storage (NetApp FAS) infrastructure]

Over the past 20 years:
   An evolution of size, not thinking
   More servers & switches than ever
   Management applied, not integrated
   Virtualization has amplified the problem

Result:
   More points of management
   More difficult to maintain policy coherence
   More difficult to secure
   More difficult to scale

15

   Embed management
   Unify fabrics
   Optimize virtualization
   Remove unnecessary switches, adapters, and management modules
   Less than 1/3rd the support infrastructure for a given workload

Single Point of Management, Unified Fabric – Blade Chassis / Rack Servers

[Diagram: SAN, LAN, and management networks connect to redundant Fabric Interconnects (Fabric A / Fabric B), which connect down through Fabric Extenders in the chassis to the blade adapters]

UCS building blocks:
   Compute chassis – holds up to 8 half-width blades or 4 full-width blades
   Fabric Extender – host-to-uplink traffic engineering
   Adapter – adapter for single-OS and hypervisor systems
   Compute blade – half width or full width

UCS Fabric Interconnect – 20-port or 40-port 10Gb FCoE
UCS Fabric Extender – remote line card
UCS Blade Server Chassis – flexible bay configurations
UCS Blade Server – industry-standard architecture
UCS Virtual Adapters – choice of multiple adapters

[Annotated photos: redundant fabric interconnects (2 x power supplies, 2 x fan modules each); chassis front with 4 single-slot and 2 double-slot blades; chassis rear with 8 x fan modules, 4 x power supplies, 4 x power entry, and 2 x IOMs each providing 4 x 10GE SFP+ fabric ports (FCoE); FI rear with 20/40/48/96 fabric/border ports (depending on model), 2 x cluster ports, expansion module bay 1 or 2, 2 x power entry, console port, and 1 x management port]

Form factors: UCS 6200 (UP) Series – 6248 (1U) and 6296 (2U); UCS 6100 Series – 6120 (1U) and 6140 (2U)

Product Features and Specs

                                     UCS 6120XP        UCS 6140XP        UCS 6248UP        UCS 6296UP
Switch Fabric Throughput             520 Gbps          1.04 Tbps         960 Gbps          1920 Gbps
Switch Footprint                     1RU               2RU               1RU               2RU
1 Gigabit Ethernet Port Density      8                 16                48                96
10 Gigabit Ethernet Port Density     26                52                48                96
1/2/4/8G Native FC Port Density      6                 12                48                96
Port-to-Port Latency                 3.2us             3.2us             1.8us             1.8us
# of VLANs                           1024              1024              4096              4096
Layer 3 Ready (future)               –                 –                 Yes               Yes
40 Gigabit Ethernet Ready (future)   –                 –                 Yes               Yes
Virtual Interface Support            15 per downlink   15 per downlink   63 per downlink   63 per downlink
Unified Ports (Ethernet or FC)       –                 –                 Yes               Yes

[Photos: 6120/6140 FI expansion modules and the 6248/6296 FI 16-port Unified Port expansion module]

IOM architecture:
   Switching ASIC – aggregates traffic to/from host-facing 10G Ethernet ports from/to network-facing 10G Ethernet ports; up to 8 fabric ports to the Interconnect and up to 32 backplane ports to the blades
   CPU (also referred to as the CMC, Chassis Management Controller) – controls the ASIC and performs other chassis management functionality; FLASH, DRAM and EEPROM attached
   L2 Switch – aggregates traffic from the CIMCs on the server blades
   Interfaces – HIF (backplane ports), NIF (fabric ports), BIF, CIF
   No local switching – all traffic from HIFs goes upstream for switching

IOM models: IOM-2104, IOM-2204, IOM-2208

2104/220X Generational Contrasts

                        2104            2208            2204
ASIC                    Redwood         Woodside        Woodside
Host Ports              8               32              16
Network Ports           4               8               4
CoSes                   4 (3 enabled)   8               8
1588 Support            No              Yes             Yes
Latency                 ~800ns          ~500ns          ~500ns
Adapter Redundancy      1 (mLOM only)   mLOM and Mezz   mLOM and Mezz

I/O Modules

Blade Connectors

PSU Connectors

Redundant data and management paths 32

B22 M3 2-Socket Intel E5-2400, 2 SFF Disk / SSD, 12 DIMM

Blade Servers

B200 M3 2-Socket Intel E5-2600, 2 SFF Disk / SSD, 24 DIMM

B250 M2 2-Socket Intel 5600, 2 SFF Disk / SSD, 48 DIMM

B230 M2 2-Socket Intel E7-2800 and E7-8800, 2 SSD, 32 DIMM

B420 M3 4-Socket Intel E5-4600, 4 SFF Disk / SSD, 48 DIMM

B440 M2 4-Socket Intel E7-4800 and E7-8800, 4 SFF Disk / SSD, 32 DIMM

33

 

UCS C-Series rack-mount servers:
  o C200 M2 – 1RU base rack-mount server
  o C210 M2 – 2RU, large internal storage, moderate RAM
  o C250 M2 – 2RU, memory-extending (384 GB)
  o C260 M2 – 2RU, large internal storage and large RAM capacity (1 TB)
  o C460 M2 – 4RU, 4-socket, large internal storage, large RAM (1 TB)
  o C220 M3 – dense enterprise-class 1RU server, 2-socket, 256 GB, optimized for virtualization
  o C240 M3 – 2RU, storage-optimized, enterprise class, 384 GB, up to 24 disks

Expands UCS into rack mount market Multiple offerings for different work loads

Offers Path to Unified Computing

Dongle for 2USB, VGA, Console DVD Internal Disk

UCS C200 Front View Console and Management Expansion Card

Power LOM

USB and VGA

UCS C200 Rear View 35

Dongle for 2USB, VGA, Console

DVD Internal Disk

UCS C210 Front View Console and Management

Expansion Card

Power

USB and VGA

UCS C210 Rear View

LOM

Dongle for 2USB, VGA, Console Internal Disk

DVD

UCS C250 Front View

Power Expansion Card

UCS C250 Rear View 37

USB and VGA LOM Console and Management

Internal Disk

DVD

UCS C260 Front View

38

Dongle for 2USB, VGA, Console

Dongle for 2USB, VGA, Console

DVD

UCS C460 Front View

Internal Disk

Dongle for 2USB, VGA, Console

DVD

UCS C220 Front View

40

Internal Disk

Dongle for 2USB, VGA, Console

Internal Disk

UCS C240 Front View 41

Rack Servers

C22 M3 – 2-Socket Intel E5-2400, 8 Disks / SSD, 12 DIMM, 2 PCIe, 1U
C24 M3 – 2-Socket Intel E5-2400, 24 Disks / SSD, 12 DIMM, 5 PCIe, 2U
C220 M3 – 2-Socket Intel E5-2600, 4/8 Disks / SSD, 16 DIMM, 2 PCIe, 1U
C240 M3 – 2-Socket Intel E5-2600, 16/24 Disks / SSD, 24 DIMM, 5 PCIe, 2U
C260 M2 – 2-Socket Intel E7-2800 / E7-8800, 16 Disks / SSD, 64 DIMM, 6 PCIe, 2U
C420 M3 – 4-Socket Intel E5-4600, 16 Disks / SSD, 48 DIMM, 7 PCIe, 2U
C460 M2 – 4-Socket Intel E7-4800 / E7-8800, 12 Disks / SSD, 64 DIMM, 10 PCIe, 4U

42

Virtualization – M81KR / VIC 1200

VM I/O Virtualization and Consolidation

10GbE/FCoE

Eth

FC

Eth

Compatibility

Existing Driver Stacks

Ethernet Only

Cost Effective 10GbE LAN access

10GbE/FCoE

Eth FC

vNICs 0

1

2

3

PCIe x16

127

10GbE

FC

PCIe Bus 43

For hosts that need LAN access

  

 

Dual 10 Gbps connectivity into fabric PCIe x16 GEN1 host interface Capable of 128 PCIe devices (OS dependent) Fabric Failover capability SRIOV “capable” device 10 Base KR Sub Ports

UIF 0

UIF 1

M81KR - VIC

   

   

Next Generation VIC Dual 4x10 Gbps connectivity into fabric PCIe x16 GEN2 host interface Capable of 256 PCIe devices (OS dependent) Same host side drivers as VIC (M81KR) Retains VIC features with enhancements Fabric Failover capability SRIOV “capable” device

10 Base KR Sub Ports

UIF 1

UIF 0

1280-VIC

45

   

   

mLOM on M3 blades Dual 2x10 Gbps connectivity into fabric PCIe x16 GEN2 host interface Capable of 256 PCIe devices (OS dependent) Same host side drivers as VIC (M81KR) Retains VIC features with enhancements Fabric Failover capability SRIOV “capable” device

10 Base KR Sub Ports

UIF 1

UIF 0

1240-VIC

PCIe x16

127 0 1 2 3

 UCS P81e and VIC1225  Up to 256 vNICs  NIC Teaming done by HW

vNICs

Eth Eth Eth FC FC

10GbE/FCoE

Virtualization

PCIe Bus

10GbE FC

10GbE/FCoE

CNA    

Emulex and Qlogic 2 Fibre Channel 2 Ethernet NIC Teaming through bonding driver

Ethernet or HBA

47

48

RAID Controllers
   1 built-in controller (ICH10R)
   Optional LSI 1064e-based mezzanine controller
   Optional LSI 1078-based MegaRAID controller (RAID 0, 1, 5, 6 and 10 support)

Disks
   3.5-inch and 2.5-inch form factors
   15K SAS (high performance), 10K SAS (performance), 7200 SAS (high capacity / performance), 7200 SATA (cost and capacity)
   73 GB, 146 GB, 300 GB, and 500 GB



The FI runs 3 separate "planes" for the various functionality:
  o Local-mgmt – log file management, license management, reboot, etc. are done through local-mgmt
  o NX-OS – the data-forwarding plane of the FI; functionality is equivalent to the NX-OS found on Nexus switches, but it is read-only
  o UCSM – XML-based and the only way to configure the system; configures NX-OS for data forwarding

The "connect" CLI command is used to connect to local-mgmt or NX-OS on FI A or B.

51
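Because UCSM is XML-based end to end, anything the GUI or CLI can do can also be driven programmatically over the XML API. The Python sketch below uses the documented aaaLogin / configResolveClass / aaaLogout methods to list the blades in a domain; the cluster IP, credentials, and the disabled certificate check are placeholders for a lab setup, not a recommendation.

# Minimal UCSM XML API session: log in, list blades, log out.
# Host and credentials are placeholders; certificate checking is disabled
# only because lab FIs typically present self-signed certificates.
import ssl
import urllib.request
import xml.etree.ElementTree as ET

UCSM_VIP = "https://10.10.10.1/nuova"   # cluster (virtual) IP of the FI pair
CTX = ssl._create_unverified_context()

def xml_call(body: str) -> ET.Element:
    """POST one XML API request to UCSM and return the parsed response root."""
    req = urllib.request.Request(UCSM_VIP, data=body.encode(), method="POST")
    with urllib.request.urlopen(req, context=CTX) as resp:
        return ET.fromstring(resp.read())

# aaaLogin returns a session cookie that every later call must carry.
cookie = xml_call('<aaaLogin inName="admin" inPassword="password" />').attrib["outCookie"]

# configResolveClass pulls every object of a class; computeBlade = B-Series blades.
blades = xml_call(
    f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false" />'
)
for blade in blades.iter("computeBlade"):
    print(blade.attrib["dn"], blade.attrib.get("model"), blade.attrib.get("serial"))

# Always release the session; UCSM limits concurrent web sessions.
xml_call(f'<aaaLogout inCookie="{cookie}" />')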

[Diagram: a redundant UCSM management plane on the Fabric Interconnects, exposing management interfaces with multiple protocol support and managing the switch, chassis, and server endpoints]

[Diagram: the Cisco UCS Manager API (GUI, XML, CLI, 3rd-party tools) exposes both configuration state and operational state]



Fabric Interconnects synchronize database and state information through dedicated, redundant Ethernet links (L1 and L2)



The “floating” Virtual IP is owned by the Primary Fabric Interconnect



Management plane is active / standby with changes done on the Primary and synchronized with the Secondary FI



Data plane is active / active

54

L1 to L1

L2 to L2

55


Example of session log file on client

Enable Logging in Java to capture issues

Client logs for debugging UCSM access and client KVM access are found at this location on the client system: C:\Documents and Settings\userid\Application Data\Sun\Java\Deployment\log\.ucsm

• Embedded device manager for family of UCS components • Enables stateless computing via Service Profiles • Efficient scale: Same effort for 1 or N blades

GUI Navigation

CLI Equivalent to GUI

SNMP

SMASH CLP

Call-home

IPMI

CIM XML

Remote KVM

UCS CLI and GUI

Serial Over LAN

UCS XML API



TCP 22 (SSH)

TCP 23 if telnet is enabled (off by default)

TCP 80 (HTTP)

UDP 161/162 if SNMP is enabled (off by default)

TCP 443 if HTTPS is enabled (off by default)

UDP 514 if syslog is enabled

TCP 2068 (KVM)

64
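A quick way to confirm which of these management services are actually reachable from a client is a plain TCP probe against the cluster IP. The sketch below only covers the TCP ports from the list (the UDP services cannot be verified this way), and the address is a placeholder.

# Probe the TCP management ports listed above against the UCSM virtual IP.
import socket

UCSM_VIP = "10.10.10.1"   # placeholder cluster IP
TCP_PORTS = {22: "SSH (CLI)", 80: "HTTP (GUI/XML API)", 443: "HTTPS", 2068: "KVM"}

for port, service in TCP_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((UCSM_VIP, port)) == 0 else "closed/filtered"
        print(f"{port:>5}  {service:<20} {state}")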



The C-Series rack-mount servers can also be managed by UCSM.

This requires a pair of Nexus 2232PP FEXes. This FEX supports the needed features for PCIe virtualization and FCoE.

Cables must be connected from the server to both FEXes.

One pair of cables is connected to the LOMs (LAN on Motherboard). This provides control-plane connectivity for UCSM to manage the server.

The other pair of cables is connected to the adapter (P81E or VIC 1225). This provides data-plane connectivity.

VIC 1225 adapters support single-wire management in UCSM 2.1.

66

Dual-wire C-Series integration:
• 16 servers per UCS "virtual chassis" (pair of 2232PPs)
• 1 Gig LOMs used for management (GLC-T connectors to the Nexus 2232)
• 10 Gb CNA / Generation 2 I/O adapters carry the data connection
• Scale to 160 servers (10 sets of 2232s)

67

Single-wire C-Series integration:
• Management and data for C-Series rack servers carried over a single wire, rather than separate wires
• Requires the VIC 1225 adapter
• Continues to offer scale of up to 160 servers across blade and rack in a single domain

68





The Cisco VIC 1225 provides converged network connectivity for Cisco UCS C-Series servers. When integrated into UCSM it operates in NIV (VN-Tag) mode, with up to 118 PCIe devices (vNIC/vHBA).

It provides an NC-SI connection for single-wire management.

The VIC 1225 requires UCSM 2.1 for either dual-wire or single-wire mode.

69

[Diagram: rack server with 10/100 BMC management ports, 1G LOM ports and 10G adapter ports cabled to a pair of 2232 FEXes, which uplink to FI-A and FI-B; inside the server the BMC reaches the GE LOM and the adapter via NC-SI, and the adapter connects to the CPU/memory over PCIe]



Existing out of band management topologies will continue to work



No Direct FI support – Adapter connected to Nexus 2232



Default CIMC mode – Shared-LOM-EXT



Specific VLAN for CIMC traffic (VLAN 4044)



NC-SI interface restricted to 100 Mbps

71

Server Model    VIC 1225 Supported    PCIe Slots that Support VIC 1225    Primary NC-SI Slot (Standby Power) for UCSM Integration
UCS C22 M3      1                     1                                   1
UCS C24 M3      1                     1                                   1
UCS C220 M3     1                     1                                   1
UCS C240 M3     2                     2 and 5                             2
UCS C260 M2     2                     1 and 7                             7
UCS C420 M3     3                     1, 4, and 7                         4
UCS C460 M2     2                     1 and 2                             1

[Diagram: UCS domain – Fabric Interconnects A and B with uplink ports to SAN A/B and ETH 1/2, out-of-band management, L1/L2 clustering links, and server ports down to the chassis fabric extenders (IOM A/B) and to N2232 FEXes for rack mounts; blades (B200) and rack servers use virtualized adapters (CNA/VIC)]



Setup runs on a new system

<snip>
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: MySystem
Physical Switch Mgmt0 IPv4 address : 10.10.10.2
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0
IPv4 address of the default gateway : 10.10.10.254
Cluster IPv4 address : 10.10.10.1
<snip>
Login prompt
MySystem-A login:

76



Setup runs on the peer (second) Fabric Interconnect:

Enter the configuration method. (console/gui) ? console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric interconnect: <password>
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IP Address: 10.10.10.2
Cluster IP address : 10.10.10.1
Physical Switch Mgmt0 IPv4 address : 10.10.10.3
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Configuration file - Ok

Login prompt
MySystem-B login:

77




Three downloadable bundles for blade and rack-mount integration:
  o Infrastructure Bundle – UCSM, Fabric Interconnect (NX-OS), Fabric Extender (IOM) firmware, Chassis Mgmt. Controller
  o B-Series Bundle – CIMC, BIOS, RAID controller FW, adapter FW, catalog file, UCSM mgmt. extensions
  o C-Series Bundle – CIMC, BIOS, RAID controller FW, adapter FW, catalog file, UCSM mgmt. extensions

ISO file for OS drivers




Manual upgrade:
  o Upgrade guides are published with every UCSM release
  o Very important to follow the upgrade order listed in the guide
  o http://www.cisco.com/en/US/products/ps10281/prod_installation_guides_list.html

Firmware Auto-Install:
  o New feature in UCSM 2.1
  o Wizard-like interface to specify which firmware version to upgrade the infrastructure / servers to
  o Sequencing of firmware updates is handled automatically to ensure the least downtime
  o An intermediate user acknowledgement during the fabric upgrade allows users to verify that elements such as storage are in an appropriate state before continuing the upgrade



Firmware Auto-Install implements package-version-based upgrades for both UCS infrastructure components and server components.

It is a two-step process:
  o Install Infrastructure Firmware
  o Install Server Firmware

It is recommended to run "Install Infrastructure Firmware" first and then "Install Server Firmware".

89

Sequence followed by "Install Infrastructure Firmware":
  1) Upgrade UCSM – non-disruptive, but the UCSM connection is lost for 60-80 seconds.
  2) Update the backup image of all IOMs – non-disruptive.
  3) Activate all IOMs with the "set startup" option – non-disruptive.
  4) Activate the secondary Fabric Interconnect – non-disruptive, but degraded due to one FI reboot.
  5) Wait for user acknowledgement.
  6) Activate the primary Fabric Interconnect – non-disruptive, but degraded due to one FI reboot, and the UCSM connection is lost for 60-80 seconds.
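Before and after an Auto-Install run it is worth checking which firmware version each endpoint is actually running. A minimal sketch over the XML API is shown below; the firmwareRunning class and its version attribute come from the UCSM object model, while the host and credentials are placeholders.

# List running firmware versions per endpoint via the XML API.
import ssl
import urllib.request
import xml.etree.ElementTree as ET

UCSM_VIP = "https://10.10.10.1/nuova"   # placeholder cluster IP
CTX = ssl._create_unverified_context()  # lab-only: self-signed certs

def xml_call(body: str) -> ET.Element:
    req = urllib.request.Request(UCSM_VIP, data=body.encode(), method="POST")
    with urllib.request.urlopen(req, context=CTX) as resp:
        return ET.fromstring(resp.read())

cookie = xml_call('<aaaLogin inName="admin" inPassword="password" />').attrib["outCookie"]

running = xml_call(
    f'<configResolveClass cookie="{cookie}" classId="firmwareRunning" inHierarchical="false" />'
)
# Each firmwareRunning object names a component (dn) and the version it is running.
for fw in running.iter("firmwareRunning"):
    print(f'{fw.attrib["dn"]:<60} {fw.attrib.get("version", "-")}')

xml_call(f'<aaaLogout inCookie="{cookie}" />')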






Blade management services to external clients:
  o Blade CIMC IPs are on the external management network, NAT'd by the FI
  o Services to external clients: KVM / virtual media, IPMI, Serial over LAN (SoL)
  o KVM, IPMI, and SoL reach the FI over SSH and HTTP/S

[Diagram: the FI mgmt0 interface (192.168.1.2) with eth0:n sub-interfaces NATing external management IPs (192.168.1.4-.6, ...) to the CIMCs of servers 1/1-1/4]

  The number of host addresses in the subnet must cover the number of blades
  Must be in the same (native) VLAN as the UCSM management interface
  The physical path (FI-A or FI-B) is chosen at blade discovery
  The CIMC IP is associated with:
    o the physical blade (UCSM 1.3)
    o the service profile (UCSM 1.4)

96


Unified Ports: Eth | FC
   Native Fibre Channel and lossless Ethernet (1/10GbE, FCoE, iSCSI, NAS) on the same ports

Benefits / use-cases:
   Simplify switch purchase – remove port-ratio guesswork
   Flexible LAN & storage convergence based on business needs
   Increase design flexibility
   Service can be adjusted based on the demand for specific traffic
   Remove protocol-specific bandwidth bottlenecks



Ports on the base card or the Unified Port GEM module can be either Ethernet or FC

Only a contiguous set of ports can be configured as Ethernet or FC

Ethernet ports have to be the first set of ports

Port type changes take effect after the next reboot of the switch for base-board ports, or a power-off/on of the GEM for GEM unified ports

Base card – 32 Unified Ports; GEM – 16 Unified Ports (Eth | FC)

Eth

FC

106



Slider-based configuration; only an even number of ports can be configured as FC.

Ethernet port roles:
  o Server Port
  o Uplink Port
  o FCoE Uplink
  o FCoE Storage
  o Appliance Port

Fibre Channel port roles:
  o FC Uplink Port
  o FC Storage Port



Server Port o Connects to Chassis



Uplink Port o Connects to upstream LAN. o Can be 1 Gig or 10 Gig



FCoE Uplink Port o Connects to an upstream SAN via FCoE o Introduced in UCSM 2.1



FCoE Storage Port o Connects to a directly attached FCoE Target



Appliance Port o Connects to an IP appliance (NAS) 109



FC Uplink Port o Connects to upstream SAN via FC o Can be 2 / 4 or 8 Gig



FC Storage Port o Connects to a directly attached FC Target

110



The FIs do not participate in VTP

VLAN configuration is done in the LAN tab in UCSM

The default VLAN (VLAN 1) is automatically created and cannot be deleted

As of UCSM 2.1, only 982 VLANs are supported

The usable VLAN range is 1-3967 and 4049-4093

Support for isolated PVLANs within UCS

111
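The same VLAN creation done in the LAN tab can be scripted. Below is a minimal sketch using the Cisco ucsmsdk Python SDK (pip install ucsmsdk); the VLAN name/ID and the credentials are placeholders, and the FabricVlan object under "fabric/lan" mirrors what the GUI creates.

# Create a named global VLAN under the LAN cloud with the ucsmsdk.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricVlan import FabricVlan

handle = UcsHandle("10.10.10.1", "admin", "password")   # placeholder VIP/credentials
handle.login()

# VLAN 100, common to both fabrics, created under the LAN cloud ("fabric/lan").
vlan = FabricVlan(parent_mo_or_dn="fabric/lan", name="app-100", id="100")
handle.add_mo(vlan, modify_present=True)   # idempotent if it already exists
handle.commit()

handle.logout()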




VSAN configuration is done in the SAN Tab in UCSM



The default VSAN (VSAN 1) is automatically created by the system and cannot be deleted



FCoE VLAN ID associated with every VSAN



FCoE VLAN ID cannot overlap with the Ethernet VLAN ID (created in the LAN tab)



The maximum number of VSANs supported is 32

114


Ethernet uplink port channels:
   Port channels provide better performance and resiliency
   As of UCSM 2.1, a maximum of 8 members per port channel
   LACP mode is active
   Can connect to an upstream vPC / VSS pair
   Load balancing is source-destination MAC/IP






FC uplink port channels:
   FC uplinks from the FI can be members of a port channel with an upstream Nexus or MDS (FCF)
   As of UCSM 2.1, a maximum of 16 members per port channel
   Load balancing is OX_ID based


Module 2

[Diagram: a service profile defines LAN connectivity, the OS & application identity, and SAN connectivity]

State abstracted from hardware:
  LAN – MAC address, NIC firmware, NIC settings
  SAN – WWN address, HBA firmware, HBA settings
  Server – UUID, BIOS firmware, BIOS settings, boot order, BMC firmware, drive controller firmware, drive firmware

Example: UUID 56 4dcd3f 59 5b…, MAC 08:00:69:02:01:FC, WWN 5080020000075740, boot order SAN, LAN – the same identity can move from Chassis-1/Blade-2 to Chassis-8/Blade-5.

   Separate firmware, addresses, and parameter settings from the server hardware
   Physical servers become interchangeable hardware components
   Easy to move the OS & applications across server hardware

[Diagram: service profiles (server name, UUID, MAC, WWN, boot info, firmware, LAN/SAN config) with run-time association to physical blades]



Service profiles:
   Contain server state information
   User-defined: each profile can be individually created, or profiles can be generated from a template
   Applied to physical blades at run time
   Without profiles, blades are just anonymous hardware components

 Consistent and simplified server deployment – “pay-as-you-grow” deployment o Configure once, purchase & deploy on an “as-needed” basis

 Simplified server upgrades – minimize risk o Simply disassociate server profile from existing chassis/blade and associate to new chassis/blade

 Enhanced server availability – purchase fewer servers for HA o Use same pool of standby servers for multiple server types – simply apply appropriate profile during failover

[Diagram: on blade failure, the service profile "MyDBServer" (identity, LAN/SAN config) is associated with one blade at Time A and moved to a replacement blade at Time B]



Feature for multi tenancy which defines a management hierarchy for the UCS system



Has no effect on actual operation of blade and the OS



Usually created on the basis of o Application type – ESXCluster, Oracle o Administrative scope – HR, IT



Root is the top of the hierarchy and cannot be deleted



Organizations can have multiple levels depending on requirement

129

Root Org

Eng

QA

HR

HW

130

 

Organizations contain pools, policies, service profiles, and templates. Blades are not part of an organization and are global resources.

131



Policy/pool resolution across the org hierarchy (Group-C defined at the root org, Group-B at Eng, Group-A at HW):
   Root has access to the pools and policies in Group-C
   HR has access to the pools and policies defined in Group-C
   Eng has access to Group-B + Group-C
   QA has access to Group-B + Group-C
   HW has access to Group-A + Group-B + Group-C



Consumer of a Pool is a Service Profile.



Pools can be customized to have uniformity in Service Profiles. For example, the Oracle App servers can be set to derive MAC addresses from a specific pool so that it is easy to determine app type by looking at a MAC address on the network



Value retrieved from pool as you create logical object, then specific value from pool belongs to service profile (and still moves from blade to blade at association time)



Overlapping Pools are allowed. UCSM guarantees uniqueness when a logical object is allocated to a Service Profile. 134



Logical Resource Pool o UUID Pool o MAC Pool o WWNN / WWPN Pool



Physical Resource Pool o Server Pool – Created manually or by qualification

135



Point to pool from appropriate place in Service Profile



For example: o vNIC --- use MAC pool

o vHBA – use WW Port Name pool 

In GUI can see the value that is retrieved from the pool o Note that it belongs to service profile, not physical blade

136



Pools simplify the creation of Service Profiles:
   Management of virtualized identity namespaces within the same UCS domain
   Cloning
   Templates

137



If you create a profile with pool associations (server pool, MAC pool, WWPN pool, etc.):

• All pool associations are replicated to a cloned template.

• Specific new values for MAC, WWN will be immediately assigned to the profile from the appropriate pool.

138

  

UUIDs:
   16-byte (128-bit) numbers – roughly 3x10^38 different values
   Stored in the BIOS
   Consumed by some software vendors (e.g. Microsoft, VMware)
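The 3x10^38 figure is simply the size of a 128-bit space, which a one-liner confirms:

# 128-bit UUID space: 2**128 distinct values (~3.4 x 10**38).
print(2 ** 128)            # 340282366920938463463374607431768211456
print(f"{2 ** 128:.1e}")   # 3.4e+38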

139



UUIDs (as used by ESX) need only be unique within an ESX "datacenter" (unlike MACs, WWNs, and IPs)

It is impossible to assign the same UUID to 2 different UCS servers via UCSM

Can have overlapping pools

Pool resolution goes from the current org up to root – if no UUIDs are found, the default pool is searched from the current org up to root

140



One MAC per vNIC



MAC address assignment: o Hardware-derived MAC

o Manually create and assign MAC o Assign address from a MAC pool

141



Can have overlapping pools



UCSM performs consistency checking within UCS domain



UCSM does not perform consistency checking with upstream LAN



Should use 00:25:B5 as vendor OUI



Pool resolution from current org up to root – if no MACs found, search default pool from current org up to root

142
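Defining such a pool can also be scripted. The sketch below uses the ucsmsdk to create a MAC pool under the root org with a block in the 00:25:B5 OUI; the pool name, address range, and credentials are placeholders, and the r_from/to parameter names follow the SDK's generated metadata for the "from"/"to" attributes.

# Create a MAC pool (with one 64-address block) under org-root via the ucsmsdk.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.macpool.MacpoolPool import MacpoolPool
from ucsmsdk.mometa.macpool.MacpoolBlock import MacpoolBlock

handle = UcsHandle("10.10.10.1", "admin", "password")   # placeholder VIP/credentials
handle.login()

# vNICs later reference the pool by name from their org.
pool = MacpoolPool(parent_mo_or_dn="org-root", name="esx-fab-a",
                   descr="MACs for fabric-A vNICs")
block = MacpoolBlock(parent_mo_or_dn=pool,
                     r_from="00:25:B5:0A:00:00", to="00:25:B5:0A:00:3F")

handle.add_mo(pool, modify_present=True)   # commits the pool and its block together
handle.commit()
handle.logout()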

  

One WWNN per service profile; one WWPN per vHBA

WWN assignment:
  o Use the hardware-derived WWN
  o Manually create and assign a WWN
  o Assign a WWNN pool to the profile/template
  o Assign a WWPN pool to the vHBA

143



Can have overlapping pools



UCSM performs consistency checking within UCS pod



UCSM does not perform consistency checking with upstream SAN



20:00:00:25:B5:XX:XX:XX recommended



Pool resolution from current org up to root – if no WWNs found, search default pool from current org up to root

144



Manually populated or Auto-populated



Blade can be in multiple pools at same time



"Associate" a service profile with a pool:
  o Means a blade is selected from the pool (still just one profile per blade at a time)
  o Will only select a blade not yet associated with another service profile, and not in the process of being disassociated

145

 

One server per service profile Assign server pool to service profile or template

149



Can have overlapping pools



2 servers in an HA cluster could be part of same chassis



Can use hierarchical pool resolution to satisfy SLAs



Pools resolved from current org up to root – if no servers found, then search default pool from current org up to root

150



Policies can be broadly categorized as:
  o Global policies
    • Chassis Discovery Policy
    • SEL Policy
  o Policies tied to a Service Profile
    • Boot Policy
    • BIOS Policy
    • Ethernet Adapter Policy
    • Maintenance Policy

Policies, when tied to a Service Profile, greatly reduce the time taken for provisioning.

152


Template flavors:
   Initial template – updates to the template are not propagated to profile clones
   Updating template – updates to the template are propagated to profile clones

Template types:
   vNIC
   vHBA
   Service Profile

163



When creating a vNIC in a service profile, a vNIC template can be referenced.



This template will have all of the values and configuration to be used for creating the vNIC.



These values include QoS, VLANs, pin groups, etc.



Can be referenced multiple times when creating a vNIC in your service profile.

164

165



Similar to a vNIC template. This is used when creating vHBAs in your service profile.



Used to assign values such as QoS, VSANs, etc.



Can be referenced multiple times to create multiple vHBAs in your service profile.

166

167



Same flow as creating Service Profile



Can choose server pool (but not individual blade)



Can associate virtual adapters (vNIC, VHBA) with MAC and WWN pools



Can create template from existing service profile o Works nicely for all elements of service profile that use pools
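Once profiles exist (whether cloned from a template or created directly), it is easy to report on them programmatically. The sketch below queries the lsServer class with the ucsmsdk and prints each profile's association state and physical server; the snake_case property names follow the SDK's mapping of the lsServer object, and the credentials are placeholders.

# List every service profile and the blade/rack server it is associated with.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("10.10.10.1", "admin", "password")   # placeholder VIP/credentials
handle.login()

for sp in handle.query_classid("lsServer"):
    # pn_dn stays empty until UCSM associates the profile with physical hardware.
    target = sp.pn_dn or "unassociated"
    print(f"{sp.dn:<45} {sp.assoc_state:<12} {target}")

handle.logout()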

168




You will first start by creating several pools, policies, and templates that will be used for the values assigned to each of your servers through a service profile.

You will then create a service profile template. From the wizard you will select the various pools, policies, and templates.

Once you've created your service profile template you will then create two service profile clones. The values, pools, and policies assigned to the template will be used to create two individual service profiles.

For example, you will create a MAC pool with 16 usable MAC addresses. This will then be placed in the service profile template. When creating clones from the template, the system will allocate MAC addresses from this pool to be used by each vNIC in the service profile.

The service profile will automatically be assigned to a server via the server pool. You will then boot the server and install Linux.

183

Module 3

   

   

  Role Based Access Control
  Remote User Authentication
  Faults, Events and Audit Logs
  Backup and Restore
  Enabling SNMP
  Call Home
  Enabling Syslog
  Fault Suppression

185



Organizations o Defines a management hierarchy for the UCS system o Absolutely no effect on actual operation of blade and its OS



RBAC o Delegated Management o Allows certain users to have certain privileges in certain organizations o Absolutely no effect on who can use and access the OS on the blades

187

 

Orgs and RBAC could be used independently Orgs without RBAC o Structural management hierarchy o Could still use without delegated administration

• Use administrator that can still do everything 

RBAC without Orgs o Everything in root org (as we have been doing so far) o Still possible to delegate administration (separate border network/FC

admin from server admin, eg)

188



Really no such thing as not having Orgs



We have just been doing everything in root (/) org



Just use happily, if you don’t care about hierarchical management

root (/)

SWDev

SWgrpA

QA

SWgrpB

IntTest

Policies



Blades are independent of Org



Same blade can be in many server pools in many orgs



Blade can be associated with logical service profile in any org

Locale: myloc Role:myrole

root (/) /SWDev /Eng/HWEng

priv1 priv2 priv3

User is assigned certain privs (one or more roles) over certain locales (one or more) User: jim



Role is a collection of privileges



There are predefined roles (collections). o You can create new roles as well



Some special privileges:
  o admin (associated with the predefined "admin" role)
  o aaa (associated with the predefined "aaa" role)

   

Radius TACACS+ LDAP Local

198



Provider – the remote authentication server

Provider Group – a group of authentication servers (providers). This must be defined, as the group is what is referenced when configuring authentication.

Group Map – used to match certain attributes in the authentication request in order to map the appropriate roles and locales to the user.

199

 

For LDAP we define a DN and reference the roles and Locales it maps to. If no group map is defined, a user could end up with the default privileges such as read-only

200

  

 

Faults – system and hardware failures (such as power supply failures or power loss) or configuration issues
Events – system events such as clustering or RSA key generation
Audit logs – configuration events such as service profile and vNIC creation
Syslog – syslog messages generated and sent to an external syslog server
TechSupport files – "show tech" files that have been created and stored
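The active fault list shown in the GUI can be pulled the same way for monitoring scripts. A minimal ucsmsdk sketch is below; faultInst is the fault object class in the UCSM model, and the credentials are placeholders.

# Print critical/major faults (the same data as the Faults tab).
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("10.10.10.1", "admin", "password")   # placeholder VIP/credentials
handle.login()

for fault in handle.query_classid("faultInst"):
    if fault.severity in ("critical", "major"):
        print(f"{fault.severity:<9} {fault.dn:<50} {fault.descr}")

handle.logout()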




A fault suppression policy is used to determine how long faults are retained or cleared.

The flapping interval is used to determine how long a fault retains its severity and stays in an active state.

Say, for example, the flapping interval is 10 seconds. If a critical fault is raised repeatedly within those 10 seconds, the repeats are suppressed and the fault remains in an active state.

After the 10-second duration, if no further instances of the fault have been reported, the fault is then either retained or cleared based on the suppression policy.

204



Full State backup – a backup of the entire system for disaster recovery. This file cannot be imported and can only be used when doing a system restore at startup of the Fabric Interconnects.

All Configuration backup – backs up the system and logical configuration into an XML file. This file cannot be used during a system restore and can only be imported while UCS is running. This backup does not include passwords of locally authenticated users.

System Configuration – only system configuration such as users, roles, and management configuration.

Logical Configuration – logical configuration such as policies, pools, VLANs, etc.

 

Creating a Backup Operation allows you to perform the same backup multiple times. The file can be stored locally or on a remote file system.




You can also create scheduled backups.

This can only be done for Full State and All Configuration backups.

You can point UCS at an FTP server, storage array, or any other type of remote file system.

Backups can be scheduled daily, weekly, or bi-weekly.

211

IP or Hostname of remote server to store backup Protocol to use for backup Admin state of scheduled backup Backup Schedule

212



Once a backup operation is complete you can then import the configuration as needed.

You must create an Import Operation; this is where you point to the file you want to import into UCS.

You cannot import a Full State backup. That file can only be used when doing a system restore while a Fabric Interconnect is booting.

Options are to merge with the running configuration or to replace the configuration.

213



UCS supports SNMP versions 1, 2c, and 3.

The following authentication protocols are supported for SNMPv3 users:
  o HMAC-MD5-96 (MD5)
  o HMAC-SHA-96 (SHA)

The AES privacy protocol can also be enabled for an SNMPv3 user for additional security.

216

217



You have the option to enable traps or informs. Traps are less reliable because they do not require acknowledgements; informs require acknowledgements but have more overhead.

If you enable SNMPv3, the following security levels can be set:
  o Auth – authentication but no encryption
  o Noauth – no authentication or encryption
  o Priv – authentication and encryption

218





Choose the authentication and encryption type. Once you enable AES, you must use a privacy password; it is used when generating the AES 128-bit encryption key.

219



Call Home is a feature that allows UCS to generate a message based on system alerts, faults, and environmental errors.

Messages can be e-mailed, sent to a pager, or delivered to an XML-based application.

UCS can send these messages in the following formats:
  o Short text format
  o Full text
  o XML format

221



A destination profile is used to determine the recipients of the Call Home alerts, the format they will be sent in, and for what severity level.

A Call Home policy dictates which error messages you would like to enable or disable the system from sending.

When using e-mail as the method to send alerts, an SMTP server must be configured.

It is recommended that both Fabric Interconnects have reachability to the SMTP server.

222

Call Home logging level for the system

Contact info listing the source of the call home alerts

Source e mail address for Call Home

SMTP Server

223

Alert groups – What elements you want to receive errors on.

Logging Level

Alert Format

E mail recipients who will receive the alerts

224

 

Call home will send alerts for certain types of events and messages. A Call Home policy allows you to disable alerting for these specific messages.

225

  

Smart Call Home will alert Cisco TAC of an issue with UCS. Based on certain alerts, a Cisco TAC case will be generated automatically. A destination profile named "CiscoTAC-1" is already predefined; it is configured to send Cisco TAC messages in the XML format.

226

  



Under the CiscoTAC-1 profile, enter [email protected]. Under the "System Inventory" tab, click "Send inventory now"; the message will be sent to Cisco. You will then receive an automatic reply based on the contact info you specified in the Call Home setup. Simply click the link in the e-mail and follow the instructions to register your UCS for the Smart Call Home feature.

227

   

Syslog can be enabled under the Admin tab in UCSM. Local destination allows you to configure UCS to store syslog messages locally in a file. Remote destination allows UCS to send to a remote syslog server; up to three servers can be specified. Local sources allow you to decide what types of messages are sent; the three sources are Alerts, Audits and Events.

229

Customer benefits:
• Customers can align UCSM fault alerting with their operational activities

Feature details:
• Fault suppression offers the ability to lower the severity of designated faults for a maintenance window, preventing Call Home and SNMP traps during that period
• Predefined policies allow a user to easily place a server into a maintenance mode to suppress faults during maintenance operations

1. Users can "Start/Stop Fault Suppression" in order to suppress transient faults and Call Home / SNMP notifications
2. Supported on both physical (chassis, server, IOM, FEX) and logical entities (org, service profile)
3. Users can specify a time window during which fault suppression will take effect
4. A fault suppression status indicator shows the different states (Active, Expired, Pending)
5. Fault suppression policies that contain a list of faults raised during maintenance operations are provided

233

Typical maintenance operations covered (server-, IOM-, and fan/PSU-focused):
  - Operating system level shutdown/reboot
  - Local disk removal/replacement
  - Server power on/power off/reset
  - BIOS, adapter firmware activation/upgrades
  - Service profile association, re-association, disassociation
  - Update/activate firmware
  - Reset IOM
  - Remove/insert SFPs
  - IOM removal/insert

Suppress policies:
  1. A suppress policy is used to specify which faults we want to suppress
  2. It consists of cause/type pairs defined as suppress policy items
  3. The system provides pre-defined suppress policies that are not modifiable
  4. Additional suppress policies cannot be created by the user

Pre-defined suppress policies:
  default-chassis-all-maint   – Blade, IOM, PSU, Fan
  default-chassis-phys-maint  – PSU, Fan
  default-fex-all-maint       – IOM, PSU, Fan
  default-fex-phys-maint      – PSU, Fan
  default-iom-maint           – IOM
  default-server-maint        – Blade

238

Module 4

Single Point of Management Unified Fabric

Stateless Servers with Virtualized Adapters 240



UCS Manager Embedded– manages entire system UCS Fabric Interconnect

UCS Fabric Extender Remote line card

UCS Blade Server Chassis Flexible bay configurations UCS Blade or Rack Server Industry-standard architecture UCS Virtual Adapters Choice of multiple adapters 241

UCS Fabric Interconnect – UCS 6100:
  • 20x 10GE ports – 1 RU, or 40x 10GE ports – 2 RU
  • Ethernet or FC expansion modules

UCS Fabric Interconnect – UCS 6200:
  • 48x unified ports (Eth/FC) – 1 RU (32x base and 16x expansion)

UCS Fabric Extender – UCS 2104:
  • 8x 10GE downlinks to servers, 4x 10GE uplinks to the FIs

UCS Fabric Extender – UCS 2208/2204:
  • 32x 10GE downlinks to servers, 8x 10GE uplinks to the FIs

Adapters – M81KR VIC, M71KR, etc.:
  • Up to 2x 10GE ports; M81KR: up to 128 virtual interfaces

Adapter – UCS VIC 1280:
  • Up to 8x 10GE ports; up to 256 virtual interfaces

UCS 6200

UCS 2104

UCS 2208/2204 IOM

[Diagram: UCS domain – FI-A and FI-B (uplink ports, OOB management, cluster links, server ports) connecting to chassis 1-20 via IOM fabric extenders to CNA-equipped half/full-width blades (B200, B250), and to N2232 FEXes for VIC-equipped rack mounts; the management path is out-of-band]



Terminology:
  vNIC (LIF) – host-presented PCI device managed by UCSM
  vEth / vFC (VIF) – policy application point on the Fabric Interconnect where a vNIC connects to the UCS fabric
  VN-Tag – an identifier added to the packet, containing source and destination IDs, that is used for switching within the UCS fabric; it forms the "virtual cable" between the adapter and the IOM

[Diagram: what you cable vs. what you see – the physical 10GE cable from the adapter through the IOM to FI-A appears as a virtual cable (VN-Tag) terminating on vEth/vFC interfaces, so each service profile gets its own vNIC 1 / vHBA 1 whether on a blade or a rack server]

 Dynamic, rapid provisioning
 State abstraction
 Location independence
 Blade or rack

Hardware Components

[Diagram: 6200 Fabric Interconnect internals – Carmel port ASICs and a unified crossbar fabric (Sunnyvale), an Intel Jasper Forest CPU with memory, NVRAM and flash, a PEX 8525 4-port PCIe switch, dual-Gig management interfaces, and the console port]

Eth

FC

Native Fibre Channel

Lossless Ethernet: 1/10GbE, FCoE, iSCSI, NAS

Benefits

Use-cases

 Simplify switch purchase remove ports ratio guess work

 Flexible LAN & storage convergence based on business needs

 Increase design flexibility

 Service can be adjusted based on the demand for specific traffic

 Remove specific protocol bandwidth bottlenecks 249

  



Ports on the base card or the Unified Port GEM Module can either be Ethernet or FC Only a continuous set of ports can be configured as Ethernet or FC Ethernet Ports have to be the 1st set of ports Port type changes take effect after next reboot of switch for Base board ports or power-off/on of the GEM for GEM unified ports.

Base card – 32 Unified Ports

Eth

GEM – 16 Unified Ports

FC

250

Eth

FC

251



Slider based configuration



Only even number of ports can be configured as FC



Configured on a per FI basis

252

61x0/62xx Generational Contrasts

Feature                   61x0                       62xx
Flash                     16GB eUSB                  32GB iSATA
DRAM                      4GB DDR3                   16GB DDR3
Processor                 Single-core Celeron 1.66   Dual-core Jasper Forest 1.66
Unified Ports             No                         Yes
Number of ports / UPC     4                          8
Number of VIFs / UPC      128 / port fixed           4096 programmable
Buffering per port        480KB                      640KB
VLANs                     1k                         1k (4k future)
Active SPAN Sessions      2                          4 (w/ dedicated buffer)
Latency                   3.2us                      2us
MAC Table                 16k                        16k (32k future)
L3 Switching              No                         Future
IGMP entries              1k                         4k (future)
Port Channels             16                         48 (96 in 6296)
FabricPath                No                         Future

IOM Components:
  Switching ASIC (Woodside) – aggregates traffic to/from host-facing 10G Ethernet ports from/to network-facing 10G Ethernet ports; up to 8 fabric ports to the Interconnect and up to 32 backplane ports to the blades
  CPU (also referred to as the CMC, Chassis Management Controller) – controls the switching ASIC (Redwood/Woodside) and performs other chassis management functionality; FLASH, DRAM and EEPROM attached
  L2 Switch – aggregates traffic from the CIMCs on the server blades
  Interfaces – HIF (backplane ports), NIF (fabric ports), BIF, CIF
  No local switching – all traffic from HIFs goes upstream for switching

2104/220X Generational Contrasts

Feature                 2104            2208            2204
ASIC                    Redwood         Woodside        Woodside
Host Ports              8               32              16
Network Ports           4               8               4
CoSes                   4 (3 enabled)   8               8
1588 Support            No              Yes             Yes
Latency                 ~800ns          ~500ns          ~500ns
Adapter Redundancy      1 (mLOM only)   mLOM and Mezz   mLOM and Mezz

   

   



Next Generation VIC based Dual 4x10 Gbps connectivity into fabric PCIe x16 GEN2 host interface Capable of 256 PCIe devices (OS dependent) Same host side drivers as VIC (M81KR) Retains VIC features with enhancements Fabric Failover capability SRIOV “capable” device

10 Base KR Sub Ports

UIF 1

UIF 0

1280-VIC

256

Key Generational Contrasts

Function/Capability     M81KR           1280-VIC
PCIe Interface          Gen1 x16        Gen2 x16
Embedded CPUs           3 @ 500 MHz     3 @ 675 MHz
Uplinks                 2 x 10GE        2 x 10GE / 2 x 4 x 10GE
vNICs/vHBAs             128             256
WQ, RQ, CQ              1024            1024
Interrupts              1536            1536
VIF list                1024            4096

Complete hardware interoperability between Gen 1 and Gen 2 components:

Fabric Interconnect   IOM     Adapter        Supported   Min software version required
6100                  2104    UCS M81KR      Yes         UCSM 1.4(1) or earlier
6100                  2208    UCS M81KR      Yes         UCSM 2.0
6100                  2104    UCS 1280 VIC   Yes         UCSM 2.0
6100                  2208    UCS 1280 VIC   Yes         UCSM 2.0
6200                  2104    UCS M81KR      Yes         UCSM 2.0
6200                  2208    UCS M81KR      Yes         UCSM 2.0
6200                  2104    UCS 1280 VIC   Yes         UCSM 2.0
6200                  2208    UCS 1280 VIC   Yes         UCSM 2.0

Ethernet Switching Modes – End Host Mode (EHM)

  Server vNICs are pinned to an uplink port
  No Spanning Tree Protocol:
    o Reduces CPU load on upstream switches
    o Reduces control-plane load on the 6100
    o Simplified upstream connectivity
  MAC learning happens on server ports only (eases MAC table sizing in the access layer)
  UCS connects to the LAN like a server, not like a switch, and maintains a MAC table for servers only
  Allows multiple active uplinks per VLAN – doubles effective bandwidth vs. STP
  Prevents loops by preventing uplink-to-uplink switching
  Completely transparent to the upstream LAN
  Traffic on the same VLAN is switched locally

   



  Server-to-server traffic on the same VLAN is locally switched
  Uplink-port-to-uplink-port traffic is not switched
  Each server link is pinned to an uplink port / port channel
  Network-to-server unicast traffic is forwarded to the server only if it arrives on the pinned uplink port – this is termed the Reverse Path Forwarding (RPF) check
  A packet whose source MAC belongs to a server but which is received on an uplink port is dropped (deja-vu check)

LAN Server 2 Uplink Ports

Deja-Vu

FI

vEth 1

VLAN 10

vEth 3

VNIC 0

VNIC 0

Server 2

Server 1

RPF



Broadcast traffic for a VLAN is pinned on exactly one uplink port (or port-channel) i.e., it is dropped when received on other uplinks



Server to server multicast traffic is locally switched



RPF and deja-vu check also applies for multicast traffic

LAN B

B Broadcast Listener per VLAN

Uplink Ports

FI

vEth 1

vEth 3

B

262

VNIC 0

VNIC 0

Server 2

Server 1

Root

LAN

  

FI-A

MAC Learning vEth 3

Fabric A



vEth 1 VLAN 10



L2 Switching



VNIC 0

VNIC 0

Server 2

Server 1



263

Switch Mode:
  The Fabric Interconnect behaves like a normal Layer 2 switch
  Server vNIC traffic follows VLAN forwarding
  Spanning Tree Protocol is run on the uplink ports per VLAN (Rapid PVST+)
  Configuration of STP parameters (bridge priority, hello timers, etc.) is not supported
  VTP is not currently supported
  MAC learning/aging happens on both the server and uplink ports, as in a typical Layer 2 switch
  Upstream links are blocked per VLAN via Spanning Tree logic

Fabric Failover
  The fabric provides NIC failover capability, chosen when defining a service profile
  Traditionally this was done using a NIC bonding driver in the OS
  Provides failover for both unicast and multicast traffic
  Works for any OS

L1 L2

L1 L2

FI-A

FI-B

vEth 1

Physical Cable

vEth 1

IOM

IOM

Virtual Cable 10GE

10GE

PHY Adapter Cisco VIC Menlo – M71KR

265

vNIC 1

VIRT Adapter

OS / Hypervisor / VM

[Diagram: fabric failover with a bare-metal OS (Windows/Linux) – Eth0 (MAC-A) and Eth1 (MAC-B) on the blade connect through FEX-A/FEX-B backplane ports to FI-A (VLAN 10) and FI-B (VLAN 20); when the fabric-A path fails the vNIC stays up and a gARP for MAC-A is sent upstream via the surviving fabric]

[Diagram: fabric failover with a hypervisor host – veth interfaces carrying the Web, NFS, VMK, and COS VLAN profiles fail over between FI-A and FI-B, and gARPs for MAC-C and MAC-E are sent on the surviving fabric]

Ethernet Switching Mode Recommendations

  Spanning Tree Protocol is not run in EHM, so the control plane is not occupied by it
  EHM is least disruptive to the upstream network – BPDU Filter/Guard and PortFast enabled upstream
  MAC learning does not happen on uplink ports in EHM; the current MAC address limit on the 6100 is ~14.5K

Recommendation: End Host Mode







Dynamic pinning – server vNICs are pinned to an uplink port / port channel automatically

Static pinning – specific pin groups are created and associated with adapters; static pinning allows traffic engineering if required for certain applications / servers

Recommendation: End Host Mode

2

3

4

DEFINED: PinGroup Oracle Pinning

vEth 1 Switching

VNIC 0

VNIC 0

Server X

Oracle

APPLIED: PinGroup Oracle

 

Fabric Failover is only applicable in EHM. NIC teaming software required to provide failover in Switch mode.

L1 L2

L1 L2

FI-A

FI-B

vEth 1

Physical Cable Virtual Cable

vEth 1

IOM

IOM

10G E

10G E

PHY Adapter Cisco VIC Menlo – M71KR

Recommendation: End Host Mode 271

vNIC 1

VIRT Adapter

OS / Hypervisor / VM

End Host Mode Primary Root

Switch Mode

Secondary Root

LAN

Primary Root

Secondary Root

LAN

Active/Active Border Ports FI-A

Border Ports FI-B

Server Ports

FI-A

Blocking

FI-B

Server Ports

272 Recommendation: End Host Mode



Certain applications like MS-NLB (unicast mode) need unknown unicast flooding, which is not done in EHM.

Certain network topologies provide a better path out of the Fabric Interconnect due to STP root placement and the HSRP L3 hop.

Switch Mode is the "catch-all" for these different scenarios.

Recommendation for these cases: Switch Mode

Adapter – IOM Connectivity

[Diagrams: B200-M3 adapter-to-IOM connectivity – the integrated mLOM VIC 1240 provides 2x10G KR lanes to each IOM; adding the Port Expander or a VIC 1280 mezzanine increases the lanes per IOM (up to 4x10G each with 2208 IOMs, fewer with 2204); the VIC ASICs connect to the Sandy Bridge CPUs over x16 Gen 2 PCIe]

IOM – FI Connectivity

Server-to-Fabric Port Pinning Configurations

[Diagrams: a UCS 5108 chassis (slots 1-8) cabled to the Fabric Interconnect in Discrete Mode (individual fabric links) vs. Port Channel Mode; port channel mode applies to 6200-to-2208 and 6200-to-2204 connections]

  Individual links – blades pinned to discrete NIFs; valid numbers of NIFs for pinning are 1, 2, 4, 8
  Port channel – only supported between the UCS 6200 and the 2208/2204 XP

284

Number of active fabric links    Blades pinned to fabric link
1-Link                           All HIF ports pinned to the active link
2-Link                           Blades 1,3,5,7 to link 1; blades 2,4,6,8 to link 2
4-Link                           Blades 1,5 to link 1; 2,6 to link 2; 3,7 to link 3; 4,8 to link 4
8-Link (2208XP only)             Blade 1 to link 1, blade 2 to link 2, ... blade 8 to link 8

  HIFs are statically pinned by the system to individual fabric ports
  Only 1, 2, 4 and 8 links are supported; 3, 5, 6 and 7 are not valid configurations
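The pinning table above reduces to a simple modulo rule, illustrated by the sketch below (this is only an illustration of the mapping, not a UCSM API):

# Static pinning rule: with N active fabric links (N in 1,2,4,8),
# blade slot b is pinned to link ((b - 1) % N) + 1.
def pinned_link(blade_slot: int, active_links: int) -> int:
    if active_links not in (1, 2, 4, 8):
        raise ValueError("valid fabric link counts for pinning are 1, 2, 4 or 8")
    return (blade_slot - 1) % active_links + 1

for links in (1, 2, 4, 8):
    mapping = {slot: pinned_link(slot, links) for slot in range(1, 9)}
    print(f"{links}-link: {mapping}")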

[Diagram: static pinning done by the Fabric Interconnect – blade HIFs are pinned to fabric ports by the system depending on the number of fabric ports; 1, 2, 4, 8 (2^x) are valid link counts for initial pinning; applicable to both 6100/6200 FIs and 2104XP/2208XP IOMs]

[Diagram: discrete-mode link failure – the HIFs pinned to the failed fabric port are brought down; other blades are unaffected]

On link failure or link addition in discrete mode:
  Blades are re-pinned to a valid number of links (1, 2, 4 or 8)
  HIFs are brought down/up for re-pinning, so pinned blade connectivity is affected
  Addition of links requires re-acknowledgement of the chassis
  May result in unused links

Port-channel mode:
  Only possible between the UCS 6200 and the 2208XP
  HIFs are pinned to the port channel
  Port-channel hash:
    IP traffic – L2 DA, L2 SA, L3 DA, L3 SA, VLAN
    FCoE traffic – L2 SA, L2 DA, FC SID, FC DID

Port-channel link failure:
  Blades stay pinned to the port channel on a link failure
  HIFs are not brought down until all members of the port channel fail

[Diagrams: UCS 5108 chassis-to-FI cabling shown in Port Channel Mode and in Discrete Mode]

Discrete mode:
   A server can only use a single 10GE IOM uplink
   A blade is pinned to a discrete 10 Gb uplink
   Fabric failover if a single uplink goes down
   Per-blade traffic distribution (same as Balboa)
   Suitable for traffic-engineering use cases
   Addition of links requires chassis re-ack

Port Channel mode:
   Servers can utilize all 8x 10GE IOM uplinks
   A blade is pinned to a logical interface of 80 Gbps
   Fabric failover only if all uplinks on the same side go down
   Per-flow traffic distribution within the port channel
   Suitable for most environments
   Recommended with the VIC 1280

Upstream Connectivity (Ethernet)

DMZ 1 VLAN 20-30

DMZ 2 VLAN 40-50 All Links Forwarding

Prune VLANs

FI-A

FI-B

EHM

EHM DMZ 1 Server

DMZ 2 Server

Assumption: no VLAN overlap between DMZ 1 & DMZ 2

Dynamic Re-pinning of failed uplinks

FI-A Sub-second re-pinning

vEth 3

vEth 1 Switching

VLAN 10

Fabric A

L2 Switching

All uplinks forwarding for all VLANs GARP aided upstream convergence No STP Sub-second re-pinning

VNIC stays up

VNIC 0

vSwitch / N1K

MAC A

ESX HOST 1

No server NIC disruption 294

Pinning

VNIC 0

Server 2

VM 1

VM 2

MAC B

MAC C

Recommended: Port Channel Uplinks

No disruption

No GARPs needed

FI-A Sub-second convergence vEth 3 Fabric A

More Bandwidth per Uplink Per flow uplink diversity

vEth 1 Switching

VLAN 10

L2 Switching

No Server NIC disruption

NIC stays up

VNIC 0

Fewer GARPs needed

MAC A

Faster bi-directional convergence Fewer moving parts VNIC 0

RECOMMENDED

Pinning

295

Server 2

vSwitch / N1K

ESX HOST 1 VM 1

VM 2

MAC B

MAC C

vPC uplinks hide uplink & switch failures from Server VNICs vPC Domain

No disruption No GARPs Needed!

FI-A Pinning vEth 3

More Bandwidth per Uplink

vEth 1 Switching

VLAN 10

Fabric A

No Server NIC disruption Switch and Link resiliency

L2 Switching

Per flow uplink diversity

NIC stays up vSwitch / N1K

No GARPs

VNIC 0

Faster Bi-directional convergence Fewer moving parts

vPC RECOMMENDED

Server 2 296

VNIC 0

ESX HOST 1 VM 1

VM 2

MAC B

MAC C

 VNIC 0 on Fabric A  VNIC 1 on Fabric B

VM1 to VM4:

 VM1 Pinned to VNIC0  VM4 Pinned to VNIC1

1) 2)

L2 Switching

3)

 VM1 on VLAN 10  VM4 on VLAN 10 FI-A

FI-B

EHM

VNIC 0

VM1

Leaves Fabric A L2 switched upstream Enters Fabric B

EHM

VNIC 1

VNIC 0

VNIC 1

ESX HOST 1

ESX HOST 2

vSwitch / N1K Mac Pinning

vSwitch / N1K Mac Pinning

VM2

VM3

297

VM4

7K1

7K2

FI-A

FI-B

EHM

1. 2. 3.

EHM

Traffic destined for a vNIC on the Red Uplink enters 7K1 Same scenario vice-versa for Green All Inter-Fabric traffic traverses Nexus 7000 peer link 298

vPC uplinks to L3 aggregation switch 7K1

7K2

vPC Domain keepalive

vPC peer-link

FI-A EHM

FI-B EHM

With 4 x 10G (or more) uplinks per 6100 – Port Channels

FI-A

FI-B

EHM

EHM

All UCS uplinks forwarding No STP influence on the topology End Host Mode

300

Upstream Connectivity (Storage)



Fabric Interconnect operates in N_Port Proxy mode (not FC Switch mode) o Simplifies multi-vendor interoperation

FLOGI FDISC

o Simplifies management



F_Port

SAN switch sees Fabric Interconnect as an FC End Host

N_Proxy

FI-A 



Server vHBA pinned to an FC uplink in the same VSAN. Round Robin selection.

Eliminates the FC domain on UCS Fabric Interconnect One VSAN per F_port (multi-vendor)



Trunking and Port channeling (OX_ID) with MDS, Nexus 5K

N_Proxy

FI-B

vFC 1

vFC 2

F_Proxy

N_Port vHBA 0

vHBA 1

Server 1



F_Port

vFC 1

vFC 2

F_Proxy

N_Port vHBA 0

vHBA 1

Server 2

Ethernet FC

Converged FCoE link Dedicated FCoE link



UCS Fabric Interconnect behaves like an FC fabric switch



Primary use case is directly attached FC or FCoE Storage Targets

FC

E Port

FI-A



Light subset of FC Switching features Select Storage ports o Set VSAN on Storage ports

FCoE

FI-B

vFC 1

vFC 2

F_Proxy

vFC 1

vFC 2

F_Proxy

o

N_Port vHBA 0



Fabric Interconnect uses a FC Domain ID



UCSM 2.1 - In the absence of SAN, Zoning for directly connected targets will be done on the FI’s.

vHBA 1

Server 1

N_Port vHBA 0

vHBA 1

Server 2

Ethernet FC

Converged FCoE link Dedicated FCoE link



FI’s in NPV Mode

Nexus 7k/5k

FLOGI FDISC VF Port



VF Port

Support for trunking and port-channeling VNP



Zoning happens upstream to UCS

FI-A

VNP

FI-B

vFC 1

vFC 2

F_Proxy

N_Port vHBA 0

vHBA 1

Server 1

vFC 1

vFC 2

F_Proxy

N_Port vHBA 0

vHBA 1

Server 2

Ethernet FC

Converged FCoE link Dedicated FCoE link



FI’s in NPV Mode

Nexus 5k FLOGI FDISC VF Port







With a Nexus 5k upstream, the link can be converged, i.e. Ethernet/IP and FCoE traffic on the same wire. This goes against the best practices for upstream Ethernet connectivity, but can be used in scenarios where port licenses and cabling are an issue.

VF Port

VNP

FI-A

VNP

FI-B

vFC 1

vFC 2

F_Proxy

N_Port vHBA 0

vHBA 1

Server 1

vFC 1

vFC 2

F_Proxy

N_Port vHBA 0

vHBA 1

Server 2

Ethernet FC

Converged FCoE link Dedicated FCoE link



IP Storage attached to “Appliance Port” NFS, iSCSI, CIFS



Controller interfaces active/standby for a given volume when attached to separate FIs



Controller interfaces Active/Active when each handling their own volumes



Sub-optimal forwarding is possible if not careful – ensure vNICs access volumes local to their fabric

LAN

NAS Volume A

Volume B

C1

Appliance Port

A

U

FI-A

U

A FI-B

vEth 1

vNIC 0

vEth 2

vNIC 1

Server 1

vEth 1

vNIC 0

vEth 2

vNIC 1

Server 2





Storage attached directly to FI o NFS o iSCSI o CIFS o FCoE Supported with Netapp Unified Target Adapter (UTA)

UTA

Unified Appliance Port

A1

A

U

FI-A

U

A FI-B

vEth 1



LAN

Netapp

vEth 2

vEth 1

vEth 2

Cable and port reduction

vNIC 0

vNIC 1

FC 0

Server 1

FC 1