Data Center Poster

DATA CENTRE NETWORKED APPLICATIONS BLUEPRINT
A KEY FOUNDATION OF CISCO SERVICE-ORIENTED NETWORK ARCHITECTURE

SECONDARY INTERNET EDGE AND EXTRANET

INTERNET EDGE

LARGE BRANCH OFFICE

EXTRANET

Use dual Integrated Services Routers for a large branch office. Connect each router to a different WAN link for higher redundancy. Use dual-stack IPv6-IPv4 services on Layer 3 devices. Use the integrated IOS firewall and intrusion prevention for edge security and the integrated Call Manager for remote voice capabilities. Consider IPv6 firewall policies, filtering and DHCP prefix delegation when IPv6 traffic is expected to or from branch offices. Use integrated Wide Area Engines for file caching, local video broadcast, and static content serving.

Use a dedicated extranet as a highly scalable and secure termination point for IPsec and SSL VPNs to support business partner connectivity. Apply the intranet server farm design best practices to the partner-facing application environments, while considering their specific security and scalability requirements. Consider the use of the Application Control Engine (ACE) to provide high-performance, highly scalable load balancing, SSL and application security capabilities per partner profile by using virtual instances of these functions.

Large branches have LAN/SAN designs similar to those of small data centers and small campuses for server and storage connectivity and client connectivity, respectively. Use Layer 3 switches to house branch-wide LAN services and to provide connectivity to all required access layer switches. In the LAN design, consider a number of VLANs based on branch functions such as server farm, point of sale, voice, video, data, wireless and management.

Partners

Perform Distributed Denial of Service (DDoS) attack mitigation at the Enterprise Edge. Place the Guard and Anomaly Detector to detect and mitigate high-volume attack traffic by diverting it through anti-spoofing and attack-specific dynamic filter countermeasures. Use the Adaptive Security Appliance to concurrently perform firewall, VPN, and intrusion protection functions at the edge of the enterprise network. Use dual-stack IPv6-IPv4 on the edge routers and consider IPv6 firewall and filtering capabilities.

Service Provider 2

Service Provider 1

Place the Global Site Selector in the DMZ to prevent DNS traffic from penetrating the edge security boundaries. Consider the design of the Internet-facing server farm following the same best practices used in intranet server farms, with specific scalability and security requirements driven by the size of the target user population. Use the AON application gateway for XML filtering such as schema and digital signature validation, and to provide transaction integrity and security for legacy application message formats.

INTERNET

Use a collapsed Internet Edge and extranet design for a highly centralized and integrated edge network. Edge services are provided by embedding intelligence from service modules such as firewall, content switching and SSL (ACE) and VPN modules, and appliances such as Guard XT and the Anomaly Detector for DDoS protection. Additional edge functionality includes site selector and content caching, as well as event correlation engines and traffic monitoring provided by the integrated service devices. Consider the use of dual-stack IPv4-IPv6 services in Layer 3 devices and the need to support IPv6 firewall policies and IPv6 filtering capabilities.

VPN

Consider the storage network design and available storage connectivity options: FC, iSCSI and NAS. Plan the data replication process from the branch to headquarters based on latency and transaction rate requirements. Consider QoS classification to ensure the different types of traffic match the loss and latency requirements of the applications.

SECONDARY CAMPUS NETWORK
CAMPUS CORE

CAMPUS NETWORK

Building Y

REMOTE OFFICES
HOME OFFICE

SMALL OFFICE

Consider an integrated services design for a full service branch environment. Services include voice, video, security and wireless. Voice services include IP phones, local call processing, local voice mail, and VoIP gateways to the PSTN. Security services include integrated firewall, intrusion protection, IPsec and admission control. Connect small office networks to headquarters through VPN, and ensure QoS classification and enforcement provides adequate service levels to the different traffic types. Configure multicast for applications that require concurrent recipients of the same traffic. Consider a dual-stack IPv4-IPv6 router to support IPv6 traffic. Ensure IPv6 firewall rules and filtering capabilities are enabled on the router.

Consider the home office as an extension of the enterprise network. Basic services include access to applications and data, voice and video. Use VPN to secure teleworker environments while relying on the corporate security policies. Also consider the use of wireless access as a secure and reliable extension of the enterprise network. Enable QoS to police and enforce service levels for voice, data and video traffic. Consider a dual-stack IPv6-IPv4 router to support IPv6 remote devices. Security policies for IPv6 traffic should include IPv6 filtering and IPv6-capable firewalls.

WIDE AREA NETWORK

CAMPUS CORE

When Layer 2 is used in the Campus access layer, select a primary distribution switch to be the primary default gateway and STP root. Set the redundant distribution switch as the backup default gateway and secondary root. Use HSRP and Rapid PVST+ (RPVST+) as the default gateway and STP protocols.

Integrate wireless controllers at the distribution layer and wireless access points at the access layer. Use EtherChannel between the distribution switches to provide redundancy and scalability. Dual-home access switches to the distribution layer to increase redundancy by providing alternate paths. Consider the use of dual-stack IPv4-IPv6 services at the access, distribution and core Layer 3 and/or Layer 2 devices.

PSTN

The Campus core provides connectivity between the major areas of an Enterprise network including the data center, extranet, Internet edge, Campus, Wide Area Network (WAN), and Metropolitan Area Network (MAN). Use a fully-meshed Campus core to provide high-speed redundant Layer 3 connectivity between the different network areas. Use dual-stack IPv6-IPv4 in all Layer 3 devices and desktop services.

SECONDARY SITE

PRIMARY SITE

Use 10GbE throughout the infrastructure (between distribution switches and between access and distribution) when high throughput is required. Use Layer 3 access switches when shared VLANs are not needed in more than one access switch at a time and very fast convergence is required.

Building Z

SECONDARY DATA CENTER

Use the WAN as the primary path for user traffic destined for the intranet server farm. Use DNS and Route Health Injection (RHI) to control the granularity with which applications are independently advertised and the state of distributed application environments. Ensure the proper QoS classification is used for voice, data and video traffic. Use dual-stack IPv6-IPv4 in all Layer 3 devices.

Use the secondary data center as a backup location that houses critical standby transactional applications (near-zero RPO and RTO) and redundant active non-transactional applications (RPO and RTO in the 12-24 hour range).
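The split above can be read as a simple placement rule for the secondary site. The snippet below only illustrates that rule; the thresholds are the ones stated in this section and the helper function itself is hypothetical, not part of the poster.

```python
# Illustrative only: maps an application's recovery targets onto the secondary
# data center roles described above. Thresholds come from this section;
# the helper itself is hypothetical.
def secondary_dc_role(transactional: bool, rpo_hours: float, rto_hours: float) -> str:
    """Return how the secondary data center should host the application."""
    if transactional and rpo_hours <= 0.25 and rto_hours <= 0.25:
        # Critical transactional systems: standby copies with near-zero RPO/RTO.
        return "critical standby (near-zero RPO/RTO)"
    if not transactional and rpo_hours <= 24 and rto_hours <= 24:
        # Non-transactional systems: redundant active copies, recovered within 12-24 hours.
        return "redundant active (RPO/RTO in the 12-24 hour range)"
    return "review against business-continuance requirements"

print(secondary_dc_role(True, 0, 0))
print(secondary_dc_role(False, 24, 24))
```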

Building X

LARGE-SCALE PRIMARY DATA CENTER
DATA CENTER CORE

Edge:
• 96 switches - 24 ports each - 12 servers per switch
• 12 uplinks to aggregation layer
Aggregation:
• 6 switches - 96 ports each
• 12 downlinks - one per edge switch
Core:
• Number of switches based on fabric connectivity needs

Use VSANs to group isolated fabrics into a shared infrastructure while keeping their dedicated fabric services, security, and stability integral per group. Dual-home hosts to each of the SAN fabrics using Fibre Channel Host Bus Adapters (HBAs).

NETWORK OPERATIONS CENTER (NOC)

Data Center Fundamentals: www.ciscopress.com/datacenterfundamentals


Use a dual-fabric (fabrics A and B) topology to achieve high resiliency in SAN environments. A common management VSAN is recommended to allow the fabric manager to manage and monitor the entire network environment.

Design Best Practices: www.cisco.com/go/datacenter

Service Modules Area

Service Appliances Area

Use storage virtualisation to further increase the effective storage utilization and centralise management of storage arrays. Arrays form a single pool of virtual storage that is presented as virtual disks to applications.

4992 Node Ethernet Cluster

288 Node Infiniband Cluster

Use a non-blocking design for server clusters dedicated to computational tasks. In a non-blocking design, for every HCA connected to the edge/access layer there is an uplink to an aggregation layer switch.

SERVER CLUSTER VSAN

1536 GbE servers

Topology Details
Edge
• 24 switches - 24 ports each
• 12 servers per switch
• 12 uplinks to aggregation layer
Aggregation
• 12 switches - 24 ports each
• 1 or more uplinks to each core switch
Core
• Number of core switches based on connectivity needs to IP network
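As a quick sanity check (a scratch calculation, not part of the poster), the figures listed above work out as follows:

```python
# Scratch arithmetic on the server-cluster topology details listed above.
edge_switches, servers_per_edge, uplinks_per_edge = 24, 12, 12
agg_switches, agg_ports = 12, 24

server_ports = edge_switches * servers_per_edge      # 288 server-facing GbE ports
edge_uplinks = edge_switches * uplinks_per_edge      # 288 uplinks toward the aggregation layer
agg_port_pool = agg_switches * agg_ports             # 288 ports available across the aggregation layer
edge_blocking = servers_per_edge / uplinks_per_edge  # 1.0 -> non-blocking at the edge

print(server_ports, edge_uplinks, agg_port_pool, edge_blocking)
```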

To achieve additional redundancy on an HA server cluster, distribute a portion of the servers in the HA cluster to a second data center. This distribution of HA clusters across distributed data centers, referred to as geo-clusters or stretched clusters, often requires Layer 2 adjacency between distributed nodes. Adjacency means the same VLAN (IP subnet) and VSAN have to be extended over the shared transport infrastructure between the distributed data centers. The HA cluster spans multiple geographically distant data center hosting facilities.

High availability clusters consist of multiple servers supporting mission-critical applications in business continuance or disaster recovery scenarios. The applications include databases, filers, mail servers or file servers. The nodes of a single application cluster use a clustering mechanism that relies on unicast packets if there are two nodes or multicast if using more than two nodes. The nodes back up each other and use heartbeats to determine node status. The network infrastructure supporting the HA cluster is shared by other server farms. Additional VLANs and VSANs are used to connect the additional NICs and HBAs required by the cluster to operate. The application data must be available to all nodes in the cluster. This requires the disk to be shared, so it cannot be local to each node. The shared disk or disk array is accessible through IP (iSCSI or NAS), Fibre Channel (SAN) or shared SCSI. The transport technologies that can be used to connect the LANs and SANs of the data centers include Dark Fiber, DWDM, CWDM, SONET, Metro Ethernet, EoMPLS and L2TPv3, as shown above.
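The heartbeat mechanism itself is handled by the clustering software, but the idea is easy to illustrate. The sketch below is a generic two-node unicast heartbeat over UDP, not any vendor's clustering protocol; the peer address, port, interval and thresholds are made up for the example.

```python
# Generic two-node unicast heartbeat sketch (illustration only; not a vendor
# clustering protocol). Peer address, port, and timing values are made up.
import socket
import time

PEER = ("10.10.10.2", 9000)   # hypothetical peer address on the private heartbeat VLAN
INTERVAL = 1.0                # seconds between heartbeats
MISSED_LIMIT = 3              # declare the peer down after this many missed intervals

def send_heartbeats() -> None:
    """Send one status datagram to the peer every INTERVAL seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    while True:
        sock.sendto(f"alive {seq}".encode(), PEER)
        seq += 1
        time.sleep(INTERVAL)

def watch_heartbeats() -> None:
    """Listen for the peer's heartbeats and flag a failure after missed intervals."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PEER[1]))
    sock.settimeout(INTERVAL * MISSED_LIMIT)
    try:
        while True:
            data, _ = sock.recvfrom(64)
            print("peer is up:", data.decode())
    except socket.timeout:
        print(f"peer missed {MISSED_LIMIT} heartbeats - initiate failover")

if __name__ == "__main__":
    watch_heartbeats()
```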

Modular chassis per rack group
• 8 aggregation - 16 access switches
• 16 10GbE downlinks per aggregation
• 8 10GbE uplinks per access
• Layer 3 in aggregation and access
• 8-way equal cost multipath (ECMP)
• 312 GbE ports per access switch
• 3.9:1 oversubscription
• 80 Gigabit per access switch
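The 3.9:1 figure follows directly from the port counts above; the snippet below just reproduces that arithmetic (a scratch check, not part of the poster).

```python
# Reproduces the oversubscription arithmetic for the modular-chassis rack group above.
access_switches = 16
gbe_ports_per_access = 312                 # server-facing GbE ports per access switch
uplinks_per_access = 8                     # 10GbE uplinks per access switch
uplink_gbps = uplinks_per_access * 10      # 80 Gb/s from each access switch to aggregation

oversubscription = gbe_ports_per_access / uplink_gbps    # 312 / 80 = 3.9 -> 3.9:1
agg_downlinks = 8 * 16                     # 8 aggregation switches x 16 10GbE downlinks = 128
access_uplinks = access_switches * uplinks_per_access    # 16 x 8 = 128, matches agg_downlinks

print(f"{oversubscription:.1f}:1 oversubscription, "
      f"{access_uplinks} access uplinks / {agg_downlinks} aggregation downlinks")
```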

The core layer is required when the cluster needs to connect to an existing IP network environment. The modular access layer switches provide access functions to groups of racks at a time. The design is aimed at reducing the hop count between any two nodes in the cluster.

4992 GbE attached servers

Use a DWDM/SONET/SDH/Ethernet transport network to support high-speed, low-latency uses, such as synchronous data replication between distributed disk subsystems. The common transport network supports multiple protocols such as FC, GbE, and ESCON concurrently.
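Synchronous replication is distance-sensitive because every write waits for the remote acknowledgment. The sketch below estimates that added round-trip delay from fibre length, assuming the common rule of thumb of roughly 5 microseconds of propagation per kilometre of fibre; it is an illustration, not a figure from the poster, and ignores switching and protocol overhead.

```python
# Rough estimate of the round-trip propagation delay a synchronous write incurs
# over a DWDM/SONET/SDH link. Assumes ~5 microseconds per km of fibre one way
# (a common rule of thumb); switching and protocol overhead are ignored.
def sync_write_rtt_ms(fibre_km: float, us_per_km: float = 5.0) -> float:
    """Round-trip propagation delay in milliseconds added to each synchronous write."""
    return 2 * fibre_km * us_per_km / 1000.0

for distance_km in (10, 50, 100, 200):
    print(f"{distance_km:>4} km -> {sync_write_rtt_ms(distance_km):.1f} ms per write")
```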

PUBLIC VLANs
HIGH AVAILABILITY SERVER CLUSTER

The nodes in HA clusters are linked to multiple networks using existing network infrastructure. Use the private network for heartbeats and the public network for inter-cluster communication and client access. Nodes in distributed data centers may need to be in the same subnet, requiring Layer 2 adjacency.

CLUSTER VLANs
FABRIC C
FABRIC D
VLAN X

The transport network supports multiple communication streams between nodes in the stretched clusters. Use multiple VLANs to separate intracluster traffic from client traffic. Use multiple SAN fabrics to provide path redundancy for the extended SAN. Use multipathing on the hosts and IVR between the SAN fabric Directors to take advantage of the redundant fabrics. Use write acceleration to improve the performance rate of the data replication process. Consider the use of encryption to secure data transfers and compression to increase the data transfer rates.

End-user Workstation

Service Devices Placement Location


Use a SONET/SDH transport network for FCIP, in addition to voice, video, and additional IP traffic between distributed locations in metro or long-haul environments. Consider the use of RPR/802.17 technology to create a highly available MAN core for distributed locations.

HIGH AVAILABILITY SERVER CLUSTER

NOC VLAN/VSAN

Use a NOC VLAN to house critical management tools and to isolate management traffic from client/server traffic. Use NTP, SSH-2, SNMPv3, CDP and RADIUS/TACACS+ as part of the management infrastructure. Use CiscoWorks LMS to manage the network infrastructure and monitor IPv4-IPv6 traffic, and the Cisco Security Manager to control, configure and deploy firewall, VPN and IPS security policies. Use the Performance Visibility Manager to measure end-to-end application performance. Use the Monitoring, Analysis, and Response System to correlate traffic for anomaly detection purposes. Use the Network Planning Solution to build network topology models for failure scenario analysis and other what-if scenarios based on device configuration, routing tables, NAM and NetFlow data. Use the MDS Fabric Manager to manage the storage network. Use NetFlow and the Network Analysis Module for capacity planning and traffic profiling.

[email protected]

• 8 core switches connected to each aggregation module through a 10GbE link per switch
• 4 aggregation modules, each with 2 Layer 3 switches that provide 10GbE connectivity to the access layer switches
• 8 access switches per aggregation module, each switch connecting to 2 aggregation switches through 10GbE links
• Each access layer switch supports 48 10/100/1000 ports and 2 10GbE uplinks
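Reading the list above, the aggregate access capacity and per-switch uplink ratio work out as follows (a scratch calculation, not part of the poster):

```python
# Scratch arithmetic on the access/aggregation figures listed above.
agg_modules = 4
access_per_module = 8
server_ports_per_access = 48          # 10/100/1000 ports per access switch
uplinks_per_access = 2                # 10GbE uplinks per access switch

access_switches = agg_modules * access_per_module                  # 32 access switches
server_port_capacity = access_switches * server_ports_per_access   # 1536 server-facing ports
uplink_gbps = uplinks_per_access * 10                              # 20 Gb/s per access switch
oversubscription = server_ports_per_access / uplink_gbps           # 48 / 20 = 2.4 -> 2.4:1

print(access_switches, server_port_capacity, f"{oversubscription:.1f}:1")
```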


Use PortChannels and trunks to aggregate multiple physical inter-switch links (ISLs) into a logical link. Use VSANs to segregate multiple distinct SANs in a physical fabric to consolidate isolated SANs and SAN fabrics. Use core-edge topologies to connect multiple workgroup fabric switches when tolerable over-subscription is a design objective.


Topology Details:

Connect access switches used in application and back-end segments to each other across the application tier function boundaries through EtherChannel® links. Use VLANs to separate groups of servers by function or application service type.

Topology details for 1024 servers include:


Infiniband Network

High density Ethernet clusters consist of multiple servers that operate concurrently to solve computational tasks. Some of these tasks require a certain degree of processing parallelism while others require raw CPU capacity. Common applications of large Ethernet clusters include large search engine sites and large web server farms. The diagram shows a tiered design using “top of rack” 1RU access switches for a total of 1536 servers.

Use firewalls to control the traffic path between tiers of servers and to isolate distinct application environments. Use ACE as a content switch to monitor and control server and application health, and to distribute traffic load between clients and the server farm, and between server/application tiers.

SERVER CLUSTER VSAN

Blade Servers

HIGH DENSITY ETHERNET CLUSTER

Deploy access layer switches in pairs to enable server dual-homing and NIC Teaming. Use trunks and channels between access and aggregation switches. Carry VLANs that are needed throughout the server farm on every trunk to increase flexibility. Trim unneeded VLANs from every trunk.

Use an Infiniband fabric for applications that execute a high rate of computational tasks and require low latency and high throughput. Select the proper oversubscription rate between the edge and aggregation layers for intracluster purposes, or between the aggregation, core and Ethernet fabric. In a 2:1 blocking topology, for every two HCAs connected to edge switches there is an uplink to an aggregation layer switch. Core switches provide connectivity to the Ethernet fabric. Use VFrame to manage the I/O virtualisation capabilities of server fabric switches.
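The chosen blocking ratio translates directly into the number of uplinks each edge switch needs. The helper below makes that relationship explicit for the non-blocking (1:1) and 2:1 cases described on this poster; the function name and the example HCA count are illustrative, not from the poster.

```python
# Illustrative helper: uplinks an edge switch needs for a given number of
# attached HCAs and a chosen blocking (oversubscription) ratio.
import math

def uplinks_needed(hcas_per_edge: int, blocking_ratio: float) -> int:
    """blocking_ratio is HCAs per uplink: 1.0 is non-blocking, 2.0 is 2:1 blocking."""
    return math.ceil(hcas_per_edge / blocking_ratio)

print(uplinks_needed(12, 1.0))   # non-blocking: one uplink per HCA -> 12 uplinks
print(uplinks_needed(12, 2.0))   # 2:1 blocking: one uplink per two HCAs -> 6 uplinks
```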

Questions:

When using pass-through modules, dual-home servers to access/edge layer switches. Pass-through modules allow Fibre Channel environments to avoid interoperability issues while allowing access to the advanced SAN fabric features.

Select a primary aggregation switch to be the primary default gateway and STP root. Set the redundant aggregation switch as the backup default gateway and secondary root. Use HSRP and RPVST+ as the default gateway and STP protocols and ensure the active service devices are in the STP root switch.


Blade Servers

Use a high-speed (10GbE) metro optical network for packet-based and transparent LAN services between distributed Campus and Data Centre environments.

Attach integrated Infiniband switches to Server Fabric Switches acting as gateways to the Ethernet network. Connect the gateway switches to the aggregation switches to reach the IP network.

Application, security and virtualisation services provided by service modules or appliances are best offered from the aggregation layer. Services are made available to all servers, provisioning is centralized and the network topology is kept predictable and deterministic.

HIGH PERFORMANCE INFINIBAND CLUSTER

Mauricio Arregoces [email protected]

In an integrated Ethernet switch fabric, set up half the blades active on switch 1 and half active on switch 2. Dual-home each Ethernet switch to Layer 3 switches through GbE channels. Use RPVST+ for fast STP convergence. Use link-state tracking to detect uplink failure and allow the blades' standby NICs to take over.

EXPANDED MULTI-TIER DESIGN

Use VSANs to create separate SANs over a shared physical infrastructure. Use two distinct SAN fabrics to maintain a highly available SAN environment. Use PortChannels to increase path redundancy and speed recovery from link failure. Use FSPF for equal-cost load balancing through redundant paths. Use storage virtualisation to pool distinct physical storage arrays as one, hiding physical details (arrays, spindles, LUNs).

Designed By:

Consider blade server direct attachment and network fabric options: pass-through modules or integrated switches, and Fibre Channel, Ethernet and Infiniband.

Group servers providing like functions in the same VLANs to apply a consistent and manageable set of security, SSL, load balancing, and monitoring policies. Dual-home critical servers to different access switches, and stagger primary physical connections between the available access switches.

METRO ETHERNET

Place all network-based service devices (modules or appliances) at the aggregation layer to centralise the configuration and management tasks and to leverage service intelligence applied to the entire server farm.

Use a data center core layer to support multiple aggregation modules. Use multiple aggregation modules when the number of servers per module exceeds the capacity of the module. Connect the data center core switches to the campus core switches to reach the rest of the Enterprise network. Consider the use of 10GbE links between core and aggregation switches. Use dual-stack IPv6-IPv4 in all Layer 3 devices in the data center, and identify the server farm requirements to ensure IPv6 traffic conforms to firewall and filtering policies.

Use ACE as a content switch to scale application services including SSL off-loading on server farms. Use virtual firewalls to isolate application environments. Use AON to optimise inter-application security and communications services, and to provide visibility into real-time transactions. Use MARS to detect security anomalies by correlating data from different traffic sources.


MAN INTERCONNECT

BLADE SERVER COMPLEX

COLLAPSED MULTITIER DESIGN

Consolidate application and security services (service modules or appliances) at the aggregation layer switches. Ensure the access layer design (whether L2 or L3) provides a predictable and deterministic behavior and allows the server farm to scale up the expected number of nodes. Use VLANs in conjunction with instances of application and security services applied to each application environment independently.

The secondary data center design is a smaller replica of the primary that houses backup critical application environments. These support business functions that must be resumed to achieve regular operating conditions.

Diagram legend: Public Network, Private Network, VLAN Y, VSAN Q, VSAN P, GbE, 10 GbE, Wireless Connection, Cisco Application Control Engine, Cisco 3000 Series Multifabric Server Switch

Part #: 910300406R01
