Setup for Microsoft Cluster Service
Update 1 Release for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5
Revision: 041108
You can find the most up-to-date technical documentation on our Web site at: http://www.vmware.com/support/ The VMware Web site also provides the latest product updates. If you have comments about this documentation, submit your feedback to:
[email protected]
© 2007, 2008 VMware, Inc. All rights reserved. Protected by one or more U.S. Patent Nos. 6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601, 6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,944,699, 6,961,806, 6,961,941, 7,069,413, 7,082,598, 7,089,377, 7,111,086, 7,111,145, 7,117,481, 7,149,843, 7,155,558, 7,222,221, 7,260,815, 7,260,820, 7,269,683, 7,275,136, 7,277,998, 7,277,999, 7,278,030, 7,281,102, and 7,290,253; patents pending. VMware, the VMware “boxes” logo and design, Virtual SMP and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
VMware, Inc. 3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com
Contents
About This Book 5
1 Getting Started 9
    Introduction 9
    Clustering Software 10
    Clustering Hardware 10
    Clustering Configurations 10
        Clustering Virtual Machines on a Single Host (Cluster in a Box) 10
        Clustering Virtual Machines Across Physical Hosts (Cluster Across Boxes) 11
        Clustering Physical Machines with Virtual Machines (Standby Host) 13
    Prerequisites for Clustering 13
        Prerequisites for Cluster in a Box 14
        Prerequisites for Clustering Across Boxes 14
        Prerequisites for Standby Host Clustering 15
        Shared Storage Summary 15
    Caveats, Restrictions, and Recommendations 16
    Recommendations for Using MSCS and Boot from SAN 17
    Setting up a Clustered Continuous Replication Environment for Microsoft Exchange 18
2 Clustering Virtual Machines on One Physical Host 19
    Task 1: Creating the First Node 19
    Task 2: Creating the Second Node 21
    Task 3: Adding Hard Disks to Node1 21
    Task 4: Adding Hard Disks to Node2 24
3 Clustering Virtual Machines Across Physical Hosts 27
    Task 1: Creating the First Node 27
    Task 2: Creating the Second Node 29
    Task 3: Adding Hard Disks to Node1 30
    Task 4: Adding Hard Disks to Node2 32
4 Clustering Physical and Virtual Machines 35
    Task 1: Creating the First Node 35
    Task 2: Creating the Second Node 36
    Task 3: Installing Microsoft Cluster Service 38
    Task 4: Creating Additional Physical/Virtual Pairs 38
5 Upgrading Clustered Virtual Machines 39
    Legacy Cluster Setup Options 39
    Upgrading Cluster in a Box (CIB) 40
        Upgrading CIB: Shared RDMs and Boot Disks in Separate VMFS Volumes 40
        Upgrading CIB: RDMs and Boot Disks in Same VMFS Volume 41
        Upgrading CIB: Virtual Disks 42
    Upgrading Cluster Across Boxes 42
        Using Shared Pass-Through RDMs 42
        Upgrading a Cluster with Files in Shared VMFS2 Volumes 43
    Upgrading Clusters Using Physical to Virtual Clustering 44
Appendix: Setup Checklist 45
Index 49
About This Book
This book, Setup for Microsoft Cluster Service, first discusses the types of clusters you can implement using virtual machines with Microsoft Cluster Service. It then gives step‐by‐step instructions for each type of cluster, and concludes with a checklist of clustering requirements and recommendations. Setup for Microsoft Cluster Service covers both ESX Server 3.5 and ESX Server 3i version 3.5. For ease of discussion, this book uses the following product naming conventions:
For topics specific to ESX Server 3.5, this book uses the term “ESX Server 3.”
For topics specific to ESX Server 3i version 3.5, this book uses the term “ESX Server 3i.”
For topics common to both products, this book uses the term “ESX Server.”
When the identification of a specific release is important to a discussion, this book refers to the product by its full, versioned name.
When a discussion applies to all versions of ESX Server for VMware Infrastructure 3, this book uses the term “ESX Server 3.x.”
Intended Audience
This book is for system administrators who are familiar with both VMware technology and Microsoft Cluster Service.
NOTE This is not a guide to using Microsoft Cluster Service. Use your Microsoft documentation for information on installation and configuration of Microsoft Cluster Service.
Document Feedback
VMware welcomes your suggestions for improving our documentation. If you have comments, send your feedback to:
[email protected]
VMware Infrastructure Documentation
The VMware Infrastructure documentation consists of the combined VMware VirtualCenter and ESX Server documentation set.
Abbreviations Used in Figures
The figures in this book use the abbreviations listed in Table 1.
Table 1. Abbreviations
  Abbreviation   Description
  FC             Fibre Channel
  SAN            Storage area network type datastore shared between managed hosts
  VM#            Virtual machines on a managed host
Technical Support and Education Resources
The following sections describe the technical support resources available to you. To access the current versions of this book and other books, go to: http://www.vmware.com/support/pubs.
Online and Telephone Support
Use online support to submit technical support requests, view your product and contract information, and register your products. Go to: http://www.vmware.com/support
Customers with appropriate support contracts should use telephone support for the fastest response on priority 1 issues. Go to: http://www.vmware.com/support/phone_support.html
Support Offerings
Find out how VMware support offerings can help meet your business needs. Go to: http://www.vmware.com/support/services
VMware Education Services
VMware courses offer extensive hands-on labs, case study examples, and course materials designed to be used as on-the-job reference tools. For more information about VMware Education Services, go to: http://mylearn1.vmware.com/mgrreg/index.cfm
1 Getting Started
This chapter introduces clustering, discusses the different types of clusters and prerequisites for each type, and includes some caveats and recommendations in the following sections:
“Introduction” on page 9
“Clustering Configurations” on page 10
“Prerequisites for Clustering” on page 13
“Caveats, Restrictions, and Recommendations” on page 16
“Recommendations for Using MSCS and Boot from SAN” on page 17
“Setting up a Clustered Continuous Replication Environment for Microsoft Exchange” on page 18
Introduction
This document discusses traditional clustering (hot standby) using MSCS in a VMware Infrastructure environment. Clustering virtual machines can reduce hardware costs of traditional high availability clusters.
VMware also supports a cold standby clustering solution using VMware HA in conjunction with VirtualCenter clusters. VMware HA functionality, as well as the differences between the two approaches, is discussed in the Resource Management Guide.
A number of different applications use clustering:
Stateless applications, such as Web servers and VPN servers.
Applications that have built‐in recovery features, such as database servers, mail servers, and file servers.
VirtualCenter Server can be used as a clustered application. See http://www.vmware.com/pdf/VC_MSCS.pdf.
Clustering Software
Several clustering software products can be used in conjunction with virtual machines. However, VMware tests clustering only with MSCS and supports only MSCS.
Clustering Hardware
A typical clustering setup includes:
Disks that are shared between nodes. A shared disk is required as a quorum disk. In a cluster across boxes, the shared disk must be on an FC SAN.
A private heartbeat network between nodes.
Clustering Configurations
Several clustering configurations are possible in a VMware Infrastructure environment and are briefly discussed below:
“Clustering Virtual Machines on a Single Host (Cluster in a Box)” on page 10
“Clustering Virtual Machines Across Physical Hosts (Cluster Across Boxes)” on page 11
“Clustering Physical Machines with Virtual Machines (Standby Host)” on page 13
Clustering Virtual Machines on a Single Host (Cluster in a Box)
A cluster in a box consists of two clustered virtual machines on the same ESX Server host connected to the same storage (either local or remote). See Figure 1-1 for an example.
Figure 1-1. Cluster in a Box
[The figure shows two virtual machines, Node1 and Node2, each running cluster software on one physical machine, connected by private and public networks and attached to shared storage (local or SAN).]
This configuration protects against failures at the operating system and application level, but it does not protect against hardware failures. Chapter 2, “Clustering Virtual Machines on One Physical Host,” discusses how to set up a cluster in a box using MSCS.
Clustering Virtual Machines Across Physical Hosts (Cluster Across Boxes)
A cluster across boxes configuration provides both hardware and software-level protection by placing the cluster nodes on separate ESX Server hosts, as shown in Figure 1-2. This configuration requires shared storage on an FC SAN for the quorum disk.
This configuration protects against software failures and hardware failures on the physical machine. Chapter 3, "Clustering Virtual Machines Across Physical Hosts," discusses how to set up a cluster across boxes using MSCS.
Figure 1-2. Cluster Across Boxes
[The figure shows two virtual machines, Node1 and Node2, each running cluster software on a separate physical machine, connected by private and public networks and attached to shared SAN storage.]
You can expand the cluster-across-boxes model and place multiple virtual machines on multiple physical machines. For example, you can consolidate four clusters of two physical machines each to two physical machines with four virtual machines each. This setup protects you from both hardware and software failures. At the same time, this setup results in significant hardware cost savings.
Figure 1-3. Clustering Multiple Virtual Machines Across Hosts
[The figure shows eight physical cluster nodes (1 through 8) consolidated as eight virtual machines (VM1 through VM8) on two physical machines.]
Figure 1‐3 shows how four two‐node clusters can be moved from eight physical machines to two.
Clustering Physical Machines with Virtual Machines (Standby Host)
For a simple clustering solution with low hardware requirements, you might choose to have one standby host. Set up your system to have a virtual machine corresponding to each physical machine on the standby host, and then create clusters, one for each physical machine and its corresponding virtual machine. In case of hardware failure in one of the physical machines, the virtual machine on the standby host can take over for that physical host.
Figure 1-4 shows a standby host using three virtual machines on a single physical machine. Each virtual machine is running clustering software.
Figure 1-4. Clustering Physical and Virtual Machines
[The figure shows three physical machines, each running cluster software, paired with three corresponding virtual machines running cluster software on a single standby physical machine.]
Prerequisites for Clustering
Using MSCS in any of the configurations discussed requires preparation. This section lists the prerequisites for the ESX Server host and the virtual machine. For additional software prerequisite information, see the Guide to Creating and Configuring a Server Cluster under Windows Server 2003 on the Microsoft Web site.
"Appendix: Setup Checklist" on page 45 summarizes prerequisites for different types of clusters.
Prerequisites for Cluster in a Box
To set up a cluster in a box, you must have:
ESX Server host, one of the following:
ESX Server 3 – An ESX Server host with a physical network adapter for the service console. If the clustered virtual machines need to connect with external hosts, then an additional network adapter is highly recommended.
ESX Server 3i – An ESX Server host with a physical network adapter for the VMkernel. If the clustered virtual machines need to connect with external hosts, a separate network adapter is recommended.
A local SCSI controller. If you plan to use a VMFS volume that exists on a SAN, you need an FC HBA (QLogic or Emulex).
You can set up shared storage for a cluster in a box either by using a virtual disk or by using a remote raw device mapping (RDM) LUN in virtual compatibility mode (non-pass-through RDM).
When you set up the virtual machine, you need to configure:
Two virtual network adapters.
A hard disk that is shared between the two virtual machines (quorum disk).
Optionally, additional hard disks for data that are shared between the two virtual machines if your setup requires it. When you create hard disks, as described in this document, the system creates the associated virtual SCSI controllers.
Prerequisites for Clustering Across Boxes
The prerequisites for clustering across boxes are similar to those for cluster in a box. You must have:
ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
Shared storage must be on an FC SAN.
You must use an RDM in physical or virtual compatibility mode (pass-through RDM or non-pass-through RDM). You cannot use virtual disks for shared storage.
Prerequisites for Standby Host Clustering
The prerequisites for standby host clustering are similar to those for clustering across boxes. You must have:
ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
You must use RDMs in physical compatibility mode (pass‐through RDM). You cannot use virtual disk or RDM in virtual compatibility mode (non‐pass‐through RDM) for shared storage.
You cannot have multiple paths from the ESX Server host to the storage.
Running third‐party multipathing software is not supported. Because of this limitation, VMware strongly recommends that there only be a single physical path from the native Windows host to the storage array in a configuration of standby‐host clustering with a native Windows host. The ESX Server host automatically uses native ESX Server multipathing, which can result in multiple paths to shared storage.
Use the STORport Miniport driver for the FC HBA (QLogic or Emulex) in the physical Windows machine.
Shared Storage Summary
Table 1-1 illustrates which shared storage setup is supported for which clustering solution. The setup for each solution is shown in bold.
Table 1-1. Shared Storage Summary
                                                      Cluster in a Box   Cluster Across Boxes   Standby Host Clustering
  Virtual disks                                       Yes                No                     No
  Pass-through RDM (physical compatibility mode)      No                 Yes                    Yes
  Non-pass-through RDM (virtual compatibility mode)   Yes                Yes                    No
Caveats, Restrictions, and Recommendations
This section summarizes caveats, restrictions, and recommendations for using MSCS in a VMware Infrastructure environment.
VMware only supports third‐party cluster software that is specifically listed as supported in the hardware compatibility guides. For latest updates to VMware support for Microsoft operating system versions for MSCS, or for any other hardware‐specific support information, see the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i.
Each virtual machine has five PCI slots available by default. A cluster uses four of these slots (two network adapters and two SCSI host bus adapters), leaving one PCI slot for a third network adapter (or other device), if needed.
VMware virtual machines currently emulate only SCSI‐2 reservations and do not support applications using SCSI‐3 persistent reservations.
Use LSILogic virtual SCSI adapter.
Use Windows Server 2003 SP2 (32 bit or 64 bit) or Windows 2000 Server SP4. VMware recommends Windows Server 2003.
Use two‐node clustering.
Clustering is not supported on iSCSI or NFS disks.
NIC teaming is not supported with clustering.
The boot disk of the ESX Server host should be on local storage.
Mixed HBA environments (QLogic and Emulex) on the same host are not supported.
Mixed environments using both ESX Server 2.5 and ESX Server 3.x are not supported.
Clustered virtual machines cannot be part of VMware clusters (DRS or HA).
You cannot use migration with VMotion on virtual machines that run cluster software.
Set the I/O time-out to 60 seconds or more by modifying HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue (an example command follows this list). The system might reset this I/O time-out value if you recreate a cluster. You must reset the value in that case.
Use the eagerzeroedthick format when you create disks for clustered virtual machines. By default, the VI Client or vmkfstools create disks in zeroedthick format. You can convert a disk to eagerzeroedthick format by importing, cloning, or inflating the disk. Disks deployed from a template are also in eagerzeroedthick format.
Add disks before networking, as explained in the VMware Knowledge Base article at http://kb.vmware.com/kb/1513.
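For example, you might check and set the time-out value from a command prompt inside a Windows Server 2003 guest as follows (a sketch only; the value is in decimal seconds, and the same change can be made with regedit):
reg query "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk" /v TimeOutValue
reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f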
Recommendations for Using MSCS and Boot from SAN
This section gives some recommendations for clustered virtual machines that use boot from SAN. For general information about boot from SAN, see the Fibre Channel SAN Configuration Guide.
NOTE You cannot use clustered virtual machines that boot from an iSCSI SAN.
Booting from SAN is complex. Problems you encounter in physical environments extend to virtual environments. VMware recommends the following when you place the boot disk of a clustered virtual machine on a SAN.
Consider the best practices for boot from SAN that Microsoft publishes in the following knowledge base article: http://support.microsoft.com/kb/305547/en-us
Use StorPort lsilogic drivers instead of SCSIport drivers when running Microsoft Cluster Service for Windows Server 2003 guest operating systems.
VMware does not recommend migration with VMotion of clustered virtual machines.
Given the complexity of booting clustered virtual machines from SAN, VMware recommends you test clustered configurations in different failover scenarios before you put them into production environments.
If your environment is susceptible to conditions that cause cluster node servers to lose all paths to the storage array, do the following (for all cluster configurations):
Set bus sharing for the boot disk (scsi0) to None.
Set scsi0.returnBusyOnNoConnectStatus to FALSE for each node. See “To set scsi0.returnBusyOnNoConnectStatus” on page 18.
Set up the guest operating system to restart automatically after a crash. See “To set up automatic restart for the guest operating system” on page 18.
When all paths to storage are lost, the active node will crash and attempt to reboot.
To set scsi0.returnBusyOnNoConnectStatus
1  Log in to a VI Client and select the virtual machine from the inventory panel.
   The configuration page for this virtual machine appears.
2  In the Summary tab, click Edit Settings.
3  Click Options > Advanced > General, and then click Configuration Parameters to open the Configuration Parameters dialog box.
4  Click Add Row.
5  Type scsi0.returnBusyOnNoConnectStatus in the Name column and FALSE in the Value column.
6  Click OK to close the Configuration Parameters dialog box, and then click OK again to close the Virtual Machine Properties dialog box.
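This procedure adds an entry like the following to the virtual machine configuration (.vmx) file, shown here for reference:
scsi0.returnBusyOnNoConnectStatus = "FALSE"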
To set up automatic restart for the guest operating system
1  Right-click My Computer.
2  Choose Properties, then select the Advanced tab and click Settings under Startup and Recovery.
3  Choose Automatically restart on system failure.
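On Windows Server 2003, a sketch of making the same setting from a command prompt uses the WMIC utility (an illustration only; verify the result in the Startup and Recovery dialog box):
wmic recoveros set AutoReboot = True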
Setting up a Clustered Continuous Replication Environment for Microsoft Exchange
You can set up a clustered continuous replication (CCR) environment for Microsoft Exchange in your VMware Infrastructure environment.
Microsoft discusses setup for Exchange Server 2007 on their Web site at: http://technet.microsoft.com/en-us/library/bb124558.aspx
Microsoft discusses setup of CCR clusters on their Web site at: http://technet.microsoft.com/en-us/library/bb123996.aspx
When working in a VMware Infrastructure environment, you use virtual machines instead of physical machines as the cluster components. Use physical compatibility mode RDMs. If the boot disks of the CCR virtual machines are on a SAN, see "Recommendations for Using MSCS and Boot from SAN" on page 17.
2 Clustering Virtual Machines on One Physical Host
This chapter guides you through creating a two‐node MSCS cluster on a single ESX Server machine. The process consists of four tasks, discussed in the following sections:
“Task 1: Creating the First Node” on page 19
“Task 2: Creating the Second Node” on page 21
“Task 3: Adding Hard Disks to Node1” on page 21
“Task 4: Adding Hard Disks to Node2” on page 24
NOTE Microsoft Cluster Service is already installed for Windows Server 2003. See the Guide to Creating and Configuring a Server Cluster under Windows Server 2003 and other documentation on the Microsoft Website. For Windows 2000 Server, you must install the Microsoft Cluster Service software.
Task 1: Creating the First Node
Creating the first node consists of these major steps, discussed in detail in this section:
Creating the virtual machine for the first node with two virtual network adapters.
Installing the operating system.
Powering down the first node.
NOTE Before you create a virtual machine, create a virtual disk in eagerzeroedthick format using vmkfstools, and select that disk during virtual machine creation.
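For example, from the ESX Server 3 service console you might create such a disk with a command like the following (the size, datastore, and file names are illustrative; see "To prepare for adding disks" on page 22 for the full service console and Remote CLI forms):
vmkfstools -c 10g -d eagerzeroedthick -a lsilogic /vmfs/volumes/<mydatastore>/Node1/Node1.vmdk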
To create and configure the first node's virtual machine
1  Launch a VI Client and connect to the ESX Server host or a VirtualCenter Server. Use the user name and password of the user who will own the virtual machine.
2  In the inventory panel, right-click the host and choose New Virtual Machine.
3  Make the following selections using the wizard.
Table 2-1. New Virtual Machine Properties
  Page                    Selection
  Wizard Type             Typical.
  Name and Location       Choose a name (for example, Node1) and location.
  Resource Pool           Select the resource pool for the virtual machine, or select the host if there are no resource pools.
  Datastore               Choose a local datastore as the location for the virtual machine configuration file and the virtual machine disk (.vmdk) file. This must be a disk in eagerzeroedthick format. Note: The virtual machine configuration file and the .vmdk file should always be stored on the local disk. (SEE UPDATE)
  Guest Operating System  Choose the Windows 2000 Server or Windows Server 2003 operating system that you intend to install.
  CPUs                    Use the default unless you have special requirements.
  Memory                  Use the default unless you need additional memory and your server supports it.
  Network                 Change NICs to Connect to 2, and select the second network for the second NIC. You need one NIC for the private network and the second NIC for the public network.
  Virtual Disk Capacity   If you need a primary SCSI disk larger than 4GB, enter the appropriate value in the Capacity field.
  Ready to Complete       Click OK to create the virtual machine.
4  Install a Windows 2000 Server or Windows Server 2003 operating system on the virtual machine.
Task 2: Creating the Second Node
Creating the second node involves cloning the Node1 virtual machine and adding disks that point to the shared storage. You can clone the node using a VI Client connected to a VirtualCenter Server, as described below, or using vmkfstools. See the Server Configuration Guide for a reference to vmkfstools.
To clone the Node1 virtual machine
1  Shut down the guest operating system and power off the virtual machine.
2  In the VI Client inventory panel, select Node1 and choose Clone from the right-button menu.
3  Make the following selections with the wizard:
Table 2-2. Cloned Virtual Machine Properties
  Page                Selection
  Name and Location   Choose a name (for example, Node2) and location.
  Resource Partition  Select the resource pool for the virtual machine, or select the host if there are no resource pools.
  Datastore           Choose a local datastore as the location for the virtual machine configuration file and the .vmdk file. This must be a disk in eagerzeroedthick format. (SEE UPDATE)
  Customization       Choose Do not customize.
  Ready to Complete   Click OK to create the virtual machine.
You have now created your second cluster node, a virtual machine with two network adapters on which the operating system is installed.
Task 3: Adding Hard Disks to Node1
After you have created two virtual machines as cluster nodes, you are ready to add a shared quorum disk. You can also add additional shared disks to the cluster if you plan on clustering additional data disks. After you have added disks, you can configure the cluster's public and private IP addresses.
To prepare for adding disks
You must zero out the disks you use with a cluster-in-a-box scenario. You can use vmkfstools to do so. If you run on an ESX Server 3i host, you use the vmkfstools Remote CLI, which you must execute with connection parameters. See the ESX Server 3i Configuration Guide for information on installing and using Remote CLI commands.
CAUTION When you zero out a disk, you lose all data.
To create and zero out the disk, use the following command:
Service Console
vmkfstools -c <size> -d eagerzeroedthick -a lsilogic /vmfs/volumes/<mydir>/<myDisk>.vmdk
Remote CLI
vmkfstools.pl --server <server_address> --username <user> --password <user_password> -c <size> -d eagerzeroedthick -a lsilogic /vmfs/volumes/<mydir>/<myDisk>.vmdk
To zero out an existing disk, use the following command:
Service Console
vmkfstools [-w |--writezeroes] /vmfs/volumes/<mydir>/<myDisk>.vmdk
Remote CLI
vmkfstools.pl --server <server_address> --username <user> --password <user_password> [-w |--writezeroes] /vmfs/volumes/<mydir>/<myDisk>.vmdk
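For example, to zero out an existing quorum disk from the service console (the path shown is illustrative):
vmkfstools -w /vmfs/volumes/vmfs1/cluster/quorum.vmdk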
Repeat this process for each virtual disk you want to use as a shared disk in the cluster. For example, if you have one quorum disk and one shared storage disk, you must run the tool on both disks.
To add a quorum disk and optional shared storage disk
1  Select the virtual machine you created and choose Edit Settings.
2  Click Add, select Hard Disk, and click Next.
3  Select Choose an existing virtual disk and select one of the disks you prepared. See "To prepare for adding disks" on page 22.
   NOTE You can also use a mapped SAN LUN set to virtual compatibility mode. In that case, you don't need to run the vmkfstools commands listed in "To prepare for adding disks."
4  Choose a new virtual device node. For example, choose SCSI(1:0), and use the default mode.
   NOTE This must be a new controller. You cannot use SCSI 0.
5  Click Finish. The wizard creates both a new hard disk and a new SCSI controller.
6  Select the new SCSI controller and click Change Controller Type. Make sure the controller is set to LsiLogic (the default). BusLogic is not supported when you use MSCS with ESX Server 3.0 or later.
7  In the same panel, set SCSI Bus Sharing to Virtual and click OK.
8  If you require additional shared data disks, repeat Step 1 through Step 6 but choose a new target device, such as SCSI (1:1), on the controller that was just created.
Figure 2-1 shows your setup at this point.
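After step 7, the virtual machine's .vmx file should contain entries along the following lines for the new controller (a sketch, assuming you chose a SCSI(1:x) device node; the controller number depends on your configuration):
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"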
Figure 2-1. Cluster in a Box Setup for One Node (SEE UPDATE)
[The figure shows Node1 with two virtual NICs connected through two virtual switches to the physical NIC, and two virtual SCSI adapters connected through the physical machine's SCSI and FC adapters to local and remote storage.]
Task 4: Adding Hard Disks to Node2
After you set up Node1, repeat the process to configure IP addresses and add one or more disks to Node2.
Set up the IP addresses so the private and public networks match those of Node1.
Point the quorum disk to the same location as the Node1 quorum disk. Point any shared storage disks to the same location as the Node1 shared storage disks.
If you are adding an RDM or virtual disk to the second node, choose Use existing disk.
CAUTION If you clone a virtual machine with an RDM setup, all RDMs are converted to virtual disks. Unmap all RDMs before cloning, and remap them after cloning is complete.
The completed setup is shown in Figure 2-2.
Figure 2-2. Cluster in a Box Complete Setup (SEE UPDATE)
[The figure shows Node1 and Node2 on the same physical machine, each with virtual NICs connected to the public and private virtual switches and virtual SCSI adapters sharing the same local and remote storage through the physical SCSI and FC adapters.]
3 Clustering Virtual Machines Across Physical Hosts
This chapter guides you through creating an MSCS cluster that consists of two virtual machines on two ESX Server hosts. Although this process is similar to the process for setting up a cluster in a box, steps are repeated for ease of use. The chapter consists of the following sections:
“Task 1: Creating the First Node” on page 27
“Task 2: Creating the Second Node” on page 29
“Task 3: Adding Hard Disks to Node1” on page 30
“Task 4: Adding Hard Disks to Node2” on page 32
NOTE Microsoft Cluster Service is already installed for Windows Server 2003 so you don’t need to install it. See the Guide to Creating and Configuring a Server Cluster under Windows Server 2003 and other documentation on the Microsoft Website. For Windows 2000 Server, you must install the Microsoft Cluster Service software.
Task 1: Creating the First Node
Creating the first node consists of these major steps, discussed in this section:
Creating the virtual machine for Node1 with local storage for the boot disk. See “Prerequisites for Clustering Across Boxes” on page 14 for requirements. (SEE UPDATE)
Installing the operating system on Node1.
NOTE Before you create a virtual machine, create a virtual disk in eagerzeroedthick format using vmkfstools. Then point to that disk during virtual machine creation.
To create the first node's virtual machine
1  Launch a VI Client and connect to the VirtualCenter Server that manages the cluster's ESX Server hosts. Use the user name and password of the user who will administer the virtual machine.
2  In the inventory panel, right-click the ESX Server host and choose New Virtual Machine.
3  Make the following selections with the wizard:
Table 3-1. New Virtual Machine Properties
  Page                    Selection
  Wizard Type             Typical.
  Name and Location       Choose a name (for example Node1) and location.
  Resource Pool           Select the resource pool for the virtual machine, or the host if there are no resource pools.
  Datastore               Choose a local datastore as the location for the virtual machine configuration file and the .vmdk file. (SEE UPDATE)
  Guest Operating System  Choose the Windows 2000 Server or Windows Server 2003 operating system you intend to install.
  CPUs                    Use the default suggested for your operating system.
  Memory                  Use the default unless you need additional memory and your server supports it.
  Network                 Change NICs to Connect to 2, and select the second network for the second NIC.
  Virtual Disk Capacity   If you need a primary SCSI disk larger than 4GB, enter the appropriate value in the Capacity field.
  Ready to Complete       Click OK to create the virtual machine.
4  Install a Windows 2000 Server or Windows Server 2003 operating system on the virtual machine.
Task 2: Creating the Second Node
Creating the second node involves cloning the Node1 virtual machine onto a second ESX Server host, adding disks to that virtual machine, and ensuring that the disks point to the storage shared with Node1. You can clone the node by using a VI Client connected to a VirtualCenter Server, described in the following procedure, or by using vmkfstools. See the Server Configuration Guide for a reference to vmkfstools.
NOTE If you clone a virtual machine with RDMs, the RDMs are converted to virtual disks during the cloning process. Remove all RDMs before cloning, and remap them after cloning is complete.
To clone the Node1 virtual machine
1  Shut down the guest operating system and power off the virtual machine.
2  In the VI Client inventory panel, select Node1 and choose Clone from the right-button menu. Make the following selections with the wizard:
Table 3-2. Cloned Virtual Machine Properties
  Page                Selection
  Name and Location   Choose a name (for example Node2) and location.
  Host or Cluster     Choose the second host for the cluster setup.
  Resource Partition  Select the resource pool for the virtual machine, or select the host if there are no resource pools.
  Datastore           Choose a local datastore as the location for the virtual machine configuration file and the .vmdk file. This must be a disk in eagerzeroedthick format. (SEE UPDATE)
  Customization       Choose Do not customize.
  Ready to Complete   Click OK to create the virtual machine.
You have now created a virtual machine with two network adapters on which the operating system you chose for Node1 is installed.
Task 3: Adding Hard Disks to Node1
After you have created the two virtual machines with the operating system installed, you need to perform the following tasks:
Configuring the guest operating system's private and public IP addresses. See the documentation for the Microsoft Windows Server 2003 operating system for configuration information.
Adding a virtual hard disk that is shared by the two virtual machines as the quorum disk, and optionally, one or more shared data disks to Node1.
NOTE These disks must point to SAN LUNs. Both RDM in physical compatibility mode (pass-through RDM) and RDM in virtual compatibility mode (non-pass-through RDM) are supported. The procedure below uses physical compatibility mode.
To add a quorum disk and optional shared storage disks
1  Select the virtual machine you created and choose Edit Settings.
2  Click Add, select Hard Disk, and click Next.
3  In the Select a Disk page, choose Mapped SAN LUN and click Next.
   Your hard disk points to a LUN that uses RDM.
4  In the LUN selection page, choose an unformatted LUN and click Next.
   Ask your SAN administrator which of the LUNs are unformatted. You can also see all formatted LUNs in the host's Configuration tab and deduce which LUNs are unformatted by comparing the list of formatted LUNs with the list in the LUN selection page.
5  In the Select Datastore page, select a datastore and click Next.
   This datastore must be on a SAN because you need a single shared RDM file for each shared LUN on the SAN.
6  Select Physical as the compatibility mode, and click Next.
   A SCSI controller is created when the virtual hard disk is created.
7  Choose a new virtual device node, for example choose SCSI(1:0), and use the default mode.
   NOTE This must be a new SCSI Controller. You cannot use SCSI 0.
8  Click Finish to complete creating the disk.
   The wizard creates both a new SCSI controller and a new hard disk.
9  Select the new SCSI controller and click Change Controller Type.
10 Select LsiLogic in the dialog box that appears.
   MSCS on ESX Server 3.x is not supported in conjunction with BusLogic.
11 In the same panel, set SCSI Bus Sharing to Physical and click OK.
12 If you need additional shared data disks in your configuration, repeat Step 1 through Step 8 but choose a new Virtual Device Node, such as SCSI (1:1).
Figure 3-1 shows the setup at this point.
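The Mapped SAN LUN wizard creates the RDM mapping file for you. As an alternative sketch only (the device name, datastore, and file names are illustrative; the wizard is the procedure documented here), a physical compatibility mode mapping file can also be created from the service console with vmkfstools:
vmkfstools -z /vmfs/devices/disks/vmhbaC:T:L:P /vmfs/volumes/<shared-datastore>/Node1/quorum_rdm.vmdk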
Figure 3-1. Cluster Across Boxes, Node1 Setup (SEE UPDATE)
[The figure shows Node1 with two virtual NICs connected through the public and private virtual switches to two physical NICs, and two virtual SCSI adapters connected through the physical machine's SCSI and FC adapters to local and remote storage.]
Task 4: Adding Hard Disks to Node2
After you have set up Node1, set up Node2 so the private and public networks match. Then share the quorum disk and any shared data disks for Node1 with Node2. You have the option of reusing the existing RDM that you created when setting up the first cluster node. (SEE UPDATE)
To reuse a SAN-based RDM
1  On Node2, click Add, select Hard Disk, and click Next.
2  In the Select a Disk page, choose Use Existing Disk, and click Next.
3  Select the RDM created on the shared datastore in Step 5 for Node1.
4  Continue with Step 6 through Step 10 for the quorum disk (see "To add a quorum disk and optional shared storage disks" on page 30).
5  (Optional) Continue with Step 6 through Step 8 for each additional shared data disk (see "To add a quorum disk and optional shared storage disks" on page 30).
The completed setup looks like Figure 3-2.
Figure 3-2. Cluster Across Boxes Complete Setup (SEE UPDATE)
[The figure shows Node1 and Node2, each on its own physical machine with its own public and private virtual switches, physical NICs, and local storage, sharing the same remote FC storage through each host's FC adapter.]
4 Clustering Physical and Virtual Machines
This chapter guides you through creating an MSCS cluster in which each physical machine has a corresponding virtual machine. The chapter consists of the following sections:
“Task 1: Creating the First Node” on page 35
“Task 2: Creating the Second Node” on page 36
“Task 3: Installing Microsoft Cluster Service” on page 38
“Task 4: Creating Additional Physical/Virtual Pairs” on page 38
Task 1: Creating the First Node
Because the first node is a physical machine, no detailed instructions for creating the first node are included in this chapter. See the Microsoft Cluster Service documentation for all prerequisites and caveats. You should set up your system as follows:
Choose the Advanced Minimum configuration within the Windows Cluster Administrator application.
Set up the physical machine to have at least two network adapters.
Set up the physical machine to have access to the same storage on a SAN as the ESX Server host on which you will run the corresponding virtual machine.
Install the operating system you want to use throughout the cluster.
NOTE VMware recommends that you don’t run multipathing software in the physical or virtual machines.
Task 2: Creating the Second Node
Creating the second node consists of the following major steps:
Creating a virtual machine that is set up for clustering across boxes.
Making sure the shared storage visible from Node1 (the physical machine) is also visible from Node2 (the virtual machine).
Installing the operating system.
Network adapter setup of the node depends on the type of ESX Server you are using. VMware recommends three network adapters per host for connections to the outside. See “Prerequisites for Cluster in a Box” on page 14 for information on the minimum configuration. (SEE UPDATE)
NOTE Before you create a virtual machine, create a virtual disk in eagerzeroedthick format using vmkfstools. Then point to that disk during virtual machine creation.
To create the second node
1  Launch a VI Client and connect to the ESX Server host. Use the user name and password of the user who will own the virtual machine.
2  In the inventory panel, right-click the host and choose New Virtual Machine.
3  Make the following selections with the wizard:
Table 4-1. New Virtual Machine Properties
  Page                    Selection
  Wizard Type             Typical.
  Name and Location       Choose a name (for example, Node2) and location.
  Resource Pool           Select the resource pool for the virtual machine, or the host if there are no resource pools.
  Datastore               Choose a local datastore as the location for the virtual machine configuration file and the .vmdk file. This must be a disk in eagerzeroedthick format. (SEE UPDATE)
  Guest Operating System  Choose the Windows 2000 Server or Windows Server 2003 operating system you want to install later.
  CPUs                    Use the default.
  Memory                  Use the default unless you need additional memory and your server supports it.
  Network                 Change NICs to Connect to 2, and select the second network for the second NIC.
  Virtual Disk Capacity   If you need a primary SCSI disk larger than 4GB, enter the appropriate value in the Capacity field.
  Ready to Complete       Click OK to create the virtual machine.
You need a shared SCSI controller and shared SCSI disks for shared access to clustered services and data. The next section sets up the disks for Node2 to point to the quorum disk and shared storage disks, if any, for Node1.
To add a quorum disk and optional shared storage disk
1  Select the virtual machine you created and choose Edit Settings.
2  Click Add, select Hard Disk, and click Next.
3  In the Select a Disk page, choose Mapped SAN LUN and click Next.
   Your hard disk points to a LUN using RDM.
4  In the LUN selection page, choose the LUN that is used by Node1.
5  In the Select Datastore page, select the local datastore, which is also the location of the boot disk, and click Next. (SEE UPDATE)
6  Select Physical compatibility mode and click Next.
7  Select a virtual device node on a different SCSI Controller than the one that was created when you created the virtual machine. This SCSI Controller is created when the virtual hard disk is created.
8  Click Finish to complete creating the disk.
   The wizard creates both a new device node and a new hard disk.
9  Select the new SCSI controller, set SCSI Bus Sharing to Physical, and click OK.
10 (Optional) For additional storage disks, repeat Step 1 through Step 6 but choose a disk. Use the same virtual adapter.
11 Install Windows 2000 Server or Windows Server 2003 on the virtual machine.
Task 3: Installing Microsoft Cluster Service
The final task is to configure Microsoft Cluster Service. See the Guide to Creating and Configuring a Server Cluster under Windows Server 2003 and other information on the Microsoft Website.
In some complex storage solutions, such as an FC switched fabric, a particular storage unit might have a different identity (target ID or raw disk ID) on each computer in the cluster. Although this is a valid storage configuration, it causes a problem when you want to add a node to the cluster.
To avoid target identity problems
1  Within the Microsoft Cluster Administrator utility, disable the storage validation heuristics by clicking the Back button to return to the Select Computer page.
2  Click the Advanced button and select the Advanced (minimum) configuration option.
Microsoft Cluster Service should operate normally in the virtual machine after it is installed.
Task 4: Creating Additional Physical/Virtual Pairs
For each physical machine:
Repeat Task 1 to set up an additional virtual machine on the ESX Server host.
Cluster the physical machine with that virtual machine.
5 Upgrading Clustered Virtual Machines
This chapter discusses how to upgrade clusters that use VMFS2 to VMFS3. It presents a comprehensive discussion of all cases in the following sections:
“Legacy Cluster Setup Options” on page 39
“Upgrading Cluster in a Box (CIB)” on page 40
“Upgrading Cluster Across Boxes” on page 42
“Upgrading Clusters Using Physical to Virtual Clustering” on page 44
NOTE Upgrading is supported only from ESX Server 2.5.2 or higher. You can upgrade from ESX Server 2.5.2 to ESX Server 3.0.x or ESX Server 3.5, and you can upgrade from ESX Server 3.0.x to ESX Server 3.5. Because there are no earlier versions of ESX Server 3i, this chapter does not apply to that platform.
Legacy Cluster Setup Options
Using VMFS2, you had a number of options for setting up your MSCS cluster:
For virtual machines clustered on a single physical host (cluster in a box), you could use a public VMFS in one of two ways:
Using non-pass-through RDMs
Using shared virtual disks
For virtual machines clustered on multiple physical hosts (cluster across boxes), you had three options:
Shared disks on shared VMFS
Two pass‐through RDMs backed by the same LUN on public volume
A single pass‐through RDM on a shared VMFS volume
For clusters of physical and virtual machines (standby host clustering), you used a public volume using pass‐through RDM.
This chapter steps you through the upgrade process for each of these options.
Upgrading Cluster in a Box (CIB)
With VMFS2, a cluster in a box setup uses a public VMFS. By default, the general upgrade process, which is discussed in the Upgrade Guide, includes upgrading public VMFS2 volumes to VMFS3. If you did not upgrade the VMFS used by the cluster during the upgrade process, you can upgrade using the VI Client later.
Upgrading CIB: Shared RDMs and Boot Disks in Separate VMFS Volumes
This section steps you through upgrading a cluster in a box that uses shared non-pass-through RDMs that reside in a different VMFS2 volume than the boot disks for the cluster virtual machines.
To perform the upgrade
1  Power off all clustered virtual machines.
2  Upgrade the ESX Server host from ESX Server 2.5.2 to ESX Server 3.x.
3  If you did not upgrade the VMFS2 volume where your cluster .vmdk files are kept to VMFS3 during upgrade of the host, upgrade now:
   a  Select the upgraded host in a VI Client and click the Configuration tab.
   b  Click Storage.
   c  Select the volume.
   d  Click Upgrade to VMFS3.
4  If necessary, upgrade the volume where your shared RDM files are located and upgrade those files, as in Step 3.
5  Right-click each cluster virtual machine in the inventory panel and click Upgrade Virtual Hardware.
6  Power on each virtual machine and verify the cluster setup.
   If the virtual machine fails to power on with the error message Invalid Argument, you have a misconfigured cluster setup. The virtual disk used in ESX 2.x is not allowed to power on in ESX 3.x because ESX 3.x checks for invalid disk types.
Upgrading CIB: RDMs and Boot Disks in Same VMFS Volume
This section steps you through upgrading a cluster in a box that uses shared non-pass-through RDMs that reside in the same VMFS2 volume as the boot disks for the cluster virtual machines.
To perform the upgrade
1  Upgrade the ESX Server host from ESX Server 2.5.2 to ESX Server 3.x.
2  In the VI Client inventory panel, select the upgraded host.
3  Click the Configuration tab, and click Storage.
4  Upgrade the VMFS2 volume where your cluster .vmdk files and your shared RDM files are located to VMFS3, as follows:
   a  Select the volume where the files are located.
   b  Click Upgrade to VMFS3.
   This action upgrades the VMFS2 volume to VMFS3 and relocates the .vmx files for the cluster virtual machines into the upgraded VMFS3 volume in a directory structure.
5  Right-click the second cluster node's virtual machine in the inventory panel and click Upgrade Virtual Hardware.
   An error like the following results: VMware ESX Server could not completely upgrade your virtual disk "/vmfs/volumes/2a3330116-da-11....vmdk due to the following error: The system cannot find the file specified."
   The error is the result of the volume upgrade in Step 4, where the cluster's virtual machines were relocated to the first node's directory.
6  Ignore the error. The system updates your virtual hardware regardless of the error. You can verify this by viewing the .vmx file entries of the second cluster node.
7  Manually edit the .vmx file of the second cluster virtual machine so that the entries of the quorum disk and any other shared disk point to the shared RDM files that are inside the first node's directory inside the VMFS3 partition.
8  Power on each virtual machine and verify the cluster setup.
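A sketch of what a corrected entry might look like, assuming the shared RDM file for the quorum disk ended up in Node1's directory on the upgraded VMFS3 volume and is attached at SCSI(1:0) (names and paths are illustrative):
scsi1:0.fileName = "/vmfs/volumes/<vmfs3_volume>/Node1/quorum.vmdk"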
Upgrading CIB: Virtual Disks
This section steps you through upgrading a cluster in a box (CIB) that uses shared virtual disks for the cluster virtual machines.
To perform the upgrade
1  Import the old virtual disk to the new virtual disk, as follows:
   vmkfstools -i /vmfs/volumes/vol1/<old-virtdisk>.vmdk /vmfs/volumes/vol2/<myVMDir>/<new-virtdisk>.vmdk
   old-virtdisk.vmdk – the virtual disk from which you are importing
   new-virtdisk.vmdk – the new virtual disk to which you are importing
2  Rename old-virtdisk.vmdk.
3  Edit the .vmx file to point to new-virtdisk.vmdk.
4  Power on the node and verify that the cluster service starts without problems.
Upgrading Cluster Across Boxes
There are two types of clusters across boxes. This section explains how to upgrade clusters across boxes using shared pass-through RDMs and how to upgrade clusters across boxes with shared file systems.
Using Shared Pass-Through RDMs
This section explains how to upgrade a cluster with pass-through RDMs for each node.
To upgrade the cluster
1  Upgrade the ESX Server host from ESX Server 2.5.2 to ESX Server 3.x.
2  In the VI Client inventory panel, select the upgraded host.
3  Click the Configuration tab, and then click Storage.
4  Upgrade the VMFS2 volume where your shared pass-through RDM files are kept to VMFS3:
   a  Select the volume.
   b  Click Upgrade to VMFS3.
5  Select the volume where the boot disk for the cluster virtual machine is located and upgrade it as in Step 4.
   This upgrades the volume and relocates the .vmx files related to the virtual machines inside the volume. The new directory structure is organized for easy management.
6  Right-click the cluster virtual machine in the inventory panel on the left.
7  Choose Upgrade Virtual Hardware from the right-button menu.
8  Repeat the steps for the Node2 host.
9  Power on the virtual machines and verify the cluster.
Upgrading a Cluster with Files in Shared VMFS2 Volumes
This section explains how to upgrade a cluster across boxes if you used shared files in a shared VMFS2 volume.
To upgrade the cluster
1  Before upgrading to VMFS3, change the shared VMFS2 volume from shared to public, as follows:
   vmkfstools -L lunreset vmhbaC:T:L:0
   vmkfstools -F public vmhbaC:T:L:P
2  Perform the host upgrades from ESX Server 2.5.2 to ESX Server 3.x.
3  Select the first upgraded host in a VI Client inventory panel.
4  Click the Configuration tab, and click Storage.
5  Upgrade the VMFS2 volume where your cluster .vmdk files are kept to VMFS3:
   a  Select the volume.
   b  Click Upgrade to VMFS3.
6  Create LUNs for each shared disk (that is, one LUN for each shared disk).
7  For each disk, create a separate RDM for each cluster node backed by the same physical device. Create the RDM and import the virtual disk to this RDM.
   vmkfstools -i /vmfs/volumes/vol1/<old-virtdisk>.vmdk /vmfs/volumes/vol2/<myVMDir>/<rdm-for-vm1>/<myrdm.vmdk> -d rdmp:/vmfs/devices/disks/vmhbaC:T:L:P
   Where
   old-virtdisk = the source virtual disk.
   myVMDir = the target virtual machine directory.
   rdm-for-vm1 = an optional directory in which to store RDM files for that virtual machine.
   myrdm.vmdk = the target RDM file that this command creates.
   vmhbaC:T:L:P = the device representing the raw LUN that you are mapping:
     C = controller number (the FC HBA).
     T = the storage array's target number through which the LUN is accessed.
     L = LUN number.
     P = partition number. In this example you must use 0 as the value to address the whole LUN.
8  Edit the .vmx file to point to the RDM instead of the shared file:
   scsi<X>:<Y>.filename = "/vmfs/volumes/vol2/<myVMDir>/<rdm-for-vm1>/<myrdm.vmdk>"
   scsi<X>:<Y>.deviceType = "scsi-passthru-rdm"
9  Right-click the cluster virtual machine in the inventory panel and select Upgrade Virtual Machine.
10 Repeat Step 8 and Step 9 for Node2.
11 Power on the nodes and verify that the cluster service starts without problems.
Upgrading Clusters Using Physical to Virtual Clustering
If you are using a physical to virtual cluster using VMFS2, you use a public disk that is mapped using RDM from the virtual machine. By default, the upgrade process converts your VMFS2 disks to VMFS3. You can also explicitly convert VMFS2 volumes later if you did not convert them as part of the default conversion.
Appendix: Setup Checklist
Administrators who are setting up Microsoft Cluster Service on ESX Server 3.x can use this appendix as a checklist. The appendix includes information in the following tables:
Table A‐1 “Requirements for Clustered Disks”
Table A‐2 “Other Clustering Requirements and Recommendations”
Table A-1 lists the requirements for clustered disks.
Table A-1. Requirements for Clustered Disks

Clustered virtual disk (.vmdk)
  Single-host clustering: SCSI bus sharing mode must be set to Virtual.
  Multihost clustering: Not supported.

Clustered disks, virtual compatibility mode (non-pass-through RDM)
  Single-host clustering: Device type must be Virtual compatibility mode. SCSI bus sharing mode must be set to Virtual. A single, shared RDM mapping file for each clustered disk is required.
  Multihost clustering: Device type must be Virtual compatibility mode for cluster across boxes, but not for standby host clustering. SCSI bus sharing mode must be set to Physical. Requires a single, shared RDM mapping file for each clustered disk.

Clustered disks, physical compatibility mode (pass-through RDM)
  Single-host clustering: Not supported.
  Multihost clustering: Device type must be Physical compatibility mode. This is set during hard disk creation. SCSI bus sharing mode must be set to Physical (the default). A single, shared RDM mapping file for each clustered disk is required.

All types
  All clustered nodes must use the same target ID (on the virtual SCSI adapter) for the same clustered disk. A separate virtual adapter must be used for clustered disks.
Table A-2 lists other clustering requirements and recommendations.

Table A-2. Other Clustering Requirements and Recommendations

Disk
  Requirement: If you place the boot disk on a virtual disk, create that disk with vmkfstools, specifying the eagerzeroedthick option (a sample command follows this table). The only disks that you should not create with the eagerzeroedthick option are RDM files (both physical and virtual compatibility mode) and the boot disks of native Windows hosts used in standby host clustering.

Windows
  Requirement: Use Windows Server 2003 SP2 (32 bit), Windows Server 2003 SP2 (64 bit), or Windows 2000 Server SP4. VMware recommends Windows Server 2003. Use only two cluster nodes. Set the disk I/O time-out to 60 seconds or more (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue); a sample command follows this table. Note: If you recreate the cluster, this value might be reset to its default, so you must change it again. The Cluster Service must be configured to restart automatically on failure (first, second, and subsequent times).

ESX Server configuration
  Requirement: VMware recommends that you do not overcommit memory; that is, set the Memory Reservation (minimum memory) to the same value as the Memory Limit (maximum memory). If you must overcommit memory, the swap file must be local, not on the SAN.

Information required by technical support to analyze clustering-related issues
  Requirement:
  Verify that the setup complies with the checklist.
  vm-support tarball (VMkernel log, virtual machine configuration files and logs, and so on).
  Application and system event logs of all virtual machines with the problem.
  Cluster log of all virtual machines with the problem (that is, %ClusterLog%, which is usually set to %SystemRoot%\cluster\cluster.log).
  Disk I/O time-out (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue).
  VI Client display names and Windows NetBIOS names of the virtual machines experiencing the problem.
  Date and time that the problem occurred.
  SAN configuration of the ESX Server system (LUNs, paths, and adapters).

Multipathing
  Requirement: Running third-party multipathing software is not supported.
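As an illustration of the Disk requirement above, the following service console command creates a boot disk in eagerzeroedthick format. It is a sketch only; the 12 GB size, the datastore name, and the file name are placeholders that you must replace with values from your own environment.

  # Create an eagerly zeroed thick virtual disk to use as the node's boot disk
  vmkfstools -c 12g -d eagerzeroedthick /vmfs/volumes/Storage1/Node1/Node1.vmdk

You can then attach the resulting .vmdk file to the virtual machine as an existing disk in the VI Client.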
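The disk I/O time-out called out in the Windows row is a registry value inside each cluster node. One possible way to set it on Windows Server 2003 is with the built-in reg utility, as sketched below; you can also edit the value directly with regedit. The 60-second value shown is only the minimum recommended in this checklist.

  rem Set the Windows disk I/O time-out to 60 seconds (run inside each cluster node)
  reg add HKLM\System\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

Recheck this value after you recreate the cluster, because it might be reset to its default.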
Index

A
across boxes, clustering 27
  introduction 11
  prerequisites 14

C
caveats 16
cloning node1 21, 29
cluster across boxes
  first node 27
  introduction 11
  prerequisites 14
  second node 29
  upgrading 42, 43
cluster in a box
  first node 19
  introduction 10
  prerequisites 14
  second node 21
  upgrading 40, 41
clustering hardware 10
clustering physical and virtual machines 13, 35
  first node 35
  second node 36
clustering software 10
clustering virtual machines across hosts 27
  introduction 11
  prerequisites 14
clustering virtual machines on one host 19
  introduction 10
  prerequisites 14

D
disks
  adding to nodes (across boxes) 30
  adding to nodes (in-a-box) 21
  quorum 22, 30, 37
  shared 22, 30, 37

E
ESX Server 3i 14, 15

F
first node
  creating (across boxes) 27
  creating (in-a-box) 19
  creating (standby host) 35

H
hardware 10

I
in-a-box, clustering 19
  introduction 10
  prerequisites 14

M
Microsoft Cluster Service (MSCS)
  installing 25, 38

N
N+1, prerequisites 15
node1, cloning 21, 29

P
prerequisites for clustering 13

Q
quorum disk 22, 30, 37

R
requirements 16

S
second node
  creating (across boxes) 29
  creating (in-a-box) 21
  creating (standby host) 36
service console 14, 15
shared storage
  disk 22, 30, 37
shared storage summary 15
standby host 35
  introduction 13
  prerequisites 15
  upgrading 44
storage
  quorum disk 22, 30, 37
  shared 22, 30, 37

U
upgrading 40
  cluster across boxes 42, 43
  cluster in a box 40, 41
  standby-host cluster 44
Updates for Setup for Microsoft Cluster Service
Last Updated: May 15, 2008

This document lists updates to Setup for Microsoft Cluster Service. Updated descriptions are organized by page number so that you can easily locate the areas of the guide that have changed. The document contains the following updates:
Updates for Table 2‐1 on Page 20
Updates for Table 2‐2 on Page 21
Updates for Figure 2‐1 on Page 24
Updates for Figure 2‐2 on Page 25
Updates for Task 1: Creating the First Node on Page 27
Updates for Table 3‐1 on Page 28
Updates for Task 4: Adding Hard Disks to Node 2 on Page 32
Updates for Table 3‐2 on Page 29
Updates for Figure 3‐1 on Page 32
Updates for Figure 3‐2 on Page 33
Updates for Table 4‐1 on Page 36
Updates for Task 2: Creating the Second Node on Page 36
Updates for To add a quorum disk and optional shared disk Procedure on Page 37
Updates for Table 2-1 on Page 20
Table 2-1 should describe the datastore as follows:
  Page: Datastore
  Selection: Choose a datastore for the virtual machine configuration file and the virtual machine disk (.vmdk) file.
Updates for Table 2-2 on Page 21
Table 2-2 should describe the datastore as follows:
  Page: Datastore
  Selection: Choose a datastore as the location for the virtual machine configuration file and the .vmdk file.
Updates for Figure 2-1 on Page 24
Figure 2-1 should not point to a local datastore, but to a local or remote datastore, as follows:
[Figure 2-1 (revised): virtual machine Node1, with VNIC1 and VNIC2 on virtual switch1 and virtual switch2, and VSCSI1 and VSCSI2 reaching local or remote storage and remote storage through the physical machine's SCSI and FC adapters.]
Updates for Figure 2-2 on Page 25
Figure 2-2 should not point to a local datastore but to a local or remote datastore, as follows:
[Figure 2-2 (revised): virtual machines Node1 and Node2 on one physical machine, each with VNIC1 and VNIC2 on virtual switch1 (public) and virtual switch2 (private), and with VSCSI1 and VSCSI2 reaching local or remote storage and remote storage through the physical machine's SCSI and FC adapters.]
Updates for Task 1: Creating the First Node on Page 27
"Task 1: Creating the First Node" on page 27 incorrectly states that the virtual machine must be created on local storage. The text should instead read as follows:
Creating the virtual machine for Node1. See “Prerequisites for Clustering Across Boxes” on page 14 for requirements.
Updates for Table 3-1 on Page 28
Table 3-1 should describe the datastore as follows:
  Page: Datastore
  Selection: Choose a datastore as the location for the virtual machine configuration file and the .vmdk file.
Updates for Table 3-2 on Page 29
Table 3-2 should describe the datastore as follows:
  Page: Datastore
  Selection: Choose a datastore as the location for the virtual machine configuration file and the .vmdk file.
Updates for Figure 3-1 on Page 32
Figure 3-1 should not point to a local datastore, but to a local or remote datastore, as follows:
[Figure 3-1 (revised): virtual machine Node1, with VNIC1 and VNIC2 on virtual switch1 (public) and virtual switch2 (private), and VSCSI1 and VSCSI2 reaching local or remote storage and remote storage through the physical machine's SCSI and FC adapters.]
Updates for Task 4: Adding Hard Disks to Node 2 on Page 32
The introductory paragraph to "Task 4: Adding Hard Disks to Node2" on page 32 should read as follows:
After you have set up Node1, set up Node2 so that the private and public networks match. Then, share the quorum and any shared data disks for Node1 with Node2. Use the RDM that you created when setting up the first cluster node.
Updates for Figure 3-2 on Page 33
Figure 3-2 should not point to a local datastore but to a local or remote datastore, as follows:
[Figure 3-2 (revised): virtual machines Node1 and Node2 on separate physical machines, each with VNIC1 and VNIC2 on its own virtual switch1 (public) and virtual switch2 (private), and with VSCSI1 and VSCSI2 reaching local or remote storage and remote storage through each physical machine's SCSI and FC adapters.]
Updates for Task 2: Creating the Second Node on Page 36
In "Task 2: Creating the Second Node," the bullet that describes network adapter setup points to the wrong prerequisite section. The bullet should read as follows:
Network adapter setup of the node depends on the type of ESX Server you are using. VMware recommends three network adapters per host for connections to the outside. See “Prerequisites for Standby Host Clustering” on page 15 for information on the minimum configuration.
Updates for Table 4-1 on Page 36
Table 4-1 should describe the datastore as follows:
  Page: Datastore
  Selection: Choose a datastore as the location for the virtual machine configuration file and the .vmdk file.
Updates for To add a quorum disk and optional shared disk Procedure on Page 37
Step 5 in the procedure "To add a quorum disk and optional shared storage disk" on page 37 should read as follows:
5. In the Select Datastore page, select the datastore, which is also the location of the boot disk, and click Next.