VERITAS Volume Manager 4.0 for UNIX: Operations
100-002030
COURSE DEVELOPERS
Gail Adey, Jade Arrington, Harry Richards
LEAD SUBJECT MATTER EXPERTS
Bob Lucas, Dave Rogers, Stephen Williams
TECHNICAL CONTRIBUTORS AND REVIEWERS
Chris Amidei, Barbara Ceran, Connie Economou, Danny Foreman, Bill Havey, Gene Henriksen, Harold Holderman, Michael Hsiung, Gerald Jackson, Danqing Jin, Scott Kaiser, Stefan Kwiatkowski, Jack Lamirande, Chris Maino, Monu Pradhan-Advani, Christian Rabanus, Vance Ray, Sue Rich, Saumyendra “Sam” Sengupta, Brian Staub, Andrew Tipton, Jiju Vithayathil, Jerry Vochteloo, Brad Willer
Disclaimer
The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this guide, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.
Copyright
Copyright © 2004 VERITAS Software Corporation. All rights reserved. No part of the contents of this training material may be reproduced in any form or by any means, or be used for the purposes of training or education, without the written permission of VERITAS Software Corporation.
Trademark Notice
VERITAS, the VERITAS logo, VERITAS FirstWatch, VERITAS Cluster Server, VERITAS File System, VERITAS Volume Manager, VERITAS NetBackup, and VERITAS HSM are registered trademarks of VERITAS Software Corporation. Other product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.
VERITAS Volume Manager 4.0 for UNIX: Operations Participant Guide
VERITAS Software Corporation
350 Ellis Street
Mountain View, CA 94043
Phone 650–527–8000
www.veritas.com
Table of Contents Course Introduction What Is Storage Virtualization? ................................................................... Intro-2 Storage Management Issues ........................................................................ Intro-2 Defining Storage Virtualization................................................................... Intro-3 How Is Storage Virtualization Used in Your Environment? ....................... Intro-4 Storage-Based Storage Virtualization ......................................................... Intro-5 Host-Based Storage Virtualization .............................................................. Intro-5 Network-Based Storage Virtualization........................................................ Intro-5 Introducing VERITAS Storage Foundation.................................................. Intro-6 What Is VERITAS Volume Manager? ........................................................ Intro-7 What Is VERITAS File System? ................................................................. Intro-7 Benefits of VERITAS Storage Foundation ................................................. Intro-9 VERITAS Storage Foundation Curriculum ................................................ Intro-11 VERITAS Volume Manager for UNIX: Operations Overview .................. Intro-12 Objectives .................................................................................................. Intro-12 Additional Course Resources .................................................................... Intro-13 Lesson 1: Virtual Objects Introduction........................................................................................................ 1-2 Physical Data Storage....................................................................................... 1-4 Physical Disk Structure ...................................................................................... 1-4 Physical Disk Naming ........................................................................................ 1-8 Disk Arrays....................................................................................................... 1-10 Multipathed Disk Arrays .................................................................................. 1-10 Virtual Data Storage ........................................................................................ 1-11 Virtual Storage Management............................................................................ 1-11 What Is a Volume? ........................................................................................... 1-11 How Do You Access a Volume? ...................................................................... 1-11 Why Use Volume Manager? ............................................................................ 1-11 Volume Manager-Controlled Disks.................................................................. 1-13 Comparing CDS Disks and Sliced Disks ......................................................... 1-14 Volume Manager Storage Objects .................................................................. 1-15 Disk Groups ...................................................................................................... 1-15 Volume Manager Disks .................................................................................... 1-15 Subdisks............................................................................................................ 
1-16 Plexes................................................................................................................ 1-16 Volumes............................................................................................................ 1-17 Volume Manager RAID Levels ........................................................................ 1-18 RAID ................................................................................................................ 1-18 VxVM-Supported RAID Levels....................................................................... 1-19 Volume Layouts ............................................................................................... 1-19 Disk Spanning................................................................................................... 1-19 Data Redundancy.............................................................................................. 1-20 Resilience.......................................................................................................... 1-20 Summary ......................................................................................................... 1-21
Lesson 2: Installation and Interfaces Introduction ....................................................................................................... 2-2 Installation Prerequisites ................................................................................... 2-4 OS Version Compatibility .................................................................................. 2-4 Version Release Differences............................................................................... 2-5 Adding License Keys......................................................................................... 2-6 Obtaining a License Key..................................................................................... 2-6 Generating License Keys with vLicense............................................................. 2-8 Adding a License Key......................................................................................... 2-9 Viewing Installed License Keys ......................................................................... 2-9 Managing Multiple Licensing Utilities............................................................ 2-10 VERITAS Software Packages......................................................................... 2-11 VERITAS Storage Solutions Products and Suites............................................ 2-11 Installing VxVM As Part of a Product Suite .................................................... 2-11 VERITAS Volume Manager Packages............................................................ 2-12 Package Space Requirements .......................................................................... 2-12 VERITAS File System Packages..................................................................... 2-13 VxVM Optional Features................................................................................. 2-14 VxFS Optional Features................................................................................... 2-15 Other Options Included with Foundation Suite ............................................... 2-16 Licenses Required for Optional Features......................................................... 2-16 Before Installing VxVM: What Is Enclosure-based Naming? ........................ 2-17 Before Installing VxVM: What Is a Default Disk Group? .............................. 2-18 Installing VxVM ............................................................................................... 2-19 Methods for Adding VxVM Packages............................................................. 2-19 Adding Packages with the VERITAS Installation Menu ................................ 2-20 Adding Packages Using the Product Installation Scripts................................. 2-22 Adding Packages Manually ............................................................................. 2-24 Verifying Package Installation......................................................................... 2-26 Configuring VxVM Using vxinstall ................................................................ 2-28 VxVM User Interfaces ..................................................................................... 2-29 Volume Manager User Interfaces .................................................................... 2-29 Using the VEA Interface.................................................................................. 2-30 The VEA Main Window.................................................................................. 
2-31 Other Views in VEA........................................................................................ 2-31 Accessing Tasks Through VEA....................................................................... 2-32 Viewing Commands Through the Task History Window ............................... 2-33 Viewing Commands Through the Command Log File.................................... 2-34 Displaying VEA Help Information.................................................................. 2-35 Using the Command Line Interface................................................................. 2-36 Accessing Manual Pages for CLI Commands ................................................. 2-37 Using the vxdiskadm Interface ........................................................................ 2-38 Installing and Starting the VEA Software ........................................................ 2-39 Installing the VEA Server and Client on UNIX .............................................. 2-39 Installing the VEA Client on Windows ........................................................... 2-40 Starting the VEA Server .................................................................................. 2-41 Starting the VEA Client ................................................................................... 2-41 Managing the VEA Server............................................................................... 2-43 Confirming VEA Server Startup...................................................................... 2-43
Stopping and Restarting the VEA Server ........................................ 2-43 Displaying the VEA Version ........................................................... 2-43 Monitoring VEA Event and Task Logs ........................................... 2-43 Controlling User Access to VEA ..................................................... 2-44 Modifying Group Access ................................................................. 2-45 Summary .......................................................................................... 2-47
Lesson 3: Managing Disks and Disk Groups Introduction........................................................................................................ 3-2 Naming Disk Devices ........................................................................................ 3-4 Device Naming Schemes.................................................................................... 3-4 Traditional Device Naming ................................................................................ 3-4 Enclosure-Based Naming ................................................................................... 3-5 Benefits of Enclosure-Based Naming................................................................. 3-6 Selecting a Naming Scheme ............................................................................... 3-7 Changing the Disk-Naming Scheme .................................................................. 3-7 Disk Configuration Stages................................................................................. 3-8 What Is a Disk Group? ....................................................................................... 3-8 Why Are Disk Groups Needed? ......................................................................... 3-8 System-Wide Reserved Disk Groups ................................................................. 3-9 Displaying Reserved Disk Group Definitions .................................................... 3-9 Setting the Default Disk Group ........................................................................ 3-10 Before Configuring a Disk for Use by VxVM ................................................. 3-11 Stage One: Initialize a Disk .............................................................................. 3-11 Stage Two: Assign a Disk to a Disk Group...................................................... 3-12 Stage Three: Assign Disk Space to Volumes ................................................... 3-12 Creating a Disk Group..................................................................................... 3-13 Creating a Disk Group ...................................................................................... 3-13 Adding Disks .................................................................................................... 3-13 Disk Naming..................................................................................................... 3-13 Default Disk Naming........................................................................................ 3-14 Notes on Disk Naming ..................................................................................... 3-14 Creating a Disk Group: VEA............................................................................ 3-15 Adding a Disk: VEA......................................................................................... 3-16 Creating a Disk Group: vxdiskadm .................................................................. 3-17 Initializing a Disk: CLI..................................................................................... 3-17 Creating a Disk Group: CLI ............................................................................. 3-17 Adding a Disks to a Disk Group: CLI .............................................................. 3-18 Viewing Disk and Disk Group Information....................................................... 3-19 Keeping Track of Your Disks........................................................................... 
3-19 Displaying Disk Information: VEA.................................................. 3-19 Viewing Disk Details: VEA ............................................................. 3-20 Viewing Disk Properties: VEA ........................................................ 3-21 Viewing Disk Group Properties: VEA ............................................. 3-22 Displaying Basic Disk Information: CLI.......................................... 3-23 Displaying Detailed Disk Information: CLI ..................................... 3-24 Displaying Disk Group Information: CLI ........................................ 3-26 Managing Disks............................................................................... 3-27 Creating a Non-CDS Disk and Disk Group...................................... 3-27 Removing Disks ............................................................................... 3-28
Before You Remove a Disk .............................................................. 3-28 Evacuating a Disk ............................................................................. 3-29 Evacuating a Disk: VEA ................................................................... 3-29 Evacuating a Disk: vxdiskadm ......................................................... 3-29 Evacuating a Disk: CLI .................................................................... 3-29
Removing a Disk: VEA .................................................................... 3-30 Removing a Disk: vxdiskadm .......................................................... 3-30 Removing a Disk: CLI ..................................................................... 3-31 Changing the Disk Media Name ...................................................... 3-32 Before You Rename a Disk .............................................................. 3-32
Renaming a Disk: VEA .................................................................... 3-32 Renaming a Disk: CLI ...................................................................... 3-32 Managing Disk Groups .................................................................... 3-33 Deporting a Disk Group ................................................................... 3-33 Deporting and Specifying a New Host ............................................. 3-33
Deporting and Renaming .................................................................. 3-33 Deporting a Disk Group: VEA ......................................................... 3-34 Deporting a Disk Group: vxdiskadm ................................................ 3-35 Deporting a Disk Group: CLI ........................................................... 3-35 Importing a Deported Disk Group .................................................... 3-36
Importing and Renaming .................................................................. 3-36 Importing and Clearing Host Locks ................................................. 3-36 Importing As Temporary .................................................................. 3-37 Forcing an Import ............................................................................. 3-37 Importing a Disk Group: VEA ......................................................... 3-38
Importing a Disk Group: vxdiskadm ................................................ 3-39 Importing a Disk Group: CLI ........................................................... 3-39 Example: Disk Groups and High Availability .................................. 3-40 Moving Disk Groups Between Systems ........................................... 3-41 Moving a Disk Group: VEA ............................................................. 3-41
Moving a Disk Group: vxdiskadm ................................................... 3-42 Moving a Disk Group: CLI .............................................................. 3-42 Renaming a Disk Group ................................................................... 3-43 Renaming a Disk Group: VEA ......................................................... 3-43 Renaming a Disk Group: CLI ........................................................... 3-44
Destroying a Disk Group .................................................................. 3-45 Destroying a Disk Group: VEA ........................................................ 3-45 Destroying a Disk Group: CLI ......................................................... 3-45 Upgrading a Disk Group ................................................................... 3-46 Summary of Supported Features for Disk Group Versions .............. 3-47
Upgrading a Disk Group: VEA ........................................................ 3-48 Upgrading a Disk Group: CLI .......................................................... 3-49 Summary .......................................................................................... 3-50
Lesson 4: Creating Volumes Introduction ....................................................................................... 4-2 Selecting a Volume Layout ............................................................... 4-4 Concatenated Layout .......................................................................... 4-4 Striped Layout..................................................................................... 4-5 Mirrored Layout.................................................................................. 4-6
RAID-5 ............................................................................................................... 4-7 Comparing Volume Layouts .............................................................................. 4-8 Creating a Volume........................................................................................... 4-10 Creating a Volume............................................................................................ 4-10 Before You Create a Volume ........................................................................... 4-10 Creating a Volume: VEA ................................................................................. 4-11 Creating a Volume: CLI ................................................................................... 4-16 Creating a Concatenated Volume: CLI ............................................................ 4-17 Creating a Striped Volume: CLI....................................................................... 4-18 Creating a RAID-5 Volume: CLI ..................................................................... 4-19 Creating a Mirrored Volume: CLI.................................................................... 4-20 Creating a Mirrored and Logged Volume: CLI................................................ 4-21 Estimating Volume Size: CLI........................................................................... 4-22 Displaying Volume Layout Information............................................................ 4-23 Displaying Volume Information: VEA ............................................................ 4-23 Object Views in Main Window ........................................................................ 4-23 Disk View Window .......................................................................................... 4-24 Volume View Window ..................................................................................... 4-25 Volume to Disk Mapping Window .................................................................. 4-26 Volume Layout Window .................................................................................. 4-27 Volume Properties Window ............................................................................. 4-28 Displaying Volume Layout Information: CLI.................................................. 4-29 Displaying Information for All Volumes ......................................................... 4-31 Creating a Layered Volume............................................................................. 4-32 What Is a Layered Volume? ............................................................................. 4-32 Comparing Regular Mirroring with Enhanced Mirroring ................................ 4-33 How Do Layered Volumes Work? ................................................................... 4-35 Layered Volumes: Advantages......................................................................... 4-35 Layered Volumes: Disadvantages .................................................................... 4-36 Layered Volume Layouts ................................................................................. 4-37 mirror-concat ........................................................................................... 4-37 mirror-stripe ........................................................................................... 4-38 concat-mirror ........................................................................................... 
4-39 stripe-mirror ........................................................................... 4-40 Creating a Layered Volume: VEA ................................................... 4-41 Creating a Layered Volume: CLI ..................................................... 4-41 Viewing a Layered Volume: VEA ................................................... 4-42 Viewing a Layered Volume: CLI ..................................................... 4-42 Removing a Volume ........................................................................ 4-43 Removing a Volume: VEA............................................................... 4-43 Removing a Volume: CLI ................................................................ 4-43 Summary ......................................................................................... 4-44
Lesson 5: Configuring Volumes Introduction ........................................................................................ 5-2 Administering Mirrors ........................................................................ 5-4 Adding a Mirror .................................................................................. 5-4 Adding a Mirror: VEA ....................................................................... 5-5 Adding a Mirror: CLI ......................................................................... 5-5 Removing a Mirror ............................................................................. 5-6
Removing a Mirror: VEA ................................................................................... 5-7 Removing a Mirror: CLI..................................................................................... 5-7 Adding a Log to a Volume................................................................................. 5-9 Logging in VxVM............................................................................................... 5-9 Dirty Region Logging ......................................................................................... 5-9 RAID-5 Logging.............................................................................................. 5-10 Adding a Log: VEA .......................................................................................... 5-11 Adding a Log: CLI........................................................................................... 5-12 Removing a Log: CLI ...................................................................................... 5-12 Changing the Volume Read Policy ................................................................. 5-13 Volume Read Policies with Mirroring............................................................. 5-13 Changing the Volume Read Policy: VEA ....................................................... 5-14 Changing the Volume Read Policy: CLI ......................................................... 5-14 Allocating Storage for Volumes....................................................................... 5-15 Specifying Storage Attributes for Volumes..................................................... 5-15 Specifying Storage Attributes: VEA................................................................ 5-16 Specifying Storage Attributes: CLI ................................................................. 5-16 Specifying Ordered Allocation of Storage for Volumes.................................. 5-20 Specifying Ordered Allocation: VEA.............................................................. 5-21 Specifying Ordered Allocation: CLI................................................................ 5-21 Administering File Systems............................................................................. 5-24 Adding a File System to a Volume: VEA........................................................ 5-24 Mounting a File System: VEA......................................................................... 5-24 Unmounting a File System: VEA .................................................................... 5-24 Adding a File System to a Volume: CLI ......................................................... 5-25 Mounting a File System at Boot: CLI.............................................................. 5-27 Using VERITAS File System Commands....................................................... 5-28 Location of VxFS Commands: ........................................................................ 5-28 General File System Command Syntax ........................................................... 5-29 Using VxFS Commands by Default ................................................................ 5-29 Using mkfs Command Options ...................................................................... 5-30 Maximum File and File System Sizes ............................................................. 5-32 Other mount Command Options...................................................................... 
5-33 Unmounting a File System .............................................................. 5-33 Identifying File System Type........................................................... 5-34 Identifying Free Space ..................................................................... 5-34 Comparing VxFS with Traditional File System Allocation Policies............... 5-35 Example: UFS Block-Based Allocation .......................................... 5-35 VxFS Extent-Based Allocation........................................................ 5-36 Benefits of Extent-Based Allocation ............................................... 5-37 Upgrading the VxFS File System Layout........................................ 5-38 VxFS Structural Components .......................................................... 5-40 VxFS Allocation Units..................................................................... 5-40 VxFS Structural Files....................................................................... 5-40 Controlling File System Fragmentation........................................... 5-42 Types of Fragmentation ................................................................... 5-42 Running Fragmentation Reports ...................................................... 5-44 Interpreting Fragmentation Reports................................................. 5-45 VxFS Defragmentation .................................................................... 5-46 Defragmenting Extents .................................................................... 5-46
Defragmenting Directories ............................................................................... 5-47 Other fsadm Defragmentation Options............................................................. 5-47 Duration of Defragmentation............................................................................ 5-47 Scheduling Defragmentation ............................................................................ 5-48 Role of the Intent Log....................................................................................... 5-49 Maintaining File System Consistency .............................................................. 5-50 Generic fsck Options ........................................................................................ 5-50 VxFS-Specific fsck Options ............................................................................. 5-51 Resizing the Intent Log..................................................................................... 5-52 Controlling Logging Behavior.......................................................................... 5-53 Selecting mount Options for Logging .............................................................. 5-53 Logging and VxFS Performance ...................................................................... 5-55 File Change Log ............................................................................................... 5-57 Comparing the Intent Log and the File Change Log ........................................ 5-57 Summary ......................................................................................................... 5-58 Lesson 6: Reconfiguring Volumes Online Introduction........................................................................................................ 6-2 Resizing a Volume ............................................................................................ 6-4 Resizing a Volume.............................................................................................. 6-4 Resizing a Volume with a File System............................................................... 6-4 Resizing a Volume and File System: Methods................................................... 6-6 Resizing a Volume and File System: VEA ........................................................ 6-8 Resizing a Volume and File System: vxresize ................................................... 6-9 Resizing a Volume Only: vxassist.................................................................... 6-10 Resizing a File System Only: fsadm................................................................. 6-11 Resizing a Dynamic LUN................................................................................. 6-13 Resizing a LUN: VEA ...................................................................................... 6-13 Resizing a LUN: CLI........................................................................................ 6-13 Changing the Volume Layout .......................................................................... 6-14 What Is Online Relayout?................................................................................. 6-14 Supported Transformations .............................................................................. 6-15 How Does Online Relayout Work? .................................................................. 6-16 Notes on Online Relayout................................................................................. 
6-18 Changing the Volume Layout: VEA ................................................................ 6-19 Changing the Volume Layout: CLI .................................................................. 6-21 The vxassist relayout Command....................................................................... 6-22 The vxassist convert Command........................................................................ 6-23 Managing Volume Tasks................................................................................. 6-24 Managing Volume Tasks: VEA ....................................................................... 6-24 Managing Volume Tasks: CLI ......................................................................... 6-25 Displaying Task Information with vxtask ........................................................ 6-26 Options for vxtask list............................................................................. 6-27 Monitoring a Task with vxtask ......................................................................... 6-28 Controlling Tasks with vxtask .......................................................................... 6-29 Controlling Relayout Tasks with vxrelayout.................................................... 6-30 Controlling the Task Progress Rate .................................................................. 6-31 Slowing a Task with vxtask.............................................................................. 6-32 Throttling a Task with VEA ............................................................................. 6-32
Analyzing Volume Configurations with Storage Expert ................... 6-33 What Is Storage Expert? ................................................................... 6-33 What Are the Storage Expert Rules? ................................................ 6-34 Before Using Storage Expert ............................................................ 6-36 Running a Storage Expert Rule ........................................................ 6-36 Rule Output ...................................................................................... 6-37 Displaying a Rule Description: Example ......................................... 6-38 Running a Rule: Example ................................................................. 6-38 Displaying Tunable Attributes of a Rule: Example .......................... 6-39 Displaying Default Attribute Values of a Rule: Example ................ 6-39 Customizing Rule Default Values .................................................... 6-40 Storage Expert Rules: Complete Listing .......................................... 6-41 Summary .......................................................................................... 6-44
Lesson 7: Encapsulation and Rootability Introduction ....................................................................................................... 7-2 Placing the Boot Disk Under VxVM Control ...................................................... 7-4 What Is Encapsulation?....................................................................................... 7-4 Data Disk Encapsulation Requirements.............................................................. 7-4 What Is Rootability? ........................................................................................... 7-5 Boot Disk Encapsulation Requirements ............................................................. 7-5 Why Encapsulate Root?...................................................................................... 7-6 When Not to Encapsulate Root........................................................................... 7-6 Limitations of Boot Disk Encapsulation............................................................. 7-7 File System Requirements for Root Volumes .................................................... 7-8 Before Encapsulating the Boot Disk................................................................ 7-10 Encapsulating the Boot Disk: vxdiskadm ......................................................... 7-11 Viewing Encapsulated Disks ........................................................................... 7-12 Creating an Alternate Boot Disk...................................................................... 7-14 Mirroring the Boot Disk................................................................................... 7-14 Requirements for Mirroring the Boot Disk...................................................... 7-14 Why Create an Alternate Boot Disk?............................................................... 7-14 Creating an Alternate Boot Disk: VEA ........................................................... 7-15 Creating an Alternate Boot Disk: vxdiskadm .................................................. 7-15 Creating an Alternate Boot Disk: CLI ............................................................. 7-16 Possible Boot Disk Errors................................................................................ 7-17 Booting from an Alternate Mirror.................................................................... 7-18 Removing the Boot Disk from VxVM Control .................................................. 7-19 The vxunroot Command .................................................................................. 7-19 When to Use vxunroot ..................................................................................... 7-19 Unencapsulating the Boot Disk ....................................................................... 7-20 Upgrading to a New VxVM Version................................................................. 7-21 General Notes on Upgrades ............................................................................. 7-21 Upgrading Volume Manager Only .................................................................. 7-22 Upgrading VxVM Only: installvm ........................................................... 7-23 Upgrading VxVM Only: Manual Package Upgrade........................................ 7-24 Upgrading VxVM Only: Upgrade Scripts ....................................................... 7-25 The upgrade_start Script .................................................................................. 
7-25 The upgrade_finish Script................................................................ 7-26 Upgrading Solaris Only ................................................................... 7-28
Upgrading VxVM and Your Operating System ............................................... 7-30 After Upgrading................................................................................................ 7-32 Upgrading to a New VxFS Version.................................................................. 7-33 Summary ......................................................................................................... 7-34 Lesson 8: Recovery Essentials Introduction........................................................................................................ 8-2 Maintaining Data Consistency........................................................................... 8-4 What Is Resynchronization? ............................................................................... 8-4 Atomic-Copy Resynchronization ....................................................................... 8-5 Read-Writeback Resynchronization ................................................................... 8-6 Minimizing the Impact of Resynchronization .................................................... 8-7 Dirty Region Logging......................................................................................... 8-8 How Does DRL Work? ...................................................................................... 8-8 Dirty Region Log Size ........................................................................................ 8-9 How the Bitmaps Are Used in Dirty Region Logging ....................................... 8-9 RAID-5 Logging............................................................................................... 8-10 Hot Relocation................................................................................................. 8-11 What Is Hot Relocation?................................................................................... 8-11 How Does Hot Relocation Work? .................................................................... 8-12 How Is Space Selected for Relocation?............................................................ 8-13 Managing Spare Disks .................................................................................... 8-14 Managing Spare Disks: VEA ........................................................................... 8-14 Managing Spare Disks: vxdiskadm .............................................................. 8-14 Managing Spare Disks: CLI ............................................................................. 8-15 Replacing a Disk ............................................................................................. 8-16 Disk Replacement Tasks .................................................................................. 8-16 Adding a New Disk .......................................................................................... 8-17 Replacing a Disk: VEA .................................................................................... 8-18 Replacing a Failed Disk: vxdiskadm ................................................................ 8-18 Replacing a Disk: CLI ...................................................................................... 8-18 Unrelocating a Disk ......................................................................................... 8-19 The vxunreloc Utility........................................................................................ 8-19 Unrelocating a Disk: VEA................................................................................ 
8-19 Unrelocating a Disk: vxdiskadm ...................................................... 8-20 Unrelocating a Disk: CLI ................................................................. 8-20 Viewing Relocated Subdisks: CLI ................................................... 8-20 Recovering a Volume ...................................................................... 8-21 Recovering a Volume: VEA............................................................. 8-21 The vxreattach Command................................................................. 8-21 The vxrecover Command ................................................................. 8-22 Protecting the VxVM Configuration ................................................. 8-23 Backing Up a Disk Group Configuration ......................................... 8-24 Restoring a Disk Group Configuration............................................. 8-24 Summary ......................................................................................... 8-25 Appendix A: Lab Exercises Lab 1: Introducing the Lab Environment ........................................... A-2 Lab 2: Installation and Interfaces ...................................................... A-3 Lab 3: Managing Disks and Disk Groups .......................................... A-7
Lab 4: Creating Volumes................................................................................. A-10 Lab 5: Configuring Volumes............................................................................ A-13 Lab 6: Reconfiguring Volumes Online............................................................. A-16 Lab 7: Encapsulation and Rootability.............................................................. A-20 Lab 8: Recovery Essentials............................................................................. A-22 Appendix B: Lab Solutions Lab 1 Solutions: Introducing the Lab Environment ........................................... B-2 Lab 2 Solutions: Installation and Interfaces ...................................................... B-3 Lab 3 Solutions: Managing Disks and Disk Groups .......................................... B-9 Lab 4 Solutions: Creating Volumes................................................................. B-15 Lab 5 Solutions: Configuring Volumes ............................................................ B-23 Lab 6 Solutions: Reconfiguring Volumes Online............................................. B-30 Lab 7 Solutions: Encapsulation and Rootability .............................................. B-38 Lab 8 Solutions: Recovery Essentials............................................................. B-41 Appendix C: VxVM/VxFS Command Reference VxVM Command Quick Reference ................................................................... C-2 Disk Operations ................................................................................................. C-2 Disk Group Operations ...................................................................................... C-2 Subdisk Operations ............................................................................................ C-3 Plex Operations.................................................................................................. C-3 Volume Operations ............................................................................................ C-4 DMP, DDL, and Task Management ................................................................. C-5 Using VxVM Commands: Examples ................................................................. C-7 VxFS Command Quick Reference .................................................................... C-9 Setting Up a File System ................................................................................... C-9 Online Administration ....................................................................................... C-9 Benchmarking .................................................................................................. C-10 Managing Extents ............................................................................................ C-10 Defragmenting a File System........................................................................... C-11 Intent Logging.................................................................................................. C-11 I/O Types and Cache Advisories ..................................................................... C-12 File System Tuning .......................................................................................... C-13 Controlling Users ............................................................................................. C-14 QuickLog ......................................................................................................... 
C-15 Quick I/O ......................................................................................................... C-16 Appendix D: VxVM/VxFS 3.5 to 4.0 Differences Quick Reference VxVM and VxFS 3.5 to 4.0 Differences Quick Reference................................. D-2 Glossary Index
Course Introduction
Storage Management Issues
Human Resource Database: 10% full. E-mail Server: 50% full. Customer Order Database: 90% full.
Other issues: multiple-vendor hardware, explosive data growth, different application needs, multiple operating systems, rapid change, and budgetary constraints.
• Problem: The customer order database cannot access the unutilized storage.
• Common solution: Add more storage.
What Is Storage Virtualization?
Storage Management Issues
Storage management is becoming increasingly complex due to:
• Multiple operating systems
• Unprecedented data growth
• Storage hardware from multiple vendors
• Dissimilar applications with different storage resource needs
• Management pressure to increase efficiency
• Budgetary and cost-control constraints
• Rapidly changing business climates
To create a truly efficient environment, administrators must have the tools to manage large, complex, heterogeneous environments skillfully. Storage virtualization helps businesses simplify complex IT storage environments and gain control of capital and operating costs by providing consistent, automated management of storage.
What Is Storage Virtualization?
Virtualization: The logical representation of physical storage across the entire enterprise.
Consumers place application requirements on storage: capacity (application requirements, growth potential), performance (throughput, responsiveness), and availability (failure resistance, recovery time). The physical aspects of the underlying storage resources are described by capacity (disk size, number of disks per path), performance (disk seek time, cache hit rate), and availability (MTBF, path redundancy).
Defining Storage Virtualization
Storage virtualization is the process of taking multiple physical storage devices and combining them into logical (virtual) storage devices that are presented to the operating system, applications, and users. Storage virtualization builds a layer of abstraction above the physical storage, so that data is not restricted to specific hardware devices, creating a flexible storage environment. Storage virtualization simplifies storage management and can reduce cost through improved hardware utilization and consolidation.
With storage virtualization, the physical aspects of storage are masked from users. Administrators can concentrate less on the physical aspects of storage and more on delivering access to the data that users need.
Benefits of storage virtualization include:
• Greater IT productivity through the automation of manual tasks and simplified administration of heterogeneous environments
• Increased application return on investment through improved throughput and increased uptime
• Lower hardware costs through the optimized use of hardware resources
Storage Virtualization: Types
Three types are shown: storage-based, host-based, and network-based.
Most companies use a combination of these three types of storage virtualization to support their chosen architectures and application requirements.
How Is Storage Virtualization Used in Your Environment?
How you use storage virtualization, and the benefits you derive from it, depend on the nature of your IT infrastructure and your specific application requirements. The three main types of storage virtualization used today are:
• Storage-based
• Host-based
• Network-based
Most companies use a combination of these three types of storage virtualization solutions to support their chosen architecture and application needs. The type of storage virtualization that you use depends on factors such as:
• Heterogeneity of deployed enterprise storage arrays
• Need for applications to access data contained in multiple storage devices
• Importance of uptime when replacing or upgrading storage
• Need for multiple hosts to access data within a single storage device
• Value placed on the maturity of the technology
• Investments in a SAN architecture
• Level of security required
• Level of scalability needed
Storage-Based Storage Virtualization
Storage-based storage virtualization refers to disks within an individual array that are presented virtually to multiple servers. Storage is virtualized by the array itself. For example, RAID arrays virtualize the individual disks (that are contained within the array) into logical LUNs, which are accessed by host operating systems using the same method of addressing as a directly attached physical disk. This type of storage virtualization is useful under these conditions:
• You need to have data in an array accessible to servers of different operating systems.
• All of a server's data needs are met by storage contained in the physical box.
• You are not concerned about disruption to data access when replacing or upgrading the storage.
The main limitation of this type of storage virtualization is that data cannot be shared between arrays, creating islands of storage that must be managed.
Host-Based Storage Virtualization
Host-based storage virtualization refers to disks within multiple arrays and from multiple vendors that are presented virtually to a single host server. For example, software-based solutions, such as VERITAS Storage Foundation, provide host-based storage virtualization. Using VERITAS Storage Foundation to administer host-based storage virtualization is the focus of this training. Host-based storage virtualization is useful under these conditions:
• A server needs to access data stored in multiple storage devices.
• You need the flexibility to access data stored in arrays from different vendors.
• Additional servers do not need to access the data assigned to a particular host.
• Maturity of technology is a highly important factor to you in making IT decisions.
Note: By combining VERITAS Storage Foundation with clustering technologies, such as VERITAS Cluster Volume Manager, storage can be virtualized to multiple hosts of the same operating system.
Network-Based Storage Virtualization
Network-based storage virtualization refers to disks from multiple arrays and multiple vendors that are presented virtually to multiple servers. Network-based storage virtualization is useful under these conditions:
• You need to have data accessible across heterogeneous servers and storage devices.
• You require central administration of storage across all Network Attached Storage (NAS) systems or Storage Area Network (SAN) devices.
• You want to ensure that replacing or upgrading storage does not disrupt data access.
• You want to virtualize storage to provide block services to applications.
VERITAS Storage Foundation
VERITAS Storage Foundation provides host-based storage virtualization for performance, availability, and manageability benefits for enterprise computing environments.
[Slide: the VERITAS product stack supporting company business processes: high availability (VERITAS Cluster Server/Replication), application solutions (Storage Foundation for Databases), data protection (VERITAS NetBackup/Backup Exec), and volume management and file system (VERITAS Storage Foundation), layered above the hardware and operating system.]
Introducing VERITAS Storage Foundation
VERITAS storage management solutions address the increasing costs of managing mission-critical data and disk resources in Direct Attached Storage (DAS) and Storage Area Network (SAN) environments. At the heart of these solutions is VERITAS Storage Foundation, which includes VERITAS Volume Manager (VxVM), VERITAS File System (VxFS), and other value-added products. Independently, these components provide key benefits. When used together as an integrated solution, VxVM and VxFS deliver the highest possible levels of performance, availability, and manageability for heterogeneous storage environments.
VxVM and VxFS
[Slide: users, applications, and databases access virtual storage resources (volumes) through VERITAS File System (VxFS) and VERITAS Volume Manager (VxVM), which virtualize physical storage resources such as a JBOD and disk arrays from different vendors.]
What Is VERITAS Volume Manager?
VERITAS Volume Manager, the industry leader in storage virtualization, is an easy-to-use, online storage management solution for organizations that require uninterrupted, consistent access to mission-critical data. VxVM enables you to apply business policies to configure, share, and manage storage without worrying about the physical limitations of disk storage. VxVM reduces total cost of ownership by enabling administrators to easily build storage configurations that improve performance and increase data availability.
VxVM provides a logical volume management layer that overcomes the physical restrictions of hardware disk devices by spanning volumes across multiple spindles. Through the support of RAID redundancy techniques, VxVM protects against disk and hardware failures, while providing the flexibility to extend the capabilities of existing hardware. Working in conjunction with VERITAS File System, VERITAS Volume Manager creates a foundation for other value-added technologies such as SAN environments, clustering and failover, automated management, backup and HSM, and remote browser-based management.
What Is VERITAS File System?
A file system is a collection of directories organized into a structure that enables you to locate and store files. All information processed is eventually stored in a file system. The main purposes of a file system are to:
• Provide shared access to data storage
• Provide structured access to data
• Control access to data
• Provide a common, portable application interface
• Enable the manageability of data storage
The value of a file system depends on its integrity and performance.
• Integrity: Information sent to the file system must be exactly the same when it is retrieved from the file system.
• Performance: A file system must not impose an undue overhead when responding to I/O requests from applications.
In most cases, the requirements to provide integrity and performance conflict. Therefore, a file system must provide a balance between these two requirements. VERITAS File System is a powerful, quick-recovery journaling file system that provides the high performance and easy online manageability required by mission-critical applications. VERITAS File System augments UNIX file management with continuous availability and optimized performance. It provides scalable, optimized performance and the capacity to meet the increasing demands of user loads in client/server environments.
VERITAS Storage Foundation: Benefits
Manageability
• Manage storage and file systems from one interface.
• Configure storage online.
• VxVM and VxFS are consistent across Solaris, HP-UX, AIX, and Linux.
Availability
• Features are implemented to protect against data loss.
• Online operations eliminate planned downtime.
Performance
• I/O throughput can be maximized using volume layouts.
• Performance bottlenecks can be located and eliminated using analysis tools.
Scalability
• VxVM and VxFS run on 32-bit and 64-bit operating systems.
• Storage can be deported to larger enterprise platforms.
Benefits of VERITAS Storage Foundation
Commercial system availability now requires continuous uptime in many implementations. Systems must be available 24 hours a day, 7 days a week, and 365 days a year. VERITAS Storage Foundation reduces the cost of ownership by providing scalable manageability, availability, and performance enhancements for these enterprise computing environments.
Manageability
• Management of storage and the file system is performed online in real time, eliminating the need for planned downtime.
• Online volume and file system management can be performed through an intuitive, easy-to-use graphical user interface that is integrated with the VERITAS Volume Manager (VxVM) product.
• VxVM provides consistent management across Solaris, HP-UX, AIX, Linux, and Windows platforms.
• VxFS command operations are consistent across Solaris, HP-UX, AIX, and Linux platforms.
Availability
• Through RAID techniques, storage remains available in the event of hardware failure.
• Hot relocation guarantees the rebuilding of redundancy in the case of a disk failure.
• Recovery time is minimized with logging and background mirror resynchronization.
• Logging of file system changes enables fast file system recovery.
• Snapshots of a file system provide an internally consistent, read-only image for backup, and file system checkpoints provide read-writable snapshots.
Performance
• I/O throughput can be maximized by measuring and modifying volume layouts while storage remains online.
• Performance bottlenecks can be located and eliminated using VxVM analysis tools.
• Extent-based allocation of space for files minimizes file-level access time.
• Read-ahead buffering dynamically tunes itself to the volume layout.
• Aggressive caching of writes greatly reduces the number of disk accesses.
• Direct I/O performs file I/O directly into and out of user buffers.
Scalability
• VxVM runs on 32-bit and 64-bit operating systems.
• Storage can be deported to larger enterprise-class platforms.
• Storage devices can be spanned.
• VxVM is fully integrated with VERITAS File System (VxFS).
• With VxFS, several add-on products are available for maximizing performance in a database environment.
Storage Foundation Curriculum Path
• VERITAS Volume Manager for UNIX: Operations (~2 days)
• VERITAS Volume Manager for UNIX: Maintenance (~1 day)
• VERITAS Enterprise Storage Solutions (~2 days)
• VERITAS Storage Foundation for UNIX (5 days)
VERITAS Storage Foundation Curriculum
VERITAS Volume Manager for UNIX: Operations is the first in a series of courses designed to provide you with comprehensive instruction on making the most of VERITAS Storage Foundation.
VxVM Operations: Overview
Course roadmap: Virtual Objects; Installation and Interfaces; Managing Disks and Disk Groups; Creating Volumes; Configuring Volumes; Reconfiguring Volumes Online; Encapsulation and Rootability; Recovery Essentials
VERITAS Volume Manager for UNIX: Operations Overview
This training provides comprehensive instruction on operating the file and disk management foundation products: VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). In this course, you learn how to combine file system and disk management technology to ensure easy management of all storage and maximum availability of essential data.
Objectives
After completing this training, you will be able to:
• Identify VxVM virtual storage objects and volume layouts.
• Install and configure VxVM and VxFS.
• Configure and manage disks and disk groups.
• Create concatenated, striped, mirrored, RAID-5, and layered volumes.
• Configure volumes by adding mirrors, logs, storage attributes, and file systems.
• Reconfigure volumes online, resize volumes and file systems, and use the Storage Expert utility to analyze volume configurations.
• Place the root disk under VxVM control and mirror the root disk.
• Perform basic VxVM recovery operations.
Course Resources
• Lab Exercises (Appendix A)
• Lab Solutions (Appendix B)
• VxVM/VxFS Command Reference (Appendix C)
• VxVM/VxFS 3.5 to 4.0 Differences Quick Reference (Appendix D)
• Glossary
Additional Course Resources
Appendix A: Lab Exercises
This section contains hands-on exercises that enable you to practice the concepts and procedures presented in the lessons.
Appendix B: Lab Solutions
This section contains detailed solutions to the lab exercises for each lesson.
Appendix C: VxVM/VxFS Command Reference
This section contains a quick reference guide to common VERITAS Volume Manager and VERITAS File System commands.
Appendix D: VxVM/VxFS 3.5 to 4.0 Differences Quick Reference
This section contains an overview of the differences between VxVM/VxFS 3.5 and VxVM/VxFS 4.0.
Glossary
For your reference, this course includes a glossary of terms related to VERITAS Storage Foundation.
Lesson 1 Virtual Objects
Introduction
Overview
This lesson describes the virtual storage objects that VERITAS Volume Manager (VxVM) uses to manage physical disk storage. This lesson introduces common virtual storage layouts, illustrates how virtual storage objects relate to physical storage objects, and describes the benefits of virtual data storage.
Importance
Before you install and set up VERITAS Volume Manager, you should be familiar with the virtual objects that VxVM uses to manage physical disk storage. A conceptual understanding of virtual objects helps you to interpret and manage the virtual objects represented in VxVM interfaces, tools, and reports.
Objectives
After completing this lesson, you will be able to:
• Identify the structural characteristics of a disk that are affected by placing a disk under VxVM control.
• Describe the structural characteristics of a disk after it is placed under VxVM control.
• Identify the virtual objects that are created by VxVM to manage data storage, including disk groups, VxVM disks, subdisks, plexes, and volumes.
• Define VxVM RAID levels and identify virtual storage layout types used by VxVM to remap address space.
Outline of Topics
• Physical Data Storage
• Virtual Data Storage
• Volume Manager Storage Objects
• Volume Manager RAID Levels
Physical Disk Structure
Physical storage objects:
• The basic physical storage device that ultimately stores your data is the hard disk.
• When you install your operating system, hard disks are formatted as part of the installation program.
• Partitioning is the basic method of organizing a disk to prepare for files to be written to and retrieved from the disk.
• A partitioned disk has a prearranged storage pattern that is designed for the storage and retrieval of data.
Physical Data Storage
Physical Disk Structure
Solaris
A physical Solaris disk is made up of the following parts:
VTOC: A Solaris disk has an area called the volume table of contents (VTOC), or disk label, that stores information about disk structure and organization. The VTOC is typically less than 200 bytes and resides on the first sector of the disk. A sector is 512 bytes on most systems. On the boot disk, the boot block resides within the first 16 sectors (8K). The boot block has instructions that point to the second stage of the boot process.
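For example, you can display the VTOC of a Solaris disk with the prtvtoc command. This is only an illustrative sketch; the device name c0t0d0 is a placeholder for a disk on your own system:
# prtvtoc /dev/rdsk/c0t0d0s2
The output lists each partition's tag, flags, first sector, sector count, and mount point, reflecting the partition table stored in the disk label.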
Partitions: After the VTOC, the remainder of a Solaris disk is divided into units called partitions. A partition is a group of cylinders set aside for a particular use. Information about the size, location, and use of partitions is stored in the VTOC in the partition table. Another term for a partition is a slice. Some Solaris utilities, such as the format utility, only use the term "partition". Partition 2 refers to the entire disk, including the VTOC, by convention. This partition is also referred to as the backup slice.
HP-UX
On an HP-UX system, the physical disk is traditionally partitioned using either the whole disk approach or Logical Volume Manager (LVM).
• The whole disk approach enables you to partition a disk in five ways: the whole disk is used by a single file system; the whole disk is used as swap area; the whole disk is used as a raw partition; a portion of the disk contains a file system, and the rest is used as swap; or the boot disk contains a 2-MB special boot area, the root file system, and a swap area.
• An LVM data disk consists of four areas: Physical Volume Reserved Area (PVRA); Volume Group Reserved Area (VGRA); user data area; and Bad Block Relocation Area (BBRA).
AIX
A native AIX disk does not have a partition table of the kind familiar on many other operating systems such as Solaris, Linux, and Windows. An application could use the entire unstructured raw physical device, but the first 512-byte sector normally contains information including a physical volume identifier (pvid) to support recognition of the disk by AIX. An AIX disk is managed by IBM's Logical Volume Manager (LVM) by default. A disk managed by LVM is called a physical volume (PV). A physical volume consists of:
• PV reserved area: A physical volume begins with a reserved area of 128 sectors containing PV metadata, including the pvid.
• Volume Group Descriptor Area (VGDA): One or two copies of the VGDA follow. The VGDA contains information describing a volume group (VG), which consists of one or more physical volumes. Included in the metadata in the VGDA is the definition of the physical partition (PP) size, normally 4 MB.
• Physical partitions: The remainder of the disk is divided into a number of physical partitions. All of the PVs in a volume group have PPs of the same size, as defined in the VGDA.
In a normal VG, there can be up to 32 PVs; in a big VG, there can be up to 128 PVs.
The term partition is used differently in different operating systems. In many kinds of UNIX, Linux, and Windows, a partition is a variable-sized portion of contiguous disk space that can be formatted to contain a file system. In LVM, a PP is mapped to a logical partition (LP), and one or more LPs from any location throughout the VG can be combined to define a logical volume (LV). A logical volume is the entity that can be formatted to contain a file system (by default either JFS or JFS2). So a physical partition compares in concept more closely to a disk allocation cluster in some other operating systems, and a logical volume plays the role that a partition does in some other operating systems.
Linux
On Linux, a nonboot disk can be divided into one to four primary partitions. One of these primary partitions can be used to contain logical partitions, and is called the extended partition. The extended partition can have up to 12 logical partitions on a SCSI disk and up to 60 logical partitions on an IDE disk. You can use fdisk to set up partitions on a Linux disk.
On a Linux boot disk, the boot partition must be a primary partition and is typically located within the first 1024 cylinders of the drive. On the boot disk, you must also have a dedicated swap partition. The swap partition can be a primary or a logical partition, and can be located anywhere on the disk. Logical partitions must be contiguous, but do not need to take up all of the space of the extended partition. Only one primary partition can be designated as the extended partition. The extended partition does not take up any space until it is subdivided into logical partitions.
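For example, a minimal sketch of examining and creating Linux partitions with fdisk (the device /dev/sdb is a hypothetical second SCSI disk):
# fdisk -l /dev/sdb
# fdisk /dev/sdb
The first command lists the current partition table; the second starts the interactive menu, where n creates a partition, p prints the table, and w writes the changes to disk.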
Physical Disk Naming
VxVM parses disk names to retrieve connectivity information for disks. Operating systems have different conventions:

Operating System   Device Naming Convention Example
Solaris            /dev/[r]dsk/c1t9d0s2
HP-UX              /dev/[r]dsk/c3t2d0 (no slice)
AIX                /dev/hdisk2 (no slice)
Linux              SCSI disks: /dev/sda[1-4] (primary partitions), /dev/sda[5-16] (logical partitions), /dev/sdbN (on the second disk), /dev/sdcN (on the third disk); IDE disks: /dev/hdaN, /dev/hdbN, /dev/hdcN
Physical Disk Naming
Solaris
You locate and access the data on a physical disk by using a device name that specifies the controller, target ID, and disk number. A typical device name uses the format: c#t#d#.
• c# is the controller number.
• t# is the target ID.
• d# is the logical unit number (LUN) of the drive attached to the target.
If a disk is divided into partitions, then you also specify the partition number in the device name:
• s# is the partition (slice) number.
For example, device name c0t0d0s1 is connected to controller number 0 in the system, with a target ID of 0, physical disk number 0, and partition number 1 on the disk.
HP-UX
You locate and access the data on a physical disk by using a device name that specifies the controller, target ID, and disk number. A typical device name uses the format: c#t#d#. • c# is the controller number. • t# is the target ID. • d# is the logical unit number (LUN) of the drive attached to the target.
For example, the c0t0d0 device name is connected to controller number 0 in the system, with a target ID of 0, and physical disk number 0.
AIX
Every device in AIX is assigned a location code that describes its connection to the system. The general format of this identifier is AB-CD-EF-GH, where the letters represent decimal digits or uppercase letters. The first two characters represent the bus, the second pair identify the adapter, the third pair represent the connector, and the final pair uniquely represent the device. For example, a SCSI disk drive might have a location identifier of 04-01-00-6,0. In this example, 04 means PCI bus, 01 is the slot number on the PCI bus occupied by the SCSI adapter, 00 means the only or internal connector, and 6,0 means SCSI ID 6, LUN 0. However, this data is used internally by AIX to locate a device. The device name that a system administrator or software uses to identify a device is less hardware dependent. The system maintains a special database called the Object Data Manager (ODM) that contains essential definitions for most objects in the system, including devices. Through the ODM, a device name is mapped to the location identifier. The device names are referenced by special files found in the /dev directory. For example, the SCSI disk identified above might have the device name hdisk3 (the fourth hard disk identified by the system). The device named hdisk3 is accessed by the file name /dev/hdisk3. If a device is moved so that it has a different location identifier, the ODM is updated so that it retains the same device name, and the move is transparent to users. This is facilitated by the physical volume identifier stored in the first sector of a physical volume. This unique 128-bit number is used by the system to recognize the physical volume wherever it may be attached, because it is also associated with the device name in the ODM.
Linux
On Linux, device names are displayed in the format:
• sdx[N]
• hdx[N]
In the syntax:
• sd refers to a SCSI disk, and hd refers to an EIDE disk.
• x is a letter that indicates the order of disks detected by the operating system. For example, sda refers to the first SCSI disk, sdb references the second SCSI disk, and so on.
• N is an optional parameter that represents a partition number in the range 1 through 16. For example, sda7 references partition 7 on the first SCSI disk. Primary partitions on a disk are 1, 2, 3, 4; logical partitions have numbers 5 and up. If the partition number is omitted, the device name indicates the entire disk.
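To see which device names your operating system has assigned, you can list the device nodes. A brief sketch (output varies by system):
# ls /dev/dsk /dev/rdsk        (Solaris block and raw device nodes, c#t#d#s#)
# ls /dev/sd* /dev/hd*         (Linux SCSI and IDE device nodes)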
Physical Data Storage
• Reads and writes on unmanaged physical disks can be a slow process.
• Disk arrays and multipathed disk arrays can improve I/O speed and throughput.
[Slide: users, applications, and databases access physical disks/LUNs directly.]
• Disk array: A collection of physical disks used to balance I/O across multiple disks
• Multipathed disk array: Provides multiple ports to access disks to achieve performance and availability benefits
Disk Arrays
Reads and writes on unmanaged physical disks can be a relatively slow process, because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read and write operations are done to individual disks, one at a time, the read-write time can become unmanageable. A disk array is a collection of physical disks. Performing I/O operations on multiple disks in a disk array can improve I/O speed and throughput.
Multipathed Disk Arrays
Some disk arrays provide multiple ports to access disk devices. These ports, coupled with the host bus adaptor (HBA) controller and any data bus or I/O processor local to the array, make up multiple hardware paths to access the disk devices. This type of disk array is called a multipathed disk array. You can connect multipathed disk arrays to host systems in many different configurations, such as:
• Connecting multiple ports to different controllers on a single host
• Chaining ports through a single controller on a host
• Connecting ports to different hosts simultaneously
Virtual Data Storage
• Volume Manager creates a virtual layer of data storage.
• Volume Manager volumes appear to applications to be physical disk partitions.
• Volumes have block and character device nodes in the /dev tree: /dev/vx/[r]dsk/…
• Multidisk configurations: concatenation, striping, mirroring, RAID-5
• High availability: disk group import and deport, hot relocation, dynamic multipathing
• Online administration, disk spanning, and load balancing
Virtual Data Storage
Virtual Storage Management
VERITAS Volume Manager creates a virtual level of storage management above the physical device level by creating virtual storage objects. The virtual storage object that is visible to users and applications is called a volume.
What Is a Volume?
A volume is a virtual object, created by Volume Manager, that stores data. A volume is made up of space from one or more physical disks on which the data is physically stored.
How Do You Access a Volume?
Volumes created by VxVM appear to the operating system as physical disks, and applications that interact with volumes work in the same way as with physical disks. All users and applications access volumes as contiguous address space using special device files in a manner similar to accessing a disk partition. Volumes have block and character device nodes in the /dev tree. You can supply the name of the path to a volume in your commands and programs, in your file system and database configuration files, and in any other context where you would otherwise use the path to a physical disk partition. (A brief example follows the list of benefits below.)
Why Use Volume Manager?
Benefits of using Volume Manager for virtual storage management include:
• Disk spanning: By using volumes and other virtual objects, Volume Manager enables you to span data over multiple physical disks. The process of logically combining physical devices to enable data to be stored across multiple devices is called spanning.
• Load balancing: Data can be spread across several disks within an array to distribute or balance I/O operations across the disks. Using parallel I/O across multiple disks improves I/O performance by increasing data transfer speed and overall throughput for the array.
• Complex multidisk configurations: Volume Manager virtual objects enable you to create complex disk configurations in multidisk systems that enhance performance and reliability. Multidisk configurations, such as striping, mirroring, and RAID-5 configurations, can provide data redundancy, performance improvements, and high availability.
• Online administration: Volume Manager uses virtual objects to perform administrative tasks on disks without interrupting service to applications and users.
• High availability: Volume Manager includes automatic failover and recovery features that ensure continuous access to critical data. Volume Manager can move collections of disks between hosts (disk group import and deport), automatically relocate data in case of disk failure (hot relocation), and automatically detect and use multipathed disk arrays (dynamic multipathing, or DMP).
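As noted under "How Do You Access a Volume?", a volume's device nodes can be used wherever you would otherwise supply a partition's device path. A minimal sketch on Solaris (the disk group acctdg and volume expvol match the examples used later in this lesson; the mount point is hypothetical, and the exact mkfs and mount syntax varies by platform):
# mkfs -F vxfs /dev/vx/rdsk/acctdg/expvol
# mount -F vxfs /dev/vx/dsk/acctdg/expvol /export/expenses
The raw (character) device node under /dev/vx/rdsk is used when creating the file system, and the block device node under /dev/vx/dsk is used when mounting it.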
Volume Manager Control
When you place a disk under VxVM control, a CDS disk layout is used by default, which ensures that the disk is accessible on different platforms, regardless of the platform on which the disk was initialized.
CDS Disk (Default)
• OS-reserved areas (the first 128K and the last two cylinders) contain platform blocks, VxVM ID blocks, and AIX and HP-UX coexistence labels.
• The private region (metadata) begins at an offset of 128K. Default size of the private region: 2048 sectors on Solaris, AIX, and Linux; 1024 sectors on HP-UX.
• The public region (user data) occupies the remainder of the disk.
• All areas within the private and public regions are aligned and sized in multiples of 8K.
Volume Manager-Controlled Disks
With Volume Manager, you enable virtual data storage by bringing a disk under Volume Manager control. By default in VxVM 4.0 and later, Volume Manager uses a cross-platform data sharing (CDS) disk layout. A CDS disk is consistently recognized by all VxVM-supported UNIX platforms and consists of:
• OS-reserved areas: To accommodate platform-specific disk usage, the first 128K and the last two cylinders on a disk are reserved for disk labels, platform blocks, and platform-coexistence labels.
• Private region: The private region stores information, such as disk headers, configuration copies, kernel logs, and other platform-specific management areas that VxVM uses to manage virtual objects. The private region represents a small management overhead:

Operating System   Default Block/Sector Size   Default Private Region Size
Solaris            512 bytes                   2048 sectors (1024K)
HP-UX              1024 bytes                  1024 sectors (1024K)
AIX                512 bytes                   2048 sectors (1024K)
Linux              512 bytes                   2048 sectors (1024K)

• Public region: The public region consists of the remainder of the space on the disk. The public region represents the available space that Volume Manager can use to assign to volumes and is where an application stores data. Volume Manager never overwrites this area unless specifically instructed to do so.
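For example, a minimal sketch of initializing a disk with the CDS layout and checking its regions (the device name c1t1d0 is hypothetical; on VxVM 4.0, cdsdisk is already the default format, so the format attribute is shown only for clarity):
# /etc/vx/bin/vxdisksetup -i c1t1d0 format=cdsdisk
# vxdisk list c1t1d0
The vxdisk list output includes the offsets and lengths of the private and public regions of the initialized disk.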
Comparing CDS and Sliced Disks
CDS Disk (4.0 Default):
• Private region (metadata) and public region (user data) are created on a single partition (7).
• Suitable for moving between different operating systems.
• Not suitable for boot partitions.
Sliced Disk (Pre-4.0 Default):
• Private region and public region are created on separate partitions (3 and 4).
• Not suitable for moving between different operating systems.
• Suitable for boot partitions.
Comparing CDS Disks and Sliced Disks The sliced disk layout is still available in VxVM 4.0 and later, and is used for bringing the boot disk under VxVM control on operating systems that support that capability. On platforms that support bringing the boot disk under VxVM control, CDS disks cannot be used for boot disks. CDS disks have specific disk layout requirements that enable a common disk layout across different platforms, and these requirements are not compatible with the particular platform-specific requirements of boot disks. Therefore, when placing a boot disk under VxVM control, you must use a sliced disk layout. For non-boot disks, you can convert CDS disks to sliced disks and vice versa by using VxVM utilities. Other disk types, working with boot disks, and transferring data across platforms with CDS disks are topics covered in detail in later lessons.
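As a hedged sketch of one such conversion utility, vxcdsconvert can convert a non-boot disk to the CDS layout; the disk group and disk names below are hypothetical, and you should check the vxcdsconvert(1M) manual page on your platform for the supported options before relying on this form:
# vxcdsconvert -g datadg disk datadg01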
Volume Manager Storage Objects
[Slide: the disk group acctdg contains two volumes, expvol and payvol. Each volume is built from one or more plexes (expvol-01, payvol-01, payvol-02); each plex is built from subdisks (acctdg01-01, acctdg01-02, acctdg02-01, acctdg02-02, acctdg03-01, acctdg03-02); and the subdisks are carved from the VxVM disks acctdg01, acctdg02, and acctdg03, which correspond to physical disks.]
Volume Manager Storage Objects
Disk Groups
A disk group is a collection of VxVM disks. You group disks into disk groups for management purposes, such as to hold the data for a specific application or set of applications. For example, data for accounting applications can be organized in a disk group called acctdg. A disk group configuration is a set of records with detailed information about related Volume Manager objects in a disk group, their attributes, and their connections. Disk groups are configured by the system administrator and represent management and configuration boundaries. Volume Manager objects cannot span disk groups. For example, a volume's subdisks, plexes, and disks must be derived from the same disk group as the volume. You can create additional disk groups as necessary. Disk groups allow you to group disks into logical collections. Disk groups ease the use of devices in a high availability environment, because a disk group and its components can be moved as a unit from one host machine to another. Disk drives can be shared by two or more hosts, but can be accessed by only one host at a time. If one host crashes, the other host can take over the failed host's disk drives and disk groups.
Volume Manager Disks
A Volume Manager (VxVM) disk represents the public region of a physical disk that is under Volume Manager control. Each VxVM disk corresponds to one physical disk. Each VxVM disk has a unique virtual disk name called a disk media name. The disk media name is a logical name used for Volume Manager administrative purposes. Volume Manager uses the disk media name when assigning space to volumes. A VxVM disk is given a disk media name when it is added to a disk group.
Default disk media name: diskgroup##
You can supply the disk media name or allow Volume Manager to assign a default name. The disk media name is stored with a unique disk ID to avoid name collision. Once a VxVM disk is assigned a disk media name, the disk is no longer referred to by its physical address. The physical address (for example, c#t#d# or hdisk#) becomes known as the disk access record.
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of contiguous disk blocks that represent a specific portion of a VxVM disk, which is mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's public region. A subdisk is the smallest unit of storage in Volume Manager. Therefore, subdisks are the building blocks for Volume Manager objects. A subdisk is defined by an offset and a length in sectors on a VxVM disk.
Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VxVM disk. Any VxVM disk space that is not reserved or that is not part of a subdisk is free space. You can use free space to create new subdisks. Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition divide a disk into pieces defined by an offset address and a length. Each of those pieces represents a reservation of contiguous space on the physical disk. However, while the maximum number of partitions on a disk is limited by some operating systems, there is no theoretical limit to the number of subdisks that can be attached to a single plex, although it is limited by default to a value of 4096. If required, this default can be changed using the vol_subdisk_num tunable parameter. For more information on tunable parameters, see the VERITAS Volume Manager System Administrator's Guide.
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a structured or ordered collection of subdisks that represents one copy of the data in a volume. A plex consists of one or more subdisks located on one or more physical disks. The length of a plex is determined by the last block that can be read or written on the last subdisk in the plex. Plex length may not equal volume length to the exact sector, because the plex is aligned to a cylinder boundary.
Default plex name: volumename-##
Plex types:
• Complete plex: A complete plex holds a complete copy of a volume and therefore maps the entire address space of the volume. Most plexes in VxVM are complete plexes.
• Sparse plex: A sparse plex is a plex that has a length that is less than the length of the volume or that maps to only part of the address space of a volume. Sparse plexes are not commonly used in newer VxVM versions.
• Log plex: A log plex is a plex that is dedicated to logging. A log plex is used to speed up data consistency checks and repairs after a system failure. RAID-5 and mirrored volumes typically use a log plex.
A volume must have at least one complete plex that has a complete copy of the data in the volume with at least one associated subdisk. Other plexes in the volume can be complete, sparse, or log plexes. A volume can have up to 32 plexes; however, you should never use more than 31 plexes in a single volume, because Volume Manager requires one plex for automatic or temporary online operations.
Volumes
A volume is a virtual storage device that is used by applications in a manner similar to a physical disk. Due to its virtual nature, a volume is not restricted by the physical size constraints that apply to a physical disk. A VxVM volume can be as large as the total sum of available, unreserved free physical disk space. A volume is composed of one or more plexes. A volume can span multiple disks; the data in a volume is stored on subdisks of the spanned disks. A volume must be configured from VxVM disks and subdisks within the same disk group.
Default volume name: vol##
You should assign meaningful volume names that reflect the nature or use of the data in the volumes. For example, two volumes in acctdg can be expvol, a volume that contains expense data, and payvol, a volume that contains payroll data.
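To see how these objects relate on a live system, a minimal sketch (the acctdg disk group and volume name follow the example above; the size is illustrative, and volume creation is covered in detail in a later lesson):
# vxassist -g acctdg make expvol 1g
# vxprint -g acctdg -ht
The vxprint output lists the disk group's VxVM disks, subdisks, plexes, and volumes in a hierarchical, one-line-per-record format, which is a convenient way to see the default object names described in this topic.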
Volume Layouts
Volume layout: The way plexes are configured to remap the volume address space through which I/O is redirected
• Disk spanning: concatenated (RAID-0), striped (RAID-0)
• Data redundancy: mirrored (RAID-1), RAID-5, striped and mirrored (RAID-0+1)
• Resilience: layered (RAID-1+0)
Volume Manager RAID Levels
RAID
RAID is an acronym for Redundant Array of Independent Disks. RAID is a storage management approach in which an array of disks is created, and part of the combined storage capacity of the disks is used to store duplicate information about the data in the array. By maintaining a redundant array of disks, you can regenerate data in the case of disk failure. RAID configuration models are classified in terms of RAID levels, which are defined by the number of disks in the array, the way data is spanned across the disks, and the method used for redundancy. Each RAID level has specific features and performance benefits that involve a trade-off between performance and reliability.
VxVM-Supported RAID Levels
VxVM-supported RAID levels are described below:
• RAID-0: RAID-0 refers to simple concatenation or striping. Disk space is combined sequentially from two or more disks or striped across two or more disks. RAID-0 does not provide data redundancy.
• RAID-1: RAID-1 refers to mirroring. Data from one disk is duplicated on another disk to provide redundancy and enable fast recovery.
• RAID-5: RAID-5 is a striped layout that also includes the calculation of parity information and the striping of that parity information across the disks. If a disk fails, the parity is used to reconstruct the missing data.
• RAID-0+1: Adding a mirror to a concatenated or striped layout results in RAID-0+1, a combination of concatenation or striping (RAID-0) with mirroring (RAID-1). Striping plus mirroring is called the mirror-stripe layout. Concatenation plus mirroring is called the mirror-concat layout. In these layouts, the mirroring occurs above the concatenation or striping.
• RAID-1+0: RAID-1+0 combines mirroring (RAID-1) with striping or concatenation (RAID-0) in a different way. The mirroring occurs below the striping or concatenation in order to mirror each column of the stripe or each chunk of the concatenation. This type of layout is called a layered volume.
Volume Layouts
RAID levels correspond to volume layouts. A volume's layout refers to the organization of plexes in a volume. Volume layout is the way plexes are configured to remap the volume address space through which I/O is redirected at run time. Volume layouts are based on the concepts of disk spanning, redundancy, and resilience.
Disk Spanning
Disk spanning is the combining of disk space from multiple physical disks to form one logical drive. Disk spanning has two forms:
• Concatenation: Concatenation is the mapping of data in a linear manner across two or more disks. In a concatenated volume, subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk.
• Striping: Striping is the mapping of data in equal-sized chunks alternating across multiple disks. Striping is also called interleaving. In a striped volume, data is spread evenly across multiple disks. Stripes are equally sized fragments that are allocated alternately and evenly to the subdisks of a single plex. There must be at least two subdisks in a striped plex, each of which must exist on a different disk. Configured properly, striping not only helps to balance I/O but also increases throughput.
Data Redundancy
To protect data against disk failure, the volume layout must provide some form of data redundancy. Redundancy is achieved in two ways:
• Mirroring: Mirroring is maintaining two or more copies of volume data. A mirrored volume uses multiple plexes to duplicate the information contained in a volume. Although a volume can have a single plex, at least two are required for true mirroring (redundancy of data). Each of these plexes should contain disk space from different disks for the redundancy to be useful.
• Parity: Parity is a calculated value used to reconstruct data after a failure by doing an exclusive OR (XOR) procedure on the data. Parity information can be stored on a disk. If part of a volume fails, the data on that portion of the failed volume can be re-created from the remaining data and parity information. A RAID-5 volume uses striping to spread data and parity evenly across multiple disks in an array. Each stripe contains a parity stripe unit and data stripe units. Parity can be used to reconstruct data if one of the disks fails. In comparison to the performance of striped volumes, write throughput of RAID-5 volumes decreases, because parity information needs to be updated each time data is accessed. However, in comparison to mirroring, the use of parity reduces the amount of space required.
Resilience
A resilient volume, also called a layered volume, is a volume that is built on one or more other volumes. Resilient volumes enable the mirroring of data at a more granular level. For example, a resilient volume can be concatenated or striped at the top level and then mirrored at the bottom level. A layered volume is a virtual Volume Manager object that nests other virtual objects inside of itself. Layered volumes provide better fault tolerance by mirroring data at a more granular level.
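These layouts map directly onto the layout attribute of the vxassist command. A hedged sketch (the disk group datadg and the sizes are hypothetical; volume creation is covered in detail in a later lesson):
# vxassist -g datadg make concatvol 2g layout=concat
# vxassist -g datadg make stripevol 2g layout=stripe ncol=3
# vxassist -g datadg make mirrvol 2g layout=mirror nmirror=2
# vxassist -g datadg make raid5vol 2g layout=raid5
# vxassist -g datadg make layervol 2g layout=stripe-mirror
The last command creates a layered (RAID-1+0) volume in which mirroring is applied beneath the striping.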
Summary
You should now be able to:
• Identify the structural characteristics of a disk that are affected by placing a disk under Volume Manager control.
• Describe the structural characteristics of a disk after it is placed under Volume Manager control.
• Identify the virtual objects that are created by Volume Manager to manage data storage, including disk groups, Volume Manager disks, subdisks, plexes, and volumes.
• Define VxVM RAID levels and identify virtual storage layout types used by VxVM to remap address space.
Summary
This lesson described the virtual storage objects that VERITAS Volume Manager uses to manage physical disk storage. This lesson introduced common virtual storage layouts, illustrated how virtual storage objects relate to physical storage objects, and described the benefits of virtual data storage.
Next Steps
You are now familiar with Volume Manager objects and how virtual objects relate to physical disks when a disk is controlled by Volume Manager. In the next lesson, you will install and set up Volume Manager. In addition, you will install VEA and explore the other Volume Manager interfaces.
Additional Resources
VERITAS Volume Manager Administrator's Guide
This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
Lab 1: Virtual Objects
• In this lab, you are introduced to the lab environment, the system, and the disks that you will use throughout this course.
• Lab instructions are in Appendix A.
• Lab solutions are in Appendix B.
Lab 1: Virtual Objects To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Lesson 2 Installation and Interfaces
Introduction
Overview
This lesson describes guidelines for a first-time installation of VERITAS Volume Manager (VxVM). Installation prerequisites and procedures for adding license keys and adding software packages are covered. This lesson also provides an introduction to the interfaces used to manage VERITAS Volume Manager.
Importance
Before you install VxVM, you need to be aware of the contents of your physical disks and decide how you want VxVM to handle those disks. By following these installation guidelines, you can ensure that you set up VxVM in a way that meets the needs of your environment. You can use the three interfaces to VxVM interchangeably to perform administrative functions, which provides flexibility in how you access and manage VxVM objects.
Objectives
After completing this lesson, you will be able to:
• Identify operating system compatibility and other preinstallation considerations.
• Obtain license keys, add licenses by using vxlicinst, and view licenses by using vxlicrep.
• Install VxVM interactively, by using installation utilities, and manually, by adding software packages and running the vxinstall program.
• Describe the three VxVM user interfaces.
• Install and start the VEA software packages.
• Manage the VEA server by displaying server status, version, task logs, and event logs.
Outline of Topics
• Installation Prerequisites
• Adding License Keys
• VERITAS Software Packages
• Installing VxVM
• VxVM User Interfaces
• Installing and Starting VEA
• Managing the VEA Server
OS Compatibility

VxVM Version   Solaris Version    HP-UX Version   AIX Version          Linux Version
4.0            7, 8, 9            11i, 11.23      5.1, 5.2             RedHat AS 3.0, SUSE
3.5.x          2.6, 7, 8, 9       11.11i (0902)   No release           No release*
3.2.x          2.6, 7, 8          11.11i          5.1.0.15 or higher   RedHat 7.1 (2.4.9-12), 7.2 (2.4.7-10), AS 2.1, SUSE
3.1.1          2.6, 7, 8          No release      No release           No release
3.1            2.6, 7, 8          11.0            No release           No release
3.0.4, 3.0.3   2.5.1, 2.6, 7, 8   No release      No release           No release
3.0.2, 3.0.1   2.5.1, 2.6, 7      No release      No release           No release

* Note: VxVM 3.2.2 on Linux has functionality equivalent to VxVM 3.5 on Solaris.
Installation Prerequisites
OS Version Compatibility
Before installing VxVM, you should ensure that the version of VxVM that you are installing is compatible with the version of the operating system that you are running. You may need to upgrade your operating system before you install VxVM 4.0. If you are planning to install other VERITAS products, such as VERITAS File System (VxFS), check OS compatibility for those products as well:

VxFS Version   Supported Solaris Versions   Supported HP-UX Versions   Supported AIX Versions   Supported Linux Versions
4.0            7, 8, 9                      11i, 11.23                 5.1, 5.2                 RedHat AS 3.0, SUSE
3.5.x          2.6, 7, 8, 9                 11.11i (0902)              No release               No release
3.4.x          2.6, 7, 8                    No release                 5.1.0.15 or higher       RedHat 7.1 (2.4.9-12), 7.2 (2.4.7-10), AS 2.1, SUSE
3.3.3          2.5.1, 2.6, 7, 8             No release                 No release               No release
3.3.2          2.5.1, 2.6, 7                11.0                       No release               No release
3.3.1          2.5.1, 2.6                   No release                 No release               No release
3.3            2.5.1, 2.6                   No release                 No release               No release
Support Resources
http://support.veritas.com
• Patches
• Products
• Search for Technotes
• Alerts
• Email services
• Support services
Version Release Differences
With each new release of the VxVM software, changes are made that may affect the installation or operation of VxVM in your environment. By reading version release notes and installation documentation that are included with the product, you can stay informed of any changes. For more information about specific releases of VERITAS Volume Manager, visit the VERITAS Support Web site at:
http://support.veritas.com
This site contains product and patch information, a searchable knowledge base of technical notes, access to product-specific news groups and e-mail notification services, and other information about contacting technical support staff.
Note: If you open a case with VERITAS Support, you can view updates at:
http://support.veritas.com/viewcase
You can access your case by entering the email address associated with your case and the case number.
VxVM Licensing
• Licensing utilities are contained in the VRTSvlic package, which is common to all VERITAS products. This package can coexist with previous licensing packages, such as VRTSlic.
• To obtain a license key:
  – Complete a License Key Request form and fax it to VERITAS customer support, or
  – Create a vLicense account and retrieve license keys online. vLicense is a Web site that you can use to retrieve and manage your license keys.
• To generate a license key, you must provide your:
  – Software serial number
  – Customer number
  – Order number
  – Host ID
  – Machine type
Adding License Keys
You must have your license key before you begin installation, because you are prompted for the license key during the installation process. A new license key is not necessary if you are upgrading VxVM from a previously licensed version of the product. If you have an evaluation license key, you must obtain a permanent license key when you purchase the product. The VERITAS licensing mechanism checks the system date to verify that it has not been set back. If the system date has been reset, the evaluation license key becomes invalid.
Obtaining a License Key
When you purchase VxVM, you receive a License Key Request form issued by VERITAS customer support. By using this form, you can obtain a license key. License keys are uniquely generated based on your system host ID number. To generate a new license key, you must provide the following information:
• Software serial number (located in your software media kit)
• Customer number (located on your License Key Request form)
• Order number (located on your License Key Request form)
• Host ID:
  Solaris: hostid
  HP-UX: uname -i
  AIX: uname -m
  Linux: hostid
• Host machine type:
  Solaris and HP-UX: uname -m
  HP-UX: model
  AIX: uname -M
  Linux: uname -m
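For example, on a Solaris system you might gather the host-specific values for the request as follows (a brief sketch; the output values are unique to your system):
# hostid
# uname -m
The first command prints the host ID, and the second prints the machine type.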
Solaris Note
If a Sun StorEdge array is attached to a Sun system, the VxVM license is generated automatically. The license is only valid while the StorEdge is attached to the system. If the StorEdge fails, the license remains valid for an additional 14 days.
Generating License Keys
http://vlicense.veritas.com
• Access automatic license key generation and delivery.
• Manage and track license key inventory and usage.
• Locate and reissue lost license keys.
• Report, track, and resolve license key issues online.
• Consolidate and share license key information with other accounts.
To add a license key:
# vxlicinst
License keys are installed in /etc/vx/licenses/lic.
To view installed license key information:
# vxlicrep
Generating License Keys with vLicense
VERITAS vLicense (vlicense.veritas.com) is a self-service online license management system. By setting up an account through vLicense, you can:
• Access automatic license key generation and delivery services. License key requests are fulfilled in minutes.
• Manage and track license key inventory and usage. Your complete license key inventory is stored online with detailed history and usage information.
• Locate and reissue lost license keys. Key history information provides you with an audit trail that can be used to resolve lost license key issues.
• Report, track, and resolve license key issues online. The online customer service feature within the license management system enables you to create and track license key service requests.
• Consolidate and share license key information with other accounts. For example, an account with Company A can share key information with their parent Company B, depending on the details of their licensing agreements.
Notes on vLicense
• vLicense currently supports production license keys only. Temporary, evaluation, or demonstration keys must be obtained through your VERITAS sales representative.
• Host ID changes cannot be processed through the vLicense system. Contact VERITAS customer support for more details.
Adding a License Key
You can add a license key for Volume Manager when you run the installation program or, if the VRTSvlic package is already installed, by using the vxlicinst command. To license optional features, you reenter the vxlicinst command and enter a valid license key for each optional feature.
License keys are installed in /etc/vx/licenses/lic. If you have old license keys installed in /etc/vx/elm, then leave this directory on your system. The old and new license utilities can coexist.
Viewing Installed License Keys
If you are not sure whether license keys have been installed, you can view installed license key information by using the vxlicrep command. Information about installed license keys is displayed. This information includes:
• License key number
• Name of the VERITAS product that the key enables
• Type of license
• Features enabled by the key
Note: The vxlicrep command reports all currently installed licenses for both VRTSvlic and the previous licensing package, VRTSlic (used for VxVM 3.2.x and earlier releases).
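For a quick check that a key landed where expected, you can also list the key files themselves; a minimal sketch, where the key file name is a placeholder of the form shown in the comparison table that follows:
# ls /etc/vx/licenses/lic
ABCD-EFGH-IJKL-MNOP-QRST-UVWX-YZ.vxlic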
Comparing Licensing Utilities

Description                    VRTSvlic                     VRTSlic
VxVM versions                  VxVM 3.5 and later           VxVM 3.2.x and earlier
Adding a license key           vxlicinst                    vxlicense -c
Viewing license keys           vxlicrep                     vxlicense -p
Path of installed licenses     /etc/vx/licenses/lic         /etc/vx/elm
License key file naming        key_string.vxlic             feature_no.lic
                               (Example: ABCD-EFGH-IJKL-    (Example: 95.lic)
                               MNOP-QRST-UVWX-YZ.vxlic)
Managing Multiple Licensing Utilities The current licensing utilities of the VRTSvlic package can coexist on your system with previous licensing utilities, such as those contained in the VRTSlic package. You should retain the VRTSlic package only if you have older products that rely on the previous licensing technology. Otherwise, you can remove the VRTSlic package. When you remove the VRTSlic package, existing license key files are not deleted and can be accessed by the VRTSvlic utilities.
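If you decide to remove the older package, you can do so with the native package tools; for example, on Solaris (a sketch only—confirm first that no older VERITAS product still relies on the previous licensing technology):
# pkgrm VRTSlic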
VERITAS Storage Solutions

Old Name                                              New Name
VERITAS Foundation Suite                              VERITAS Storage Foundation
VERITAS Foundation Suite HA                           VERITAS Storage Foundation HA
VERITAS Database Edition for Oracle, Sybase, or DB2   VERITAS Storage Foundation for Oracle, Sybase, or DB2
VERITAS Software Packages
VERITAS Storage Solutions Products and Suites
VERITAS Volume Manager is one of the foundation components of many VERITAS storage solutions. You can install VxVM as a stand-alone product, or as part of a product suite.
Installing VxVM As Part of a Product Suite
VxVM is included in many product suites. The packages that you install depend on the products and licenses that you have purchased. When you install a product suite, the component product packages are automatically installed. When installing VxVM or any of the product suites, you should always follow the instructions in the product release notes and installation guides.
VxVM Packages
New in 4.0
VERITAS infrastructure packages:
• VRTSvlic – Licensing utilities
• VRTScpi – Common product/platform installer
• VRTSperl – Perl used by installation technology
VxVM packages:
• VRTSvxvm – VxVM binaries
• VRTSalloc – VxVM Intelligent Storage Provisioning
• VRTSddlpr – Device Discovery Layer services provider
• VRTSvmdoc – VxVM documentation
• VRTSvmman – VxVM manual pages
VEA (GUI) packages:
• VRTSob – VEA service
• VRTSobgui – VEA graphical user interface
• VRTSvmpro – Disk management services provider
• VRTSfspro – File system services provider
• VRTSmuob – VEA service localized package
• VRTSobadmin – Installation administration file
VERITAS Volume Manager Packages
You must add VRTSvlic before the other VxVM packages.
Package Space Requirements
Before you install any of the packages, confirm that your system has enough free disk space to accommodate the installation. VxVM programs and files are installed in the /, /usr, and /opt file systems. Consult the product installation guides for a detailed list of package space requirements.
Solaris Note
VRTSvxvm adds forceload lines to /etc/system for vxio, vxspec, and vxdmp drivers.
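As a quick pre-check on Solaris, you can confirm the free space in those file systems and, after the packages are added, the driver entries mentioned above; a sketch using standard OS commands (the required space figures come from the installation guide, not from this output):
# df -k / /usr /opt
# grep forceload /etc/system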
VxFS Packages
New in 4.0
• VRTSvxfs – VxFS software and manual pages
• VRTSfsdoc – VxFS documentation
• VRTSfppm – VERITAS File Placement Policy Manager
• VRTSap – VERITAS Action Provider
• VRTStep – VERITAS Task Exec Provider
Solaris only: VxFS installation modifies /etc/system by adding:
* vxfs_START -- do not remove the following lines:
*
* VxFS requires a stack size greater than the default 8K.
* The following values allow the kernel stack size
* for all threads to be increased to 24K.
*
set lwp_default_stksize=0x6000
* vxfs_END
The original /etc/system file is copied to /etc/fs/vxfs/system.preinstall.
VERITAS File System Packages
VxFS programs and files are installed in the /, /usr, and /opt file systems.
Solaris Note
VxFS often requires more than the default 8K kernel stack size, so during the installation of VxFS 3.2.x and higher, entries are added to the /etc/system file. This increases the kernel thread stack size of the system to 24K. VxFS is a kernel-loadable driver, which means that it may load ahead of or behind other drivers when the system reboots. To avoid the possibility of problems in a failover scenario, you should maintain the same VxFS product version on all systems.
HP-UX Note
The VxFS installation procedure modifies the /stand/system file and saves the old system file as /stand/system.prev:
• The procedure replaces the vxfs and vxportal drivers with vxfs35 and vxportal35.
• The procedure adds the fdd and qlog drivers.
• The procedure adds the following line to tune max_thread_proc:
  max_thread_proc 2100
Optional Features
The following features are included in the VxVM package but require a separate license:
• VERITAS FlashSnap
  – Enables point-in-time copies of data with minimal performance overhead
  – Includes disk group split/join, FastResync, and storage checkpointing (in conjunction with VxFS)
• VERITAS Volume Replicator
  – Enables replication of data to remote locations
  – VRTSvrdoc: VVR documentation
• VERITAS Cluster Volume Manager
  – Used for high availability environments
The following features are included in the VxFS package but require a separate license:
• VERITAS Quick I/O for Databases
  – Enables applications to access preallocated VxFS files as raw character devices
• VERITAS Cluster File System
  – Enables multiple hosts to mount and perform file operations concurrently on the same file
VxVM Optional Features
Several optional features do not require separate packages, only additional licenses. The following optional features are built into VxVM and can be enabled with additional licenses:
• VERITAS FlashSnap: The VRTSvxvm package contains a set of optional features called VERITAS FlashSnap. FlashSnap is an integral part of the Volume Manager software but requires a separate license key for use. FlashSnap facilitates point-in-time copies of data, while enabling applications to maintain optimal performance, by enabling features such as FastResync and disk group split and join functionality. FlashSnap provides an efficient method to perform offline and off-host processing tasks, such as backup and decision support.
• VERITAS FastResync: The FastResync option can be purchased separately or as part of the VERITAS FlashSnap option. The FastResync option speeds mirror synchronization by writing only changed data blocks when split mirrors are rejoined, minimizing the effect of mirroring operations.
• VERITAS Volume Replicator: The VRTSvxvm package also contains the VERITAS Volume Replicator (VVR) software. VVR is an integral part of the Volume Manager software but requires a separate license key to activate the functionality.
Volume Replicator augments Volume Manager functionality to enable you to replicate data to remote locations over any IP network. Replicated copies of data can be used for disaster recovery, off-host processing, off-host backup, and application migration. Volume Replicator ensures maximum business continuity by delivering true disaster recovery and flexible off-host processing.
• Cluster Functionality: VxVM includes optional cluster functionality that enables VxVM to be used in a cluster environment. Cluster functionality is an integral part of the Volume Manager software but requires a separate license key to activate the features. A cluster is a set of hosts that share a set of disks; each host is referred to as a node in the cluster. The cluster functionality of VxVM allows up to 16 nodes in a cluster to simultaneously access and manage a set of disks under VxVM control. The same logical view of the disk configuration and any configuration changes is available on all of the nodes. When the cluster functionality is enabled, all of the nodes in the cluster can share VxVM objects. Disk groups can be simultaneously imported on up to 16 hosts, and Cluster File System (an option to VERITAS File System) is used to ensure that only one host can write to a disk group during write operations. The main benefits of cluster configurations are high availability and off-host processing.
VxFS Optional Features
The VRTSvxfs package also contains these optionally licensable features:
• VERITAS Quick I/O for Databases: VERITAS Quick I/O for Databases (referred to as Quick I/O) enables applications to access preallocated VxFS files as raw character devices. This provides the administrative benefits of running databases on file systems without the performance degradation usually associated with databases created on file systems. Quick I/O is a separately licensable feature available only with VERITAS Editions products.
  Note: In previous VxFS distributions, the QuickLog and Quick I/O features were supplied in separate packages (VRTSqlog and VRTSqio, respectively).
• VERITAS Cluster File System: VERITAS Storage Foundation Cluster File System (CFS) is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file. CFS is a separately licensable feature that requires an integrated set of VERITAS products to function. To configure a cluster and to provide failover support, CFS requires:
  – VERITAS Cluster Server (VCS): VCS supplies two major components integral to CFS: the Low Latency Transport (LLT) package and the Group Membership and Atomic Broadcast (GAB) package. LLT provides node-to-node communications and monitors network communications. GAB provides cluster state, configuration, and membership service, and monitors the heartbeat links between systems to ensure that they are active.
  – VERITAS Volume Manager (VxVM): CFS requires the Cluster Volume Manager (CVM) feature of VxVM to create the cluster volumes necessary for mounting cluster file systems.
Other Options Included with Foundation Suite
In addition to VERITAS Volume Manager and VERITAS File System, VERITAS Foundation Suite includes these optional products and features:
• VERITAS QuickLog: VERITAS QuickLog is part of the VRTSvxfs package and is a feature designed to enhance file system performance. Although QuickLog can improve file system performance, VxFS does not require QuickLog to operate effectively. The VERITAS QuickLog license is included with VERITAS Foundation Suite and VERITAS Foundation Suite HA.
• VERITAS SANPoint Control QuickStart: VERITAS SANPoint Control is a separate software tool that you can use in a storage area network (SAN) environment to provide comprehensive resource management and end-to-end data path management from host to storage. With SANPoint Control, you can have a single, centralized, consistent storage management interface to simplify the complex tasks involved in deploying, managing, and growing a multivendor networked storage environment. The QuickStart version is a limited-feature version of this tool that consists of the following packages on your VERITAS CD-ROM:
  – VRTSspc: The VERITAS SANPoint Control console
  – VRTSspcq: The VERITAS SANPoint Control QuickStart software
  Installing and operating VERITAS SANPoint Control is beyond the scope of this course. For detailed training, attend the VERITAS SANPoint Control course.
Licenses Required for Optional Features
The following table describes the products and licenses required to enable optional volume management features:
Feature                     Built into the           Licenses needed to
                            software package of:     enable the feature:
Disk group split and join   VxVM                     FlashSnap
Volume replication          VxVM                     Volume Replicator
Cluster Volume Manager      VxVM                     SANPoint Foundation Suite
FastResync                  VxVM                     FlashSnap or FastResync
Storage Checkpoints         VxFS                     FlashSnap or Database Edition
QuickLog                    VxFS                     Foundation Suite
Quick I/O                   VxFS                     Database Edition
Cluster File System         VxFS                     SANPoint Foundation Suite
What Is Enclosure-Based Naming?
When you install VxVM, you are asked if you want to use enclosure-based naming.
• Standard device naming is based on controllers, for example, c1t0d0s2.
• Enclosure-based naming is based on disk enclosures, for example, enc0.
• Enclosure-based naming benefits a SAN environment.
[Slide diagram: a host connects through controller c1 and a Fibre Channel hub/switch to disk enclosures enc0, enc1, and enc2.]
Note: Disk naming is covered in more detail in a later lesson.
Before Installing VxVM: What Is Enclosure-based Naming?
When you install VxVM, you are asked if you want to use enclosure-based naming. As an alternative to standard disk device naming (for example, c0t0d0 or hdisk2), VxVM 3.2 and later versions provide enclosure-based naming. An enclosure, or disk enclosure, is an intelligent disk array, which permits hot-swapping of disks. With VxVM, disk devices can be named for enclosures rather than for the controllers through which they are accessed.
In a storage area network (SAN) that uses Fibre Channel hubs or fabric switches, information about disk location provided by the operating system may not correctly indicate the physical location of the disks. For example, c#t#d#s# naming assigns controller-based device names to disks in separate enclosures that are connected to the same host controller. Enclosure-based naming allows VxVM to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures.
Enclosure-based naming is also useful when managing the dynamic multipathing (DMP) feature of VxVM. For example, if two paths (c1t99d0 and c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP metanode, represented by an enclosure name such as enc0_0, to access the disk. You can also change the naming scheme at a later time by using VxVM utilities.
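For illustration, the same disk might be reported by vxdisk list as c1t0d0s2 under controller-based naming or as enc0_0 under enclosure-based naming; the listing below is a sketch only (the disk group, disk media name, and status values are placeholders, and columns can vary by platform and release):
# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
enc0_0       auto:cdsdisk    datadg01     datadg       online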
What Is a Default Disk Group?
When you install VxVM, you are asked if you want to set up a default disk group.
• In VxVM 4.0, the rootdg requirement no longer exists.
• You can set up a system-wide default disk group to which VxVM commands default if you do not specify a disk group.
• If you are upgrading from VxVM 3.5 to 4.0, then you must reboot in order to set the default disk group.
• If you choose not to set a default disk group at installation, you can set the default disk group later from the command line.
Note: The default disk group and other new disk group requirements are covered in more detail in a later lesson.
Before Installing VxVM: What Is a Default Disk Group?
When you install VxVM, you are asked whether you want to set up a system-wide default disk group. The main benefit of creating a default disk group is that VxVM commands default to that disk group if you do not specify a disk group on the command line. defaultdg specifies the default disk group and is an alias for the disk group name that should be assumed if a disk group is not specified in a command.
Note: You cannot use the following names for the default disk group because they are reserved words: bootdg, defaultdg, and nodg. These terms are covered in more detail in a later lesson.
In releases prior to Volume Manager 4.0, the default disk group was rootdg (the root disk group). For Volume Manager to function, the rootdg disk group had to exist and it had to contain at least one disk. The rootdg requirement no longer exists. There is no longer a requirement that you name any disk group rootdg, and any disk group that is named rootdg has no special properties by virtue of this name.
If you have upgraded your system to VxVM 4.0, you may find it convenient to continue to configure a disk group named rootdg as the default disk group (defaultdg). There is no requirement that both defaultdg and bootdg refer to the same disk group, nor that either the default disk group or the boot disk group be named rootdg.
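As a sketch of setting the default from the command line (the disk group name mydg is a placeholder, and this assumes the vxdctl defaultdg and vxdg defaultdg commands of VxVM 4.0, which are covered in detail in a later lesson), you might set and then display the system-wide default as follows:
# vxdctl defaultdg mydg
# vxdg defaultdg
mydg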
Installing VERITAS Products
VERITAS Storage Solutions installation menu:
• installer – Install multiple VERITAS products
Individual product installation scripts:
• installvm, installfs, installsf – Install VxVM, VxFS, or Storage Foundation individually
OS package installation commands:
• pkgadd (Solaris), swinstall (HP-UX), installp (AIX), rpm (Linux) – Install packages individually; then, run vxinstall.
With installation scripts, adding packages and initial VxVM configuration are performed together. With native package installation commands, initial VxVM configuration is a separate step.
Installing VxVM
Methods for Adding VxVM Packages
A first-time installation of VxVM involves adding the software packages and configuring VxVM for first-time use. You can add VERITAS product packages by using one of three methods:

Method: VERITAS installation menu
Command: installer
Notes: Use to install multiple VERITAS products interactively. Installs packages and configures VxVM for first-time use.

Method: Product installation scripts
Command: installvm, installfs, installsf
Notes: Install individual VERITAS products interactively. Installs packages and configures VxVM for first-time use.

Method: Native operating system package installation commands
Command: pkgadd (Solaris), swinstall (HP-UX), installp (AIX), rpm (Linux); then, to configure VxVM, run vxinstall
Notes: Install individual packages, for example when using your own custom installation scripts. First-time VxVM configuration must be run as a separate step.
Installation Menu

VERITAS Storage Solutions 4.0
VERITAS Product                        Version   Installed   Licensed
======================================================================
File System                                      no          no
Volume Manager                                   no          no
Volume Replicator                                no          no
Cluster Server                                   no          no
Cluster Server Traffic Director                  no          no
Storage Foundation                               no          no
Storage Foundation for Oracle                    no          no
Storage Foundation for DB2                       no          no
Storage Foundation for Sybase                    no          no
SANPoint Control QuickStart                      no          no
Enterprise Administrator GUI                     no

Selection Menu:
 I) Install/Upgrade a Product        C) Configure an Installed Product
 L) License a Product                P) Perform a Preinstallation Check
 U) Uninstall a Product              D) View a Product Description
 Q) Quit                             ?) Help

Enter a Selection: [I,C,L,P,U,D,Q,?]
Adding Packages with the VERITAS Installation Menu
The Installer is a menu-based installation utility that you can use to install any product contained on the VERITAS Storage Solutions CD-ROM. This utility acts as a wrapper for existing product installation scripts and is most useful when you are installing multiple VERITAS products or bundles, such as VERITAS Storage Foundation or VERITAS Storage Foundation for Databases.
When you add VxVM packages by using the installer utility, all VxVM and VEA packages are installed. If you want to add a specific package only, for example, only the VRTSvmdoc package, then you must add the package manually from the command line.
Note: The VERITAS Storage Solutions CD-ROM contains an installation guide (storage_solutions_ig.pdf) that describes how to use the installer utility. You should also read all product installation guides and release notes even if you are using the Installer utility.
To add the VxVM packages using the installer utility:
1 Log on as superuser.
2 Mount the VERITAS Storage Solutions CD-ROM.
3 Locate and invoke the installer script:
# cd /cdrom/CD_name
# ./installer
4 The VERITAS licensing package is installed automatically, and the product status page is displayed. This list displays the VERITAS products on the CD-ROM and the installation and licensing status of each product. A menu of choices is also displayed.
VERITAS Storage Solutions 4.0
VERITAS Product                        Version   Installed   Licensed
======================================================================
File System                                      no          no
Volume Manager                                   no          no
Volume Replicator                                no          no
Cluster Server                                   no          no
Cluster Server Traffic Director                  no          no
Storage Foundation                               no          no
Storage Foundation for Oracle                    no          no
Storage Foundation for DB2                       no          no
Storage Foundation for Sybase                    no          no
SANPoint Control QuickStart                      no          no
Enterprise Administrator GUI                     no
Selection Menu:
 I) Install/Upgrade a Product        C) Configure an Installed Product
 L) License a Product                P) Perform a Preinstallation Check
 U) Uninstall a Product              D) View a Product Description
 Q) Quit                             ?) Help
Enter a Selection: [I,C,L,P,U,D,Q,?]
5 Type L to enter a license for a product. Follow the instructions to enter a license key and return to the product status page.
6 Type I to install a product. Follow the instructions to select the product that you want to install. Installation begins automatically.
7 Follow the instructions to interactively install the product. When the installation is complete, you return to the product status page, where you can install another product or exit the installation menu.
After installation, the installer creates three text files that can be used for auditing or debugging. The names and locations of each file are displayed at the end of the installation, and the files are located in /opt/VRTS/install/logs:
• Installation log file: Contains all commands executed during installation, their output, and any errors generated by the commands. Used for debugging installation problems and for analysis by VERITAS Support.
• Response file: Contains configuration information entered during the procedure. Can be used for future installation procedures when using a product installation script with the -responsefile option.
• Summary file: Contains the output of VERITAS product installation scripts. Shows products that were installed, locations of log and response files, and installation messages displayed.
Installation Scripts
# cd /cdrom/CD_name/product_name
# ./installvm
1. Enter the system name(s).
2. Initial system check is performed.
3. Infrastructure packages are installed.
4. Select optional packages to install.
5. VxVM package requirements are checked, and then packages are installed.
6. Licenses are checked, and you can install licenses as needed.
7. You can configure VxVM now:
   a. Use enclosure-based naming?
   b. Start VxVM?
   c. Set a default disk group?
   d. If you chose to start VxVM now, then VxVM is started.
Adding Packages Using the Product Installation Scripts
You can install VERITAS products by using the individual product installation scripts. For example:
• installvm installs VERITAS Volume Manager.
• installfs installs VERITAS File System.
• installsf installs VERITAS Storage Foundation (both VxVM and VxFS).
With these utilities, you can also specify various installation options on the command line, for example:
Command Line Option                     Function
-precheck system1 system2 ...           Performs a preinstallation check to determine if systems meet all installation requirements.
-installonly system1 system2 ...        Installs packages, but does not configure the product.
-configure system1 system2 ...          Configures the product, if the -installonly option was previously used.
-pkgpath package_path                   Designates a path to VERITAS packages. Useful, for example, in cluster configurations to enable package installation without copying packages to all systems in the cluster.
-responsefile file_name                 Uses installation and configuration information stored in a response file instead of prompting for information.
See the VERITAS Volume Manager Installation Guide for a complete list of options.
Once you log on as superuser, mount the VERITAS CD-ROM, and invoke the product installation script, the installvm script follows this sequence:
# cd /cdrom/CD_name/product_name
# ./installvm
1 When prompted, enter the name of the system or systems on which you want to install VxVM.
2 The installation script performs an initial system check to ensure that a compatible OS version is installed, that the VRTSvxvm package is not already installed, and that any required patches are installed. If the initial system check is not successful, messages are displayed to explain what actions to take before attempting installation again.
3 The installvm script proceeds to install the infrastructure packages: VRTScpi and VRTSvlic.
4 Next, you have the option to install some or all of the optional packages: VRTSobgui, VRTSvmman, and VRTSvmdoc.
5 Next, all VxVM packages are displayed and installation requirement checks are performed. If requirement checks are successful, then you can begin installing the required VxVM packages. Output is displayed as each package is installed.
6 The script then checks whether you have a license installed. You can install or add licenses as needed.
7 At this stage, you can choose to configure VxVM now or later. If you choose to configure now:
a You are prompted to specify whether you want to use enclosure-based naming.
b You are prompted to specify whether you want to start VxVM.
c You are prompted to specify whether you want to set up a default disk group. If so, enter a disk group name to be used as the system-wide default.
d If you chose to start VxVM now, then VxVM is started.
If you are installing VxFS, then you can invoke the script installfs and follow a similar procedure.
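For example, to run only the preinstallation check described in the option table above before committing to an installation (the system names here are placeholders):
# cd /cdrom/CD_name/product_name
# ./installvm -precheck sys1 sys2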
Adding Packages Manually
• Ensure that packages are installed in the correct order.
• Always install VRTSvlic first.
• Always install the VRTSvxvm package before other VxVM packages.
• Documentation and manual pages are optional.
• Follow the product installation guides.
• After installing the packages using OS-specific commands, run vxinstall to configure VxVM for the first time.
Adding Packages Manually
Solaris
After logging on as superuser and mounting the CD-ROM, you use the pkgadd command:
# pkgadd -d . VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman
# pkgadd -a VRTSobadmin -d . VRTSob VRTSobgui
# pkgadd -d . VRTSalloc VRTSddlpr
# pkgadd -d . VRTSfspro VRTSvmpro
# pkgadd -d . VRTScpi VRTSperl
# pkgadd -d . VRTSvxfs VRTSfsdoc
HP-UX
After logging on as superuser and mounting the CD-ROM, use the swinstall command:
# swinstall -x autoreboot=true -s /cdrom/product_name/pkgs packages
For example:
# swinstall -x autoreboot=true -s /cdrom/volume_manager3.5/pkgs VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman VRTSob VRTSobgui VRTSalloc VRTSddlpr VRTSfspro VRTSvmpro VRTScpi VRTSperl VRTSvxfs VRTSfsdoc
HP-UX 11i Note: Starting with the December 2001 release of HP-UX 11i, the Base version of VxVM is installed by default when you install any HP-UX 11i Operating Environment (OE). You do not need to add the VxVM packages separately. The Base version of VxVM provides basic online disk management features. To access enhanced volume manager capabilities, such as mirroring, RAID-5, and DMP, or other VxVM add-on products, such as fast mirror resynchronization, you must add licenses for these features. All enhanced features and add-on products are included in the VxVM software; however, you must add the appropriate licenses to enable their use.
AIX
After logging on as superuser and mounting the CD-ROM, use the standard AIX installp command. The syntax to add and commit one or more filesets is:
installp -ac [options] -d location fileset [...]
Useful additional options include:
• -g (install prerequisite filesets)
• -N (override saving of replaced files)
• -X (attempt to expand any file systems where there is insufficient space to do the installation)
# installp -acgNX -d . VRTSvlic VRTScpi VRTSperl
# installp -acgNX -d . VRTSvxvm VRTSvmman VRTSvmdoc
# installp -acgNX -d . VRTSvxfs VRTSfsdoc
# installp -acgNX -d . VRTSob VRTSobgui
# installp -acgNX -d . VRTSvmpro VRTSfspro
Linux
On Linux, you use the Red Hat package manager, rpm, to add packages:
rpm -ihv package_name
The -i option signifies installation mode. You use the -h and -v options to format the installation output.
# rpm -ihv VRTSvlic-3.00-007.i386.rpm
# rpm -ihv VRTSvxvm-3.2-update1_GA.i686.rpm
# rpm -ihv VRTSvmdoc-3.2-update1_GA.i686.rpm
# rpm -ihv VRTSvmman-3.2-update1_GA.i686.rpm
# rpm -ihv VRTSob-3.0.2-261.i686.rpm
# rpm -ihv VRTSobgui-3.0.2-261.i686.rpm
# rpm -ihv VRTSvmpro-3.2-update1_GA.i686.rpm
# rpm -ihv VRTSfspro-3.4.2-R7_GA.i686.rpm
# rpm -ihv VRTSvxfs-3.4.2-R7_GA_2.4.7-10.i686.rpm
# rpm -ihv VRTSfsdoc-3.4.2-R7_GA.i686.rpm
Verifying Package Installation
To verify package installation, use OS-specific commands:
• Solaris: # pkginfo -l VRTSvxvm
• HP-UX: # swlist -l product VRTSvxvm
• AIX: # lslpp -l VRTSvxvm
• Linux: # rpm -qal VRTSvxvm
Verifying Package Installation
If you are not sure whether VERITAS packages are installed, or if you want to verify which packages are installed on the system, you can view information about installed packages by using OS-specific commands to list package information.
Solaris
To list all installed packages on the system:
# pkginfo
To restrict the list to installed VERITAS packages:
# pkginfo | grep VRTS
To display detailed information about a package:
# pkginfo -l VRTSvxvm
HP-UX
To list all installed packages on the system:
# swlist -l product
To restrict the list to installed VERITAS packages:
# swlist -l product | grep VRTS
To display detailed information about a package:
# swlist -l product VRTSvxvm
AIX
To list all installed packages on the system:
# lslpp -l
To restrict the list to installed VERITAS packages, type:
# lslpp -l 'VRTS*'
To verify that a particular fileset has been installed, use its name, for example:
# lslpp -l VRTSvxvm
Linux
To verify package installation on the system:
# rpm -q[al] package_name
For example, to verify that the VRTSvxvm package is installed:
# rpm -q VRTSvxvm
The -l option additionally lists the files installed by the package, and the -a option queries all installed packages.
The vxinstall Program
• After manually adding VxVM packages, you must run vxinstall to configure VxVM for first-time use.
• The vxinstall program is an interactive program that guides you through the initial VxVM configuration.
• The main steps in the vxinstall process are:
  1. Start the vxinstall program:
     # vxinstall
  2. Enter license keys.
  3. Select a naming method, either enclosure-based or traditional naming.
  4. If desired, set up a system-wide default disk group.
Configuring VxVM Using vxinstall
If you manually added VxVM packages, you must run vxinstall to configure Volume Manager for initial use. You must be logged on as superuser in order to run the vxinstall program.
VxVM User Interfaces
VxVM supports three user interfaces:
• VERITAS Enterprise Administrator (VEA): A GUI that provides access through icons, menus, wizards, and dialog boxes
• Command Line Interface (CLI): UNIX utilities that you invoke from the command line
• Volume Manager Support Operations (vxdiskadm): A menu-driven, text-based interface also invoked from the command line
Note: vxdiskadm only provides access to certain disk and disk group management functions.
VxVM User Interfaces
Volume Manager User Interfaces
Volume Manager supports three user interfaces. Volume Manager objects created by one interface are compatible with those created by the other interfaces.
• VERITAS Enterprise Administrator (VEA): VERITAS Enterprise Administrator (VEA) is a graphical user interface to Volume Manager and other VERITAS products. VEA provides access to VxVM functionality through visual elements such as icons, menus, wizards, and dialog boxes. Using VEA, you can manipulate Volume Manager objects and also perform common file system operations.
• Command Line Interface (CLI): The command line interface (CLI) consists of UNIX utilities that you invoke from the command line to perform Volume Manager and standard UNIX tasks. You can use the CLI not only to manipulate Volume Manager objects, but also to perform scripting and debugging functions. Most of the CLI commands require superuser or other appropriate privileges. The CLI commands perform functions that range from the simple to the complex, and some require detailed user input.
• Volume Manager Support Operations (vxdiskadm): The Volume Manager Support Operations interface, commonly called vxdiskadm, is a menu-driven, text-based interface that you can use for disk and disk group administration functions. The vxdiskadm interface has a main menu from which you can select storage management tasks.
A single VEA task may perform multiple command-line tasks.
VEA: Main Window
[Slide screen shot of the VEA main window, with the menu bar, toolbar, object tree, grid, Console/Task History tabs, and status area labeled.]
Using the VEA Interface
The VERITAS Enterprise Administrator (VEA) is the graphical user interface for Volume Manager and other VERITAS products. You can use the VxVM features of VEA to administer disks, volumes, and file systems on local or remote machines. VEA replaces the earlier graphical user interface, Volume Manager Storage Administrator (VMSA).
VEA is a Java-based interface that consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. The VEA client can run on any machine that supports the Java 1.4 Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or Windows.
Some VxVM features of VEA include:
• Remote Administration: You can perform VxVM administration remotely or locally. The VEA client runs on UNIX or Windows machines.
• Security: VEA can only be run by users with appropriate privileges, and you can restrict access to a specific set of users.
• Multiple Host Support: The VEA client can provide simultaneous access to multiple host machines. You can use a single VEA client session to connect to multiple hosts, view the objects on each host, and perform administrative tasks on each host. Each host machine must be running the VEA server.
• Multiple Views of Objects: VEA provides multiple ways to view Volume Manager objects, including a hierarchical tree layout, a list format, and a variety of graphical views.
The VEA Main Window
VEA provides a variety of ways to view and manipulate Volume Manager objects. When you launch VEA, the VEA main window is displayed. The VEA main window consists of the following components:
• The hierarchical object tree provides a dynamic display of VxVM objects and other objects on the system.
• The grid lists objects that belong to the group selected in the object tree.
• The menu bar and toolbar provide access to tasks.
• The Console/Task History tabs display a list of alerts and a list of recently performed tasks.
• The status area identifies the currently selected server host and displays an alert icon when there is a problem with the task being performed. Click the icon to display the VEA Error console, which contains a list of messages related to the error.
Other Views in VEA
In addition to the main window, you can also view VxVM objects in other ways:
• The Disk View window provides a graphical view of objects in a disk group.
• The Volume View window provides a close-up graphical view of a volume.
• The Volume to Disk Mapping window provides a tabular view of the relationship between volumes and their underlying disks.
• The Object Properties window provides information about a selected object.
VEA: Accessing Tasks
Three ways to access tasks:
1. Menu bar
2. Toolbar
3. Context menu
Accessing Tasks Through VEA
Specific procedures for using VEA to perform specific tasks are covered in detail throughout this training. While this course describes one method for using VEA to access a task, you can access most VEA tasks in three ways:
• Through the menu bar: You can launch most tasks from the menu bar in the main window. The Actions menu is context sensitive and changes its options based on the type of object that you select in the tree or grid.
• Through the toolbar: You can launch some tasks from the toolbar in the main window by clicking on one of the icons. The icons are disabled when the related actions are not appropriate.
• Through context-sensitive popup menus: You can access context-sensitive popup menus by right-clicking an object. Popup menus provide access to tasks or options that are appropriate for the selected object.
Setting VEA Preferences
You can customize general VEA environment attributes through the Preferences window (Tools—>Preferences).
VEA: Viewing Tasks
The Task History window contains a list of tasks performed in the current session.
To view underlying command lines, right-click a task and select Properties.
Viewing Commands Through the Task History Window
The Task History window displays a history of the tasks performed in the current session. Each task is listed with properties, such as the target object of the task, the host, the start time, the task status, and task progress.
• Displaying the Task History window: To display the Task History window, click the Tasks tab at the bottom of the main window.
• Aborting a Task: To abort a task, right-click a task and select Abort Task.
• Pausing a Task: To temporarily stop a task, right-click a task and select Pause Task.
• Resuming a Task: To restart a paused task, right-click a task and select Resume Task.
• Reducing Task Priority: To slow down an I/O intensive task in progress and reduce the impact on system performance, right-click a task and select Throttle Task. In the Throttle Task dialog box, indicate how much you want to slow down the task. You can select Throttle All Tasks to slow all VxVM tasks.
• Clearing the Task History: Tasks are persistent in the Task History window. To remove completed tasks from the window, right-click a task and select Clear All Finished Tasks.
• Viewing CLI Commands: To view the command lines executed for a task, right-click a task and select Properties. The Properties window is displayed for the task. The CLI commands issued are displayed in the “Commands executed” field.
VEA: Viewing Commands
Command log file:
• Located in /var/adm/vx/veacmdlog
• Displays a history of tasks performed in the current session and in previous sessions
Example command log file entry:
Description: Create Volume
Date: Thu May 9 15:53:49 2002
Command: /usr/sbin/vxassist -g datadg -b make data2vol 122880s layout=striped stripeunit=128 ncolumn=2 comment="" alloc=
Output:
Exit Code: 0
Viewing Commands Through the Command Log File
The command log file contains a history of VEA tasks performed in current and previous sessions. The command log file contains a description of each task and its properties, such as the description, date, command issued, output, and the exit code. For failed tasks, the Output field includes relevant error messages.
By default, the command log is located in /var/adm/vx/veacmdlog on the server. This file is created after the first execution of a task in VEA. To display the commands as they are executed (for example, for use in scripting), you can open a separate window and use the tail command:
# tail -f /var/adm/vx/veacmdlog
Note: veacmdlog replaces the command log from earlier versions of VEA called /var/vx/isis/command.log. veacmdlog serves the same purpose and behaves in the same way as the former command.log.
VEA: Viewing Help Information
To access VEA Help, select Help—>Contents.
Displaying VEA Help Information
VEA contains an extensive database of Help information that is accessible from the menu bar. To access VEA Help information, select Help—>Contents. The Help window is displayed. In the Help window, you can view help information in three ways:
• Click a topic in the Contents tab.
• Select a topic in the alphabetical index listing on the Index tab.
• Search for a specific topic by using the Search tab.
Command Line Interface
• You can administer CLI commands from the UNIX shell prompt.
• Commands can be executed individually or combined into scripts.
• Most commands are located in:
  /etc/vx/bin
  /usr/sbin
  /usr/lib/vxvm/bin
• Add these directories to your PATH environment variable to access the commands.
• Examples of CLI commands include:
  vxassist – Creates and manages volumes
  vxprint – Lists VxVM configuration records
  vxdg – Creates and manages disk groups
  vxdisk – Administers disks under VM control
Using the Command Line Interface
The Volume Manager command line interface (CLI) provides commands used for administering VxVM from the shell prompt on a UNIX system. CLI commands can be executed individually for specific tasks or combined into scripts. The VxVM command set ranges from commands requiring minimal user input to commands requiring detailed user input. Many of the VxVM commands require an understanding of Volume Manager concepts. Most VxVM commands require superuser or other appropriate access privileges.
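For convenience, you can place the command directories listed on the slide in your PATH; a minimal sketch for a Bourne-style shell profile (the profile file to edit depends on your shell and site conventions):
PATH=$PATH:/usr/sbin:/etc/vx/bin:/usr/lib/vxvm/bin
export PATH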
Accessing Manual Pages
• CLI commands are detailed in manual pages.
• Manual pages are installed by default in /opt/VRTS/man.
• Add this directory to the MANPATH environment variable.
• To access a manual page:
  # man command_name
• Examples:
  # man vxassist
  # man mount_vxfs
Accessing Manual Pages for CLI Commands
Detailed descriptions of VxVM and VxFS commands, the options for each utility, and details on how to use them are located in the VxVM and VxFS manual pages.
Linux Note
On Linux, you must also set the MANSECT variable.
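As with PATH, a minimal sketch of the corresponding MANPATH setting for a Bourne-style shell (on Linux, the value required for MANSECT depends on the release and is given in the installation guide, so it is not shown here):
MANPATH=$MANPATH:/opt/VRTS/man
export MANPATH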
The vxdiskadm Interface
# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 ...
 list   List disk information
 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus
Using the vxdiskadm Interface
The vxdiskadm command is a CLI command that you can use to launch the Volume Manager Support Operations menu interface. You can use the Volume Manager Support Operations interface, commonly referred to as vxdiskadm, to perform common disk management tasks. The vxdiskadm interface is restricted to managing disk objects and does not provide a means of handling all other VxVM objects. Each option in the vxdiskadm interface invokes a sequence of CLI commands. The vxdiskadm interface presents disk management tasks to the user as a series of questions, or prompts.
To start vxdiskadm, you type vxdiskadm at the command line to display the main menu. The vxdiskadm main menu contains a selection of main tasks that you can use to manipulate Volume Manager objects. Each entry in the main menu leads you through a particular task by providing you with information and prompts. Default answers are provided for many questions, so you can easily select common answers. The menu also contains options for listing disk information, displaying help information, and quitting the menu interface.
The tasks listed in the main menu are covered throughout this training. Options available in the menu differ somewhat by platform. See the vxdiskadm(1m) manual page for more details on how to use vxdiskadm.
Installing VEA
• Install the VEA server on a UNIX machine running VxVM.
• Install the VEA client on any machine that supports the Java 1.4 Runtime Environment.
Server packages:
• VRTSob
• VRTSvmpro
• VRTSfspro
• VRTSmuob
Client packages:
• VRTSobgui (UNIX)
• win32/VRTSobgui.msi (Windows)
Installation administration file: VRTSobadmin
VEA is automatically installed when you run the VxVM installation scripts. You can also install it by manually adding packages.
Installing and Starting the VEA Software
VEA consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. You can install the VEA client on the same machine or on any other UNIX or Windows machine that supports the Java 1.4 Runtime Environment.
Installing the VEA Server and Client on UNIX
If you install VxVM by using the Installer utility, you are prompted to install both the VEA server and client packages automatically. If you did not install all of the components by using the Installer, then you can add the VEA packages separately.
VEA is not compatible with earlier VxVM GUIs, such as VMSA. You cannot run VMSA with VxVM 3.5 and later. If you currently have VMSA installed on your machine, close any VMSA clients, kill the VMSA server, and remove the VRTSvmsa package before you add the VEA packages.
It is recommended that you upgrade VEA to the latest version released with VxVM 4.0 in order to take advantage of new functionality built into VEA. However, you can use the VEA with 4.0 to manage 3.5.2 and later releases.
When manually adding packages, you must install VRTSvlic and VRTSvxvm before installing the VEA packages: VRTSob, VRTSobgui, VRTSvmpro, and VRTSfspro. After installation, you should also add the VEA startup scripts directory, /opt/VRTSob/bin, to the PATH environment variable.
Installing the VEA Client on Windows
The VEA client runs on Windows NT, Windows 2000, Windows XP, Windows ME, Windows 98, and Windows 95 machines. If you plan to run VEA from a Windows machine, install the optional Windows package after you have installed the VEA server on a UNIX machine. If you are upgrading the VEA client, remove the old VEA program before installing the new one.
To install the VEA client on a Windows machine, locate the VRTSobgui.msi program on the VERITAS CD-ROM. Double-click the program and follow the instructions presented by the installation wizard.
Starting the VEA Server and Client
Once installed, the VEA server starts up automatically at system startup. To start the VEA server manually:
1. Log on as superuser.
2. Start the VEA server by invoking the server program:
   # /opt/VRTSob/bin/vxsvc
When the VEA server is started:
• /var/vx/isis/vxisis.lock ensures that only one instance of the VEA server is running.
• /var/vx/isis/vxisis.log contains server process log messages.
To start the VEA client:
• On UNIX: # vea
• On Windows: Start—>Programs—>VERITAS—>VERITAS Enterprise Administrator
Starting the VEA Server
In order to use VEA, the VEA server must be running on the UNIX machine to be administered. Only one instance of the VEA server should be running at a time. Once installed, the VEA server starts up automatically at system startup. You can manually start the VEA server by invoking vxsvc, or by invoking the startup script itself, for example, on Solaris:
# /etc/rc2.d/S50isisd start
The VEA client can provide simultaneous access to multiple host machines. Each host machine must be running the VEA server.
Starting the VEA Client
To start the VEA client on UNIX:
# vea
In the Connection dialog box, specify your host name, user name, and password. You can mark the “Remember password” check box to avoid typing credentials on subsequent connections from that machine. Note: Entries for your user name and password must exist in the password file or corresponding Network Information Name Service table on the machine to be administered. Your user name must also be included in the VERITAS administration group (vrtsadm, by default) in the group file or NIS group table. If the vrtsadm entry does not exist, only root can run VEA.
You can configure VEA to automatically connect to hosts when you start the VEA client. In the VEA main window, the Favorite Hosts node can contain a list of hosts that are reconnected by default at the startup of the VEA client.
• To add a host to the Favorite Hosts list, right-click the name of a currently connected host or a host listed under History, and select Add to Favorites.
• To remove a host from the Favorite Hosts list, right-click the host under Favorite Hosts, and select Remove from Favorite Hosts.
• To temporarily disable automatic connection to a host, right-click the host under Favorite Hosts, and select Reconnect at Startup. A check mark beside the menu entry indicates that VEA reconnects to that host at startup. By removing the check mark, you temporarily disable automatic connection to the host; restoring the check mark reenables it.
Managing VEA
The VEA server program is:
/opt/VRTSob/bin/vxsvc
To confirm that the VEA server is running:
# vxsvc -m
To stop and restart the VEA server:
# /etc/init.d/isisd restart (Solaris)
# /sbin/init.d/isisd restart (HP-UX)
To kill the VEA server process:
# vxsvc -k
To display the VEA version number:
# vxsvc -v
To monitor VEA tasks and events, click the Logs node in the VEA object tree.
Managing the VEA Server The VEA server program is /opt/VRTSob/bin/vxsvc. Confirming VEA Server Startup # vxsvc -m Current state of server: RUNNING
Stopping and Restarting the VEA Server # /etc/init.d/isisd restart (Solaris) # /sbin/init.d/isisd restart (HP-UX)
You can kill the server process using either of the following commands:
# vxsvc -k
# kill `cat /var/vx/isis/vxisis.lock`
Displaying the VEA Version # vxsvc -v 3.0.2.255
Monitoring VEA Event and Task Logs You can monitor VEA server events and tasks from the Event Log and Task Log nodes in the VEA object tree. You can also view the VEA log file, which is located at /var/vx/isis/vxisis.log. This file contains trace messages for the VEA server and VEA service providers.
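For example, to watch the trace file while reproducing a problem (standard UNIX commands; nothing VxVM-specific beyond the vxsvc status check shown earlier):
# tail -f /var/vx/isis/vxisis.log (follow new trace messages as they are written)
# vxsvc -m (confirm the server state while watching the log)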
Controlling Access to VEA
Create the group vrtsadm in /etc/group and specify users who have permission to access VEA:
root::0:root
other::1:
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
...
sysadmin::14:
nobody::60001:
noaccess::60002:
nogroup::65534:
teleback::100:
vrtsadm::20:root,maria,bill
Controlling User Access to VEA Only users with appropriate privileges can run VEA. By default, only root can run VEA. If users other than root need to access VEA, you can set up the optional security feature and specify which users can run VEA. You specify which users have access to VEA after you install the software. To set up a list of users who have permission to use VEA, add a group named vrtsadm to the group file /etc/group or to the Network Information Service (NIS) group table on the machine to be administered. The vrtsadm group does not exist by default. If the vrtsadm group does not exist, only root has access to VEA. If the vrtsadm group exists, it must include the user names of any users, including root, that you want to have access to VEA. For example, to give users root, maria, and bill access to VEA, you add the following line to the /etc/group file: vrtsadm::999:root,maria,bill
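As an alternative to editing /etc/group directly, you could use the standard OS group administration commands. This is only a sketch, assuming a Solaris system, an arbitrary GID of 999, and the example users maria and bill; note that usermod -G replaces a user's existing supplementary group list, so include any groups the user already belongs to:
# groupadd -g 999 vrtsadm
# usermod -G vrtsadm maria
# usermod -G vrtsadm bill
# grep vrtsadm /etc/group (verify the new entry)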
Customizing VEA Security
• VEA uses a registry file to store configuration information, attributes, and values: /etc/vx/isis/Registry
• Use vxregctl to modify the registry values.
• To authorize both the vrtsadm and vxadmins groups to access VEA:
# vxregctl /etc/vx/isis/Registry setvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion/Security AccessGroups REG_SZ "vrtsadm;vxadmins"
• To verify the value of the AccessGroups attribute:
# vxregctl /etc/vx/isis/Registry queryvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion/Security AccessGroups
Value of AccessGroups is: vrtsadm;vxadmins
Modifying Group Access All VEA configuration information, including settings, attributes, and values, is stored in a registry file, which is located by default at /etc/vx/isis/Registry. You can control some aspects of VEA, such as group access, by modifying the values stored in the registry file. Note: Normally, the default registry settings are adequate. It is good practice to back up the registry file before making any changes. To modify, add, or delete registry entries in the registry file, use the vxregctl command: vxregctl /etc/vx/isis/Registry setvalue keyname [attribute...]
For example, the vrtsadm group is the default group name. You can change the groups that are granted VEA access by changing the string value AccessGroups under the key HKEY_LOCAL_MACHINE/SOFTWARE/VERITAS/VxSvc/CurrentVersion/Security in the Registry file. To authorize both vrtsadm and vxadmins, type: # vxregctl /etc/vx/isis/Registry setvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion/Security AccessGroups REG_SZ "vrtsadm;vxadmins"
You can authorize individual users without adding them to a specific group with the value named AccessUsers under the same key, with similar syntax. No users are authorized this way by default. It is better practice to authorize groups rather than users. When you make a change to the registry file, you can use the vxregctl queryvalue command to verify the value that you set: vxregctl /etc/vx/isis/Registry queryvalue keyname [attribute...]
For example, to verify the value of the AccessGroups attribute: # vxregctl /etc/vx/isis/Registry queryvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion/Security AccessGroups Value of AccessGroups is: vrtsadm;vxadmins
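Similarly, assuming that AccessUsers takes a semicolon-separated REG_SZ string in the same format as AccessGroups, you could authorize the individual users maria and bill as follows:
# vxregctl /etc/vx/isis/Registry setvalue SOFTWARE/VERITAS/VxSvc/CurrentVersion/Security AccessUsers REG_SZ "maria;bill"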
For more information on the vxregctl command, see the vxregctl(1m) manual page.
Summary You should now be able to:
• Identify operating system compatibility and other preinstallation considerations.
• Obtain license keys, add licenses by using vxlicinst, and view licenses by using vxlicrep.
• Install VxVM interactively, by using installation utilities, and manually, by adding software packages and running the vxinstall program.
• Describe the three VxVM user interfaces.
• Install and start the VEA software packages.
• Manage the VEA server by displaying server status, version, task logs, and event logs.
Summary This lesson described guidelines for a first-time installation of VERITAS Volume Manager (VxVM). Procedures for adding license keys, adding the VxVM software packages, and running the VxVM installation program were covered, as well as an introduction to the three interfaces used to manage VERITAS Volume Manager.
Next Steps In the next lesson, you begin using Volume Manager by learning how to manage disks.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide: This guide provides detailed information on volume management and system administration using VERITAS Volume Manager.
• VERITAS Volume Manager Installation Guide: This guide provides information on installing and initializing VxVM and the VERITAS Enterprise Administrator graphical user interface.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
• VERITAS Volume Manager Release Notes: This document provides software version release information for VERITAS Volume Manager.
Lab 2: Installation and Interfaces
• In this lab, you install VxVM and VxFS, set up VEA, and explore VEA interfaces and options. You also invoke the vxdiskadm menu interface and display information about CLI commands by accessing the VxVM manual pages.
• Lab instructions are in Appendix A.
• Lab solutions are in Appendix B.
Lab 2: Installation and Interfaces To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Lesson 3 Managing Disks and Disk Groups
Overview
Lesson roadmap: Virtual Objects; Installation and Interfaces; Managing Disks and Disk Groups; Creating Volumes; Configuring Volumes; Reconfiguring Volumes Online; Encapsulation and Rootability; Recovery Essentials
Introduction Overview In this lesson, you learn how to perform tasks associated with the management of disks and disk groups. This lesson describes device-naming schemes, how to add a disk to a disk group, how to view disk and disk group information, and how to add, remove, rename, and move a disk. This lesson also describes procedures for creating, deporting, importing, destroying, and upgrading a disk group. Importance Before you can create virtual volumes, you must learn how to configure your physical disks so that VERITAS Volume Manager (VxVM) can manage the disks. By bringing physical disks under Volume Manager control and adding those disks to a disk group, you enable VxVM to use the disk space to create volumes. A disk group is an organizational structure that enables VxVM to perform disk management tasks. Managing disk groups is important in effectively managing your virtual storage environment.
Objectives After completing this lesson, you will be able to:
• Describe the features and benefits of the two device-naming schemes: traditional and enclosure-based naming.
• Identify the stages of VxVM disk configuration.
• Create a disk group by using VEA and command line utilities.
• View disk and disk group information and identify disk status.
• Manage disks, including adding a disk to a VxVM disk group, removing a disk from a disk group, changing the disk media name, and moving an empty disk from one disk group to another.
• Manage disk groups, including deporting and importing a disk group, moving a disk group, renaming a disk group, destroying a disk group, and upgrading the disk group version.
Outline of Topics
• Naming Disk Devices
• Disk Configuration Stages
• Creating a Disk Group
• Viewing Disk and Disk Group Information
• Managing Disks
• Managing Disk Groups
Traditional Device Naming Traditional device naming in VxVM is:
• Operating system-dependent
• Based on physical connectivity information
Examples:
Solaris: /dev/[r]dsk/c1t9d0s2
HP-UX: /dev/[r]dsk/c3t2d0 (no slice)
AIX: /dev/hdisk2
Linux: /dev/sda, /dev/hda
Naming Disk Devices Device Naming Schemes In VxVM, device names can be represented using the traditional operating system-dependent format or using an OS-independent format based on enclosure names. Traditional Device Naming Traditionally, device names in VxVM have been represented in the way that the operating system represents them. For example, Solaris and HP-UX both use the format c#t#d# in device naming, which is derived from the controller, target, and disk number. VxVM parses disk names in this format to retrieve connectivity information for disks. Other operating systems have different conventions.
Enclosure-Based Naming
Enclosure-based naming:
• Is OS-independent
• Is based on the logical name of the enclosure
• Can be customized to make names meaningful
(Diagram: a host with controllers c1 and c2 connected through Fibre Channel switches to disk enclosures englab0, englab1, and englab2.)
Enclosure-Based Naming With VxVM version 3.2 and later, VxVM provides a new device naming scheme, called enclosure-based naming. With enclosure-based naming, the name of a disk is based on the logical name of the enclosure, or disk array, in which the disk resides. The default logical name of an enclosure is typically based on the vendor ID. For example:
Disk Array          Default Enclosure Name    Default Enclosure-Based Disk Names
Sun SENA A5000      sena0                     sena0_1, sena0_2, sena0_3, ...
Sun StorEdge T3     purple0                   purple0_1, purple0_2, purple0_3, ...
EMC                 emc0                      emc0_1, emc0_2, emc0_3, ...
You can customize logical enclosure names to provide meaningful names, such as names based on the location of an enclosure in a building or lab. For example, you can rename three T3 disk arrays in an engineering lab as follows:
Default Enclosure Name    Location            Customized Enclosure Name
purple0                   Engineering Lab     englab0
purple1                   Engineering Lab     englab1
purple2                   Engineering Lab     englab2
Enclosure names are:
• Persistent: Logical enclosure names are persistent across reboots.
• Customizable: Logical enclosure names are customizable. You can provide meaningful names that are, for example, based on their location in a building or lab site. You can rename enclosures during the installation process or later by using command line utilities.
• Used by VxVM utilities: With enclosure-based naming, VxVM disk and volume management utilities, such as vxdiskadm, vxdisk, and vxassist, display disk device names in terms of the enclosures in which they are located. When you create volumes and allocate disk space to volumes, you can take advantage of VxVM’s enclosure awareness to specify data placement policies. Enclosure awareness is also used in administering multipathed disks, and internally, the VxVM configuration daemon vxconfigd uses enclosure information to determine metadata placement policies.
Benefits of Enclosure-Based Naming Benefits of enclosure-based naming include:
• Easier fault isolation: By using enclosure information in establishing data placement policies, VxVM can more effectively place data and metadata to ensure data availability. You can configure redundant copies of your data on separate enclosures to safeguard against failure of one or more enclosures.
• Device-name independence: By using enclosure-based naming, VxVM is independent of arbitrary device names used by third-party drivers.
• Improved SAN management: By using enclosure-based disk names, VxVM can create better location identification information about disks in large disk farms and SAN environments. In a typical SAN environment, host controllers are connected to multiple enclosures in a daisy chain or through a Fibre Channel hub or fabric switch. In this type of configuration, enclosure-based naming can be used to refer to each disk within an enclosure, which enables you to quickly determine where a disk is physically located in a large SAN configuration.
• Improved cluster management: In a cluster environment, disk array names on all hosts in a cluster can be the same.
• Improved dynamic multipathing (DMP) management: With multipathed disks, the name of a disk is independent of the physical communication paths, avoiding confusion and conflict.
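As a brief command-line illustration (a sketch only; purple0 and englab0 are the example names used above, and you should verify the exact syntax in the vxdmpadm(1m) manual page), enclosures are typically listed and renamed with the vxdmpadm utility:
# vxdmpadm listenclosure all (list the enclosures that VxVM recognizes)
# vxdmpadm setattr enclosure purple0 name=englab0 (assign a meaningful logical name)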
Selecting a Naming Scheme You can select a naming scheme:
• When you run VxVM installation scripts
• Anytime using the vxdiskadm option, “Change the disk naming scheme”
Note: This operation requires the VxVM configuration daemon, vxconfigd, to be stopped and restarted.
If you select enclosure-based naming, disks are displayed in three categories:
• Enclosures: Supported RAID disk arrays are displayed in the enclosurename_# format.
• Disks: Supported JBOD disk arrays are displayed with the prefix Disk_.
• Others: Disks that do not return a path-independent identifier to VxVM are displayed in the traditional OS-based format.
Selecting a Naming Scheme When you set up VxVM using installation scripts, you are prompted to specify whether you want to use the traditional or enclosure-based naming scheme.
• If you choose to display devices in the traditional format, the operating system-specific naming scheme is used for all disk devices except for fabric mode disks. Fabric disks, disks connected through a Fibre Channel hub or fabric switch, are always displayed in the enclosure-based naming format.
• If you select enclosure-based naming, VxVM detects the devices connected to your system and displays the devices in three categories: Enclosures, Disks (formerly known as JBOD disks), and Others. The naming convention used is based on these categories:
– Enclosures: Recognized RAID disk arrays are named by default with a manufacturer-specific name in the format enclosurename_#.
– Disks: Recognized JBOD disk arrays are classified in the DISKS category and are named with the prefix Disk_.
– Others: Disks that do not return a path-independent identifier to VxVM are categorized as OTHERS and are named in the c#t#d# format. Fabric disks in this category are named with the prefix fabric_.
Changing the Disk-Naming Scheme You can change the disk-naming scheme at any time by using the vxdiskadm menu interface. To change the disk-naming scheme, select the “Change the disk naming scheme” option from the vxdiskadm menu. The vxconfigd daemon is restarted to bring the naming scheme into effect and no reboot is required.
Disk Group Purposes
(Diagram: four disk groups, sysdg, acctdg, engdg, and hrdg, each containing VM disks and a volume.)
Disk groups enable you to:
• Group disks into logical collections for a set of users or applications.
• Easily move groups of disks from one host to another.
• Ease administration of high availability environments through deport and import operations.
Disk Configuration Stages What Is a Disk Group? A disk group is a collection of physical disks, volumes, plexes, and subdisks that are used for a common purpose. A disk group is created when you place at least one disk in the disk group. When you add a disk to a disk group, a disk group entry is added to the private region header of that disk. Because a disk can only have one disk group entry in its private region header, one disk group does not “know about” other disk groups, and therefore disk groups cannot share resources, such as disk drives, plexes, and volumes. A volume with a plex can belong to only one disk group, and subdisks and plexes of a volume must be stored in the same disk group. You can never have an “empty” disk group, because you cannot remove all disks from a disk group without destroying the disk group.
Why Are Disk Groups Needed? Disk groups assist disk management in several ways:
• Disk groups enable the grouping of disks into logical collections for a particular set of users or applications.
• Disk groups enable data, volumes, and disks to be easily moved from one host machine to another.
• Disk groups ease the administration of high availability environments. Disk drives can be shared by two or more hosts, but accessed by only one host at a time. If one host crashes, the other host can take over its disk groups and therefore its disks.
System-Wide Reserved Disk Groups
Reserved names:
• bootdg
• defaultdg
• nodg
(Diagram: System A, with bootdg = sysdg and defaultdg = acctdg, contains disk groups sysdg (rootvol), acctdg (vol01), and engdg (vol01); System B has bootdg = nodg and defaultdg = nodg.)
To display what is set as bootdg or defaultdg:
# vxdg bootdg
# vxdg defaultdg
To set the default disk group after VxVM installation:
# vxdctl defaultdg diskgroup
System-Wide Reserved Disk Groups VxVM has reserved three disk group names that are used to provide boot disk group and default disk group functionality. The names “bootdg”, “defaultdg”, and “nodg” are system-wide reserved disk group names and cannot be used as names for any of the disk groups that you set up. If you choose to place your boot disk under VxVM control, VxVM assigns bootdg as an alias for the name of the disk group that contains the volumes that are used to boot the system. defaultdg is an alias for the disk group name that should be assumed if the -g option is not specified to a command. You can set defaultdg when you install VERITAS Volume Manager or anytime after installation. By default, both bootdg and defaultdg are set to nodg. Note: The definitions of bootdg and defaultdg are written to the volboot file. The definition of bootdg results in a symbolic link named bootdg in /dev/vx/dsk and /dev/vx/rdsk that points to the directory of the actual boot disk group. Displaying Reserved Disk Group Definitions To display what is currently defined as the boot disk group: # vxdg bootdg
To display what is currently defined as the default disk group: # vxdg defaultdg
If these have not been defined, then nodg is displayed.
Setting the Default Disk Group If you did not define a default disk group at installation, you can specify a default disk group by using: # vxdctl defaultdg diskgroup
The specified disk group does not need to currently exist on the system. If bootdg is specified as the argument to this command, the default disk group is set to be the same as the currently defined system-wide boot disk group. If nodg is specified as the argument to the command, the value of the default disk group is cleared.
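For example, assuming a disk group named acctdg, you could set and confirm the default disk group as follows:
# vxdctl defaultdg acctdg (set acctdg as the system-wide default disk group)
# vxdg defaultdg (verify; the command now returns acctdg)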
Disk Configuration Stages
• Stage 1: Initialize the disk.
• Stage 2: Assign the disk to a disk group.
• Stage 3: Assign disk space to volumes.
(Diagram: an uninitialized disk is initialized, creating the private and public regions, and placed in the free disk pool. The disk is then assigned to the disk group datadg with the disk media name datadg01, mapped to the disk access name /dev/[r]dsk/device. Finally, disk space is assigned to volumes.)
Before Configuring a Disk for Use by VxVM In order to use the space of a physical disk to build VxVM volumes, you must place the disk under Volume Manager control. Before a disk can be placed under Volume Manager control, the disk media must be formatted outside of VxVM using standard operating system formatting methods. SCSI disks are usually preformatted. Once a disk is formatted, the disk can be initialized for use by Volume Manager. In other words, disks must be detected by the operating system, before VxVM can detect the disks. Stage One: Initialize a Disk A formatted physical disk is considered uninitialized until it is initialized for use by VxVM. When a disk is initialized, the public and private regions are created, and VM disk header information is written to the private region. Any data or partitions that may have existed on the disk are removed. An initialized disk is placed into the VxVM free disk pool. The VxVM free disk pool contains disks that have been initialized but that have not yet been assigned to a disk group. These disks are under Volume Manager control but cannot be used by Volume Manager until they are added to a disk group. Note: Encapsulation is another method of placing a disk under VxVM control in which existing data on the disk is preserved. This method is covered in a later lesson.
Changing the Disk Layout To display or change the default values that are used for initializing disks, select the “Change/display the default disk layouts” option in vxdiskadm:
• For disk initialization, you can change the default format and the default length of the private region. The attribute settings for initializing disks are stored in the file /etc/default/vxdisk.
• For disk encapsulation, you can additionally change the offset values for both the private and public regions. The attribute settings for encapsulating disks are stored in /etc/default/vxencap.
Stage Two: Assign a Disk to a Disk Group When you add a disk to a disk group, VxVM assigns a disk media name to the disk and maps this name to the disk access name.
• Disk media name: A disk media name is the logical disk name assigned to a drive by VxVM. VxVM uses this name to identify the disk for volume operations, such as volume creation and mirroring.
• Disk access name: A disk access name represents all UNIX paths to the device. A disk access record maps the physical location to the logical name and represents the link between the disk media name and the disk access name. Disk access records are dynamic and can be re-created when vxdctl enable is run.
The disk media name and disk access name, in addition to the host name, are written to the private region of the disk. Space in the public region is made available for assignment to volumes. Volume Manager has full control of the disk, and the disk can be used to allocate space for volumes. Whenever the VxVM configuration daemon is started (or vxdctl enable is run), the system reads the private region on every disk and establishes the connections between disk access names and disk media names. Once disks are placed under Volume Manager control, storage is managed in terms of the logical configuration. File systems mount to logical volumes, not to physical partitions. Logical names, such as /dev/vx/[r]dsk/diskgroup_name/volume, replace physical locations, such as /dev/[r]dsk/device_name. The free space pool in a disk group refers to the space on all disks within the disk group that has not been allocated as subdisks. When you place a disk into a disk group, its space becomes part of the free space pool of the disk group.
Stage Three: Assign Disk Space to Volumes When you create volumes, space in the public region of a disk is assigned to the volumes. Some operations, such as removal of a disk from a disk group, are restricted if space on a disk is in use by a volume.
Creating a Disk Group
To create a disk group, you add a disk to a disk group.
• You can add a single disk or multiple disks.
• You cannot add a disk to more than one disk group.
• Default disk media names vary with the interface used to add the disk to a disk group, but are conventionally in the format diskgroup##, such as datadg00, datadg01, and so on.
• Disk media names must be unique within a disk group.
• Adding disks to a disk group provides additional storage capacity for creating volumes.
Creating a Disk Group Creating a Disk Group A disk must be placed into a disk group before it can be used by VxVM. A disk group cannot exist without having at least one associated disk. When you create a new disk group, you specify a name for the disk group and at least one disk to add to the disk group. The disk group name must be unique for the host machine. Adding Disks Adding a disk to a disk group makes the disk space available for use in creating VxVM volumes. • You can add a single disk or multiple disks to a disk group. • You cannot add a disk to more than one disk group. To add a disk to a disk group, you select an uninitialized disk or a free disk. If the disk is uninitialized, you must initialize the disk before you can add it to a disk group. Disk Naming When you add a disk to a disk group, the disk is assigned a disk media name. The disk media name is a logical name used for VxVM administrative purposes. The disk media name must be unique within the disk group. You can assign a meaningful name or use the default name assigned by VxVM.
Default Disk Naming The default disk media names depend on the interface used to add them to a disk group: • If you add a disk to a disk group using VEA or vxdiskadm, default disk media names for disks are in the format diskgroup##, where diskgroup is the name of the disk group and ## is a two-digit number starting with either 00 (in VEA) or 01 (in vxdiskadm). • If you add a disk to a disk group by using a CLI command, such as vxdg adddisk, default disk media names are the same as the device tag, for example, c#t#d#. Notes on Disk Naming You can change disk media names after the disks have been added to disk groups. However, if you must change a disk media name, it is recommended that you make the change before using the disk for any volumes. Renaming a disk does not rename the subdisks on the disk, which may be confusing. You should assign logical media names, rather than use the device names, to facilitate transparent logical replacement of failed disks. Assuming that you have a sensible disk group naming strategy, the VEA or vxdiskadm default disk naming scheme is a reasonable policy to adopt.
Creating a Disk Group: VEA
• Select Actions—>New Disk Group.
• Specify a name for the new disk group.
• Add at least one disk.
• Specify disk media names for disks that you add.
To add another disk: Actions—>Add Disk to Disk Group
Creating a Disk Group: VEA
Select: Disk Groups folder, or a free or uninitialized disk
Navigation path: Actions—>New Disk Group
Input:
• Group Name: Type the name of the disk group to be created.
• Create cluster group: To create a shared disk group, mark this check box. Only applicable in a cluster environment.
• Available/Selected disks: Select at least one disk to be placed in the new disk group.
• Disk name(s): To specify a disk media name for the disk that you are placing in the disk group, type a name in the Disk name(s) field. If no disk name is specified, VxVM assigns a default name. If you are adding multiple disks and specify only one disk name, VxVM appends numbers to create unique disk names.
• Organization Principle: In an Intelligent Storage Provisioning (ISP) environment, you can organize the disk group based on policies that you set up. This option is covered in a later lesson.
Note: When working in a SAN environment, or any environment in which multiple hosts may share access to disks, it is recommended that you perform a rescan operation to update the VEA view of the disk status before allocating any disks. From the command line, you can run vxdctl enable.
Adding a Disk: VEA
Select: A free or uninitialized disk
Navigation path: Actions—>Add Disk to Disk Group
Input:
• Disk Group name: Select an existing disk group.
• New disk group: Click the “New disk group” button to add the disk to a new disk group.
• Select the disk to add: You can move disks between the Selected disks and Available disks fields by using the Add and Remove buttons.
• Disk Name(s): By default, Volume Manager assigns a disk media name that is based on the disk group name of a disk. You can assign a different name to the disk by typing a name in the Disk name(s) field. If you are adding more than one disk, place a space between each name in the Disk name(s) field.
When the disk is placed under VxVM control, the Type property changes to Dynamic, and the Status property changes to Imported. Note: You cannot add a disk to the free disk pool with VEA.
Creating a Disk Group: CLI
To create a disk group or add disks using vxdiskadm: “Add or initialize one or more disks”
Initialize disk(s): vxdisksetup -i device_tag [attributes]
# vxdisksetup -i Disk_1 (Enclosure-based naming)
# vxdisksetup -i c2t0d0 (Solaris and HP-UX)
# vxdisksetup -i hdisk2 (AIX)
# vxdisksetup -i sda2 (Linux)
Initialize the disk group by adding at least one disk: vxdg init diskgroup disk_name=device_tag
# vxdg init newdg newdg01=Disk_1
Add more disks to the disk group: vxdg -g diskgroup adddisk disk_name=device_tag
# vxdg -g newdg adddisk newdg02=Disk_2
Creating a Disk Group: vxdiskadm From the vxdiskadm main menu, select the “Add or initialize one or more disks” option. Specify the disk group to which the disk should be added. To add the disk to a new disk group, you type a name for the new disk group. You use this same menu option to add additional disks to the disk group. Initializing a Disk: CLI The vxdisksetup command configures a disk for use by Volume Manager by creating the private and public region partitions on a disk. vxdisksetup -i device_tag [attributes]
The -i option writes a disk header to the disk, making the disk directly usable, for example, as a new disk in a disk group. Creating a Disk Group: CLI To create a disk group from the command line, use the vxdg init command: # vxdg init diskgroup disk_name=device_tag
For example, to create a disk group named newdg on device c1t1d0 and specify a disk media name of newdg01, you type: # vxdg init newdg newdg01=c1t1d0
To verify that the disk group was created, you can use vxdisk list.
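Illustrative output (not captured from a live system; device names and a second, still-uninitialized disk are shown only for contrast) might resemble the following, with newdg01 online in newdg:
# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c1t1d0s2     auto:cdsdisk    newdg01      newdg        online
c2t0d0s2     auto:none       -            -            online invalid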
Adding a Disk to a Disk Group: CLI After configuring a disk for VxVM, you use the vxdg adddisk command to add the disk to a disk group. vxdg -g diskgroup adddisk disk_name=device_tag
When you add a disk to a disk group, the disk group configuration is copied onto the disk, and the disk is stamped with the system host ID.
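As a quick check that the host ID stamp is in place (the disk and group names continue the earlier example, and train12 is just an illustrative host name):
# vxdisk -g newdg list newdg02 | grep hostid
hostid: train12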
View All Disks: VEA
Select the Disks node in the object tree. Disks and their properties are displayed in the grid.
• Free: Initialized, but not in a disk group
• Imported: Initialized and added to a disk group
• Not Setup/Not Initialized: Not under VxVM control
Viewing Disk and Disk Group Information Keeping Track of Your Disks By viewing disk information, you can: • Determine if a disk has been initialized and placed under Volume Manager control. • Determine if a disk has been added to a disk group. • Verify the changes that you make to disks. • Keep track of the status and configuration of your disks. Displaying Disk Information: VEA In VEA, disks are represented under the Disks node in the object tree, in the Disk View window, and in the grid for several object types, including controllers, disk groups, enclosures, and volumes. In the grid of the main window, under the Disks tab, you can identify many disk properties, including disk name, disk group name, size of disk, amount of unused space, and disk status. In particular, the status of a disk can be: • Not Setup/Not Initialized: The disk is not under VxVM control. The disk may be in use as a raw device by an application. • Free: The disk is in the free disk pool; it is initialized by VxVM but is not in a disk group. You cannot place a disk in this state using VEA, but VEA recognizes disks that have been initialized through other interfaces. • Imported: The disk is in an imported disk group.
• Deported: The disk is in a deported disk group.
• Disconnected: The disk contains subdisks that are not available because of hardware failure. This status applies to disk media records for which the hardware has been unavailable and has not been replaced within VxVM.
• External: The disk is in use by a foreign manager, such as Logical Volume Manager.
Viewing Disk Details: VEA When you select a disk in the object tree, many details of the disk layout are displayed in the grid. You can access these details by clicking the associated tab:
• Volumes: This page displays the volumes that use this disk.
• Disk Regions: This page displays the disk regions of the disk.
• Controllers: This page displays the controllers to which this disk is connected.
• Paths: This page shows the dynamic multipaths available to this disk.
• Disk View: This page displays the layout of any subdisks created on this disk media, and details of usage. The Disk View window has the same view of all related disks with more options available. To launch the Disk View window, select an object (such as a disk group or volume), then select Actions—>Disk View.
• Alerts: This page displays any problems with a drive.
View Disk Properties: VEA
Right-click a disk and select Properties. The Disk Properties window is displayed.
Select a unit to display capacity and unallocated space in other units.
Viewing Disk Properties: VEA In VEA, you can also view disk properties in the Disk Properties window. To open the Disk Properties window, right-click a disk and select Properties. The Disk Properties window includes the capacity of the disk and the amount of unallocated space. You can select the units for convenient display in the unit of your choice.
Viewing Disk Groups: VEA
Right-click a disk group, and select Properties.
(Callouts in the Disk Group Properties window refer to disk group versioning and to cluster environments.)
Viewing Disk Group Properties: VEA The object tree in the VEA main window contains a Disk Groups node that displays all of the disk groups attached to a host. When you click a disk group, the VxVM objects contained in the disk group are displayed in the grid. To view additional information about a disk group, right-click a disk group and select Properties. The Disk Group Properties window is displayed. This window contains basic disk group properties, including: • Disk group name, status, ID, and type • Number of disks and volumes • Disk group version • Disk group size and free space
View Disk Information: CLI
To display basic information about all disks:
# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
Disk_0       auto:cdsdisk    datadg01     datadg       online          (VxVM disks)
Disk_1       auto:cdsdisk    datadg02     datadg       online          (VxVM disks)
Disk_2       auto:cdsdisk    -            -            online          (free disk)
Disk_3       auto:none       -            -            online invalid  (uninitialized disks)
Disk_4       auto:none       -            -            online invalid
Disk_5       auto:none       -            -            online invalid
Disk_6       auto:none       -            -            online invalid
Disk_7       auto:none       -            -            online invalid
Note: In a shared access environment, when displaying disks, you should frequently run vxdctl enable to rescan for disk changes.
Displaying Basic Disk Information: CLI You use the vxdisk list command to display basic information about all disks attached to the system. The vxdisk list command displays: • Device names for all recognized disks • Type of disk, that is, how a disk is placed under VxVM control • Disk names • Disk group names associated with each disk • Status of each disk In the output: • A status of online in addition to entries in the Disk and Group columns indicates that the disk has been initialized or encapsulated, assigned a disk media name, and added to a disk group. The disk is under Volume Manager control and is available for creating volumes. • A status of online without entries in the Disk and Group columns indicates that the drive has been initialized or encapsulated, but is not currently assigned to a disk group. The disk is in the free disk pool. • A status of online invalid indicates that the disk has neither been initialized nor encapsulated by VxVM. The disk is not under VxVM control.
View Detailed Information: CLI
To display detailed information for a disk: vxdisk -g diskgroup list disk_name
# vxdisk -g datadg list datadg01
Device: Disk_1
devicetag: Disk_1
type: auto
hostid: train12
disk: name=datadg01 id=1000753057.1114.train12
group: name=datadg id=1000753077.1117.train12
...
To display a summary for all disks:
# vxdisk -s list
Displaying Detailed Disk Information: CLI To display detailed information about a disk, you use the vxdisk list command with the name of the disk group and disk: # vxdisk -g diskgroup list disk_name
In the output: • Device is the VxVM name for the device access path. • devicetag is the name used by VxVM to reference the physical disk. • type is how a disk was placed under VM control. auto is the default type. • hostid is the name of the system that currently manages the disk group to which the disk belongs; if blank, no host is currently controlling this group. • disk is the VM disk media name and internal ID. • group is the disk group name and internal ID. To view a summary of information for all disks, you use the -s option with the vxdisk list command. Note: The disk name and the disk group name are changeable. The disk ID and disk group ID are never changed as long as the disk group exists or the disk is initialized.
The complete output from displaying detailed information about a disk is as follows: # vxdisk -g datadg list datadg01
Descriptions of Flags
Flag          Description
online ready  The specified disk is “online” and is “ready” to use.
private       The disk has a private region where the configuration database and kernel log are defined and enabled/disabled.
autoconfig    The specified disk is part of a disk group that is autoconfigured.
autoimport    The specified disk is part of a disk group that can be imported at boot time.
imported      The specified disk is part of a disk group that is currently imported. When the disk group is deported, this field is empty.
shared        The specified disk is part of a cluster “shareable” disk group.
Viewing Disk Groups: CLI
To display imported disk groups only:
# vxdg list
NAME         STATE          ID
datadg       enabled,cds    969583613.1025.cassius
newdg        enabled,cds    971216408.1133.cassius
To display all disk groups, including deported disk groups:
# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
Disk_1       auto:cdsdisk    datadg01     datadg       online
Disk_7       auto:cdsdisk    -            (acctdg)     online
To display free space in a disk group:
# vxdg free (for all disk groups that the host can detect)
# vxdg -g diskgroup free (for a specific disk group)
Displaying Disk Group Information: CLI To display disk group information: • Use vxdg list to display disk group names, states, and IDs for all imported disk groups in the system. • Use vxdisk -o alldgs list to display all disk groups, including deported disk groups. In the example, the deported disk group acctdg is displayed in parentheses. • Use vxdg free to display free space on each disk. This command displays free space on all disks in all disk groups that the host can detect. Note: This command does not show space on spare disks. Reserved disks are displayed with an “r” in the FLAGS column. Add -g diskgroup to restrict the output to a specific disk group.
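Illustrative output for a single disk group (the exact column layout and sizes vary by platform and VxVM version; the values below are hypothetical):
# vxdg -g datadg free
DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS
datadg01     Disk_0       Disk_0       0         17678493  -
datadg02     Disk_1       Disk_1       0         17678493  -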
Creating a Non-CDS Disk and Disk Group • If you are working with sliced disks and non-CDS disk groups, you can initialize a disk as a sliced disk and create a non-CDS disk group. • To initialize a disk as a sliced disk: # vxdisksetup -i device_tag format=sliced • To initialize a non-CDS disk group: # vxdg init diskgroup disk_name=device_tag cds=off
Managing Disks Creating a Non-CDS Disk and Disk Group At times, you may be working with sliced disks and non-CDS disk groups, for example, if you have not upgraded all of your systems to the latest VxVM version or are working with a boot disk group. To create a sliced disk, you add the format=sliced attribute to the vxdisksetup command. To create a non-CDS disk group, you add the cds=off attribute to the vxdg init command.
Before Removing a Disk • When removing a disk from a disk group, you have two options: – Move the disk to the free disk pool. – Return the disk to an uninitialized state.
• You cannot remove the last disk in a disk group, unless you destroy the disk group: – In CLI, you must destroy the disk group to free the last disk in the disk group. – In VEA, when you remove the last disk in a disk group, the disk group is automatically destroyed.
• Before removing a disk, ensure that the disk does not contain data that is needed.
Removing Disks If a disk is no longer needed in a disk group, you can remove the disk. After you remove a disk from a disk group, the disk cannot be accessed. When removing a disk from a disk group, you have two options: • Move the disk to the free disk pool. With this option, the disk remains under Volume Manager control. • Send the disk back to an uninitialized state. With this option, the disk is no longer under Volume Manager control. Note: The remove operation fails if there are any subdisks on the disk. However, the destroy disk group operation does not fail if there are any volumes in the disk group. Before You Remove a Disk Before removing a disk, make sure that the disk contains no data, the data is no longer needed, or the data is moved to other disks. Removing a disk that is in use by a volume can result in lost data or lost data redundancy.
Evacuating a Disk
Before removing a disk, you may need to evacuate data from the disk to another disk in the disk group.
VEA:
• Select the disk that you want to evacuate.
• Select Actions—>Evacuate Disk.
vxdiskadm: “Move volumes from a disk”
CLI: vxevac -g diskgroup from_disk [to_disk]
# vxevac -g datadg datadg02 datadg03
To evacuate to any disk except for datadg03:
# vxevac -g datadg datadg02 !datadg03
Evacuating a Disk Evacuating a disk moves the contents of the volumes on a disk to another disk. The contents of a disk can be evacuated only to disks in the same disk group that have sufficient free space.
Evacuating a Disk: VEA
Select: The disk that contains the objects and data to be moved to another disk
Navigation path: Actions—>Evacuate Disk
Input:
• Auto Assign destination disks: VxVM selects the destination disks to contain the content of the disk to be evacuated.
• Manually assign destination disks: To manually select a destination disk, highlight the disk in the left field and click Add to move the disk to the right field.
Evacuating a Disk: vxdiskadm Select the “Move volumes from a disk” option. When prompted, specify the name of the disk that contains the data that you want to move and the disks onto which you want to move the data. Evacuating a Disk: CLI To evacuate a disk from the command line, use the vxevac command: vxevac -g diskgroup from_diskname [to_diskname]
Removing a Disk from VxVM
VEA:
• Select the disk that you want to remove.
• Select Actions—>Remove Disk from Dynamic Disk Group.
vxdiskadm: “Remove a disk”
CLI: vxdg -g diskgroup rmdisk disk_name
vxdiskunsetup [-C] device_tag
Example: Remove the disk from the disk group, then uninitialize it.
# vxdg -g newdg rmdisk newdg02
# vxdiskunsetup -C Disk_2
Removing a Disk: VEA
Select: The disk to be removed
Navigation path: Actions—>Remove Disk from Dynamic Disk Group
Input:
• Disk group name: The disk group that contains the disk to be removed.
• Selected disks: The disk to be removed should be displayed in the Selected disks field. Only empty disks are displayed in the list of available disks as candidates for removal.
Note: If you select all disks for removal from the disk group, the disk group is automatically destroyed. Removing a Disk: vxdiskadm To remove a disk from a disk group using vxdiskadm, select the “Remove a disk” option. At the prompt, enter the disk media name of the disk to be removed. When you remove a disk using the vxdiskadm interface, the disk is returned to the free disk pool. The vxdiskadm interface does not have an option to return a disk to an uninitialized state.
Removing a Disk: CLI The vxdg rmdisk Command To remove a disk from a disk group from the command line, you use the command vxdg rmdisk. This command removes the disk from a disk group and places it in the free disk pool. You can verify the removal by using the vxdisk list command to display disk information. A deconfigured disk has a status of online but no longer has a disk media name or disk group assignment. The vxdiskunsetup Command Once the disk has been removed from its disk group, you can remove it from Volume Manager control completely by using the vxdiskunsetup command. This command reverses the configuration of a disk by removing the public and private regions that were created by the vxdisksetup command. The vxdiskunsetup command does not operate on disks that are active members of a disk group. This command does not usually operate on disks that appear to be imported by some other host—for example, a host that shares access to the disk. You can use the -C option to force deconfiguration of the disk, removing host locks that may be detected. You can verify the deconfiguration by using the vxdisk list command to display disk information. A deconfigured disk has a status of online invalid.
Renaming a Disk
VEA:
• Select the disk that you want to rename.
• Select Actions—>Rename Disk.
• Specify the original disk name and the new name.
vxedit rename: vxedit -g diskgroup rename old_name new_name
Example: # vxedit -g datadg rename datadg01 datadg03
Notes:
• The new disk name must be unique within the disk group.
• Renaming a disk does not automatically rename subdisks on that disk.
Changing the Disk Media Name VxVM creates a unique disk media name for a disk when you add a disk to a disk group. Sometimes you may need to change a disk name to reflect changes of ownership or use of the disk. Renaming a disk does not change the physical disk device name. The new disk name must be unique within the disk group. Before You Rename a Disk Before you rename a disk, you should carefully consider the change. VxVM names subdisks based on the disks on which they are located. A disk named datadg01 contains subdisks that are named datadg01-01, datadg01-02, and so on. Renaming a disk does not automatically rename its subdisks. Volumes are not affected when subdisks are named differently from the disks.
Renaming a Disk: VEA
Select: The disk to be renamed
Navigation path: Actions—>Rename Disk
Input:
• Disk name: The disk media name of the disk to be renamed.
• New name: The new disk media name for the disk.
Renaming a Disk: CLI You can rename a disk by using the vxedit rename command. This command can be used to rename any VxVM objects except for disk groups.
Deporting a Disk Group
(Diagram: the disk groups acctdg and olddg, each with a volume and VM disks, illustrating the deport operation.)
What is a deported disk group?
• The disk group and its volumes are unavailable.
• The disks cannot be removed.
• The disk group cannot be accessed until it is imported.
Before deporting a disk group:
• Unmount file systems
• Stop volumes
When you deport a disk group, you can specify:
• A new host
• A new disk group name
Managing Disk Groups Deporting a Disk Group A deported disk group is a disk group over which management control has been surrendered. The objects within the disk group cannot be accessed, its volumes are unavailable, and the disk group configuration cannot be changed. (You cannot access volumes in a deported disk group because the directory containing the device nodes for the volumes is deleted upon deport.) To resume management of the disk group, it must be imported. A disk group cannot be deported if any volumes in that disk group are in use. Before you deport a disk group, you must unmount file systems and stop any volumes in the disk group. Deporting and Specifying a New Host When you deport a disk group using VEA or CLI commands, you have the option to specify a new host to which the disk group is imported at reboot. If you know the name of the host to which the disk group will be imported, then you should specify the new host during the operation. If you do not specify the new host, then the disks could accidentally be added to another disk group, resulting in data loss. You cannot specify a new host using the vxdiskadm utility. Deporting and Renaming When you deport a disk group using VEA or CLI commands, you also have the option to rename the disk group when you deport it. You cannot rename a disk group when deporting using the vxdiskadm utility.
Deporting a Disk Group: VEA
Select Actions—>Deport Disk Group.
(Callouts: the disk group to be deported; options enable you to specify a new name and a new host for the disk group.)
Deporting a Disk Group: VEA
Select: The disk group to be deported
Navigation path: Actions—>Deport Disk Group
Input:
• Group name: Verify the name of the disk group to be deported.
• New name: To change the name of the disk group when you deport it, type a new disk group name in the New name field.
• New Host: To specify a host machine to import the deported disk group at reboot, type the host ID in the New Host field. If you are importing the disk group to another system, then you should specify the name of the new host.
Disks that were in the disk group now have a state of Deported. If the disk group was deported to another host, the disk state is Locked.
Deporting a Disk Group: CLI
vxdiskadm: “Remove access to (deport) a disk group”
vxdg deport: vxdg deport diskgroup
# vxdg deport newdg
To deport and rename a disk group: vxdg -n new_name deport old_name
# vxdg -n newerdg deport newdg
To deport a disk group and specify a new host: vxdg -h hostname deport diskgroup
# vxdg -h server1 deport newdg
Deporting a Disk Group: vxdiskadm To deport a disk group using vxdiskadm, select “Remove access to (deport) a disk group,” from the main menu. You are asked if you want to disable, or offline, the disks in the disk group. You should offline the disks if you plan to remove a disk from a system without rebooting or physically move a disk to reconnect it to another system. Note: If you offline the disks, you must manually online the disks before you import the disk group. To online a disk, use vxdiskadm option “Enable (online) a disk device.” Deporting a Disk Group: CLI To deport a disk group, you can use the vxdg deport command. Before deporting a disk group, unmount all file systems used within the disk group that is to be deported, and stop all volumes in the disk group: # umount mount_point # vxvol -g diskgroup stopall
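Putting these steps together, a typical deport sequence (the mount point /acct and the host name server1 are hypothetical) might be:
# umount /acct (unmount file systems in the disk group)
# vxvol -g acctdg stopall (stop all volumes in the disk group)
# vxdg -h server1 deport acctdg (deport and designate server1 as the importing host)
# vxdisk -o alldgs list (the deported disk group is now shown in parentheses)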
Importing a Disk Group
(Diagram: the deported disk group olddg, with its volume and VM disks, is imported as newacctdg.)
When you import a disk group, you can:
• Specify a new disk group name.
• Clear host locks.
• Import as temporary.
• Force an import.
Importing a disk group reenables access to the disk group.
Importing a Deported Disk Group Importing a disk group reenables access to the objects in a deported disk group by bringing the disk group under VxVM control in a new system. All volumes are stopped by default after importing a disk group and must be started before data can be accessed. Importing and Renaming A deported disk group cannot be imported if another disk group with the same name has been created since the disk group was deported. You can import and rename a disk group at the same time. Importing and Clearing Host Locks When a disk group is created, the system writes a lock on all disks in the disk group. The lock is actually a value in the hostname field within the disk group header. The lock ensures that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time. If a system crashes, the locks stored on the disks remain, and if you try to import a disk group containing those disks, the import fails. If you are sure that the disk group is not in use by another host, you can clear the host locks when you import.
Importing As Temporary
You can temporarily import a disk group by using options in the VxVM interfaces. A temporary import does not persist across reboots. A temporary import can be useful, for example, if you need to perform administrative operations on the temporarily imported disk group. If there is a name collision, temporary importing can be used to keep the original name.
Note: Temporary imports are also useful in a cluster environment. Because a temporary import does not set the autoimport flag, the disk group is not automatically reimported after a system crash.
Forcing an Import
A disk group import fails if the VxVM configuration daemon cannot find all of the disks in the disk group. If the import fails because a disk has failed, you can force the disk group to be imported using options in the VxVM interfaces. Forcing an import should always be performed with caution.
Importing a Disk Group: VEA
Select Actions—>Import Disk Group.
Options include:
• Clearing host IDs at import
• Forcing an import
• Starting all volumes
Importing a Disk Group: VEA Select:
The disk group to be imported
Navigation path:
Actions—>Import Disk Group
Input:
Group name: Verify the name of the disk group to be imported. New name: To change the name of the disk group at import, type a new disk group name in this field. Clear host ID: This option clears the existing host ID stamp on all disks in the disk group at import. Do not use this option if another host is using any disks in the disk group. Force: Use this option with caution. This option forces the disk group import when the host cannot access all disks in the disk group. This option can cause disk group inconsistency if all disks are still usable. Start all volumes: This option starts all volumes upon import and is selected by default. Import shared: This option imports the disk group as a shared disk group (applicable only in a cluster environment).
By default, when you import a disk group by using VEA, all volumes in the disk group are started automatically. Note: VEA does not support temporary import of a disk group.
Importing a Disk Group: CLI
vxdiskadm: "Enable access to (import) a disk group"
vxdg import:
vxdg import diskgroup
# vxdg import newdg
After importing the disk group, start all volumes: # vxvol -g newdg startall
To import and rename a disk group: vxdg -n new_name import old_name # vxdg -n newerdg import newdg
To import and rename temporarily: vxdg -t -n new_name import old_name # vxdg -t -n newerdg import newdg
To clear import locks, add the -C option:
# vxdg -tC -n newerdg import newdg
Importing a Disk Group: vxdiskadm To import a disk group using the vxdiskadm utility, you select “Enable access to (import) a disk group”. A disk group must be deported from its previous system before it can be imported to the new system. During the import operation, the system checks for host import locks. If any locks are found, you are prompted to clear the locks. By default, the vxdiskadm import option starts all volumes in the disk group. Importing a Disk Group: CLI To import a disk group from the command line, you use vxdg import. When you import a disk group from the command line, you must manually start all volumes in the disk group by using the command: vxvol -g diskgroup startall
To rename an imported disk group temporarily, you use the -t option together with -n. The -t option imports the disk group temporarily and does not set the autoimport flag, which means that the import does not survive a reboot.
Typically, a disk group import fails if some disks in the disk group cannot be found by the local host. You can use the -f option to force the import if, for example, one of the disks is currently unusable or inaccessible:
# vxdg -f import newdg
Note: Be careful when using the -f option, because it can import the same disk group twice from disjoint sets of disks and make the disk group inconsistent.
Example: HA Environment
(The slide diagram shows Computer A and Computer B, each with its own boot disk group (sysdg and osdg, each containing rootvol) on private boot disks, plus the shared disk groups acctdg and engdg (each containing vol01) and additional unassigned disks on a shared bus.)
Example: Disk Groups and High Availability The example in the diagram represents a high availability environment. In the example, Computer A and Computer B each have their own bootdg on their own private SCSI bus. The two hosts are also on a shared SCSI bus. On the shared bus, each host has a disk group, and each disk group has a set of VxVM disks and volumes. There are additional disks on the shared SCSI bus that have not been added to a disk group. If Computer A fails, then Computer B, which is on the same SCSI bus as disk group acctdg, can take ownership or control of the disk group and all of its components.
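A minimal manual-failover sketch for this example, assuming that Computer A has crashed (so its host locks on the acctdg disks must be cleared) and that no cluster software is managing the disk group, might look like this on Computer B:
# vxdg -C import acctdg
# vxvol -g acctdg startall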
Moving a Disk Group
(The slide diagram shows the disk group acctdg being deported from Host A and imported on Host B.)
Moving Disk Groups Between Systems
One of the main benefits of disk groups is that they can be moved between systems. When you move a disk group from one system to another, all of the VxVM objects within the disk group are moved, and you do not have to specify the configuration again. The disk group configuration is relocated to the new system. To move a disk group from one system to another, you deport the disk group from one host and then import the disk group to another host.
Moving a Disk Group: VEA
To move a disk group from one machine to another:
1 Unmount file systems and stop all volumes in the disk group to be moved.
2 Deport the disk group to be moved to the other system.
3 Attach all of the physical disks in the disk group to the new system.
4 On the new system, import the deported disk group.
5 Restart and recover all volumes in the disk group on the new system.
Note: To move a disk group between two systems, VxVM must be running on both systems.
Moving a Disk Group: vxdiskadm
To move a disk group between systems using the vxdiskadm utility, you perform the deport and import options in sequence:
1 Deport the disk group from one system using the "Remove access to (deport) a disk group" option.
2 Move all of the disks to the second system and perform any system-dependent steps needed to make the second system and Volume Manager recognize the new disks. A reboot may be required.
3 Import the disk group on the new system using option 8, "Enable access to (import) a disk group."
Moving a Disk Group: CLI
To move a disk group between systems:
1 On the first system, deport the disk group to be moved.
# vxdg -h hostname deport diskgroup
Note: The -h hostname option is not required, but is useful because it "reserves" the disk group for the target host.
2 Move all of the disks to the second system and perform any system-dependent steps needed to make the second system and Volume Manager recognize the new disks. A reboot may be required.
3 Import the disk group on the new system:
# vxdg import diskgroup
4 After the disk group is imported, start all volumes in the disk group.
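Putting these steps together, moving a hypothetical disk group acctdg to a target host named server2 might look like the following (run the first two commands on the original host after unmounting any file systems in the disk group, and the last two on server2 after the disks have been physically attached and recognized):
# vxvol -g acctdg stopall
# vxdg -h server2 deport acctdg
# vxdg import acctdg
# vxvol -g acctdg startall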
Renaming a Disk Group
(The slide diagram shows the disk group oldnamedg on Host A being deported and then reimported as newnamedg.)
Renaming a Disk Group Only one disk group of a particular name can exist for each system. You cannot import or deport a disk group when the target system already has a disk group of the same name. To avoid name collision when moving disk groups or to provide a more appropriate name for a disk group, you can rename a disk group. • To rename a disk group when moving it from one system to another, you specify the new name during the deport or during the import operations. • To rename a disk group without moving the disk group, you must still deport and reimport the disk group on the same system. Note: If the disk group contains a volume with a file system that is mounted using /etc/vfstab, then the paths specified in this file must be manually updated. Renaming a Disk Group: VEA The VEA interface has a Rename Disk Group menu option. On the surface, this option appears to be simply renaming the disk group. However, the option works by deporting and reimporting the disk group with a new name. Select:
The disk group to be renamed
Navigation path:
Actions—>Rename Disk Group
Input:
Group name: Specify the disk group to be renamed. New name: Type the new name for the disk group.
Renaming a Disk Group: CLI
To rename a disk group from the command line, use the -n new_name option in the vxdg deport or vxdg import commands. You can specify the new name during the deport or during the import operation:
vxdg -n new_name deport old_name
vxdg import new_name
or
vxdg deport old_name
vxdg -n new_name import old_name
Starting Volumes After Renaming a Disk Group
When you rename a disk group from the command line, you must restart all volumes in the disk group by using the vxvol command:
vxvol -g new_name startall
The vxvol utility performs operations on Volume Manager volumes. For more information on vxvol, see the vxvol(1m) manual page.
Renaming a Disk Group: CLI Example
For example, to rename the disk group datadg to mktdg, you can use either of the following sequences of commands:
# vxdg -n mktdg deport datadg
# vxdg import mktdg
# vxvol -g mktdg startall
or
# vxdg deport datadg
# vxdg -n mktdg import datadg
# vxvol -g mktdg startall
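If a volume in the renamed disk group is mounted through /etc/vfstab, remember to update the device paths in that file by hand, as noted earlier. For a hypothetical volume datavol mounted at /data, the entry would change along these lines:
Before: /dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /data vxfs 2 yes -
After:  /dev/vx/dsk/mktdg/datavol /dev/vx/rdsk/mktdg/datavol /data vxfs 2 yes -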
Destroying a Disk Group
Destroying a disk group:
• Means that the disk group no longer exists
• Returns all disks to the free disk pool
• Is the only method for freeing the last disk in a disk group
VEA: Actions—>Destroy Disk Group
CLI: vxdg destroy diskgroup
Example: To destroy the disk group olddg and place its disks in the free disk pool:
# vxdg destroy olddg
(The slide diagram shows the disk group olddg being destroyed and its disks returned to the free disk pool.)
Destroying a Disk Group Destroying a disk group permanently removes a disk group from Volume Manager control and the disk group ceases to exist. When you destroy a disk group, all of the disks in the disk group are reinitialized as empty disks and are returned to the free disk pool. Volumes and configuration information about the disk group are removed. Because you cannot remove the last disk in a disk group, destroying a disk group is the only method to free the last disk in a disk group for reuse. A disk group cannot be destroyed if any volumes in that disk group are in use or contain mounted file systems. The bootdg disk group cannot be destroyed. Caution: Destroying a disk group can result in data loss. Only destroy a disk group if you are sure that the volumes and data in the disk group are not needed. Destroying a Disk Group: VEA Select:
The disk group to be destroyed
Navigation path:
Actions—>Destroy Disk Group
Input:
Group name: Specify the disk group to be destroyed.
Destroying a Disk Group: CLI To destroy a disk group from the command line, use the vxdg destroy command.
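For example, to destroy a hypothetical disk group olddg after confirming that none of its volumes are in use, and then to check that its disks have returned to the free disk pool (they should appear with a status of online and no disk group assignment), you might run:
# vxdg destroy olddg
# vxdisk list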
Disk Group Versioning
All disk groups have a version number based on the VxVM release. Each disk group version supports a set of features. You must upgrade old disk group versions in order to use new features.
VxVM Release   Disk Group Version   Supported Disk Group Versions
1.2            10                   10
1.3            15                   15
2.0, 2.1       20                   20
2.2            30                   30
2.3, 2.4       40                   40
2.5            50                   50
3.0            60                   20–40, 60
3.1            70                   20–70
3.1.1          80                   20–80
3.2, 3.5       90                   20–90
4.0            110                  20–110
Upgrading a Disk Group
All disk groups have an associated version number. Each VxVM release supports a specific set of disk group versions and can import and perform tasks on disk groups with those versions. Some new features and tasks only work on disk groups with the current disk group version, so you must upgrade existing disk groups in order to perform those tasks. Prior to the release of VxVM 3.0, the disk group version was automatically upgraded (if needed) when the disk group was imported. Starting with VxVM release 3.0, the two operations of importing a disk group and upgrading its version are separate. You can import a disk group from a previous version and use it without upgrading it.
The first disk group version supported by a particular platform corresponds to the first VxVM release on that platform. For example, the first VxVM release on HP-UX was VxVM 3.1. Therefore, the first supported disk group version on that platform was 70. On AIX and Linux, the first VxVM release was 3.2, so the earliest supported disk group version is 90.
You must upgrade older version disk groups before you can use new VxVM features with those disk groups. Once you upgrade a disk group, the disk group becomes incompatible with earlier releases of VxVM that do not support the new version. Upgrading the disk group version is an online operation. You cannot downgrade a disk group version.
If you do not upgrade older version disk groups, the disk groups can still be used provided that you do not try to use the features of the current version. Attempts to use a feature of the current version that is not a feature of the version the disk group was imported from result in an error message similar to this: vxvm:vxedit: ERROR: Disk group version doesn’t support feature
Summary of Supported Features for Disk Group Versions
The following table summarizes the new features supported for each disk group version, along with the previous disk group versions whose features are also supported:

VxVM Release 4.0, Disk Group Version 110
New features supported: Cross-platform data sharing (CDS), device discovery layer (DDL) 2.0, disk group configuration backup and restore (CBR), elimination of rootdg as a special disk group, instant and space-optimized snapshots, serial split brain detection, VERITAS Intelligent Storage Provisioning (ISP), volume sets
Previous version features supported: 20, 30, 40, 50, 60, 70, 80, 90

VxVM Release 3.2, 3.5, Disk Group Version 90
New features supported: Disk group move, split and join, device discovery layer (DDL), ordered allocation, OS-independent naming support, persistent FastResync, cluster support for Oracle resilvering, layered volume support in clusters
Previous version features supported: 20, 30, 40, 50, 60, 70, 80

VxVM Release 3.1.1, Disk Group Version 80
New features supported: VERITAS Volume Replicator (VVR) enhancements
Previous version features supported: 20, 30, 40, 50, 60, 70

VxVM Release 3.1, Disk Group Version 70
New features supported: Nonpersistent FastResync, VVR enhancements, Unrelocate
Previous version features supported: 20, 30, 40, 50, 60

VxVM Release 3.0, Disk Group Version 60
New features supported: Online relayout, safe RAID-5 subdisk moves
Previous version features supported: 20, 30, 40

VxVM Release 2.5, Disk Group Version 50
New features supported: Storage Replicator for Volume Manager (an earlier version of what is now VVR)
Previous version features supported: 20, 30, 40

VxVM Release 2.3, 2.4, Disk Group Version 40
New features supported: Hot relocation
Previous version features supported: 20, 30

VxVM Release 2.2, Disk Group Version 30
New features supported: VxSmartSync Recovery Accelerator
Previous version features supported: 20

VxVM Release 2.0, 2.1, Disk Group Version 20
New features supported: Dirty region logging, disk group configuration copy limiting, mirrored volumes logging, new-style stripes, RAID-5 volumes, recovery checkpointing
Previous version features supported: 15
You can upgrade the disk group version using VEA or from the command line. The vxdiskadm utility does not have an option to upgrade a disk group.
Upgrading a Disk Group: VEA
In the Disk Group Properties window:
• If the Current version property is Yes, then the disk group version is current.
• The Version property displays the version number.
To upgrade a disk group:
1. Select the disk group to be upgraded.
2. Select Actions—>Upgrade Disk Group Version.
3. Confirm the upgrade when prompted.
Note: You cannot upgrade to a specific version using VEA. You can only upgrade to the current version. To upgrade to a specific version, use a CLI command.
Upgrading a Disk Group: VEA To determine if a disk group needs to be upgraded, you can view the status of the disk group version in the Disk Group Properties window. The Current version field states whether or not the disk group has been upgraded to the latest version. The Version field indicates the version number. To upgrade a disk group: Select:
The disk group to be upgraded
Navigation path:
Actions—>Upgrade Disk Group Version
Note: You cannot upgrade to a specific disk group version by using VEA. You can only upgrade to the current version. To upgrade to a specific version, use the command line.
Upgrading a Disk Group: CLI
To display the disk group version:
# vxdg list newdg
Group:   newdg
dgid:    971216408.1133.cassius
. . .
version: 110
To upgrade the disk group version:
vxdg [-T version] upgrade diskgroup
To upgrade datadg from version 40 to the current version:
# vxdg upgrade datadg
To upgrade datadg from version 20 to version 40:
# vxdg -T 40 upgrade datadg
To create a version 50 disk group:
# vxdg -T 50 init newdg newdg01=Disk_4
Upgrading a Disk Group: CLI To display the disk group version for a specific disk group, you use the command: vxdg list diskgroup
You can also determine the disk group version by using the vxprint command with the -l option. To upgrade a disk group from the command line, you use the vxdg upgrade command. By default, VxVM upgrades a disk group to the highest version supported by the VxVM release: vxdg [-T version] upgrade diskgroup
To specify a different version, you use the -T version option. You can also use the -T version option when creating a disk group. For example, to create a disk group that can be imported by a system running VxVM 2.5, the disk group must be version 50 or less. To create a version 50 disk group, you add -T 50 to the vxdg init command: # vxdg -T 50 init newdg newdg01=device_name
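As a combined sketch, checking and then upgrading a hypothetical disk group datadg might produce output along these lines (the version numbers shown assume the disk group was last used under VxVM 3.2 or 3.5):
# vxdg list datadg | grep version
version: 90
# vxdg upgrade datadg
# vxdg list datadg | grep version
version: 110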
Summary
You should now be able to:
• Describe the features and benefits of the two device-naming schemes: traditional and enclosure-based naming.
• Identify the stages of VxVM disk configuration.
• Create a disk group by using VEA and command line utilities.
• View disk and disk group information and identify disk status.
• Manage disks, including adding a disk to a VxVM disk group, removing a disk from a disk group, changing the disk media name, and moving an empty disk from one disk group to another.
• Manage disk groups, including deporting and importing a disk group, moving a disk group, renaming a disk group, destroying a disk group, and upgrading the disk group version.
Summary
In this lesson, you learned how to perform tasks associated with the management of disks and disk groups. This lesson described device-naming schemes, how to add a disk to a disk group, how to view disk and disk group information, and how to add, remove, rename, and move a disk. This lesson also described procedures for creating, deporting, importing, destroying, and upgrading a disk group.
Next Steps
In the next lesson, you learn how to create a volume.
Additional Resources
• VERITAS Volume Manager Administrator’s Guide: This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager.
• VERITAS Volume Manager Installation Guide: This guide provides detailed procedures for installing and initializing VERITAS Volume Manager and VERITAS Enterprise Administrator.
• VERITAS Volume Manager User’s Guide—VERITAS Enterprise Administrator: This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager.
Lab 3: Managing Disks and Disk Groups
• In this lab, you use the VxVM interfaces to view the status of disks, initialize disks, move disks to the free disk pool, and move disks into and out of a disk group.
• You also create new disk groups, remove disks from disk groups, deport and import disk groups, and destroy disk groups.
• Lab instructions are in Appendix A.
• Lab solutions are in Appendix B.
Lab 3: Managing Disks and Disk Groups To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Lesson 4 Creating Volumes
Overview
(The slide shows the course road map: Virtual Objects, Installation and Interfaces, Managing Disks and Disk Groups, Creating Volumes, Configuring Volumes, Reconfiguring Volumes Online, Encapsulation and Rootability, and Recovery Essentials.)
Introduction Overview This lesson describes how to create a volume in VxVM. This lesson covers how to create a volume using different volume layouts, how to display volume layout information, and how to remove a volume. Importance By creating volumes, you begin to take advantage of the VxVM concept of virtual storage. Volumes enable you to span data across multiple disks using a variety of storage layouts and to achieve data redundancy and resilience.
Objectives
After completing this lesson, you will be able to:
• Identify the features, advantages, and disadvantages of volume layouts (concatenated, striped, mirrored, and RAID-5) supported by VxVM.
• Create concatenated, striped, mirrored, and RAID-5 volumes by using VEA and from the command line.
• Display volume layout information by using VEA and by using the vxprint command.
• Create and view layered volumes by using VEA and from the command line.
• Remove a volume from VxVM by using VEA and from the command line.
Outline of Topics
• Selecting a Volume Layout
• Creating a Volume
• Displaying Volume Layout Information
• Creating a Layered Volume
• Removing a Volume
Concatenated Layout
(The slide diagram shows a concatenated volume in disk group datadg: the 14-GB volume datavol contains one plex, datavol-01, built from the subdisks datadg01-01 (8 GB) on VxVM disk datadg01 and datadg02-03 (6 GB) on VxVM disk datadg02.)
Selecting a Volume Layout
Each volume layout has different advantages and disadvantages. For example, a volume can be extended across multiple disks to increase capacity, mirrored on another disk to provide data redundancy, or striped across multiple disks to improve I/O performance. The layouts that you choose depend on the levels of performance and reliability required by your system.
Concatenated Layout
A concatenated volume layout maps data in a linear manner onto one or more subdisks in a plex. Subdisks do not have to be physically contiguous and can belong to more than one VM disk. Storage is allocated completely from one subdisk before using the next subdisk in the span. Data is accessed in the remaining subdisks sequentially until the end of the last subdisk. For example, if you have 14 GB of data, then a concatenated volume can logically map the volume address space across subdisks on different disks. The addresses 0 GB to 8 GB of volume address space map to the first 8-gigabyte subdisk, and addresses 8 GB to 14 GB map to the second 6-gigabyte subdisk. An address offset of 12 GB, therefore, maps to an address offset of 4 GB in the second subdisk.
Striped Layout
(The slide diagram shows a striped volume in disk group datadg: the volume datavol contains one plex, datavol-01, with three columns built from subdisks on the VxVM disks datadg01, datadg02, and datadg03. Data is interleaved across the columns in stripe units (SU1, SU2, SU3, and so on); SU = stripe unit, 64K by default.)
Striped Layout A striped volume layout maps data so that the data is interleaved, or allocated in stripes, among two or more subdisks on two or more physical disks. Data is allocated alternately and evenly to the subdisks of a striped plex. The subdisks are grouped into “columns.” Each column contains one or more subdisks and can be derived from one or more physical disks. To obtain the maximum performance benefits of striping, you should not use a single disk to provide space for more than one column. All columns must be the same size. The minimum size of a column should equal the size of the volume divided by the number of columns. The default number of columns in a striped volume is one-half the number of disks in the disk group. Data is allocated in equal-sized units, called stripe units, that are interleaved between the columns. Each stripe unit is a set of contiguous blocks on a disk. The stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The default stripe unit size is 64K, which provides adequate performance for most general purpose volumes. Performance of an individual volume may be improved by matching the stripe unit size to the I/O characteristics of the application using the volume.
Mirrored Layout
(The slide diagram shows a mirrored volume in disk group datadg: the volume datavol contains two plexes, datavol-01 and datavol-02, built from subdisks on the VxVM disks datadg01, datadg02, and datadg03. Each plex must have disk space from different disks to achieve redundancy.)
Mirrored Layout By adding a mirror to a concatenated or striped volume, you create a mirrored layout. A mirrored volume layout consists of more than one plex that duplicate the information contained in a volume. Each plex in a mirrored layout contains an identical copy of the volume data. In the event of a physical disk failure and when the plex on the failed disk becomes unavailable, the system can continue to operate using the unaffected mirrors. Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy. Volume Manager uses true mirrors, which means that all copies of the data are the same at all times. When a write occurs to a volume, all plexes must receive the write before the write is considered complete. You should distribute mirrors across controllers to eliminate the controller as a single point of failure.
RAID-5 Layout
(The slide diagram shows a RAID-5 volume in disk group datadg: the volume datavol contains one plex, datavol-01, with four columns built from subdisks on the VxVM disks datadg01 through datadg04. Each stripe holds three stripe units of data plus one parity unit, with parity rotated across the columns. P = parity, a calculated value used to reconstruct data after disk failure; SU = stripe unit, 16K by default.)
RAID-5 A RAID-5 volume layout has the same attributes as a striped plex, but includes one additional column of data that is used for parity. Parity provides redundancy. Parity is a calculated value used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is calculated by doing an exclusive OR (XOR) procedure on the data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be re-created from the remaining data and parity information. RAID-5 volumes keep a copy of the data and calculated parity in a plex that is striped across multiple disks. Parity is spread equally across columns. Given a five-column RAID-5 where each column is 1 GB in size, the RAID-5 volume size is 4 GB. One column of space is devoted to parity, and the remaining four 1-GB columns are used for data. The default stripe unit size for a RAID-5 volume is 16K. Each column must be the same length but may be made from multiple subdisks of variable length. Subdisks used in different columns must not be located on the same physical disk. RAID-5 requires a minimum of three disks for data and parity. When implemented as recommended, an additional disk is required for the log. RAID-5 cannot be mirrored.
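As a quick illustration of the parity arithmetic (the byte values are hypothetical, and the example assumes a shell such as ksh or bash that supports arithmetic expansion), you can XOR the data in one stripe to get the parity, and XOR the parity with the surviving data to rebuild a lost column:
# echo $(( 0xA5 ^ 0x3C ^ 0xF0 ))
105
# echo $(( 0x69 ^ 0xA5 ^ 0xF0 ))
60
The first result, 105 (0x69), is the parity for the three data bytes; the second shows that a missing byte, 0x3C (60), can be reconstructed from the parity and the two remaining data bytes.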
Comparing Volume Layouts
Concatenation
Advantages: Removes size restrictions; better utilization of free space; simplified administration.
Disadvantages: No redundancy; single disk failure causes volume failure.
Striping
Advantages: Parallel data transfer; load balancing; improved performance (if properly configured).
Disadvantages: No redundancy; single disk failure causes volume failure.
Mirroring
Advantages: Improved reliability and availability; improved read performance; fast recovery through logging.
Disadvantages: Requires more disk space; slightly slower write performance.
RAID-5
Advantages: Redundancy through parity; requires less space than mirroring; improved read performance; fast recovery through logging.
Disadvantages: Slower write performance than mirroring.
Comparing Volume Layouts Concatenation: Advantages • Removes size restrictions: Concatenation removes the restriction on size of storage devices imposed by physical disk size. • Better utilization of free space: Concatenation enables better utilization of free space on disks by providing for the ordering of available discrete disk space on multiple disks into a single addressable volume. • Simplified administration: Concatenation enables large file systems to be created and reduces overall system administration complexity. Concatenation: Disadvantages No protection against disk failure: Concatenation does not protect against disk failure. A single disk failure results in the failure of the entire volume. Striping: Advantages • Improved performance through parallel data transfer: Improved performance is obtained by increasing the effective bandwidth of the I/O path to the data. This may be achieved by a single volume I/O operation spanning across a number of disks or by multiple concurrent volume I/O operations to more than one disk at the same time. • Load balancing: Striping is also helpful in balancing the I/O load from multiuser applications across multiple disks.
Striping: Disadvantages • No redundancy: Striping alone offers no redundancy or recovery features. • Disk failure: Striping a volume increases the chance that a disk failure results in failure of that volume. For example, if you have three volumes striped across two disks, and one of the disks is used by two of the volumes, then if that one disk goes down, both volumes go down. Mirroring: Advantages • Improved reliability and availability: With concatenation or striping, failure of any one disk makes the entire plex unusable. With mirroring, data is protected against the failure of any one disk. Mirroring improves the reliability and availability of a striped or concatenated volume. • Improved read performance: Reads benefit from having multiple places from which to read the data. Mirroring: Disadvantages • Requires more disk space: Mirroring requires twice as much disk space, which can be costly for large configurations. Each mirrored plex requires enough space for a complete copy of the volume’s data. • Slightly slower write performance: Writing to volumes is slightly slower, because multiple copies have to be written in parallel. The overall time the write operation takes is determined by the time needed to write to the slowest disk involved in the operation. The slower write performance of a mirrored volume is not generally significant enough to decide against its use. The benefit of the resilience that mirrored volumes provide outweighs the performance reduction. RAID-5: Advantages • Redundancy through parity: With a RAID-5 volume layout, data can be re-created from remaining data and parity in case of the failure of one disk. • Requires less space than mirroring: RAID-5 stores parity information, rather than a complete copy of the data. • Improved read performance: RAID-5 provides similar improvements in read performance as in a normal striped layout. • Fast recovery through logging: RAID-5 logging minimizes recovery time in case of disk failure. RAID-5: Disadvantages Slow write performance: The performance overhead for writes can be substantial, because a write can involve much more than simply writing to a data block. A write can involve reading the old data and parity, computing the new parity, and writing the new data and parity.
Creating a Volume
Before creating a volume, initialize disks and assign them to disk groups.
• Striped: Minimum two disks
• Mirrored: Minimum one disk for each plex
• RAID-5: Minimum three disks plus one disk to contain the log
(The slide diagram shows the disk group datadg containing the VxVM disks datadg01 through datadg04.)
Creating a Volume
When you create a volume using VEA or CLI commands, you indicate the desired volume characteristics, and VxVM automatically creates the underlying plexes and subdisks. The VxVM interfaces require minimal input if you use default settings. For experienced users, the interfaces also enable you to enter more detailed specifications regarding all aspects of volume creation.
Note: Most volume tasks cannot be performed with the vxdiskadm menu interface—a management tool used for disk objects.
When you create a volume, two device node files are created that can be used to access the volume:
• /dev/vx/dsk/diskgroup/volume_name
• /dev/vx/rdsk/diskgroup/volume_name
Before You Create a Volume
Before you create a volume, you should ensure that you have enough disks to support the layout type.
• A striped volume requires at least two disks.
• A mirrored volume requires at least one disk for each plex. A mirror cannot be on the same disk that other plexes are using.
• A RAID-5 volume requires at least three disks. A RAID-5 log is created by default and must use a separate disk, so you need one additional disk for the log.
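For example, once a hypothetical volume datavol exists in the disk group datadg, its device nodes can be used like any other disk device. On Solaris, assuming VxFS is installed and the mount point already exists, you could create and mount a file system with:
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
# mount -F vxfs /dev/vx/dsk/datadg/datavol /data
(On other platforms, the mkfs and mount options differ.)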
Assigning Disks: VEA
Actions—>New Volume
Step 1: Select disks to use for the volume.
Disks can be Included for or Excluded from volume use.
Creating a Volume: VEA Select:
A disk group
Navigation path:
Actions—>New Volume
Input:
Disks for this volume: Let VxVM decide (default), or manually select disks to use. Volume attributes: Specify a volume name, the size of the volume, the type of volume layout, and other layout characteristics. Assign a meaningful name to the volume that describes the data stored in the volume. File system: Create a file system on the volume and set file system options.
New Volume Wizard Step 1: Assigning Disks to Use for a New Volume By default, VxVM locates available space on all disks in the disk group and assigns the space to a volume automatically based on the layout you choose. Alternatively, you can choose specific disks, mirror or stripe across controllers, trays, targets, or enclosures, or implement ordered allocation. Ordered allocation is a method of allocating disk space to volumes based on a specific set of VxVM rules.
Setting Volume Attributes: VEA
Step 2: Specify volume attributes.
Default options change based on the layout type you select.
New Volume Wizard Step 2: Specifying Attributes for a New Volume Volume name: Assign a meaningful name to the volume that describes the data stored in the volume. Size: Specify a size for the volume. The default unit is GB. If you select the Max Size button, VxVM determines the largest size possible for the volume based on the layout selected and the disks to which the volume is assigned. • Select a size for the volume based on the volume layout and the space available in the disk group. The size of the volume must be less than or equal to the available free space on the disks. • The size specified in the Size field is the usable space in the volume. For a volume with redundancy (RAID-5, mirrored), VxVM allocates additional free space for the volume’s parity information (RAID-5) or additional plexes (mirrored). • The free space available for constructing a volume of a specific layout is generally less than the total free space in the disk group unless the layout is concatenated or striped with no mirroring or logging. Layout: Select a layout type from the group of options. The default layout is concatenated. • Concatenated: The volume is created using one or more regions of specified disks. • Striped: The volume is striped across two or more disks. The default number of columns across which the volume is striped is two, and the default stripe unit size is 128 sectors (64K) on Solaris, AIX, and Linux; 64 sectors (64K) on
HP-UX. You can specify different values.
• RAID-5: In the Number of Columns field, specify the number of columns (disks) across which the volume is striped. The default number of columns is three, and the default stripe unit size is 16K (32 sectors on Solaris, AIX, and Linux; 16 sectors on HP-UX). RAID-5 requires one more column than the number of data columns. The extra column is used for parity. A RAID-5 volume requires at least one more disk than the number of columns; one disk is needed for logging, which is enabled by default.
• Concatenated Mirrored and Striped Mirrored: These options denote layered volume layouts.
Mirror info: • Mirrored: Mirroring is recommended. To mirror the volume, mark the Mirrored check box. Only striped or concatenated volumes can be mirrored. RAID-5 volumes cannot be mirrored. • Total mirrors: Type the total number of mirrors for the volume. A volume can have up to 32 plexes; however, the practical limit is 31. One plex is reserved by VxVM to perform restructuring or relocation operations. Enable logging: To enable logging, mark the Enable logging check box. If you enable logging, a log is created that tracks regions of the volume that are currently being changed by writes. In case of a system failure, the log is used to recover only those regions identified in the log. VxVM creates a dirty region log or a RAID-5 log, depending on the volume layout. If the layout is RAID-5, logging is enabled by default, and VxVM adds an appropriate number of logs to the volume. Enable FastResync: To enable FastResync, mark the Enable FastResync check box. This option is displayed only if you have licensed the FastResync option. Initialize zero: To clear the volume before enabling it for general use, mark the Initialize zero check box. In what situations should you consider using the Initialize zero option? • Under RAID-5 creation, creation time of the RAID-5 volume can be up to 25 percent faster when you initialize zero. With this method of initialization, 0’s are written unconditionally to the volume, instead of the traditional initialization method of XORing each cell. • For security purposes, you can use the Initialize Zero option to overwrite all existing data in the volume area. • You should also consider this option when creating a new pair of volumes on remote systems while using VERITAS Volume Replicator (VVR). By zeroing, you are assured that corresponding volumes in the primary and secondary replicated volume groups (RVGs) are initialized accordingly, avoiding the need for full synchronization of the volumes. No layered volumes: To prevent the creation of a layered volume, mark the No layered volumes check box. This option ensures that the volume has a nonlayered layout. If a layered layout is selected, this option is ignored. Lesson 4 Creating Volumes Copyright © 2004 VERITAS Software Corporation. All rights reserved.
Adding a File System: VEA
Step 3: Create a file system on the volume.
The wizard screen includes fields for the file system type, the mount point, create and mount options, and mount at boot.
New Volume Wizard Step 3: Creating a File System on a New Volume When you create a volume, you can place a file system on the volume and specify options for mounting the file system. You can place a file system on a volume when you create a volume or any time after creation. The default option is “No file system.” To place a file system on the volume, select the “Create a file system” option and specify: • File system type: Specify the file system type as either vxfs (VERITAS File System) or other OS-supported file system types (UFS on Solaris; HFS on HPUX; on AIX, JFS and JFS2 are not supported on VxVM volumes). To add a VERITAS file system, the VxFS product must be installed with appropriate licenses. • Create Options: – Compress: If your platform supports file compression, this option compresses the files on your file system (not available on Solaris). – Allocation unit or Block size: Select an allocation unit size (for OSsupported file system types); or a block size (for VxFS file systems). – New File System Details: Click this button to specify additional file system-specific mkfs options. For VxFS, the only explicitly available additional options are large file support and log size. You can specify other options in the Extra Options field. • Mount Options: – Mount Point: Specify the mount point directory on which to mount the file system. The new file system is mounted immediately after it is created.
Leave this field empty if you do not want to mount the file system.
– Create mount point: Mark this check box to create the directory if it does not exist. The mount point must be specified.
– Read only: Mark this check box to mount the file system as read only.
– Honor setuid: Mark this check box to mount the file system with the suid mount option. This option is marked by default.
– Add to file system table: Mark this check box to include the file system in the /etc/vfstab file (Solaris), the /etc/fstab file (HP-UX, Linux), or the /etc/filesystems file (AIX).
– Mount at boot: Mark this check box to mount the file system automatically whenever the system boots.
– fsck pass: Specify how many fsck passes will be run if the file system is not clean at mount time.
– Mount File System Details: Click this button to specify additional mount options. For VxFS, the explicitly available additional options include disabling Quick I/O, setting directory permissions and owner, and setting caching policy options. You can specify other options, such as quota, in the Extra options field.
Creating a Volume: CLI
To create a volume:
vxassist -g diskgroup make volume_name length [attributes]
Block and character (raw) device files are set up that you can use to access the volume:
• Block device file for the volume: /dev/vx/dsk/diskgroup/volume_name
• Character device file for the volume: /dev/vx/rdsk/diskgroup/volume_name
To display volume attributes, use:
vxassist -g diskgroup help showattrs
Creating a Volume: CLI To create a volume from the command line, you use the vxassist command. You specify the basic attributes of the desired volume layout, and VxVM automatically creates the underlying plexes and subdisks. This command uses default values for volume attributes, unless you provide specific values. vxassist [-g diskgroup] make volume_name length [attributes]
In the syntax:
• Use the -g option to specify the disk group in which to create the volume. If you do not specify a disk group, VxVM creates the volume in your default disk group.
• make is the keyword for volume creation.
• volume_name is a name you give to the volume. Specify a meaningful name.
• length specifies the number of sectors in the volume. You can specify the length in kilobytes, megabytes, or gigabytes by adding a k, m, or g to the length. If no unit is specified, sectors are assumed.
You can specify many additional attributes, such as volume layout or specific disks. For detailed descriptions of all attributes that you can use with vxassist, see the vxassist(1m) manual page.
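For example, assuming the default 512-byte sector size, each of the following commands requests the same 10-megabyte volume, first expressed in sectors and then with an explicit unit suffix:
# vxassist -g datadg make datavol 20480
# vxassist -g datadg make datavol 10m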
Concatenated Volume: CLI
To create a concatenated volume:
# vxassist -g datadg make datavol 10g
(In this example, datadg is the disk group name, datavol is the volume name, and 10g is the volume size.)
If an /etc/default/vxassist file exists with a different default layout, use:
# vxassist -g datadg make datavol 10g layout=nostripe
To create a concatenated volume on specific disks (disk media names):
# vxassist -g datadg make datavol 10g datadg02 datadg03
Creating a Concatenated Volume: CLI By default, vxassist creates a concatenated volume that uses one or more sections of disk space. The vxassist command attempts to locate sufficient contiguous space on one disk for the volume. However, if necessary, the volume is spanned across multiple disks. VxVM selects the disks on which to create the volume. To create a concatenated volume called datavol with a length of 10 gigabytes, in the disk group datadg, using any available disks, you type: # vxassist -g datadg make datavol 10g
Note: To guarantee that a concatenated volume is created, you should include the attribute layout=nostripe in the vxassist make command. Without the layout attribute, the default layout is used, and that default may have been changed by the creation of an /etc/default/vxassist file. For example:
# vxassist -g datadg make datavol 10g layout=nostripe
If you want the volume to reside on specific disks, you can designate the disks by adding the disk media names to the end of the command. More than one disk can be specified. vxassist [-g diskgroup] make volume_name length [disks...]
Striped Volume: CLI
To create a striped volume:
vxassist -g diskgroup make volume_name length layout=stripe ncol=n stripeunit=size [disks...]
Examples:
# vxassist -g acctdg make payvol 2g layout=stripe ncol=3 !acctdg04
# vxassist -g acctdg make expvol 2g layout=stripe ncol=3 stripeunit=64k acctdg01 acctdg02 acctdg03
Creating a Striped Volume: CLI To create a striped volume, you add the layout type and other attributes to the vxassist make command. vxassist [-g diskgroup] make volume_name length layout=stripe ncol=n stripeunit=size [disks...]
In the syntax: • layout=stripe designates the striped layout. • ncol=n designates the number of stripes, or columns, across which the volume is created. This attribute has many aliases. For example, you can also use nstripe=n or stripes=n. If you do not provide a number of columns, then VxVM selects a number of columns based on the number of free disks in the disk group. The minimum number of stripes in a volume is 2, and the maximum is 8. You can edit these minimum and maximum values in /etc/default/vxassist. • stripeunit=size specifies the size of the stripe unit to be used. The default is 64K. • To stripe the volume across specific disks, you can specify the disk media names at the end of the command. The order in which disks are listed on the command line does not imply any ordering of disks within the volume layout. By default, VxVM selects any available disks with sufficient space. To exclude a disk or list of disks, add an exclamation point (!) before the disk media names. For example, !datadg01 specifies that the disk datadg01 should not be used to create the volume. 4–18
RAID-5 Volume: CLI
To create a RAID-5 volume:
vxassist -g diskgroup make volume_name length layout=raid5 ncol=n stripeunit=size [disks...]
• Default ncol=3
• Default stripeunit=16K
• A log is created by default. Therefore, you need at least one more disk than the number of columns.
Example:
# vxassist -g acctdg make payvol 10g layout=raid5
Creating a RAID-5 Volume: CLI To create a RAID-5 volume from the command line, you use the same syntax as for creating a striped volume, except that you use the attribute layout=raid5: vxassist [-g diskgroup] make volume_name length layout=raid5 ncol=n stripeunit=size [disks...]
Notes: • For a RAID-5 volume, the default stripe unit size is 32 sectors (16K). • When a RAID-5 volume is created, a RAID-5 log is created by default. This means that you must have at least one additional disk available for the log. • If you do not want the default log, then add the nolog option in the syntax, layout=raid5,nolog. • If you specify too few disks when creating a volume, you receive the error message “Cannot allocate space for a x block volume”, even if there is enough space in the disk group.
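For example, if you explicitly do not want the default log (not generally recommended), a hypothetical four-column RAID-5 volume without a log could be created with:
# vxassist -g acctdg make payvol 10g layout=raid5,nolog ncol=4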
Mirrored Volume: CLI
To create a mirrored volume:
vxassist -g diskgroup [-b] make volume_name length layout=mirror [nmirror=number]
Examples:
Concatenated and mirrored:
# vxassist -g datadg make datavol 5g layout=mirror
Striped with three mirrors:
# vxassist -g datadg make datavol 5g layout=stripe,mirror nmirror=3
Run the process in the background:
# vxassist -g datadg -b make datavol 5g layout=stripe,mirror nmirror=3
Creating a Mirrored Volume: CLI To mirror a concatenated volume, you add the layout=mirror attribute in the vxassist command. vxassist -g diskgroup [-b] make volume_name length layout=mirror [nmirror=number_of_mirrors] • To specify more than two mirrors, you add the nmirror attribute. • When creating a mirrored volume, the volume initialization process requires that the mirrors be synchronized. The vxassist command normally waits for the mirrors to be synchronized before returning to the system prompt. To run the process in the background, you add the -b option.
Mirrored Volume with Log: CLI
To create a mirrored volume with a log:
vxassist -g diskgroup [-b] make volume_name length layout=mirror logtype=drl [nlog=n]
• logtype=drl enables dirty region logging.
• nlog=n creates n logs and is used when you want more than one log plex to be created.
To create a concatenated volume that is mirrored and logged:
# vxassist -g datadg make datavol 5m layout=mirror logtype=drl
Creating a Mirrored and Logged Volume: CLI When you create a mirrored volume, you can add a dirty region log by adding the logtype=drl attribute: vxassist -g diskgroup [-b] make volume_name length layout=mirror logtype=drl [nlog=n]
In the syntax: • Specify logtype=drl to enable dirty region logging. A log plex that consists of a single subdisk is created. • If you plan to mirror the log, you can add more than one log plex by specifying a number of logs using the nlog=n attribute, where n is the number of logs.
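For example, a hypothetical two-way mirrored volume with two DRL log plexes (so that the log itself is redundant) could be created with:
# vxassist -g datadg make datavol 5g layout=mirror nmirror=2 logtype=drl nlog=2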
Estimating Volume Size: CLI
To determine the largest possible size for a volume:
vxassist -g diskgroup maxsize attributes
Example:
# vxassist -g datadg maxsize layout=raid5
Maximum volume size: 376832 (184Mb)
To determine how much a volume can expand:
vxassist -g diskgroup maxgrow volume
Example:
# vxassist -g datadg maxgrow datavol
Volume datavol can be extended by 366592 to 1677312 (819Mb)
Estimating Volume Size: CLI The vxassist command can determine the largest possible size for a volume that can currently be created with a given set of attributes. vxassist can also determine how much an existing volume can be extended under the current conditions. To determine the largest possible size for the volume to be created, use the command: vxassist -g diskgroup maxsize attributes...
This command does not create the volume but returns an estimate of the maximum volume size. The output value is displayed in sectors, by default. If the volume with the specified attributes cannot be created, an error message is returned: vxvm:vxassist: ERROR: No volume can be created within the given constraints
To determine how much an existing volume can be expanded, use the command: vxassist -g diskgroup maxgrow volume_name
This command does not resize the volume but returns an estimate of how much an existing volume can be expanded. The output indicates the amount by which the volume can be increased and the total size to which the volume can grow. The output is displayed in sectors, by default.
Object Views in the Main Window
Highlight a disk group, and click the Volumes tab. Highlight a volume, and click the tabs to display details.
Displaying Volume Layout Information Displaying Volume Information: VEA To display information about volumes in VEA, you can select from several different views. Object Views in Main Window You can view volumes and volume details by selecting an object in the object tree and displaying volume properties in the grid: • To view the volumes in a disk group, select a disk group in the object tree, and click the Volumes tab in the grid. • To explore detailed components of a volume, select a volume in the object tree, and click each of the tabs in the grid.
Disk View Window
Highlight a volume, and select Actions—>Disk View.
Disk View Window The Disk View window displays a close-up graphical view of the layout of subdisks in a volume. To display the Disk View window, select a volume or disk group and select Actions—>Disk View. Display options in the Disk View window include: • Expand: Click the Expand button to display detailed information about all disks in the Disk View window. • Vol Details: Click the Vol Details button to include volume names, layout types, and volume status for each subdisk. • Projection: Click the Projection button to highlight objects associated with a selected subdisk or volume. Projection shows the relationships between objects by highlighting objects that are related to or part of a specific object. Caution: You can move subdisks in the Disk View window by dragging subdisk icons to different disks or to gaps within the same disk. Moving subdisks reorganizes volume disk space and must be performed with care.
Volume View Window
Highlight a volume, and select Actions—>Volume View.
Volume View Window The Volume View window displays characteristics of the volumes on the disks. To display the Volume View window, select a volume or disk group and select Actions—>Volume View. Display options in the Volume View window include: • Expand: Click the Expand button to display detailed information about volumes. • New volume: Click the New Volume button to invoke the New Volume wizard.
Volume to Disk Map Window
Click a triangle to display or hide subdisks.
Highlight a disk group, and select Actions—>Disk/Volume Map.
Click a dot to highlight an intersecting row and column.
Volume to Disk Mapping Window The Volume to Disk Mapping window displays a tabular view of volumes and their relationships to underlying disks. To display the Volume to Disk Mapping window, highlight a disk group, and select Actions—>Disk/Volume Map. To view subdisk layouts, click the triangle button to the left of the disk name, or select View—>Expand All. To help identify the row and column headings in a large grid, click a dot in the grid to highlight the intersecting row and column.
Volume Layout Window
Highlight a volume, and select Actions—>Layout View.
Select View—>Horizontal or View—>Vertical to change the orientation of the diagram.
Volume Layout Window The Volume Layout window displays a graphical view of the selected volume’s layout, components, and properties. You can select objects or perform tasks on objects in the Volume Layout window. This window is dynamic, so the objects displayed in this window are automatically updated when the volume’s properties change. To display the Volume Layout window, highlight a volume, and select Actions—>Layout View. The View menu changes the way objects are displayed in this window. Select View—>Horizontal to display a horizontal layout and View—>Vertical to display a vertical layout.
Volume Properties Window
Right-click a volume and select Properties.
Callouts in the window indicate the field used for FastResync, volumes managed under Intelligent Storage Provisioning (ISP), and the volume set to which the volume belongs.
Volume Properties Window The Volume Properties window displays a summary of volume properties. To display the Volume Properties window, right-click a volume and select Properties.
Displaying Volume Info: CLI
To display volume configuration information: vxprint -g diskgroup [options]
• -vpsd: Select only volumes (v), plexes (p), subdisks (s), or disks (d).
• -h: List hierarchies below selected records.
• -r: Display related records of a volume containing subvolumes.
• -t: Print single-line output records that depend upon the configuration record type.
• -l: Display all information from each selected record.
• -a: Display all information about each selected record, one record per line.
• -A: Select from all active disk groups.
• -e pattern: Show records that match an editor pattern.
Displaying Volume Layout Information: CLI The vxprint Command You can use the vxprint command to display information about how a volume is configured. This command displays records from the VxVM configuration database. vxprint -g diskgroup [options]
The vxprint command can display information about disk groups, disk media, volumes, plexes, and subdisks. You can specify a variety of options with the command to expand or restrict the information displayed. Only some of the options are presented in this training. For more information about additional options, see the vxprint(1m) manual page.
Common Options
-vpsd: Select only volumes (v), plexes (p), subdisks (s), or disks (d). Options can be used individually or in combination.
-h: List hierarchies below selected records.
-r: Display related records of a volume containing subvolumes. Grouping is done under the highest-level volume.
-t: Print single-line output records that depend upon the configuration record type. For disk groups, the output consists of the record type, the disk group name, and the disk group ID.
-l: Display all information from each selected record. Most records that have a default value are not displayed. This information is in a free format that is not intended for use by scripts.
-a: Display all information about each selected record—one record per line, with a one-space character between each field; the list of associated records is displayed.
-A: Select from all active disk groups.
-e pattern: Show records that match an editor pattern.
Additional Options
-F[type:]format_spec: Enable the user to define which fields to display.
-D -: Read a configuration from the standard input. The standard input is expected to be in standard vxmake input format.
-m: Display all information about each selected record in a format that is useful as input to the vxmake utility.
-f: Display information about each record as one-line output records.
-n: Display only the names of selected records.
-G: Display only disk group records.
-Q: Suppress the disk group header that separates each disk group. A single blank line separates each disk group.
-q: Suppress headers that would otherwise be printed for the default and the -t and -f output formats.
Displaying Volume Info: CLI
# vxprint -g datadg -ht | more
The sample output begins with a header line for each record type (DG, ST, DM, RV, RL, CO, VT, V, PL, SD, SV, SC, DC, SP) and its fields, followed by the records for the disk group datadg, its disks (dm), and the volume datavol01 with its plex (pl) and subdisk (sd). To interpret the output, match header lines with output lines.
Displaying Information for All Volumes To display the volume, plex, and subdisk record information for a disk group: vxprint -g diskgroup -ht
In the output, the top few lines indicate the headers that match each type of output line that follows. Each volume is listed along with its associated plexes and subdisks and other VxVM objects.
• dg is a disk group.
• st is a storage pool (used in Intelligent Storage Provisioning).
• dm is a disk.
• rv is a replicated volume group (used in VERITAS Volume Replicator).
• co is a cache object.
• vt is a volume template (used in Intelligent Storage Provisioning).
• rl is an rlink (used in VERITAS Volume Replicator).
• v is a volume.
• pl is a plex.
• sd is a subdisk.
• sv is a subvolume.
• sc is a storage cache.
• dc is a data change object.
• sp is a snap object.
For more information, see the vxprint(1m) manual page.
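These options can be combined. As a sketch, to display only the volume records in the disk group datadg in single-line format, you could run:
# vxprint -g datadg -vt
Adding the -h option to the same command also lists the plexes and subdisks below each selected volume.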
What Is a Layered Volume?
Original Mirroring:
• The loss of a disk results in the loss of the complete plex.
• A second disk failure could result in the loss of the complete volume.
Layered Volumes:
• Mirroring is performed at the column or subdisk level.
• Disk losses are less likely to affect the complete volume.
Creating a Layered Volume What Is a Layered Volume? VxVM provides two ways to mirror your data: • Original VxVM mirroring: With the original method of mirroring, data is mirrored at the plex level. The loss of a disk results in the loss of a complete plex. A second disk failure could result in the loss of a complete volume if the volume has only two mirrors. To recover the volume, the complete volume contents must be copied from backup. • Enhanced mirroring: VxVM 3.0 introduced support for an enhanced type of mirrored volume called a layered volume. A layered volume is a virtual Volume Manager object that mirrors data at a more granular level. To do this, VxVM creates subvolumes from traditional bottom-layer objects, or subdisks. These subvolumes function much like volumes and have their own associated plexes and subdisks. With this method of mirroring, data is mirrored at the column or subdisk level. Loss of a disk results in the loss of a copy of a column or subdisk within a plex. Further disk losses may occur without affecting the complete volume. Only the data contents of the column or subdisk affected by the loss of the disk need to be recovered. This recovery can be performed from an up-to-date mirror of the failed disk. Note: Only VxVM versions 3.0 and later support layered volumes. To create a layered volume, you must upgrade the disk group that owns the layered volume to version 60 or above.
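Before creating a layered volume in a disk group created under an older release, you may first need to raise the disk group version. As a hedged sketch using the vxdg utility (disk group name taken from the earlier examples; the exact output format may vary), display the current version and then upgrade the disk group to the highest version supported by the installed release:
# vxdg list datadg
# vxdg upgrade datadg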
Traditional Mirroring
What happens if two disks fail? The slide shows a mirrored volume with two striped plexes, each built from two subdisks (sd1 through sd4) on the underlying disks disk01 through disk04, together with the volume status for each two-disk failure combination (X = failed disk; sd = subdisk). When two disks fail, the volume survives 2/6, or 1/3, of the time.
Comparing Regular Mirroring with Enhanced Mirroring To understand the purpose and benefits of layered volume layouts, compare regular mirroring with the enhanced mirroring of layered volumes in a disk failure scenario. Regular Mirroring The example illustrates a regular mirrored volume layout called a mirror-stripe layout. Data is striped across two disks, disk01 and disk03, to create one plex, and that plex is mirrored and striped across two other disks, disk02 and disk04. If two drives fail, the volume survives 2 out of 6 (1/3) times. As more subdisks are added to each plex, the odds of a traditional volume surviving a two-disk failure approach (but never equal) 50 percent. If a disk fails in a mirror-stripe layout, the entire plex is detached, and redundancy is lost on the entire volume. When the disk is replaced, the entire plex must be brought up-to-date, or resynchronized.
Layered Volumes
What happens if two disks fail? The slide shows a layered volume whose plex is built from mirrored subvolumes (subdisks sd1 through sd4 on the underlying disks disk01 through disk04), together with the volume status for each two-disk failure combination (X = failed disk; sd = subdisk). When two disks fail, the volume survives 4/6, or 2/3, of the time.
Layered Volumes
The example illustrates a layered volume layout called a stripe-mirror layout. In this layout, VxVM creates underlying volumes that mirror each subdisk. These underlying volumes are used as subvolumes to create a top-level volume that contains a striped plex of the data. If two drives fail, the volume survives 4 out of 6 (2/3) times. In other words, the use of layered volumes reduces the failure rate by 50 percent without the need for additional hardware. As more subvolumes are added, the odds of a volume surviving a two-disk failure approach 100 percent. For volume failure to occur, both subdisks that make up a subvolume must fail. If a disk fails, only the failing subdisk must be detached, and only that portion of the volume loses redundancy. When the disk is replaced, only a portion of the volume needs to be recovered, which takes less time.
Volume status by failed subdisks (Stripe-Mirror (Layered) / Mirror-Stripe (Nonlayered)):
1 and 2: Down / Down
1 and 3: Up / Up
1 and 4: Up / Down
2 and 3: Up / Down
2 and 4: Up / Up
3 and 4: Down / Down
How Do Layered Volumes Work?
The slide shows a top-level volume whose plex is built from subvolumes. Each subvolume is itself a volume with two plexes, and each of those plexes contains a subdisk on one of the underlying disks (Disk 1 through Disk 4).
• Volumes are constructed from subvolumes.
• The top-level volume is accessible to applications.
Advantages: improved redundancy; faster recovery times. Disadvantages: requires more VxVM objects.
How Do Layered Volumes Work?
In a regular mirrored volume, top-level plexes are made up of subdisks. In a layered volume, these subdisks are replaced by subvolumes. Each subvolume is associated with a second-level volume. This second-level volume contains second-level plexes, and each second-level plex contains one or more subdisks. In a layered volume, only the top-level volume is accessible as a device for use by applications. Note: You can also build a layered volume from the bottom up by using the vxmake command. For more information, see the vxmake(1m) manual page.
Layered Volumes: Advantages
Improved redundancy: Layered volumes tolerate disk failure better than nonlayered volumes and provide improved data redundancy.
Faster recovery times: If a disk in a layered volume fails, a smaller portion of the redundancy is lost, and recovery and resynchronization times are usually quicker than for a nonlayered volume that spans multiple drives. For a stripe-mirror volume, recovery of a single subdisk failure requires resynchronization of only the lower plex, not the top-level plex. For a mirror-stripe volume, recovery of a single subdisk failure requires resynchronization of the entire plex (full volume contents) that contains the subdisk.
Layered Volumes: Disadvantages Requires more VxVM objects. Layered volumes consist of more VxVM objects than nonlayered volumes. Therefore, layered volumes may fill up the disk group configuration database sooner than nonlayered volumes. When the configuration database is full, you cannot create more volumes in the disk group. The minimum size of the private region is 2048 sectors rounded up to the cylinder boundary. With modern disks with large cylinder sizes, this size can be quite large. Each VxVM object requires about 256 bytes. The private region can be made larger when a disk is initialized, but only from the command line. The size cannot be changed once disks have been initialized. Note: With VxVM 3.2 and later, the maximum size of the private region was doubled in order to better accommodate layered volumes.
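A hedged sketch of enlarging the private region when a disk is initialized, as noted above, assuming a Solaris device name and that vxdisksetup is installed in /etc/vx/bin (the privlen value is in sectors and is illustrative only):
# /etc/vx/bin/vxdisksetup -i c1t2d0 privlen=4096
A larger private region leaves more room for the additional configuration records that layered volumes require.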
mirror-concat (Non-Layered)
The slide shows a 1.5-GB volume in which the top-level volume contains more than one plex (mirror), and the plexes are concatenated. One 1.5-GB concatenated plex is a single 1.5-GB subdisk (Subdisk 1 on Disk 1); the other is a 1-GB subdisk (Subdisk 3 on Disk 3) concatenated with a 500-MB subdisk (Subdisk 4 on Disk 4).
Layered Volume Layouts
In general, you should use regular mirrored layouts for smaller volumes and layered layouts for larger volumes. Before you create layered volumes, you need to understand the terminology that defines the different types of mirrored layouts in VxVM.
mirror-concat
Layout type mirror-concat: The top-level volume contains more than one plex (mirror), and the plexes are concatenated in structure.
This layout mirrors data across concatenated plexes. The concatenated plexes can be made up of subdisks of different sizes. In the example, the plexes are mirrors of each other; each plex is a concatenation of one or more subdisks, and the plexes are of equal size. When you create a simple mirrored volume that is less than 1 GB in size, a nonlayered mirrored volume is created by default.
mirror-stripe (Non-Layered)
The slide shows a 1.5-GB volume in which the top-level volume contains more than one plex (mirror), and the plexes are striped. Each 1.5-GB striped plex is made up of two 750-MB subdisks: one plex uses Subdisk 1 (Disk 1) and Subdisk 3 (Disk 3), and the mirror uses Subdisk 2 (Disk 2) and Subdisk 4 (Disk 4).
mirror-stripe
Layout type mirror-stripe: The top-level volume contains more than one plex (mirror), and the plexes are striped in structure.
This layout mirrors data across striped plexes. The striped plexes can be made up of different numbers of subdisks. In the example, plexes are mirrors of each other; each plex is striped across the same number of subdisks. Each striped plex can have different numbers of columns and different stripe unit sizes. One plex could also be concatenated. When you create a striped mirrored volume that is less than one gigabyte in size, a nonlayered mirrored volume is created by default.
concat-mirror (Layered)
The slide shows a 3.5-GB volume in which the top-level volume comprises a single concatenated plex built from two subvolumes, and the subvolumes are mirrored. A 1.5-GB subvolume mirrors Subdisk 1 (Disk 1) and Subdisk 2 (Disk 2); a 2-GB subvolume mirrors Subdisk 3 (Disk 3) and Subdisk 4 (Disk 4).
concat-mirror
Layout type concat-mirror: The top-level volume comprises one plex, and the component subdisks (subvolumes) are mirrored.
This volume layout contains a single plex made up of one or more concatenated subvolumes. Each subvolume comprises two concatenated plexes (mirrors) made up of one or more subdisks. If you have two subdisks in the top-level plex, then a second subvolume is created, which is used as the second concatenated subdisk of the plex. Additional subvolumes can be added and concatenated in the same manner. In the VEA interface, the GUI term used for a layered, concatenated layout is Concatenated Mirrored. Concatenated Mirrored volumes are mirrored by default and therefore require more disks than unmirrored concatenated volumes. Concatenated Mirrored volumes require at least two disks.
stripe-mirror (Layered)
The slide shows a 1.5-GB volume in which the top-level volume comprises a single striped plex built from two 750-MB subvolumes, and the subvolumes are mirrored. One subvolume mirrors Subdisk 1 (Disk 1) and Subdisk 2 (Disk 2); the other mirrors Subdisk 3 (Disk 3) and Subdisk 4 (Disk 4).
stripe-mirror
Layout type stripe-mirror: The top-level volume comprises one striped plex, and the component subdisks (subvolumes) are mirrored.
This volume layout stripes data across mirrored volumes. The difference between stripe-mirror and concat-mirror is that the top-level plex is striped rather than concatenated. In the VEA interface, the GUI term used for a layered, striped layout is Striped Mirrored. Striped Mirrored volumes are mirrored by default and therefore require more disks than unmirrored striped volumes. Striped Mirrored volumes require at least four disks.
Creating Layered Volumes VEA: In the New Volume Wizard, select Concatenated Mirrored or Striped Mirrored as the volume layout.
vxassist make:
# vxassist -g datadg make datavol 10g layout=stripe-mirror
# vxassist -g datadg make datavol 10g layout=concat-mirror
Note: To create simple mirrored volumes (nonlayered), you can use: • layout=mirror-concat • layout=mirror-stripe
Creating a Layered Volume: VEA In the New Volume wizard, select one of the two layered volume layout types: • Concatenated Mirrored: The Concatenated Mirrored layout refers to a concat-mirror volume. • Striped Mirrored: The Striped Mirrored layout refers to a stripe-mirror volume. Creating a Layered Volume: CLI To create a mirrored volume from the command line: vxassist -g diskgroup make volume_name length layout=type [other_attributes]
In the syntax, you can specify any of the following layout types: • To create layered volumes – layout=concat-mirror – layout=stripe-mirror • To create simple mirrored volumes – layout=mirror-concat – layout=mirror-stripe For striped volumes, you can specify other attributes, such as ncol=number_of_columns and stripeunit=size.
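As a hedged sketch that combines a layered layout with the striping attributes named above (the column count and stripe unit size are illustrative), the following creates a striped-mirror volume with four columns and a 64 KB stripe unit, letting VxVM choose the disks:
# vxassist -g datadg make datavol 10g layout=stripe-mirror ncol=4 stripeunit=64k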
Viewing Layered Volumes
# vxprint -rth vol01
In the output, the top-level volume and its plex appear first (v vol01, pl vol01-03). Each subvolume entry (sv) is followed by its second-level volume (v2), second-level plexes (p2), and second-level subdisks (s2).
Viewing a Layered Volume: VEA To view the layout of a layered volume, you can use any of the methods for displaying volume information, including: • Object views in the main window • Disk View window • Volume View window • Volume to Disk Mapping window • Volume Layout window Viewing a Layered Volume: CLI To view the configuration of a layered volume from the command line, you use the -r option of the vxprint command. The -r option ensures that subvolume configuration information for a layered volume is displayed. The -L option is also useful for displaying layered volume information.
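For example, to display the complete hierarchy of a layered volume named vol01 (the volume name is hypothetical), combine the options described above:
# vxprint -g datadg -rth vol01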
Removing a Volume
• When a volume is removed, the space used by the volume is freed and can be used elsewhere.
• Unmount the file system before removing the volume.
VEA:
• Select the volume that you want to remove.
• Select Actions—>Delete Volume.
vxassist remove volume: vxassist -g diskgroup remove volume volume_name
# vxassist -g datadg remove volume datavol
vxedit: vxedit -g diskgroup -rf rm volume_name
# vxedit -g datadg -rf rm datavol
Removing a Volume
You should only remove a volume if you are sure that you do not need the data in the volume, or if the data is backed up elsewhere. A volume must be closed before it can be removed. For example, if the volume contains a file system, the file system must be unmounted. You must manually edit the OS-specific file system table file in order to remove the entry for the file system and avoid errors at boot. If the volume is used as a raw device, the application, such as a database, must close the device.
Removing a Volume: VEA
Select: A volume
Navigation path: Actions—>Delete Volume
Input: Verify the volume to be removed and confirm its removal.
Removing a Volume: CLI You can use the vxassist remove command with VxVM release 3.0 and later: vxassist [-g diskgroup] remove volume volume_name
For earlier versions of VxVM, use the vxedit command: vxedit [-g diskgroup] -rf rm volume_name
If the -r option is not used, the removal fails if the volume has an associated plex. The -f option stops the volume so that it can be removed. For more information, see the vxedit(1m) manual page.
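A hedged end-to-end sketch, assuming the volume contains a file system mounted at /data on a Solaris host (the mount point is illustrative):
# umount /data
# vxassist -g datadg remove volume datavol
Then remove the corresponding entry from /etc/vfstab so that the system does not attempt to mount the deleted volume at boot.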
Summary
You should now be able to:
• Identify the features, advantages, and disadvantages of volume layouts (concatenated, striped, mirrored, and RAID-5) supported by VxVM.
• Create concatenated, striped, mirrored, and RAID-5 volumes by using VEA and from the command line.
• Display volume layout information by using VEA and by using the vxprint command.
• Create and view layered volumes by using VEA and from the command line.
• Remove a volume from VxVM by using VEA and from the command line.
Summary This lesson described how to create a volume in VxVM. This lesson covered how to create a volume using different volume layouts, how to display volume layout information, and how to remove a volume. Next Steps In the next lesson, you learn how to configure additional volume attributes. Additional Resources • VERITAS Volume Manager Administrator’s Guide This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager. • VERITAS Volume Manager User’s Guide—VERITAS Enterprise Administrator This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager. • VERITAS Volume Manager Release Notes This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.
Lab 4: Creating Volumes
• In this lab, you create simple concatenated volumes, striped volumes, mirrored volumes, and volumes with logs.
• You also practice creating a RAID-5 volume, creating a volume with a file system, and creating a layered volume.
• Lab instructions are in Appendix A.
• Lab solutions are in Appendix B.
Lab 4: Creating Volumes Goal In this lab, you create simple concatenated volumes, striped volumes, mirrored volumes, and volumes with logs. You also practice creating a RAID-5 volume, creating a volume with a file system, and creating a layered volume. To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Lesson 5 Configuring Volumes
Overview
Course roadmap: Recovery Essentials; Encapsulation and Rootability; Reconfiguring Volumes Online; Configuring Volumes; Creating Volumes; Managing Disks and Disk Groups; Installation and Interfaces; Virtual Objects
Introduction Overview This lesson describes how to configure volumes in VxVM. This lesson covers how to add and remove a mirror, add a log, change the volume read policy, and allocate storage to volumes. This lesson also describes how to add a file system to a volume and administer VERITAS File System. Importance By configuring volume attributes, you can create volumes that meet the needs of your business environment.
Objectives
After completing this lesson, you will be able to:
• Add a mirror to and remove a mirror from an existing volume by using VEA and from the command line.
• Add a dirty region log or RAID-5 log to an existing volume by using VEA and from the command line.
• Change the volume read policy for a mirrored volume to specify which plex in a volume is used to satisfy read requests by using VEA and from the command line.
• Allocate storage for a volume by specifying storage attributes and ordered allocation.
• Add a file system to an existing volume and administer VERITAS File System.
Outline of Topics • Administering Mirrors • Adding a Log to a Volume • Changing the Volume Read Policy • Allocating Storage for Volumes • Administering File Systems
Adding a Mirror to a Volume
• Only concatenated or striped volumes can be mirrored.
• By default, a mirror is created with the same plex layout as the original volume.
• Each mirror must reside on separate disks.
• All disks must be in the same disk group.
• A volume can have up to 32 plexes, or mirrors.
• Adding a mirror requires volume resynchronization.
Administering Mirrors Adding a Mirror If a volume was not originally created as a mirrored volume, or if you want to add additional mirrors, you can add a mirror to an existing volume. Only concatenated or striped volumes can be mirrored. You cannot mirror a RAID-5 volume. By default, a mirror is created with the same plex layout as the plex already in the volume. For example, assume that a volume is composed of a single striped plex. If you add a mirror to the volume, VxVM makes that plex striped, as well. You can specify a different layout using VEA or from the command line. A mirrored volume requires at least two disks. You cannot add a mirror to a disk that is already being used by the volume. A volume can have multiple mirrors, as long as each mirror resides on separate disks. Only disks in the same disk group as the volume can be used to create the new mirror. Unless you specify the disks to be used for the mirror, VxVM automatically locates and uses available disk space to create the mirror. A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31. One plex should be reserved for use by VxVM for background repair operations. Note: Adding a mirror requires resynchronization of the additional plex, so this operation may take some time.
Adding a Mirror VEA: • Select the volume to be mirrored. • Select Actions—>Mirror—>Add.
vxassist mirror: vxassist -g diskgroup mirror volume [layout=layout_type] [disk_name]
Example: # vxassist -g datadg mirror datavol
Adding a Mirror: VEA
Select: The volume to be mirrored
Navigation path: Actions—>Mirror—>Add
Input: Number of mirrors to add: Type a number. Default is 1. Choose the layout: Select from Concatenated or Striped. Select disks to use: VxVM can select the disks, or you can choose specific disks. You can also mirror or stripe across controllers, trays, targets, or enclosures.
To verify that a new mirror was added, view the total number of copies of the volume as displayed in the main window. The total number of copies is increased by the number of mirrors added. Adding a Mirror: CLI To mirror the volume datavol in the disk group datadg: # vxassist -g datadg mirror datavol
To add a mirror onto a specific disk, you specify the disk name in the command: # vxassist -g datadg mirror datavol datadg03
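Because you can specify a different layout for the new plex, as noted earlier, a hedged sketch (disk names taken from the earlier examples) of adding a striped mirror to a volume whose existing plex is concatenated is:
# vxassist -g datadg mirror datavol layout=stripe datadg03 datadg04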
Removing a Mirror Why remove a mirror? • To provide free space • To reduce number of mirrors • To remove a temporary mirror
When a plex is removed, space from the subdisks is returned to the free space pool.
Removing a Mirror When a mirror (plex) is no longer needed, you can remove it. When a mirror is removed, the space occupied by that mirror can be used elsewhere. Removing a mirror can be used: • To provide free disk space • To reduce the number of mirrors in a volume in order to reduce I/O to the volume • To remove a temporary mirror that was created to back up a volume and is no longer needed Space from the subdisks of a removed plex is returned to the disk group’s free space pool. Caution: Removing a mirror results in loss of data redundancy. If a volume only has two plexes, removing one of them leaves the volume unmirrored.
Removing a Mirror
VEA:
• Select Actions—>Mirror—>Remove.
• Remove by mirror name, quantity, or disk.
vxassist remove mirror: vxassist -g diskgroup remove mirror volume [!]dm_name
To remove the plex that contains a subdisk from the disk datadg02: # vxassist -g datadg remove mirror datavol !datadg02
To remove the plex that uses any disk except datadg02: # vxassist -g datadg remove mirror datavol datadg02
vxplex and vxedit in sequence:
vxplex -g diskgroup dis plex_name
vxedit -g diskgroup -rf rm plex_name
Removing a Mirror: VEA
Select: The volume that contains the mirror to be removed
Navigation path: Actions—>Mirror—>Remove
Input: Remove mirrors by: You can remove a mirror by the name of the mirror, by quantity, or by disk. By mirror: To specify the name of the mirror to be removed, select Mirror. Add the plex to be removed to the “Selected mirrors” field. By quantity: To specify a number of mirrors to be removed, select Quantity/Disk, and type the number of mirrors to be removed in the “Mirror quantity” field. By disk: To specify the name of disks on which mirrors should be preserved, select Quantity/Disk. Add the disks that are to retain their plexes to the “Protect from removal” field.
Removing a Mirror: CLI To remove a mirror from the command line, you use the command: vxassist [-g diskgroup] remove mirror volume [!]dm_name
When deleting a mirror (or a log), you indicate the storage to be removed using the form !dm_name. For example, for the volume datavol, to remove the plex that contains a subdisk from the disk datadg02:
# vxassist -g datadg remove mirror datavol !datadg02
To remove the plex that uses any disk except datadg02: # vxassist -g datadg remove mirror datavol datadg02
You can also use the vxplex and vxedit commands in combination to remove a mirror: vxplex [-g diskgroup] dis plex_name vxedit [-g diskgroup] -rf rm plex_name
For example: # vxplex -g datadg dis datavol-02 # vxedit -g datadg -rf rm datavol-02
You can also use the single command: # vxplex -g diskgroup -o rm dis plex_name
For more information, see the vxplex(1m) and vxedit(1m) manual pages.
Adding a Log to a Volume
Dirty Region Logging (for mirrored volumes):
• Log keeps track of changed regions.
• If the system fails, only the changed regions of the volume must be recovered.
• DRL is not enabled by default. When DRL is enabled, one log is created.
• You can create additional logs to mirror log data.
RAID-5 Logging (for RAID-5 volumes):
• Log keeps a copy of data and parity writes.
• If the system fails, the log is replayed to speed resynchronization.
• RAID-5 logging is enabled by default.
• RAID-5 logs can be mirrored.
• Store logs on disks separate from volume data and parity.
Adding a Log to a Volume Logging in VxVM By enabling logging, VxVM tracks changed regions of a volume. Log information can then be used to reduce plex synchronization times and speed the recovery of volumes after a system failure. Logging is an optional feature, but is highly recommended, especially for large volumes. VxVM supports two types of logging: • Dirty region logging (for mirrored volumes) • RAID-5 logging (for RAID-5 volumes) Dirty Region Logging Dirty region logging (DRL) is used with mirrored volume layouts. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume. Prior to every write, a bit is set in a log to record the area of the disk that is being changed. In case of system failure, DRL uses this information to recover only the portions of the volume that need to be recovered. If DRL is not used and a system failure occurs, all mirrors of the volumes must be restored to a consistent state by copying the full contents of the volume between its mirrors. This process can be lengthy and I/O intensive.
When you enable logging on a mirrored volume, one log plex is created by default. The log plex uses space from disks already used for that volume, or you can specify which disk to use. To enhance performance, you should consider placing the log plex on a disk that is not already in use by the volume. You can create additional DRL logs on different disks to mirror the DRL information. RAID-5 Logging When you create a RAID-5 volume, a RAID-5 log is added by default. RAID-5 logs speed up the resynchronization time for RAID-5 volumes after a system failure. A RAID-5 log maintains a copy of the data and parity being written to the volume at any given time. If a system failure occurs, VxVM can replay the RAID-5 log to resynchronize the volume. This copies the data and parity that was being written at the time of failure from the log to the appropriate areas of the RAID-5 volume. You can create multiple RAID-5 logs on different disks to mirror the log information. Ideally, each RAID-5 volume should have at least two logs to protect against the loss of logging information due to the failure of a single disk. A RAID-5 log should be stored on a separate disk from the volume data and parity disks. Therefore, at least four disks are required to implement RAID-5 with logging. Although a RAID-5 volume cannot be mirrored, RAID-5 logs can be mirrored. To support concurrent access to the RAID-5 array, the log should be several times the stripe size of the RAID-5 plex. As a guideline, make the log six times the size of a full-stripe write to the RAID-5 volume.
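As a rough worked example of that guideline (the column count and stripe unit size are hypothetical): for a 4-column RAID-5 volume with a 32 KB stripe unit, a full-stripe write covers 3 data columns x 32 KB = 96 KB, so a log of about 6 x 96 KB = 576 KB satisfies the guideline.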
Adding/Removing a Log
VEA:
• Actions—>Log—>Add
• Actions—>Log—>Remove
vxassist addlog: vxassist -g diskgroup addlog volume [logtype=drl] [nlog=n] [attributes]
Examples:
• To add a dirty region log to an existing mirrored volume: # vxassist -g datadg addlog datavol logtype=drl
• To add a RAID-5 log to a RAID-5 volume, no log type is needed: # vxassist -g acctdg addlog payvol
• To remove a log from a volume: vxassist -g diskgroup remove log [nlog=n] volume
Adding a Log: VEA
Select: The volume to be logged
Navigation path: Actions—>Log—>Add
Input: Disk to contain the log: By default, VxVM locates available space on any disk in the disk group and assigns the space automatically. To place the log on specific disks, select “Manually assign destination disks,” and select the disk to contain the log.
You can add a log to a volume when you create the volume or at any time after volume creation. The type of log that is created is based on the type of volume layout.
Removing a Log: VEA
Select: The volume that contains the log to be removed
Navigation path: Actions—>Log—>Remove
Input: Removal method: You can specify a removal method similar to removing a mirror.
Note: When you remove the only log from a volume, logging is no longer in effect, and recovery time increases in the event of a system crash.
Adding a Log: CLI You can add a dirty region log to a mirrored volume or add a RAID-5 log to a RAID-5 volume by using the vxassist addlog command. To add a dirty region log to a mirrored volume, you use the logtype=drl attribute. For a RAID-5 volume, you do not need to specify a log type. VxVM adds a RAID-5 log based on the volume layout. # vxassist -g diskgroup addlog volume_name [logtype=drl] [nlog=n] [attributes]
For example, to add a dirty region log to the mirrored volume datavol in the disk group datadg: # vxassist -g datadg addlog datavol logtype=drl
To add two dirty region logs, you add the nlog attribute: # vxassist -g datadg addlog datavol logtype=drl nlog=2
To add a RAID-5 log to the RAID-5 volume payvol in the disk group acctdg: # vxassist -g acctdg addlog payvol
VxVM recognizes that the layout is RAID-5 and adds a RAID-5 log. You can specify additional attributes, such as the disks that should contain the log, when you run the vxassist addlog command. When no disks are specified, VxVM uses space from the disks already in use by that volume, which may not be best for performance. Removing a Log: CLI You can remove a dirty region log or a RAID-5 log by using the vxassist remove log command with the name of the volume. The appropriate type of log is removed based on the type of volume. vxassist -g diskgroup remove log volume_name
For example, to remove the dirty region log from the volume datavol, you type: # vxassist -g datadg remove log datavol
By default, vxassist removes one log. To remove more than one log, you can add the nlog=n attribute to specify the number of logs to be removed: # vxassist -g datadg remove log nlog=2 datavol
Volume Read Policies
The slide illustrates the three read policies. With Round Robin, read I/O is distributed across the plexes of the volume in turn. With Preferred Plex, read I/O is directed to the plex designated as preferred. With Selected Plex, the default method, VxVM checks whether the volume has a striped plex: if so, reads are sent to that plex; otherwise, round robin is used.
Changing the Volume Read Policy Volume Read Policies with Mirroring One of the benefits of mirrored volumes is that you have more than one copy of the data from which to satisfy read requests. You can specify which plex VxVM should use to satisfy read requests by setting the read policy. The read policy for a volume determines the order in which volume plexes are accessed during I/O operations. VxVM has three read policies: • Round robin: If you specify a round-robin read policy, VxVM reads each plex in turn in “round-robin” manner for each nonsequential I/O detected. Sequential access causes only one plex to be accessed in order to take advantage of drive or controller read-ahead caching policies. If a read is within 256K of the previous read, then the read is sent to the same plex. • Preferred plex: With the preferred plex read policy, Volume Manager reads first from a plex that has been named as the preferred plex. Read requests are satisfied from one specific plex, presumably the plex with the highest performance. If the preferred plex fails, another plex is accessed. • Selected plex: This is the default read policy. Under the selected plex policy, Volume Manager chooses an appropriate read policy based on the plex configuration to achieve the greatest I/O throughput. If the volume has an enabled striped plex, the read policy defaults to that plex; otherwise, it defaults to a round-robin read policy.
Setting the Volume Read Policy
VEA:
• Actions—>Set Volume Usage
• Select from Based on layouts, Round robin, or Preferred.
vxvol rdpol: vxvol -g diskgroup rdpol policy volume_name [plex]
Examples:
• To set the read policy to round robin: # vxvol -g datadg rdpol round datavol
• To set the read policy to read from a preferred plex: # vxvol -g datadg rdpol prefer datavol datavol-02
• To set the read policy to select a plex based on layouts: # vxvol -g datadg rdpol select datavol
Changing the Volume Read Policy: VEA
Select: A volume
Navigation path: Actions—>Set Volume Usage
Input: Volume read policy: Select Based on layouts (default; the selected plex method), Round robin, or Preferred. If you select Preferred, then you can also select the preferred plex from the list of available plexes.
Changing the Volume Read Policy: CLI
vxvol -g diskgroup rdpol round volume_name
vxvol -g diskgroup rdpol prefer volume_name preferred_plex
vxvol -g diskgroup rdpol select volume_name
Specifying Storage Attributes
With storage attributes, you can specify:
• Which storage devices are used by the volume
• How volumes are mirrored across devices
When creating a volume, you can:
• Include specific disks, controllers, enclosures, targets, or trays to be used for the volume.
• Exclude specific disks, controllers, enclosures, targets, or trays from being used for the volume.
• Mirror volumes across specific controllers, enclosures, targets, or trays. (By default, VxVM mirrors across different disks.)
Allocating Storage for Volumes
Specifying Storage Attributes for Volumes
VxVM automatically selects the disks on which each volume resides, unless you specify otherwise. To create a volume on specific disks, you can designate those disks when creating a volume. By specifying storage attributes when you create a volume, you can:
• Include specific disks, controllers, enclosures, targets, or trays to be used for the volume.
• Exclude specific disks, controllers, enclosures, targets, or trays from being used for the volume.
• Mirror volumes across specific controllers, enclosures, targets, or trays. (By default, VxVM does not permit mirroring on the same disk.)
By specifying storage attributes, you can ensure a high availability environment. For example, you can permit mirroring of a volume only on disks connected to different controllers, and eliminate the controller as a single point of failure.
Note: When creating a volume, all storage attributes that you specify for use must belong to the same disk group. Otherwise, VxVM does not use them to create a volume.
Storage Attributes: Methods
VEA: In the New Volume wizard, select “Manually select disks for use by this volume,” and select the disks and storage allocation policy.
CLI: Add storage attributes to vxassist make:
vxassist [-g diskgroup] make volume length [layout=layout] [mirror=ctlr|enclr|target] [!][storage_attributes...]
Storage attribute formats (prefix with ! to exclude):
• Disks: datadg02
• Controllers: ctlr:c2
• Enclosures: enclr:emc1
• Targets: target:c2t4
• Trays: c2tray2
Mirroring attributes:
• Mirror across controllers: mirror=ctlr
• Mirror across enclosures: mirror=enclr
• Mirror across targets: mirror=target
Specifying Storage Attributes: VEA In the New Volume wizard, select “Manually select disks for use by this volume.” Select the disks and the storage layout policy for allocating storage to a volume. You can specify that the volume is to be mirrored or striped across controllers, enclosures, targets, or trays.
Note: A tray is a set of disks within certain Sun arrays. Specifying Storage Attributes: CLI To create a volume on specific disks, you add storage attributes to the end of the vxassist command: vxassist [-g diskgroup] make volume_name length [layout=layout] storage_attributes...
Storage attributes can include: • Disk names, in the format diskname, for example, datadg02
• Controllers, in the format ctlr:controller_name, for example, ctlr:c2
• Enclosures, in the format enclr:enclosure_name, for example, enclr:emc1
• Targets, in the format target:target_name, for example, target:c2t4
• Trays, in the format c#tray#, for example, c2tray2
To exclude a disk, controller, enclosure, target, or tray, you add the exclusion symbol (!) before the storage attribute. For example, to exclude datadg02 from volume creation, you use the format: !datadg02. When mirroring volumes across controllers, enclosures, or targets, you can use additional attributes: • The attribute mirror=ctlr specifies that disks in one mirror should not be on the same controller as disks in other mirrors within the same volume. • The attribute mirror=enclr specifies that disks in one mirror should not be in the same enclosure as disks in other mirrors within the same volume. • The attribute mirror=target specifies that volumes should be mirrored between identical target IDs on different controllers. Note: The vxassist utility has an internal default mirror=disk attribute that prevents you from mirroring data on the same disk.
Storage Attributes: Examples To create datavol using any disks except for datadg05: # vxassist -g datadg make datavol 5g !datadg05
To exclude all disks on controller c2: # vxassist -g datadg make datavol 5g !ctlr:c2
To include all disks on c1, except for target t5: # vxassist -g datadg make datavol 5g ctlr:c1 !target:c1t5
To create a mirrored volume with one plex on c2 and the other plex on c3: # vxassist -g datadg make datavol 10g layout=mirror nmirror=2 mirror=ctlr ctlr:c2 ctlr:c3
Example: Creating a Volume on Specific Disks To create a 5-GB volume called datavol on datadg03 and datadg04: # vxassist -g datadg make datavol 5g datadg03 datadg04
Examples: Excluding Storage from Volume Creation To create the volume datavol using any disks except for datadg05: # vxassist -g datadg make datavol 5g !datadg05
To exclude all disks that are on controller c2: # vxassist -g datadg make datavol 5g !ctlr:c2
To include only disks on controller c1 except for target t5: # vxassist -g datadg make datavol 5g ctlr:c1 !target:c1t5
To exclude disks datadg07 and datadg08 when calculating the maximum size of a RAID-5 volume that vxassist can create using the disks in the disk group datadg: # vxassist -g datadg maxsize layout=raid5 nlog=2 !datadg07 !datadg08
Example: Mirroring Across Controllers To create a mirrored volume with two data plexes, and specify that disks in one mirror should not be on the same controller as disks in other mirrors within the same volume:
# vxassist -g datadg make datavol 10g layout=mirror nmirror=2 mirror=ctlr ctlr:c2 ctlr:c3
The disks in one data plex are all attached to controller c2, and the disks in the other data plex are all attached to controller c3. This arrangement ensures continued availability of the volume should either controller fail. Example: Mirroring Across Enclosures To create a mirrored volume with two data plexes, and specify that disks in one mirror should not be in the same enclosure as disks in other mirrors within the same volume: # vxassist -g datadg make datavol 10g layout=mirror nmirror=2 mirror=enclr enclr:emc1 enclr:emc2
The disks in one data plex are all taken from enclosure emc1, and the disks in the other data plex are all taken from enclosure emc2. This arrangement ensures continued availability of the volume should either enclosure become unavailable.
Ordered Allocation
With VxVM 3.2 and later, ordered allocation enables you to control how columns and mirrors are laid out when creating a volume. With ordered allocation, storage is allocated in a specific order:
• First, VxVM concatenates subdisks in columns.
• Secondly, VxVM groups columns in striped plexes.
• Finally, VxVM forms mirrors.
Note: When using ordered allocation, the number of disks specified must exactly match the number of disks needed for a given layout (you cannot specify more).
Specifying Ordered Allocation of Storage for Volumes In addition to specifying which storage devices VxVM uses to create a volume, you can also specify how the volume is distributed on the specified storage. By using the ordered allocation feature of VxVM, you can control how volumes are laid out on specified storage. Ordered allocation is available in VxVM 3.2 and later. When you use ordered allocation in creating a volume, columns and mirrors are created on disks based on the order in which you list the disks on the command line. Storage is allocated in the following order: • First, VxVM concatenates the disks. • Secondly, VxVM forms columns. • Finally, VxVM forms mirrors. For example, if you are creating a three-column mirror-stripe volume using six specified disks, VxVM creates column 1 on the first disk, column 2 on the second disk, and column 3 on the third disk. Then, the mirror is created using the fourth, fifth, and sixth specified disks. Without the ordered allocation option, VxVM uses the disks in any order.
Ordered Allocation: Methods VEA:
In the New Volume wizard, select “Manually select disks for use by this volume.” Select the disks, the storage allocation policy, and mark the Ordered check box. CLI: Add the -o ordered option: vxassist [-g diskgroup][-o ordered] make volume length [layout=layout]...
Specifying Ordered Allocation: VEA In the New Volume wizard, select “Manually select disks for use by this volume.” Select the disks and storage layout policy, and mark the Ordered check box.
When Ordered is selected, VxVM uses the specified storage to first concatenate disks, then to form columns, and finally to form mirrors. Specifying Ordered Allocation: CLI To implement ordered allocation, use the -o ordered option to vxassist: vxassist [-g diskgroup] [-o ordered] make volume_name...
Two optional attributes are also available with the -o ordered option: • You can use the col_switch=size1,size2... attribute to specify how to allocate space from each listed disk to concatenate subdisks in a column before switching to the next disk. The number of size arguments determines how many disks are concatenated to form a column. • You can use the logdisk=disk attribute to specify the disk on which logs are created. This attribute is required when using ordered allocation in creating a RAID-5 volume, unless nolog or noraid5log is specified. For other types of volume layouts, this attribute is optional.
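A hedged sketch of the logdisk requirement when creating a RAID-5 volume with ordered allocation (the volume and disk names are illustrative):
# vxassist -g datadg -o ordered make rvol 6g layout=raid5 ncol=3 logdisk=datadg04 datadg01 datadg02 datadg03
Here columns 1 through 3 are placed on datadg01, datadg02, and datadg03 in that order, and the RAID-5 log is placed on datadg04.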
Ordered Allocation: Examples
Specifying the Order of Columns:
# vxassist -g datadg -o ordered make datavol 2g layout=stripe ncol=3 datadg02 datadg04 datadg06
Specifying the Order of Mirrors:
# vxassist -g datadg -o ordered make datavol 2g layout=mirror datadg02 datadg04
(The slide diagrams show datavol's columns and mirrors placed on datadg02, datadg04, and datadg06 in the order listed.)
Example: Order of Columns To create a 10-GB striped volume, called datavol, with three columns striped across three disks: # vxassist -g datadg -o ordered make datavol 10g layout=stripe ncol=3 datadg02 datadg04 datadg06
Because the -o ordered option is specified, column 1 is placed on datadg02, column 2 is placed on datadg04, and column 3 is placed on datadg06. Without this option, column 1 can be placed on any of the three disks, column 2 on any of the remaining two disks, and column 3 on the remaining disk. Example: Order of Mirrors To create a mirrored volume using datadg02 and datadg04: # vxassist -g datadg -o ordered make datavol 10g layout=mirror datadg02 datadg04
Because the -o ordered option is specified, the first mirror is placed on datadg02 and the second mirror is placed on datadg04. Without this option, the first mirror could be placed on either disk.
Note: There is no logical difference between the mirrors. However, by controlling the order of mirrors, you can associate plex names with specific disks (for example, datavol-01 with datadg02 and datavol-02 with datadg04). This level of control is significant when you perform mirror breakoff and disk group split operations. You can establish conventions that indicate to you which specific disks are used for the mirror breakoff operations.
Ordered Allocation: Examples
Specifying Column Concatenation:
# vxassist -g datadg -o ordered make datavol 10g layout=mirror-stripe ncol=2 col_switch=3g,2g datadg01 datadg02 datadg03 datadg04 datadg05 datadg06 datadg07 datadg08
(The slide diagram shows 3-GB and 2-GB pieces taken from datadg01 through datadg08 to build the two columns of datavol and their mirrors.)
Specifying Other Storage Classes:
# vxassist -g datadg -o ordered make datavol 80g layout=mirror-stripe ncol=3 ctlr:c1 ctlr:c2 ctlr:c3 ctlr:c4 ctlr:c5 ctlr:c6
Example: Concatenating Columns You can use the col_switch attribute to specify how to concatenate space on the disks into columns. For example, to create a 2-column, mirrored-stripe volume: # vxassist -g datadg -o ordered make datavol 10g layout=mirror-stripe ncol=2 col_switch=3g,2g datadg01 datadg02 datadg03 datadg04 datadg05 datadg06 datadg07 datadg08
Because the col_switch attribute is included, this command allocates 3 GB from datadg01 and 2 GB from datadg02 to column 1, and 3 GB from datadg03 and 2 GB from datadg04 to column 2. The mirrors of these columns are then similarly formed from disks datadg05 through datadg08. Example: Other Storage Classes You can use other storage specification classes, such as controllers, enclosures, targets, and trays, with ordered allocation. For example, to create a 3-column, mirrored-stripe volume between specified controllers: # vxassist -g datadg -o ordered make datavol 80g layout=mirror-stripe ncol=3 ctlr:c1 ctlr:c2 ctlr:c3 ctlr:c4 ctlr:c5 ctlr:c6
This command allocates space for column 1 from disks on controller c1, for column 2 from disks on controller c2, and so on.
Adding a File System: VEA
Select Actions—>File System—>New File System.
(The slide shows the New File System Details and Mount File System Details dialog boxes.)
Administering File Systems
A file system provides an organized structure to facilitate the storage and retrieval of files. You can add a file system to a volume when you initially create the volume or at any time afterward.
Adding a File System to a Volume: VEA
Select: A volume
Navigation path: Actions—>File System—>New File System
Input:
• File system type: Select vxfs or another supported platform-specific file system type.
• Create options: Set mkfs options.
• Mount options: Specify a mount point and other mount options.
Mounting a File System: VEA A file system created with VEA is mounted automatically if you specify the mount point in the New File System dialog box. If a file system was previously created, but not mounted, on a volume, you can explicitly mount the file system by using Actions—>File System—>Mount File System. Unmounting a File System: VEA To unmount a file system on a volume, select the file system or volume containing the file system, and select Actions—>File System—>Unmount File System.
Adding a File System: CLI
1. Create the file system using mkfs (VxFS) or OS-specific file system creation commands.
2. Create a mount point directory on which to mount the file system.
3. Mount the volume to the mount point by using the mount command.
   – Data is accessed through the mount point directory.
   – When data is written to files, it is actually written to the block device file: /dev/vx/dsk/disk_group/volume_name
   – When fsck is run on the file system, the raw device file is checked: /dev/vx/rdsk/disk_group/volume_name
Adding a File System to a Volume: CLI
To add a file system to a volume from the command line, you must create the file system, create a mount point for the file system, and then mount the file system.
Notes:
• When creating a file system on a volume, the size of the file system defaults to and cannot exceed the size of the volume.
• When a file system has been mounted on a volume, the data is accessed through the mount point directory.
• When data is written to files, it is actually written to the block device file: /dev/vx/dsk/disk_group/volume_name.
• When fsck is run on the file system, the raw device file is checked: /dev/vx/rdsk/disk_group/volume_name.
Solaris
To create and mount a VxFS file system: # mkfs -F vxfs /dev/vx/rdsk/datadg/datavol # mkdir /data # mount -F vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount a UFS file system: # newfs /dev/vx/rdsk/datadg/datavol # mkdir /data # mount /dev/vx/dsk/datadg/datavol /data
HP-UX
To create and mount a VxFS file system: # mkfs -F vxfs /dev/vx/rdsk/datadg/datavol # mkdir /data # mount -F vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount an HFS file system: # newfs -F hfs /dev/vx/rdsk/datadg/datavol # mkdir /data # mount -F hfs /dev/vx/dsk/datadg/datavol /data AIX
To create and mount a VxFS file system using mkfs: # mkfs -V vxfs /dev/vx/rdsk/datadg/datavol # mkdir /data # mount -V vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount a VxFS file system using crfs: # crfs -v vxfs -d /dev/vx/rdsk/datadg/datavol -m /data -A yes
Notes: • An uppercase V is used with mkfs; a lowercase v is used with crfs (to avoid conflict with another crfs option). • crfs creates the file system, creates the mount point, and updates the file systems file (/etc/filesystems). The -A yes option requests mount at boot. • If the file system already exists in /etc/filesystems, you can mount the file system by simply using the syntax: # mount mount_point Linux
To create and mount a VxFS file system using mkfs: # mkfs -t vxfs /dev/vx/rdsk/datadg/datavol # mkdir /data # mount -t vxfs /dev/vx/dsk/datadg/datavol /data
Mount File System at Boot
To mount the file system automatically at boot time, edit the OS-specific file system table file to add an entry for the file system. Specify information such as:
• Device to mount: /dev/vx/dsk/datadg/datavol
• Device to fsck: /dev/vx/rdsk/datadg/datavol
• Mount point: /data
• File system type: vxfs
• fsck pass: 1
• Mount at boot: yes
• Mount options: -
In VEA, select “Add to file system table” and “Mount at boot” in the New File System dialog box.
Mounting a File System at Boot: CLI
If you want the file system to be mounted at every system boot, you must edit the file system table file by adding an entry for the file system. If you later decide to remove the volume, you must remove the entry in the file system table file.
File system table file by platform:
• Solaris: /etc/vfstab
• HP-UX: /etc/fstab
• AIX: /etc/filesystems
• Linux: /etc/fstab
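To illustrate what such an entry contains, a Solaris /etc/vfstab line for the volume used throughout this lesson would carry the values shown in the slide above, in the standard vfstab field order (device to mount, device to fsck, mount point, file system type, fsck pass, mount at boot, mount options):
/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /data vxfs 1 yes -
The equivalent entries for /etc/fstab and /etc/filesystems use the formats of those files; see the platform documentation for the exact layout.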
Notes:
• In VEA, when you create a file system, if you select the “Add to file system table” and “Mount at boot” check boxes, the entry is made automatically in the file system table file. If the volume is later removed through VEA, its corresponding file system table file entry is also removed automatically.
• In AIX, you can use the following commands when working with the file system table file, /etc/filesystems:
  – To view entries: # lsfs mount_point
  – To change details of an entry, use chfs. For example, to turn off mount at boot: # chfs -A no mount_point
  – To remove an entry: # rmfs mount_point
Using VxFS Commands
• VxFS can be used as the basis for any file system except for file systems used to boot the system.
• Specify directories in the PATH environment variable to access VxFS-specific commands.
• VxFS uses standard file system management syntax:
  command [fs_type] [generic_options] [-o VxFS_options] [special|mount_point]
• Use the file system switchout to access VxFS-specific versions of standard commands.
• Without the file system switchout, the file system type is taken from the default specified in the default file system file. To use VxFS as your default, change this file to contain vxfs.
Using VERITAS File System Commands
You can generally use VERITAS File System (VxFS) as an alternative to other disk-based, OS-specific file systems, except for the file systems used to boot the system. File systems used to boot the system are mounted read-only in the boot process, before the VxFS driver is loaded. VxFS can be used in place of:
• UNIX File System (UFS) on Solaris, except for root, /usr, /var, and /opt.
• Hierarchical File System (HFS) on HP-UX, except for /stand.
• Journaled File System (JFS) and Enhanced Journaled File System (JFS2) on AIX, except for root and /usr.
• Extended File System Version 2 (EXT2) and Version 3 (EXT3) on Linux, except for root, /boot, /etc, /lib, /var, and /usr.
Location of VxFS Commands:
• Solaris: /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs
• HP-UX: /usr/sbin, /opt/VRTS/bin, /sbin/fs
• AIX: /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs
• Linux: /sbin, /usr/lib/fs/vxfs
Specify these directories in the PATH environment variable.
General File System Command Syntax
VERITAS File System uses standard file system management command syntax:
command [fs_switchout] [generic_options] [-o specific_options] [special | mount_point]
To access VxFS-specific versions, or wrappers, of standard commands, you use the Virtual File System switchout mechanism followed by the file system type, vxfs. The switchout mechanism directs the system to search the appropriate directories for VxFS-specific versions of commands.
File system switchout by platform:
• Solaris: -F vxfs
• HP-UX: -F vxfs
• AIX: -V vxfs (or -v vxfs when used with crfs)
• Linux: -t vxfs
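As a concrete illustration of the switchout, the same file system check of the datavol device (shown later in this lesson) would be invoked slightly differently on each platform; the commands below are a sketch built from the table above:
# fsck -F vxfs /dev/vx/rdsk/datadg/datavol     (Solaris, HP-UX)
# fsck -V vxfs /dev/vx/rdsk/datadg/datavol     (AIX)
# fsck -t vxfs /dev/vx/rdsk/datadg/datavol     (Linux)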
Using VxFS Commands by Default
If you do not use the switchout mechanism, then the file system type is taken from the default specified in the OS-specific default file system file. If you want VERITAS File System to be your default file system type, change the default file system file to contain vxfs.
Default file system file by platform:
• Solaris: /etc/default/fs
• HP-UX: /etc/default/fs
• AIX: /etc/vfs
• Linux: /etc/default/fs
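On Solaris, for example, the change amounts to editing one assignment in /etc/default/fs; the exact keyword below reflects the stock file and is given as an assumption, so verify it against your system before editing:
# cat /etc/default/fs
LOCAL=vxfs
The AIX /etc/vfs file uses a different format (a table of vfs types with a default marker), so consult the platform documentation before changing it.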
VxFS-Specific mkfs Options
mkfs [fs_type] [-o specific_options] special
• -o N: Provides information only; does not create the file system.
• -o bsize=n: Sets the logical block size. Default: 1024 bytes (1K) for most file systems. The block size cannot be changed after creation, and in most cases the default is best. Resizing the file system does not change the block size.
• -o largefiles|nolargefiles: Supports files larger than 2 gigabytes (and more than 8 million files). Default: largefiles.
• -o version=n: Specifies the layout version. Valid values are 4, 5, and 6. Default: Version 6.
• -o logsize=n: Sets the size of the logging area. The default depends on file system size and is sufficient for most workloads. The log size can be changed after creation using fsadm.
Using mkfs Command Options You can set a variety of file system properties when you create a VERITAS file system by adding VxFS-specific options to the mkfs command. For example: • -o N Reports the same structural information about the file system as if it had actually been created, without actually creating the file system. • -o largefiles|nolargefiles Controls the largefiles flag for the file system. By default, the largefiles flag is on, which enables the creation of files 2 gigabytes or larger and the use of more than 8 million inodes in a file system. If you turn the flag off, then files in the file system are limited to less than 2 GB in size. After file system creation, you can use the -o largefiles option to the fsadm command to enable or disable large file support. See the fsadm_vxfs(1m) manual page for more information. • -o version=n The -o version=n option specifies a particular file system layout version to be used when making the file system, where n is the VxFS file system layout version number. Valid values are 4, 5, and 6. When no option is specified, the default is file system layout Version 6. – The Version 4 layout enables extents to be variable in size, enables support for large files, and adds typed extents to the VxFS architecture. Version 4 supports files and file systems up to one terabyte in size. – The Version 5 layout enables the creation of file system sizes up to 32
terabytes. Files can be a maximum of two terabytes. File systems larger than 1 terabyte must be created on a VxVM volume and require an 8K block size.
– The Version 6 layout enables the creation of files and file systems up to 8 exabytes (2^63 bytes) in size, and enables features such as multidevice support, cross-platform data sharing, named data streams, and file change log.
Note: With VxFS 4.0 and later, VxFS file systems with layout versions 1 and 2 can no longer be created or mounted.
• -o bsize=n
Sets the block size for files on the file system, where n is the block size in bytes. Block size represents the smallest amount of disk space allocated to a file and must be a power of two selected from the range 1024 to 8192. The default block size is 1024 bytes for file systems smaller than one terabyte and 8K for file systems greater than 1 TB. Overall file system performance can be improved or degraded by changing the block size. In most cases, you do not need to specify a block size when creating a file system. However, for large file systems with relatively few files, you may want to experiment with larger block sizes. Resizing the file system does not change the block size, so you typically set a larger than usual block size if you expect to extend the file system in the near future. Determining an appropriate block size involves a trade-off between memory consumption and wasted disk space.
• -o logsize=n
Allocates the number of file system blocks for an activity logging area, where n is the number of file system blocks. The activity logging area, called the intent log, contains a record of changes to be made to the structure of the file system. When you create a file system with mkfs, VxFS uses a default log size (in the range of 256K to 64 MB) that is based on the file system size: the larger the file system, the larger the default intent log size. The log size can be changed after the file system is created by using the log option of the fsadm command. The minimum log size is the number of file system blocks that make the log no less than 256K, and the maximum log size is the number of file system blocks that make the log no greater than 2 GB. For more information on mkfs options, see the mkfs_vxfs(1m) manual page.
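A short sketch of these options in combination on a Solaris system follows; the 8K block size and 16384-block log size are illustrative values chosen for a large file system, not general recommendations:
# mkfs -F vxfs -o N /dev/vx/rdsk/datadg/datavol
# mkfs -F vxfs -o bsize=8192,largefiles,logsize=16384,version=6 /dev/vx/rdsk/datadg/datavol
The first command only reports what would be created; the second actually builds the file system on the full volume.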
Maximum File/File System Sizes
• Solaris: File = 2^63 bytes; File System = 2^63 bytes
• HP-UX: File = 2^41 bytes; File System = 2 TB
• AIX: File = 2^44 bytes; File System = 2^47 bytes
• Linux: File = 16 TB; File System = 2 TB
Requirements: VxFS 4.0 file system (with Version 6 file system layout) on a VxVM 4.0 volume, and a 64-bit kernel.
Note: On a 32-bit kernel, the maximum file size is 2 TB and the maximum file system size is 1 TB.
Maximum File and File System Sizes The maximum file and file system sizes can be obtained when using a VxFS 4.0 file system on a VxVM 4.0 volume and running on a 64-bit kernel.
Other VxFS Commands
Mount options:
# mount ... -r ...            Mount as read-only
# mount -v                    Display mounted file systems
# mount -p                    Display in file system table format
# mount -a                    Mount all in the file system table
Unmount options:
# umount /mydata              Unmount a file system
# umount -a                   Unmount all mounted file systems
# umount -o force /mydata     Force an unmount
Display file system type:
# fstyp -v /dev/vx/dsk/datadg/datavol
Display free space:
# df -F vxfs /mydata
Other mount Command Options The following options are available with the VxFS-specific mount command: • To mount the file system as read-only, add the -r option to the mount command. • To display a list of currently mounted file systems: # mount -v • To display a list of mounted file systems in the file system table format: # mount -p • To mount all file systems listed in the file system table file: # mount -a Unmounting a File System To unmount a file system from the command line, you use the umount command: • To unmount a currently mounted file system: umount mount_point • To unmount all file systems, except the ones required by the operating system: umount -a • To perform a forced unmount of a VxFS file system: umount -o force mount_point
A forced unmount can be useful in situations such as high availability environments, where a mounted file system could prevent timely failover. Any active process with I/O operations pending on an unmounted file system receives an I/O error. Caution: This command can cause data loss. Identifying File System Type If you do not know the file system type of a particular file system, you can determine the file system type by using the fstyp command. You can use the fstyp command to describe either a mounted or unmounted file system. To determine the type of file system on a disk partition, you use the following syntax: fstyp [-v] special
In the syntax, you specify the command followed by the name of the device. You can use the -v option to specify verbose mode. In VEA, right-click a file system in the object tree, and select Properties. The file system type is displayed in the File System Properties window. Identifying Free Space To report the number of free disk blocks and inodes for a VxFS File System, you use the df command. The df command displays the number of free blocks and free inodes in a file system or directory by examining the counts kept in the superblocks. Extents smaller than 8K may not be usable for all types of allocation, so the df command does not count free blocks in extents below 8K when reporting the total number of free blocks. df [-F vxfs] [generic_options] [-o s] [special|mount_point]
The -o s option is specific to VxFS. You can use this option to print the number of free extents of each size. In VEA, right-click a file system, and select Properties to display free space and usage information.
Traditional Block-Based Allocation
Block-based allocation:
• Allocates space to the next rotationally adjacent block
• Allocates blocks at random from a free block map
• Becomes less effective as the file system fills
• Requires extra disk I/O to write metadata
(The slide diagram shows an inode pointing to blocks n, n+3, n+8, n+13, n+20, and n+21 scattered across the disk.)
Comparing VxFS with Traditional File System Allocation Policies Both VxFS and traditional UNIX file systems, such as UFS, use index tables to store information and location information about blocks used for files. However, VxFS allocation is extent-based, while other file systems are block-based. • Block-based allocation: File systems that use block-based allocation assign disk space to a file one block at a time. • Extent-based allocation: File systems that use extent-based allocation assign disk space in groups of contiguous blocks, called extents. Example: UFS Block-Based Allocation UFS allocates space for files one block at a time. When allocating space to a file, UFS uses the next rotationally adjacent block until the file is stored. UFS can perform at a level similar to an extent-based file system on sequential I/O by using a technique called block clustering. In UFS, the maxcontig file system tunable parameter can be used to cluster reads and writes together into groups of multiple blocks. Through block clustering, writes are delayed so that several small writes are processed as one large write. Sequential read requests can be processed as one large read through read-ahead techniques. Block-based allocation requires extra disk I/O to write file system block structure information, or metadata. Metadata is always written synchronously to disk, which can significantly slow overall file system performance. Over time, block-based allocation produces a fragmented file system with random file access.
VxFS Extent-Based Allocation
• Inode: An index block associated with a file. Within the inode, each extent is recorded as an address-length pair consisting of the starting block and the length of the extent.
• Extent: A set of contiguous blocks.
• Extent size is based on the size of I/O write requests. When a file expands, another extent is allocated.
• Additional extents are progressively larger, reducing the total number of extents used by a file.
(The slide diagram shows an inode holding the pairs “n, 17” and “n+37, 6,” which map to two contiguous runs of blocks.)
VxFS Extent-Based Allocation
VERITAS File System selects a contiguous range of file system blocks, called an extent, for inclusion in a file. The number of blocks in an extent varies and is based on either the I/O pattern of the application or explicit requests by the user or programmer. Extent-based allocation enables larger I/O operations to be passed to the underlying drivers. VxFS attempts to allocate each file in one extent of blocks. If this is not possible, VxFS attempts to allocate all extents for a file close to each other.
Each file is associated with an index block, called an inode. In an inode, an extent is represented as an address-length pair, which identifies the starting block address and the length of the extent in logical blocks. This enables the file system to directly access any block of the file.
VxFS automatically selects an extent size by using a default allocation policy that is based on the size of I/O write requests. The default allocation policy attempts to balance two goals:
• Optimum I/O performance through large allocations
• Minimal file system fragmentation through allocation from space available in the file system that best fits the data
The first extent allocated is large enough for the first write to the file. Typically, the first extent is the smallest power of 2 that is larger than the size of the first write, with a minimum extent allocation of 8K. Additional extents are progressively larger, doubling the size of the file with each new extent. This method reduces the total number of extents used by a single file.
There is no restriction on the size of an extent. When a file needs to expand to a size larger than the extent size, the operating system allocates another extent of disk blocks, and the inode is updated to include a pointer to the first block of the new extent along with its size.
Benefits of Extent-Based Allocation
Benefits of extent-based allocation include:
• Good performance: By grouping multiple blocks into large writes, extent-based allocation is faster than block-at-a-time operations. Note: Random I/O does not benefit as much, because the I/O sizes are generally small. To perform a random read of a file, the file system must look up the block address for each desired block, which is similar to block-based allocation.
• Less metadata overhead: Metadata is written when a file is created, but subsequent writes within an extent do not require additional metadata writes. Therefore, a file with only a few very large extents requires only a small amount of metadata. Also, to read all blocks in an extent sequentially, the file system must only read the starting block number and the length of the extent, resulting in very little sequential read overhead.
Extent-based allocation can address files of any supported size up to 20 GB directly and efficiently. Also, large files can be accessed with fewer pointers and less indirection than block-based allocation.
Note: Improper extent sizes can reduce performance benefits, as follows:
• If the extent size is too small, the system loses some performance benefits and acts more like an indexed allocation system.
• If the extent size is too large, the file system contains allocated disk space that is not actually in use, which is wasted space.
The vxupgrade Command
For better performance, use file system layout Version 6 for new file systems. To upgrade the layout online, use vxupgrade:
vxupgrade [-n new_version] [-o noquota] [-r rawdev] mount_point
To display the current file system layout version number:
# vxupgrade /mnt
Upgrading must be performed in stages. For example, to upgrade the file system layout from Version 4 to Version 6:
# vxupgrade -n 5 /mnt
# vxupgrade -n 6 /mnt
Upgrading the VxFS File System Layout The placement of file system structures and the organization of user data on disk is referred to as the file system layout. The evolution of VERITAS File System has included six different file system layout versions. Each version has become increasingly complex to support greater scalability for large files and to minimize file system fragmentation. By default, any new file system that you create using VxFS 4.0 has file system layout Version 6. You can upgrade an existing file system that has an earlier file system layout to Version 6 by using the vxupgrade command. The upgrade does not require an unmount and can be performed online. Upgrading to disk layout Version 6 changes all inodes in the file system. After a file system is upgraded to disk layout Version 6, it cannot be mounted with releases prior to VxFS 4.0. Performing Online Upgrades Only a privileged user can upgrade the file system layout. Once you upgrade to a later layout version, you cannot downgrade to an earlier layout version while the file system is online. You must perform the layout upgrade procedure in stages when using the vxupgrade command. You cannot upgrade Version 4 file systems directly to Version 6. For example, you must upgrade from Version 4 to Version 5, then from Version 5 to Version 6.
The vxupgrade Command
To upgrade the VxFS file system layout, you use the vxupgrade command. The vxupgrade command only operates on file systems mounted for read/write access.
vxupgrade [-n new_version] [-o noquota] [-r rawdev] mount_point
• The -n option specifies the new file system layout version number to which you are upgrading. The new version can be 5 or 6.
• The -r rawdev option specifies the path of the raw device. You use this option when vxupgrade cannot determine which raw device corresponds to the mount point—for example, when /etc/mnttab is corrupted.
Displaying the File System Layout Version
You can use the vxupgrade command without the -n option to display the file system layout version number of a file system. To display the file system layout version number of a VERITAS file system mounted at /mnt, you type:
# vxupgrade /mnt
/mnt: vxfs file system version 4 layout
In the output, the current file system layout version is displayed. Using the vxupgrade Command A VxFS file system with Version 4 file system layout is mounted at /mnt. To upgrade this file system to Version 6 layout, you execute the following sequence of commands: # vxupgrade -n 5 /mnt # vxupgrade -n 6 /mnt
If you attempt to upgrade directly from file system layout Version 4 to Version 6, you receive an error. How Does vxupgrade Work? The upgrade process follows this sequence of events: 1 The vxupgrade command creates the lock file in /lost+found/.fsadm. The lock file blocks any use of the fsadm utility on this file system during the vxupgrade procedure. 2 The file system is frozen. 3 New file system structures are allocated and initialized. 4 The file system thaws, and the inodes are released. 5 The lock file in /lost+found/.fsadm is removed. This process does not keep the file system frozen for more than a few seconds.
VxFS Structure
Allocation Unit 0 (32K blocks), Allocation Unit 1 (32K blocks), Allocation Unit 2 (32K blocks), ... Allocation Unit n (32K blocks)
Structural Fileset:
• Object Location Table file
• Label file (Superblock)
• Device file
• Fileset Header file
• Inode List file
• Inode Allocation Unit file
• Log file (Intent Log)
• Extent AU State file
• Extent AU Summary file
• Free Extent Map file
• Quotas files
VxFS Structural Components The structure of a VERITAS file system is complex, and only the main structures are presented in this topic. For more information about structural components, see the VERITAS File System System Administrator’s Guide. VxFS layout Versions 4, 5, and 6 include the following structural components: • Allocation units • Structural files VxFS Allocation Units With the VxFS layout, the entire file system space is divided into fixed-size allocation units. The first allocation unit starts at block zero, and all allocation units are a fixed length of 32K blocks. A file system with a block size of 1K has an AU size of 32 MB, and for a block size of 8K, the AU size is 256 MB. An exception is the last allocation unit in the file system, which occupies whatever space remains at the end of the file system. An allocation unit is roughly equivalent to the cylinder group in UFS. VxFS Structural Files All structural information about the file system is contained in files within a structural fileset. With the exception of the superblock, which has a known location, structural files are not stored in a fixed location. The object location table (OLT) is used to keep track of locations of other structural files.
Earlier VxFS layout versions placed structural information in fixed locations within allocation units. With structural information separated from the allocation units, expansion of the file system simply requires extending the appropriate structural files. This design also removes the extent size restrictions of layout Versions 1 and 2 by enabling extents to span allocation units. To display file system structures, use the ncheck_vxfs(1m) command. The structural files in the VxFS Version 4, 5, and 6 file system layouts are:
• Object Location Table File: Contains the object location table (OLT), which is used to locate the other structural files.
• Label File: Encapsulates the superblock and superblock replicas. The superblock contains fundamental information about the file system, such as file system type, size, layout, and available resources. The location of the primary superblock is known, and the label file can locate superblock copies if there is structural damage to the file system.
• Device File: Records device information, such as volume length and volume label, and contains pointers to other structural files.
• Fileset Header File: Holds information on a per-fileset basis, which may include the inode of the fileset’s inode list file, the maximum number of inodes allowed, an indication of whether the file system supports large files, and the inode number of the quotas file if the fileset supports quotas.
• Inode List File: Contains the inode lists. Increasing the number of inodes involves increasing the size of this file after expanding the inode allocation unit file.
• Inode Allocation Unit File: Holds the free inode map, extended operations map, and a summary of inode resources.
• Log File: Maps the blocks used by the file system intent log. (The intent log is a record of current activity used to guarantee file system integrity in the event of system failure.)
• Extent Allocation Unit State File: Indicates the allocation state of each AU by defining whether each AU is free, allocated as a whole (no bitmaps allocated), or expanded.
• Extent Allocation Unit Summary File: Contains the AU summary for each allocation unit, which holds the number of free extents of each size. (The summary for an extent is created only when an allocation unit is expanded for use.)
• Free Extent Map File: Contains the free extent maps for each of the allocation units.
• Quotas Files: If the file system supports quotas, a quotas file is used to track the resources allocated to each user.
Fragmentation
Degree of fragmentation depends on:
• File system usage
• File system activity patterns
(The slide diagram shows a file system going from its initial allocation to a fragmented state and back to a defragmented state.)
Fragmentation types:
• Directory fragmentation
• Extent fragmentation
Controlling File System Fragmentation In a VERITAS file system, when free resources are initially allocated to files, they are aligned in the most efficient order possible to provide optimal performance. On an active file system, the original order is lost over time as files are created, removed, and resized. As space is allocated and deallocated from files, the available free space becomes broken up into fragments. This means that space has to be assigned to files in smaller and smaller extents. This process is known as fragmentation. Fragmentation leads to degraded performance and availability. The degree of fragmentation depends on file system usage and activity patterns. Allocation units in VxFS are designed to help minimize and control fragmentation. However, over time file systems eventually become fragmented. VxFS provides online reporting and optimization utilities to enable you to monitor and defragment a mounted file system. These utilities are accessible through the file system administration command, fsadm. Using the fsadm command, you can track and eliminate fragmentation without interrupting user access to the file system. Types of Fragmentation VxFS addresses two types of fragmentation: • Directory fragmentation As files are created and removed, gaps are left in directory inodes. This is known as directory fragmentation. Directory fragmentation causes directory lookups to become slower.
• Extent fragmentation
As files are created and removed, the free extent map for an allocation unit changes from having one large free area to having many smaller free areas. Extent fragmentation occurs when files cannot be allocated in contiguous chunks and more extents must be referenced to access a file. In a case of extreme fragmentation, a file system may have free space, none of which can be allocated.
Monitoring Fragmentation
To monitor directory fragmentation:
# fsadm -D /mnt1
        Dirs      Total    Immed   Immeds   Dirs to   Blocks
        Searched  Blocks   Dirs    to Add   Reduce    to Reduce
total   486       99       388     6        6         6
A high total in the Dirs to Reduce column indicates fragmentation.
To monitor extent fragmentation:
# fsadm -E /home
...
% Free blocks in extents smaller than 64 blks: 8.35
% Free blocks in extents smaller than  8 blks: 4.16
% blks allocated to extents 64 blks or larger: 45.81
The output displays percentages of free and allocated blocks per extent size.
Running Fragmentation Reports You can monitor fragmentation in a VERITAS file system by running reports that describe fragmentation levels. You use the fsadm command to run reports on both directory and extent fragmentation. The df command, which reports on file system free space, also provides information useful in monitoring fragmentation. • To obtain a directory fragmentation report, you use the -D option in the fsadm command: fsadm -D mount_point In the syntax, you specify the fsadm -D command and the mount point that identifies the file system. • To obtain an extent fragmentation report, you use the -E option in the fsadm command: fsadm -E [-l largesize] mount_point In the syntax, you specify the fsadm -E command followed by the mount point that identifies the file system. By default, the largesize value is 64 blocks. This means that the extent fragmentation report considers extents of size 64 blocks or larger to be immovable; that is, reallocating and consolidating these extents does not improve performance. You can specify a different largesize value by using the -l option. • You can also use the df -F vxfs -o s command to print the number of free extents of each size.
Interpreting Fragmentation Reports In general, for optimum performance, the percentage of free space in a file system should not fall below 10 percent. A file system with 10 percent or more free space has less fragmentation and better extent allocation. The simplest way to determine the degree of fragmentation is to view the percentages in the extent fragmentation report and follow these guidelines: • An unfragmented file system has one or more of the following characteristics: – Less than five percent of free space in extents of less than 64 blocks in length – Less than one percent of free space in extents of less than eight blocks in length – More than five percent of the total file system size available as free extents in lengths of 64 or more blocks • A badly fragmented file system has one or more of the following characteristics: – More than 50 percent of free space used by small extents of less than 64 blocks in length – A large number of small extents that are free (Generally, a fragmented file system has greater than five percent of free space in extents of less than 8 blocks in length.) – Less than five percent of the total file system size available is in large extents, which are defined as free extents in lengths of 64 or more blocks. Note: You should also consider file size when interpreting fragmentation reports. If most of the files are less than 64 blocks in size, then this last characteristic would not represent a fragmented file system.
Percentage                                                          Unfragmented    Badly Fragmented
Free space in extents of less than 64 blocks in length              < 5%            > 50%
Free space in extents of less than 8 blocks in length               < 1%            > 5%
Total file system size in extents of length 64 blocks or greater    > 5%            < 5%
Defragmenting a File System
fsadm [-d] [-D] [-e] [-E] [-t time] [-p passes] mount_point
During extent reorganization:
• Small files are made contiguous.
• Large files are built from large extents.
• Small, recent files are moved near the inodes.
• Large, old files are moved to the end of the AU.
• Free space is clustered in the center.
Example: fsadm -e -E -s /mnt1
During directory reorganization:
• Valid entries are moved to the front.
• Free space is clustered in the center of the allocation unit.
• Directories are packed into the inode area.
• Directories are placed before other files.
• Entries are sorted by access time.
Example: fsadm -d -D /mnt1
VxFS Defragmentation You can use the online administration utility fsadm to defragment, or reorganize, file system directories and extents. The fsadm utility defragments a file system mounted for read/write access by: • Removing unused space from directories • Making all small files contiguous • Consolidating free blocks for file system use Only a privileged user can reorganize a file system. Defragmenting Extents Defragmenting extents, called extent reorganization, can improve performance: fsadm -e mount_point
During extent reorganization: • Small files (less than 64K) are made into one contiguous extent. • Large files are built from large extents. • Small and recently used (less than 14 days) files are moved near the inode area. • Large or old files (more than 14 days since last access) are moved to the end of the allocation unit. • Free space is clustered in the center of the data area. Extent reorganization is performed on all inodes in the file system. Each pass through the inodes moves the file system closer to optimal organization.
Defragmenting Directories Defragmenting directories, called directory reorganization, is not nearly as critical as extent reorganization, but regular directory reorganization improves performance: fsadm -d mount_point
Directories are reorganized through compression and sorting. During directory reorganization: • Valid entries are moved to the front of the directory. • Free space is clustered in the center of the allocation unit. • Directories and symbolic links are packed into the inode immediate area. • Directories and symbolic links are placed before other files. • Entries are sorted by the time of last access. Other fsadm Defragmentation Options If you specify both -d and -e, directory reorganization is always completed before extent reorganization. If you use the -D and -E with the -d and -e options, fragmentation reports are produced both before and after the reorganization. You can use the -t and -p options to control the amount of work performed by fsadm, either in a specified time or by a number of passes. By default, fsadm runs five passes. If both -t and -p are specified, fsadm exits if either of the terminating conditions is reached. For more information on defragmentation options, see the fsadm_vxfs(1m) manual page. Duration of Defragmentation The time it takes to complete extent reorganization varies, depending on the degree of fragmentation, disk speed, and the number of inodes in the file system. In general, extent reorganization takes approximately one minute for every 100 megabytes of disk space.
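For instance, combining the options described above, a single invocation that reorganizes directories first, then extents, prints fragmentation reports before and after, and limits the run with -t might look like the following; the mount point and the time value of 7200 are illustrative:
# fsadm -d -D -e -E -t 7200 /data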
Scheduling Defragmentation
• The frequency of defragmentation depends on usage, activity patterns, and importance of performance.
• Run defragmentation on demand or as a cron job:
  – Daily or weekly for frequently used file systems
  – Monthly for infrequently used file systems
• Adjust defragmentation intervals based on reports.
• To defragment using VEA, highlight a file system and select Actions—>Defrag File System.
Scheduling Defragmentation
The best way to ensure that fragmentation does not become a problem is to defragment the file system on a regular basis. The frequency of defragmentation depends on file system usage, activity patterns, and the importance of file system performance. In general, follow these guidelines:
• Schedule defragmentation during a time when the file system is relatively idle.
• For frequently used file systems, schedule defragmentation daily or weekly.
• For infrequently used file systems, schedule defragmentation at least monthly.
• Full file systems tend to fragment and are difficult to defragment. Consider expanding the file system.
To determine the defragmentation schedule that is best for your system, select what you think is an appropriate interval for running extent reorganization and run the fragmentation reports both before and after the reorganization. If the degree of fragmentation is approaching the bad fragmentation figures, reduce the interval between fsadm runs. If the degree of fragmentation is low, the interval between fsadm runs can be increased. Schedule directory reorganization for a file system when its extent reorganization is scheduled.
The fsadm utility can run on demand and can be scheduled regularly as a cron job, as sketched below. The defragmentation process can take some time; you receive an alert when the process is complete.
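As a sketch of the cron approach, a crontab entry such as the following would run directory and extent reorganization on /data every Sunday at 2:00 a.m.; the fsadm path and the schedule are assumptions, so substitute the command location listed earlier for your platform and a window that suits your environment:
0 2 * * 0 /opt/VRTS/bin/fsadm -d -D -e -E /data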
Intent Log
1. The intent log records pending file system changes before metadata is changed.
2. After the intent log is written, other file system updates are made.
3. If the system crashes, the intent log is replayed by VxFS fsck.
(The slide diagram shows the intent log stored with the structural files inside the allocation units on disk.)
Role of the Intent Log
A file system may be left in an inconsistent state after a system failure. Recovery of structural consistency requires examination of file system metadata structures. VERITAS File System provides fast file system recovery after a system failure by using a tracking feature called intent logging, or journaling. Intent logging is the process by which intended changes to file system metadata are written to a log before changes are made to the file system structure. Once the intent log has been written, the other updates to the file system can be written in any order. In the event of a system failure, the VxFS fsck utility replays the intent log to nullify or complete file system operations that were active when the system failed.
Traditionally, the length of time taken for recovery using fsck was proportional to the size of the file system. For large disk configurations, running fsck is a time-consuming process that checks, verifies, and corrects the entire file system. The VxFS version of the fsck utility performs an intent log replay to recover a file system without completing a full structural check of the entire file system. The time required for log replay is proportional to the log size, not the file system size. Therefore, the file system can be recovered and mounted seconds after a system failure. Intent log recovery is not readily apparent to users or administrators, and the intent log can be replayed multiple times with no adverse effects.
Note: Replaying the intent log may not completely recover the damaged file system structure if the disk suffers a hardware failure. Such situations may require a complete file system check using the VxFS fsck utility.
Maintaining VxFS Consistency
To check file system consistency by using the intent log for the VxFS file system on the volume datavol:
# fsck [fs_type] /dev/vx/rdsk/datadg/datavol
To perform a full check without using the intent log:
# fsck [fs_type] -o full,nolog /dev/vx/rdsk/datadg/datavol
To check two file systems in parallel using the intent log:
# fsck [fs_type] -o p /dev/rdsk/c1t2d0s4 /dev/rdsk/c1t0d0s5
To perform a file system check using the VEA GUI, highlight an unmounted file system, and select Actions—>Check File System.
Maintaining File System Consistency You use the VxFS-specific version of the fsck command to check the consistency of and repair a VxFS file system. The fsck utility replays the intent log by default, instead of performing a full structural file system check, which is usually sufficient to set the file system state to CLEAN. You can also use the fsck utility to perform a full structural recovery in the unlikely event that the log is unusable. The syntax for the fsck command is: fsck [fs_type] [generic_options] [-y|-Y] [-n|-N] [-o full,nolog] special
Generic fsck Options For a complete list of generic options, see the fsck(1m) manual page. Some of the generic options include:
• -m: Checks, but does not repair, a file system before mounting.
• -n|N: Assumes a response of no to all prompts by fsck. (This option does not replay the intent log and performs a full fsck.)
• -V: Echoes the expanded command line but does not execute the command.
• -y|Y: Assumes a response of yes to all prompts by fsck. (If the file system requires a full fsck after the log replay, then a full fsck is performed.)
VxFS-Specific fsck Options
• -o full: Perform a log replay and a full file system check. (By default, VxFS performs an intent log replay only.)
• -o nolog: Do not perform log replay. You can use this option if the log area becomes physically damaged.
• -o p: Allow parallel log replay for several VxFS file systems. Each message from fsck is prefixed with the device name to identify the device. This suboption does not perform a full file system check in parallel; that is still done sequentially on each device, even when multiple devices are specified. (This option is supported in Solaris 8, update 2 and later.)
Resizing the Intent Log
• Intent log size can be changed using fsadm:
  fsadm [-F vxfs] -o log=size[,logdev=device] mount_point
  (Specify a new log size; the logdev suboption places the log on a separate device.)
• Default log size: Depends on file system size
• Maximum log size: 2 GB
• Minimum log size: 256K
• Larger log sizes may improve performance for intensive synchronous writes, but may increase:
  – Recovery time
  – Memory requirements
  – Log maintenance time
Resizing the Intent Log The VxFS intent log is allocated when the file system is first created. The size of the intent log is based on the size of the file system—the larger the file system, the larger the intent log. • Default log size: Based on file system size; in the range 256K to 64 MB • Maximum log size: 2 GB (Version 6 layout); 16 MB (Versions 4 and 5 layout) • Minimum log size: 256K With the Version 6 disk layout, you can dynamically increase or decrease the intent log size using the log option of the fsadm command. The allocation can be directed to a specified intent logging device, as long as the device exists and belongs to the same volume set as the file system. Increasing the size of the intent log can improve system performance because it reduces the number of times the log wraps around. However, increasing the intent log size can lead to greater times required for a log replay if there is a system failure. Memory requirements for log maintenance increase as the log size increases. The log size should never be more than 50 percent of the physical memory size of the system. A small log uses less space on the disk and leaves more room for file data. For example, setting a log size smaller than the default log size may be appropriate for a small floppy device. On small systems, you should ensure that the log size is not greater than half the available swap space.
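Following the slide syntax above, a resize of the intent log for a file system mounted at /data might look like the command below; the 16m value is illustrative, and the size suffix is an assumption, so specify the size in file system blocks if your fsadm version does not accept it:
# fsadm -F vxfs -o log=16m /data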
Logging mount Options
mount -F vxfs [-o specific_options] ...
The slide shows the four logging modes on an integrity-versus-performance scale:
• -o log: All structural changes logged.
• -o delaylog: Default; some logging delayed; improves performance.
• -o tmplog: Most logging delayed; great performance improvement, but changes could be lost.
• -o blkclear: All storage initialized; provides increased security; slower than a standard file system.
Controlling Logging Behavior VERITAS File System provides VxFS-specific logging options that you can use when mounting a file system to alter default logging behavior. By default, when you mount a VERITAS file system, the -o delaylog option is used with the mount command. With this option, some system calls return before the intent log is written. This logging delay improves the performance of the system, and this mode approximates traditional UNIX guarantees for correctness in case of system failures. You can specify other mount options to change logging behavior to further improve performance at the expense of reliability. Selecting mount Options for Logging You can add VxFS-specific mount options to the standard mount command using -o in the syntax: mount [-F vxfs] [generic_options] [-o specific_options] special mount_point
Logging mount options include: • -o log • -o delaylog • -o tmplog • -o blkclear
-o blkclear The blkclear option is used in increased data security environments. This option guarantees that all storage is initialized before being allocated to files. The increased integrity is provided by clearing extents on disk when they are allocated within a file. Extending writes are not affected by this mode. A blkclear mode file system should be approximately ten percent slower than a standard mode VxFS file system, depending on the workload. -o log This option guarantees that all structural changes to the file system have been logged on disk when the system call returns. If a system failure occurs, fsck replays recent changes so that they are not lost. -o delaylog This is the default option that does not need to be specified. When you use this option, some system calls return before the intent log is written, and the logging delay improves the performance of the system. With this option, VxFS synchronously maintains structural changes to the file system, and operations such as file create, file delete, and extending file sizes are guaranteed to go into the log. Other operations such as synchronous I/Os (for example, a database transaction log) are also guaranteed to be stored on disk. For some workloads, such as file servers, where the application does not request synchronous semantics, VxFS tries to cache things when allowed to improve performance. If VxFS is not allowed to cache things, for example, in database environments or NFS environments where the database sets the caching policies, VxFS will strictly adhere to those policies. However, when the application allows VxFS to choose the caching policy, VxFS will attempt to do the best job from a performance and memory management perspective. -o tmplog With the tmplog option, intent logging is almost always delayed. This option greatly improves performance, but recent changes may disappear if the system crashes. This mode is only recommended for temporary file systems. On most UNIX systems, temporary file system directories (such as /tmp and /usr/tmp) often hold files that do not need to be retained when the system reboots. The underlying file system does not need to maintain a high degree of structural integrity for these temporary directories.
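As a brief sketch of choosing a mode at mount time on Solaris, the first command below mounts the data volume with full logging, and the second mounts a hypothetical scratch volume (tmpvol is not created elsewhere in this lesson) with temporary logging:
# mount -F vxfs -o log /dev/vx/dsk/datadg/datavol /data
# mount -F vxfs -o tmplog /dev/vx/dsk/datadg/tmpvol /scratch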
Logging and Performance
To select the best logging mode for your environment:
• Understand the different logging options.
• Test sample loads and compare performance results.
• Consider the type of operations performed as well as the workload.
• Performance of I/O to devices can improve if writes are performed in a particular size, or in a multiple of that size. To specify an I/O size to be used for logging, use the mount option: -o logiosize=size
• Place the intent log on a separate volume and disk.
Logging and VxFS Performance In environments where data reliability and integrity are of the highest importance, logging is essential. However, logging does incur performance overhead. If maximum data reliability is less important than maximum performance, then you can experiment with logging mount options. When selecting mount options for logging to try to improve performance, follow these guidelines: Test representative system loads. The best way to select a logging mode is to test representative system loads against the logging modes and compare the performance results. Consider the type of operations and the workload. The degree of performance improvement depends on the operations being performed and the workload. • File system structure-intensive loads (such as mkdir, create, and rename) may show over 100 percent improvement. • Read/write intensive loads should show less improvement.
Experiment with different logging modes. • The delaylog and tmplog modes are capable of significantly improving performance. – With delaylog, the improvement over log mode is typically about 15 to 20 percent. – With tmplog, the improvement is even higher. • A nodatainlog mode file system should be approximately 50 percent slower than a standard mode VxFS file system for synchronous writes. Other operations are not affected. Experiment with I/O Sizes for Logging The performance of some storage devices, such as those using read-modify-write features, improves if the writes are performed in a particular size, or in a multiple of that size. When you mount a file system, you can specify the I/O size to be used for logging by using the logiosize option to the mount command: # mount -F vxfs -o logiosize=size special mount_point
You can specify a size (in bytes) of 512, 1024, 2048, 4096, or 8192. If you specify an I/O size for logging, VxFS writes the intent log in at least that size, or in a multiple of that size, to obtain maximum performance from devices that employ a read-modify-write feature. Note: A read-modify-write operation is a RAID-5 algorithm used for short write operations, that is, write operations in which the number of data columns that must be written to is less than half the total number of data columns. Place the intent log on a separate volume. By placing the intent log on a separate volume and disk, you eliminate the disk seek time between the VxFS data and log areas on disk and increase the performance of synchronous log writes. Prior to VxFS 4.0, you could use VERITAS QuickLog to enhance VxFS performance by exporting the file system log to a separate physical volume. However, QuickLog does not operate on the Version 6 disk layout introduced in the VxFS 4.0 release. With VxFS 4.0 and later, the same task can be accomplished by using the multivolume support feature of VxFS. With multivolume support, a single VxFS file system can be mounted on multiple volumes combined into a volume set. You can dedicate one of the volumes in the volume set to the intent log.
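As an illustration, a file system on storage that performs best with 4 KB writes could be mounted with an explicit log I/O size; the device and mount point names here are placeholders only:
# mount -F vxfs -o logiosize=4096 /dev/vx/dsk/datadg/datavol /datavol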
File Change Log
• Tracks changes to files and directories in a file system for use by backup utilities, webcrawlers, search engines, and replication programs.
• In contrast to the intent log, the FCL is not used for recovery.
• Location: mount_point/lost+found/changelog
• To activate/deactivate an FCL for a mounted file system: # fcladm on|off mount_point (Default is off.)
• To remove an FCL (FCL must be off first): # fcladm rm mount_point
• To obtain current FCL state for a mounted file system: # fcladm state mount_point
File Change Log The VxFS File Change Log (FCL) is another type of log that tracks changes to files and directories in a file system. Applications that can make use of the FCL are those that are typically required to scan an entire file system to discover changes since the last scan, such as backup utilities, webcrawlers, search engines, and replication programs. The File Change Log records file system changes such as creates, links, unlinks, renaming, data appended, data overwritten, data truncated, extended attribute modifications, holes punched, and other file property updates. Note: The FCL records only that data has changed, not the actual data. It is the responsibility of the application to examine the files that have changed data to determine which data has changed. FCL stores changes in a sparse file in the file system namespace. The FCL log file is always located in mount_point/lost+found/changelog. Comparing the Intent Log and the File Change Log The intent log is used to speed recovery of the file system after a crash. The FCL has no such role. Instead, the FCL is used to improve the performance of applications. For example, your IT department mandates that all systems undergo a virus scan once a week. The virus scan takes some time and your system takes a performance hit during the scan. To improve this situation, an FCL could be used with the virus scanner. The virus scanner, if using an FCL, could read the log, find all files on your system that are either new or that have been modified, and scan only those files.
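Putting the fcladm commands from the slide together, a typical sequence on a hypothetical mount point /data would be to activate the log, confirm its state, and later deactivate and remove it:
# fcladm on /data
# fcladm state /data
# fcladm off /data
# fcladm rm /data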
Summary
You should now be able to:
• Add a mirror to and remove a mirror from an existing volume by using VEA and from the command line.
• Add a dirty region log or RAID-5 log to an existing volume by using VEA and from the command line.
• Change the volume read policy for a mirrored volume to specify which plex in a volume is used to satisfy read requests by using VEA and from the command line.
• Allocate storage for a volume by specifying storage attributes and ordered allocation.
• Add a file system to an existing volume and administer VERITAS File System.
Summary This lesson described how to configure volumes in VxVM. This lesson covered how to add and remove a mirror, how to add a log, and how to add a file system to a volume. In addition, methods for allocating storage for volumes and changing the volume read policy were also covered. Next Steps In the next lesson, you learn how to reconfigure volumes while online. Additional Resources • VERITAS Volume Manager Administrator’s Guide This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager. • VERITAS Volume Manager User’s Guide—VERITAS Enterprise Administrator This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager. • VERITAS Volume Manager Release Notes This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.
Lab 5: Configuring Volumes
• This lab provides additional practice in configuring volume attributes.
• In this lab, you add mirrors, logs, and file systems to existing volumes, change the volume read policy, and specify ordered allocation of storage to volumes.
• You also practice basic file system administration.
• Lab instructions are in Appendix A.
• Lab solutions are in Appendix B.
Lab 5: Configuring Volumes To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Lesson 6 Reconfiguring Volumes Online
Overview
Course roadmap: Virtual Objects; Installation and Interfaces; Managing Disks and Disk Groups; Creating Volumes; Configuring Volumes; Reconfiguring Volumes Online (this lesson); Encapsulation and Rootability; Recovery Essentials
Introduction Overview This lesson describes how to perform and monitor volume maintenance tasks using VERITAS Volume Manager (VxVM). This lesson describes how to perform online administration tasks, such as resizing a volume and changing the layout of a volume, and how to analyze volume configurations with the Storage Expert utility. Importance With VxVM, you can perform volume maintenance, such as changing the size and layout of a volume, without disrupting applications or file systems that are using the volume. A volume layout can be resized, reconfigured, monitored, and controlled while the volume is online and accessible to users. The Storage Expert utility enables you to analyze volume configurations based on VxVM best practices.
Objectives After completing this lesson, you will be able to: • Resize a volume, file system, or LUN while the volume remains online. • Change the volume layout while the volume remains online. • Manage volume maintenance tasks with VEA and from the command line. • Analyze volume configurations by using the Storage Expert utility.
Outline of Topics • Resizing a Volume • Changing the Volume Layout • Managing Volume Tasks • Analyzing Volume Configurations with Storage Expert
Resizing a Volume
To resize a volume, you can:
• Specify a desired new volume size.
• Add to or subtract from the current volume size.
Expanding a volume provides more space to users:
• Disk space must be available.
• VxVM assigns disk space, or you can specify disks.
Shrinking a volume enables you to use space elsewhere. VxVM returns space to the free space pool.
If a volume is resized, its file system must also be resized:
• VxFS can be expanded or reduced while mounted.
• UFS can be expanded, but not reduced.
• Ensure that the data manager application supports resizing.
Resizing a Volume Resizing a Volume If users require more space on a volume, you can increase the size of the volume. If a volume contains unused space that you need to use elsewhere, you can shrink the volume. To resize a volume, you can specify either: • The desired new size of the volume, or • The amount of space to add to or subtract from the current volume size When the volume size is reduced, the resulting extra space is returned to the free space pool. When the volume size is increased, sufficient disk space must be available in the disk group. When increasing the size of a volume, VxVM assigns the necessary new space from available disks. By default, VxVM uses space from any disk in the disk group, unless you define specific disks. Resizing a Volume with a File System Volumes and file systems are separate virtual objects. When a volume is resized, the size of the raw volume is changed. If a file system exists that uses the volume, the file system must also be resized. If a volume is expanded, its associated file system must also be expanded to be able to use the increased storage space. A VERITAS File System (VxFS) can be enlarged or reduced while mounted.
When you resize a volume using VEA or the vxresize command, the file system is also resized. Resizing Volumes with Other Types of Data For volumes containing data other than file systems, such as raw database data, you must ensure that the data manager application can support the resizing of the data device with which it has been configured.
Resizing a Volume: Methods

Method      What Is Resized?
VEA         Both volume and file system
vxresize    Both volume and file system
vxassist    Volume only
fsadm       File system only (the file system must be mounted)
Resizing a Volume and File System: Methods To resize a volume from the command line, you can use either the vxassist command or the vxresize command. Both commands can expand or reduce a volume to a specific size or by a specified amount of space, with one significant difference: • vxresize automatically resizes a volume’s file system. • vxassist does not resize a volume’s file system. When using vxassist, you must resize the file system separately by using the fsadm command. When you expand a volume, both commands automatically locate available disk space unless you designate specific disks to use. When you shrink a volume, unused space is returned to the free space pool of the disk group. When you resize a volume, you can specify the length of a new volume in sectors, kilobytes, megabytes, or gigabytes. The unit of measure is added as a suffix to the length (s, k, m, or g). If no unit is specified, the default unit is sectors. Notes: • Do not shrink a volume below the size of the file system or database using the volume. If you have a VxFS file system, you can shrink the file system and then shrink the volume. If you do not shrink the file system first, you risk unrecoverable data loss. Shrinking a volume can always be safely performed on empty volumes. • You cannot grow or shrink any volume associated with an encapsulated boot disk. These volumes map to a physical underlying partition on the disk and
must be contiguous. If you attempt to grow these volumes, the system could become unbootable if you need to revert back to slices to boot. Growing these volumes can also prevent a successful OS upgrade, and you might have to do a fresh install. Additionally, the upgrade_start script might fail.
Resizing a Volume: VEA
Highlight a volume, and select Actions—>Resize Volume.
Specify the amount of space to add or subtract, or specify a new volume size.
If desired, specify disks to be used for the additional space.
Resizing a Volume and File System: VEA
Select: The volume to be resized
Navigation path: Actions—>Resize Volume
Input:
• Add by: To increase the volume size by a specific amount of space, input how much space should be added to the volume.
• Subtract by: To decrease the volume size by a specific amount of space, input how much space should be removed.
• New volume size: To specify a new volume size, input the size.
• Max Size: To determine the largest possible size, click Max Size.
• Select disks for use by this volume: You can select specific disks to use and specify mirroring and striping options.
• Force: You can force the resize if the size is being reduced and the volume is active.
Notes: When you resize a volume, if a VERITAS file system (VxFS) is mounted on the volume, the file system is also resized. The file system is not resized if it is unmounted.
Resizing a Volume: vxresize
vxresize [-b] fs_type -g diskgroup volume [+|-]new_length
Original volume size: 10 MB
1. # vxresize -g mydg myvol 50m      (volume is now 50 MB)
2. # vxresize -g mydg myvol +10m     (volume is now 60 MB)
3. # vxresize -g mydg myvol 40m      (volume is now 40 MB)
4. # vxresize -g mydg myvol -10m     (volume is now 30 MB)
Resizing a Volume and File System: vxresize vxresize [-b] fs_type -g diskgroup volume [+|-]new_length
The new_length operand can begin with a plus sign (+) to indicate that the new length is added to the current volume length. A minus sign (-) indicates that the new length is subtracted from the current volume length. The -b option runs the process in the background. The ability to expand or shrink a file system depends on the file system type and whether the file system is mounted or unmounted.

File System Type   Mounted FS          Unmounted FS
VxFS               Expand and shrink   Not allowed
UFS                Expand only         Expand only
Example: The size of the volume myvol is 10 MB. To extend myvol to 50 MB: # vxresize -g mydg myvol 50m
To extend myvol by an additional 10 MB: # vxresize -g mydg myvol +10m
To shrink myvol back to a length of 40 MB: # vxresize -g mydg myvol 40m
To shrink myvol by an additional 10 MB: # vxresize -g mydg myvol -10m
Resizing a Volume: vxassist
vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} volume size
Original volume size: 20 MB
1. # vxassist -g datadg growto datavol 40m     (volume is now 40 MB)
2. # vxassist -g datadg growby datavol 10m     (volume is now 50 MB)
3. # vxassist -g datadg shrinkto datavol 30m   (volume is now 30 MB)
4. # vxassist -g datadg shrinkby datavol 10m   (volume is now 20 MB)
Resizing a Volume Only: vxassist vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} volume_name size • growto Increases volume to specified length • growby Increases volume by specified amount • shrinkto Reduces volume to specified length • shrinkby Reduces volume by specified amount
Example: Resizing a Volume with vxassist The size of the volume datavol is 20 MB. To extend datavol to 40 MB: # vxassist -g datadg growto datavol 40m
To extend datavol by an additional 10 MB: # vxassist -g datadg growby datavol 10m
To shrink datavol back to a length of 30 MB: # vxassist -g datadg shrinkto datavol 30m
To shrink datavol by an additional 10 MB: # vxassist -g datadg shrinkby datavol 10m
The size of the volume is returned to 20 MB.
Resizing a File System: fsadm
fsadm [fs_type] [-b newsize] [-r rawdev] mount_point
Example: Expand the file system /datavol from 512,000 sectors to 1,024,000 sectors.
1. Verify the free space on the underlying device: # vxdg -g datadg free
2. Expand the volume using vxassist: # vxassist -g datadg growto datavol 1024000
3. Expand the file system using fsadm: # fsadm -F vxfs -b 1024000 -r /dev/vx/rdsk/datadg/datavol /datavol
4. Verify that the file system was resized by using df: # df -k /datavol
Resizing a File System Only: fsadm You may need to resize a file system to accommodate a change in use—for example, when there is an increased need for space in the file system. You may also need to resize a file system as part of a general reorganization of disk usage—for example, when a large file system is subdivided into several smaller file systems. You can resize a VxFS file system while the file system remains mounted by using the fsadm command. fsadm [fs_type] [-b newsize] [-r rawdev] mount_point
Using fsadm to resize a file system does not automatically resize the underlying volume. When you expand a file system, the underlying device must be large enough to contain the new larger file system. When you shrink a file system, unused space is released at the end of the underlying device, which can be a VxVM volume or disk partition. You can then resize the device, but be careful not to make the device smaller than the size of the file system. Notes: When resizing a file system, avoid the following common errors:
• Resizing a file system that is very busy: Although resizing a file system requires that the file system be mounted, the file system “freezes” when the actual resizing occurs. Freezing temporarily prevents new access to the file system, but waits for pending I/Os to complete. You should attempt to resize a file system during a time when the file system is under less of a load.
• Resizing a file system that has a mounted snapshot file system: If a snapshot file system is mounted on the file system being resized, the resize fails. File systems that have snapshots mounted on them cannot be resized.
• Resizing a corrupt file system: A file system that has experienced structural damage and is marked for full fsck cannot be resized. If the resize fails due to structural damage, you must unmount the file system, perform an fsck, remount the file system, and try the resize again.
• Resizing a file system that is nearly 100 percent full: The resize operation needs space to expand a file system, and if a file system is nearly 100 percent full, an error may be returned. When increasing the size of a file system, the size of the internal structural files must first be extended. If the file system is full or almost full, and the size that you have specified is not possible, then VxFS automatically attempts to increase by a smaller amount. If expansion is still not possible, try to defragment the file system or move some files temporarily to another file system.
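Following that order (shrink the VxFS file system first, then the volume), a minimal sketch with placeholder names and sizes given in sectors would be:
# fsadm -F vxfs -b 512000 -r /dev/vx/rdsk/datadg/datavol /datavol
# vxassist -g datadg shrinkto datavol 512000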
Resizing a Dynamic LUN
• If you resize a LUN in the hardware, you should resize the VxVM disk corresponding to that LUN.
• Disk headers and other VxVM structures are updated to reflect the new size.
• Intended for devices that are part of an imported disk group.
VEA:
• Select the disk that you want to expand.
• Select Actions—>Resize Disk.
CLI: vxdisk [-f] -g diskgroup resize {accessname|medianame} length=attribute
Example: vxdisk -g datadg resize datadg01 length=8.5GB
Resizing a Dynamic LUN When you resize a LUN in the hardware, you should resize the VxVM disk corresponding to that LUN. You can use vxdisk resize to update disk headers and other VxVM structures to match a new LUN size. This command does not resize the underlying LUN itself.
Resizing a LUN: VEA
Select: The disk to be resized
Navigation path: Actions—>Resize Disk
Input:
• New disk size: Specify a new disk size. If the new size is smaller than the current disk size, then this is a shrink request and the subdisks that fall outside the new disk size need to be preserved.
• Select a Disk to Resize: Select the disk to be resized (from the selected disk group).
• Force: Before reducing the size of a device, volumes on the device should first be reduced in size or moved off the device. By default, the resize fails if any subdisks would be disabled. Selecting the Force checkbox overrides this behavior.
Resizing a LUN: CLI vxdisk [-f] [-g diskgroup] resize {accessname|medianame} length=attribute
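For example, reusing the hypothetical names from the slide, you could update VxVM after growing the LUN in the array and then confirm the new size with vxdisk list:
# vxdisk -g datadg resize datadg01 length=8.5GB
# vxdisk -g datadg list datadg01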
Changing the Volume Layout
Online relayout: Change the volume layout or layout characteristics while the volume is online.
Examples:
• Relayout concatenated to mirror-concat to achieve redundancy.
• Relayout RAID-5 to mirrored for better write performance.
• Relayout mirrored to RAID-5 to save space.
• Change stripe unit size or add columns to achieve desired performance.
Changing the Volume Layout What Is Online Relayout? You may need to change the volume layout in order to change the redundancy or performance characteristics of an existing volume. The online relayout feature of VxVM enables you to change from one volume layout to another by invoking a single command. You can also modify the performance characteristics of a particular layout to reflect changes in your application environment. While relayout is in progress, data on the volume can be accessed without interruption. Online relayout eliminates the need for creating a new volume in order to obtain a different volume layout.
Supported Transformations
Use online relayout to change the volume or plex layout to or from:
• Concatenated
• Striped
• RAID-5
• Striped mirrored
• Concatenated mirrored
Also use online relayout to change the number of columns or stripe unit size for a RAID-5 or striped plex.
Supported Transformations By using online relayout, you can change the layout of an entire volume or a specific plex. VxVM currently supports the transformations listed in the slide. Note: Online relayout should be used only with volumes created with the vxassist command or through the VEA interface.
How Does Relayout Work?
1. Data is copied a chunk at a time from the source subvolume to a temporary subvolume (scratch pad).
2. Data is returned from the temporary area to the new layout area.
By default:
• If volume size is less than 50 MB, the temp area = volume size.
• If volume size is 50 MB to 1 GB, the temp area = 50 MB.
• If volume size is 1 GB or greater, the temp area = 1 GB.
The larger the temporary space, the faster the relayout, because larger pieces can be copied at one time.
How Does Online Relayout Work? The transformation of data from one layout to another involves rearranging the data in the existing layout into the new layout. Data is removed from the source subvolume in portions and copied into a temporary subvolume, or scratch pad. The temporary storage space is taken from the free space in the disk group. Data redundancy is maintained by mirroring any temporary space used. The area in the source subvolume is then transformed to the new layout, and data saved in the temporary subvolume is written back to the new layout. This operation is repeated until all the storage and data in the source subvolume have been transformed to the new layout. Read/write access to data is not interrupted during the transformation. If all of the plexes in the volume have identical layouts, VxVM changes all plexes to the new layout. If the volume contains plexes with different layouts, you must specify a target plex. VxVM changes the layout of the target plex and does not change the other plexes in the volume. File systems mounted on the volumes do not need to be unmounted to perform online relayout, as long as online resizing operations can be performed on the file system. If the system fails during a transformation, data is not corrupted. The transformation continues after the system is restored and read/write access is maintained.
Temporary Storage Space VxVM determines the size of the temporary storage area, or you can specify a size through VEA or vxassist. Default sizes are as follows: • If the original volume size is less than 50 MB, the temporary storage area is the same size as the volume. • If the original volume is larger than 50 MB, but smaller than 1 GB, the temporary storage area is 50 MB. • If the original volume is larger than 1 GB, the temporary storage area is 1 GB. Specifying a larger temporary space size speeds up the layout change process, because larger pieces of data are copied at a time. If the specified temporary space size is too small, VxVM uses a larger size.
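For example, a larger temporary area can be requested when the relayout is started from the command line; the disk group and volume names here are placeholders:
# vxassist -g datadg relayout datavol layout=stripe ncol=4 tmpsize=100m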
Online Relayout Notes
• You can reverse online relayout at any time.
• Some layout transformations can cause a slight increase or decrease in the volume length due to subdisk alignment policies. If volume length increases during relayout, VxVM resizes the file system using vxresize.
• Relayout does not change log plexes.
• You cannot:
– Create a snapshot during relayout.
– Change the number of mirrors during relayout.
– Perform multiple relayouts at the same time.
– Perform relayout on a volume with a sparse plex.
Notes on Online Relayout • Reversing online relayout: You can reverse the online relayout process at any time, but the data may not be returned to the exact previous storage location. Any existing transformation in the volume should be stopped before performing a reversal. • Volume length: Some layout transformations can cause a slight increase or decrease in the volume length due to subdisk alignment policies. If the volume length changes during online relayout, VxVM uses vxresize to shrink or grow a file system mounted on the volume. • Log plexes: When you change the layout of a volume, the log plexes are not changed. Before you change the layout of a mirrored volume with a log, the log plexes should be removed and then re-created after the relayout. • Volume snapshots: You cannot create a snapshot of a volume when there is an online relayout operation running on the volume. • Number of mirrors: During a transformation, you cannot change the number of mirrors in a volume. • Multiple relayouts: A volume cannot undergo multiple relayouts at the same time. • Sparse plexes: Online relayout cannot be used to change the layout of a volume with a sparse plex.
Changing the Layout: VEA
Highlight a volume, and select Actions—>Change Layout.
Select a new volume layout.
Set relayout options.
Changing the Volume Layout: VEA
Select: The volume to be changed to a different layout
Navigation path: Actions—>Change Layout
Input:
• Layout: Select the new volume layout and specify layout details as necessary.
• Options: To retain the original volume size when the volume layout changes, mark the “Retain volume size at completion” check box. To specify the size of the pieces of data that are copied to temporary space during the volume relayout, type a size in the “Temp space size” field. To specify additional disk space to be used for the new volume layout (if needed), specify a disk in the Disk(s) field or browse to select a disk. To specify the temporary disk space to be used during the volume layout change, specify a disk in the “Temp disk(s)” field or browse to select a disk. If the volume contains plexes with different layouts, specify the plex to be changed to the new layout in the “Target plex” field.
Changing the Layout: VEA
The Relayout Status Monitor window displays status information and provides the relayout controls.
When you launch a relayout operation, the Relayout Status Monitor window is displayed. This window provides information and options regarding the progress of the relayout operation. • Volume Name: The name of the volume that is undergoing relayout • Initial Layout: The original layout of the volume • Desired Layout: The new layout for the volume • Status: The status of the relayout task • % Complete: The progress of the relayout task The Relayout Status Monitor window also contains options that enable you to control the relayout process: • Pause: To temporarily stop the relayout operation, click Pause. • Abort: To cancel the relayout operation, click Abort. • Continue: To resume a paused or aborted operation, click Continue. • Reverse: To undo the layout changes and return the volume to its original layout, click Reverse.
Changing the Layout: CLI
vxassist relayout
• Used for nonlayered relayout operations
• Used for changing layout characteristics, such as stripe width and number of columns
vxassist convert
• Used to change, for example, a mirror-stripe to a stripe-mirror
• Changes nonlayered volumes to layered volumes, and vice versa
Note: vxassist relayout cannot create a nonlayered mirrored volume in a single step. The command always creates a layered mirrored volume even if you specify a nonlayered mirrored layout. Use vxassist convert to convert the resulting layered volume into a nonlayered volume.
Changing the Volume Layout: CLI From the command line, online relayout is initiated using the vxassist command. • The vxassist relayout command is used for all nonlayered transformations, including changing the layout of a plex, stripe size, and/or number of columns. • The vxassist convert command is used to change the resilience level of a volume; that is, to convert a volume from nonlayered to layered, or vice versa. Use this option only when layered volumes are involved in the transformation. The vxassist relayout operation involves the copying of data at the disk level in order to change the structure of the volume. The vxassist convert operation does not copy data; it only changes the way the data is referenced. Note: vxassist relayout cannot create a nonlayered mirrored volume in a single step. The command always creates a layered mirrored volume even if you specify a non-layered mirrored layout, such as mirror-stripe or mirror-concat. Use the vxassist convert command to convert the resulting layered mirrored volume into a nonlayered mirrored volume.
vxassist relayout
vxassist -g diskgroup relayout volume|plex layout=layout ncol=[+|-]ncol stripeunit=size
To change to a striped layout:
# vxassist -g datadg relayout datavol layout=stripe ncol=2
To add a column to striped volume datavol:
# vxassist -g datadg relayout datavol ncol=+1
To remove a column from datavol:
# vxassist -g datadg relayout datavol ncol=-1
To change stripe unit size and number of columns:
# vxassist -g datadg relayout datavol stripeunit=32k ncol=5
To change mirrored layouts to RAID-5, specify the plex to be converted (instead of the volume):
# vxassist -g datadg relayout datavol01-01 layout=raid5 stripeunit=32k ncol=3
The vxassist relayout Command The vxassist relayout command performs most online relayout operations: vxassist -g diskgroup relayout volume_name|plex_name layout=layout ncol=[+|-]ncol stripeunit=size [tmpsize=tmpsize]
Notes: • When changing to a striped layout, you should always specify the number of columns, or the operation may fail with the following error: vxvm:vxassist: ERROR: Cannot allocate space for 51200 block volume vxvm:vxassist: ERROR: Relayout operation aborted. • Any layout can be changed to RAID-5 if sufficient disk space and disks exist in the disk group. If the ncol and stripeunit options are not specified, the default characteristics are used. When using vxassist to change the layout of a volume to RAID-5, VxVM may place the RAID-5 log on the same disk as a column, for example, when there is no other free space available. To place the log on a different disk, you can remove the log and then add the log to the location of your choice. • If you convert a mirrored volume to RAID-5, you must specify which plex is to be converted. All other plexes are removed when the conversion has finished, releasing their space for other purposes. If you convert a mirrored volume to a layout other than RAID-5, the unconverted plexes are not removed.
vxassist convert
Use vxassist convert to convert:
• mirror-stripe to stripe-mirror
• stripe-mirror to mirror-stripe
• mirror-concat to concat-mirror
• concat-mirror to mirror-concat
To convert the striped volume datavol to a layered stripe-mirror layout:
# vxassist -g datadg convert datavol layout=stripe-mirror
The vxassist convert Command To change the resilience level of a volume; that is, to convert a nonlayered volume to a layered volume, or vice versa, you use the vxassist convert option. Available conversion operations include: • mirror-stripe to stripe-mirror • stripe-mirror to mirror-stripe • mirror-concat to concat-mirror • concat-mirror to mirror-concat The syntax for vxassist convert is: vxassist -g diskgroup convert volume_name|plex_name layout=layout
Managing Volume Tasks: VEA
Relayout Status Monitor Window
• Displays automatically when you start relayout
• Enables you to view progress, pause, abort, continue, or reverse the relayout task
• Is also accessible from the Volume Properties window
Task History Window
• Displays information about the current-session tasks
• Can be accessed by clicking the Tasks tab at the bottom of the main window
• Enables you to right-click a task to abort, pause, resume, or throttle a task in progress
Command Log File
• Contains history of current- and previous-session tasks
• Is located in /var/adm/vx/veacmdlog
Managing Volume Tasks Managing Volume Tasks: VEA Relayout Status Monitor Window Through the Relayout Status Monitor window, you can view the progress of the relayout task and also pause, abort, continue, or reverse the relayout task. You can also access the Relayout Status Monitor through the Volume Properties window. Task History Window The Task History window displays a list of tasks performed in the current session and includes the name of the operation performed, target object, host machine, start time, status, and progress. To display the Task History window, click the Tasks tab at the bottom of the main window. When you right-click a task in the list and select Properties, the Task Properties window is displayed. In this window, you can view the underlying commands executed to perform the task. Command Log File The command log file, located in /var/adm/vx/veacmdlog, contains a history of VEA tasks performed in the current session and in previous sessions. The file contains task descriptions and properties, such as date, command, output, and exit code. All sessions since the initial VEA session are recorded. The log file is not self-limiting and should therefore be initialized periodically to prevent excessive use of disk space.
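To review recent entries in the command log from a shell, standard UNIX tools are sufficient, for example:
# tail -50 /var/adm/vx/veacmdlog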
Managing Volume Tasks: CLI What is a task? • A task is a long-term operation, such as online relayout, that is in progress on the system. • Task ID is a unique number assigned to a single task. • Task tag is a string assigned to a task or tasks by the administrator to simplify task management. For most utilities, you specify a task tag using: -t tag
Use the vxtask command to:
• Display task information.
• Pause, continue, and abort tasks.
• Modify the progress rate of a task.
Managing Volume Tasks: CLI To monitor and control volume maintenance operations from the command line, you use the vxtask and vxrelayout commands.
vxtask list
To display information about tasks:
vxtask [-ahlpr] list [task_id|task_tag]
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
198           RELAYOUT/R  58.48%  0/20480/11976 RELAYOUT myvol
TASKID is the VxVM-assigned task ID, and PTID is the parent task ID. TYPE/STATE gives a description of the task and its state of Running (R), Paused (P), or Aborting (A). PCT is the percentage of the task complete. PROGRESS shows the starting, ending, and current offset, followed by the affected VxVM object.
6–26
VERITAS Volume Manager 4.0 for UNIX: Operations Copyright © 2004 VERITAS Software Corporation. All rights reserved.
vxtask list Options To display task information in long format: # vxtask -l list To display a hierarchical listing of parent/child tasks: # vxtask -h list To limit output to paused tasks: # vxtask -p list To limit output to running tasks: # vxtask -r list To limit output to aborted tasks: # vxtask -a list To limit output to tasks with a specific task ID or task tag: # vxtask list convertop1 Task Tasktag tag
VM40_Solaris_R1.0_20040115
6-23
Options for vxtask list Several options for vxtask list are illustrated in the slide.
Lesson 6 Reconfiguring Volumes Online Copyright © 2004 VERITAS Software Corporation. All rights reserved.
6–27
vxtask monitor To provide a continuously updated list of tasks running on the system, use vxtask monitor: vxtask [-c count] [-ln] [-t time] [-w interval] monitor [task_id|task_tag] • -l: Displays task information in long format • -n: Displays information for tasks that are newly registered while the program is running • -c count: Prints count sets of task information and then exits • -t time: Exits program after time seconds • -w interval: Prints “waiting ...” after interval seconds with no activity
When a task is completed, the STATE is displayed as EXITED. VM40_Solaris_R1.0_20040115
Monitoring a Task with vxtask To provide a continuously updated listing of tasks running on the system, you use the vxtask monitor command. (The vxtask list output represents a point in time and is not continuously updated.) With vxtask monitor, you can track the progress of a task on an ongoing basis. By default, vxtask monitor prints a one-line summary for each task running on the system.
# vxtask monitor
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
198           RELAYOUT/R  58.48%  0/20480/11976 RELAYOUT datavol
The output is the same as for vxtask list, but changes as information about the task changes. When a task is completed, the STATE is displayed as EXITED.
vxtask abort|pause|resume To abort, pause, or resume a task: vxtask abort|pause|resume task_id|task_tag
To pause the task with the task ID 198: # vxtask pause 198
To resume the task with the task ID 198: # vxtask resume 198
To abort the task with the task tag convertop1: # vxtask abort convertop1
Controlling Tasks with vxtask You can abort, pause, or resume a task by using the vxtask command. You specify the task ID or task tag to identify the task. Using pause, abort, and resume For example, you can pause a task when the system is under heavy contention between the sequential I/O of the synchronization process and the applications trying to access the volume. The pause option allows for an indefinite amount of time for an application to complete before using the resume option to continue the process. The abort option is often used when reversing a process. For example, if you start a process and then decide that you do not want to continue, you reverse the process. When the process returns to 0 percent, you use abort to stop the task. Note: Once you abort or pause a relayout task, you must at some point either resume or reverse it.
vxrelayout
The vxrelayout command can also be used to display the status of, reverse, or start a relayout operation:
vxrelayout -g diskgroup status|reverse|start volume
Note: You cannot stop a relayout with vxrelayout. Only the vxtask command can stop a relayout operation.
# vxrelayout -g datadg status datavol
STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128
Relayout running, 58.48% completed.
(The output shows the source layout, the destination layout, the task status, and the percentage of the task completed.)
Controlling Relayout Tasks with vxrelayout The vxrelayout command can also be used to display the status of relayout operations and to control relayout tasks. • The status option displays the status of an ongoing or discontinued layout conversion. • The reverse option reverses a discontinued layout conversion. Before using this option, the relayout operation must be stopped using vxtask abort. • The start option continues a discontinued layout conversion. Before using this option, the relayout operation must have been stopped using vxtask abort. For example, to display information about the relayout operation being performed on the volume datavol, which exists in the datadg disk group: # vxrelayout -g datadg status datavol STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128 Relayout running, 58.48% completed.
The output displays the characteristics of both the source and destination layouts (including the layout type, number of columns, and stripe width), the status of the operation, and the percentage completed. In the example, the output indicates that an increase from five to six columns for a striped volume is more than halfway completed.
Controlling Task Progress To control the I/O rate for mirror copy operations from the command line, use vxrelayout options: -o slow=iodelay • Use this option to reduce the system performance impact of copy operations by setting a number of milliseconds to delay copy operations • Process runs faster without this option.
-o iosize=size • Use this option to perform copy operations in regions with the length specified by size. • Specifying a larger number typically causes the operation to complete sooner, but with greater impact on other processes using the volume.
Controlling the Task Progress Rate VxVM provides additional options that you can use with the vxrelayout command to pass usage-type-specific options to an operation. These options can be used to control the I/O rate for mirror copy operations by speeding up or slowing down resynchronization times. -o slow=iodelay
This option reduces the system performance impact of copy operations. Copy operations are usually a set of short copy operations on small regions of the volume (normally from 16K to 128K). This option inserts a delay between the recovery of each such region. A specific delay can be specified with iodelay as a number of milliseconds. The process runs faster when you do not set this option. -o iosize=size
This option performs copy operations in regions with the length specified by size, which is a standard VxVM length number. Specifying a larger number typically causes the operation to complete sooner, but with greater impact on other processes using the volume. The default I/O size is typically 32K. Caution: Be careful when using these options to speed up operations, because other system processes may slow down. It is always acceptable to increase the slow options to enable more system resources to be used for other operations.
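As a sketch only (the option placement and value are assumptions, and the disk group and volume names are placeholders), a discontinued relayout could be restarted with reduced I/O impact as follows:
# vxrelayout -g datadg -o slow=250 start datavol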
Controlling Task Progress: VEA
Right-click a task in the Task History window, and select Throttle Task.
Set the throttling value in the Throttle Task dialog box.
Slowing a Task with vxtask You can also set the slow attribute in the vxtask command by using the syntax: vxtask [-i task_id] set slow=value
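For example, to throttle the relayout task with the task ID 198 shown earlier (the value is illustrative only):
# vxtask -i 198 set slow=100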
Throttling a Task with VEA You can reduce the priority of any task that is time-consuming. Right-click the task in the Task History window, and select Throttle Task. In the Throttle Task dialog box, use the slider to set a throttling value. The larger the throttling value, the slower the task is performed.
What Is Storage Expert?
VERITAS Storage Expert (VxSE) is a command line utility that provides volume configuration analysis. Storage Expert:
• Analyzes configurations based on a set of “rules”, or VxVM “best practices”
• Produces a report of results in ASCII format
• Provides recommendations, but does not launch any administrative operations
The administrator asks questions such as: Are all of my logs mirrored? Are all of my volumes redundant? Should my mirror-stripe be a stripe-mirror? Storage Expert evaluates the configuration against its rules (VxVM best practices) and produces a report of INFO, VIOLATION, and PASS results.
Analyzing Volume Configurations with Storage Expert What Is Storage Expert? As your environment grows, your volume configurations become increasingly complex. You should monitor your configurations to ensure appropriate fault tolerance, layout, recovery time, and utilization of storage. Checking each volume manually to verify that you have appropriate storage layouts can be a timeconsuming task. The VERITAS Storage Expert (VxSE) utility is designed to help you locate poor volume configurations, monitor your volumes, and provide advice on how to improve volume configurations. Storage Expert is a command line utility that is included as part of VxVM. Storage Expert provides volume configuration analysis based on a set of configuration rules that compare your volumes and disk groups to VxVM “best practice” management policies. Storage Expert reports the status of your volumes compared to the rules and makes recommendations, but does not launch any VxVM administrative operations. Storage Expert consists of a set of scripts (called rules), an engine that runs the scripts (the rules engine), and a report generator. When you run a Storage Expert rule, the utility: 1 Gathers information about your VxVM objects and configuration 2 Analyzes the data by comparing it to predefined VxVM best practices 3 Produces a report in ASCII format containing the results and recommendations for your configuration
What Are the Rules?
Storage Expert contains 23 rules. Rules provide answers to questions about:
• Resilience
– Do my mirrored volumes have DRLs? (vxse_drl1)
– Is my RAID-5 log appropriately sized? (vxse_raid5log2)
• Disk groups and associated objects
– Are all of my disk groups of the current version? (vxse_dg4)
– Are all of my volumes redundant? (vxse_redundancy)
– Is my disk group configuration database too full? (vxse_dg1)
• Striping
– Are my stripes an appropriate size? (vxse_stripes1)
– Do my striped volumes have too few or too many columns? (vxse_stripes2)
• Spare disks
– Do I have enough spare disks? (vxse_spares)
– Do I have too many spare disks? (vxse_spares)
What Are the Storage Expert Rules? Storage Expert currently contains 23 rules. Each rule performs a different check on your storage configuration. A complete list of Storage Expert rules, their customizable attributes, and default values is included at the end of this section. Rules enable you to answer questions such as: • Does your storage configuration have the resilience to withstand disk failure and system failure? – Do your large mirrored volumes have associated DRLs? (vxse_drl1) – Are the DRLs mirrored? (vxse_drl2) – Do your RAID-5 volumes have logs? (vxse_raid5log1) – Is your RAID-5 log an appropriate size? (vxse_raid5log2) – Are your RAID-5 logs mirrored? (vxse_raid5log3) • Are your disk groups and VxVM objects configured to ensure integrity and resilience? – Is your disk group configuration database becoming too full? (vxse_dg1) – Are your volumes redundant? (vxse_redundancy) – Do your disk groups contain any disabled or detached plexes, or any stopped or disabled volumes? (vxse_volplex) – Does your disk group have too many or too few disk group configuration copies? (vxse_dg2) – Do your disk groups have the latest disk group version number? (vxse_dg4)
– Do you have too many configured disks in a disk group? (vxse_dg1)
– Do you have any initialized disks that are not part of a disk group? (vxse_disk)
– Do you have any disk groups that are detected, but not imported? (vxse_dg6)
• Are your striping parameters for striped and RAID-5 volumes configured appropriately?
– Should your large mirror-stripe volumes be reconfigured as stripe-mirror volumes? (vxse_mirstripe)
– Do your RAID-5 volumes have too few or too many columns? (vxse_raid5)
– Is the stripe unit size set to an integer multiple of the default 8K? (vxse_stripes1)
– Do your striped plexes have too few or too many columns? (vxse_stripes2)
• Do you have adequate spare disks configured for use in a disk group?
– Do you have enough spare disks in a disk group? (vxse_spares)
– Do you have too many spare disks in a disk group? (vxse_spares)
Running Storage Expert Rules
• VxVM and VEA must be installed.
• Rules are located in /opt/VRTS/vxse/vxvm. Add this path to your PATH variable.
• Syntax: rule_name [options] {info|list|check|run}
• In the syntax:
– info    Displays rule description
– list    Displays attributes of rule
– check   Displays default values
– run     Runs the rule
• In the output:
– INFO       Information is displayed.
– PASS       Object met rule conditions.
– VIOLATION  Object did not meet conditions.
Before Using Storage Expert Before you run Storage Expert, ensure that: • You have root user privileges. • The following packages are installed: – VRTSvxvm – VRTSob – VRTSvmpro – VRTSfspro • The VEA service is started on the system. Note: The VEA packages must be installed to run Storage Expert, even if you are not using the VEA GUI, and even though Storage Expert cannot be administered through the VEA. Running a Storage Expert Rule Storage Expert rules are located in the /opt/VRTS/vxse/vxvm directory. Add this path to your PATH environment variable before running a rule. The syntax for running a rule is: rule_name [options] {info|list|check|run} [attribute=value]
In the syntax:
• info   Displays a description of the rule
• list   Displays attributes of the rule that can be modified by the user
• check  Displays the default values of user-definable rule attributes
• run    Runs the rule
Options include: • -g diskgroup • -d defaults_file • -v
Runs a rule for a specific disk group Runs a rule using a user-created defaults file Displays output in verbose mode
Rule Output
When you run a rule, output is generated that indicates the status of objects that are examined against the rule. In the output:
• INFO       Indicates information about an object
• PASS       Indicates that the object met the conditions of the rule
• VIOLATION  Indicates that the object did not meet the conditions of the rule
Notes:
• By default, output is displayed on the screen, but you can redirect the output to a file using standard UNIX redirection.
• You can also set Storage Expert to run as a cron job to notify administrators and automatically archive reports.
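For example, a minimal wrapper script can run selected rules each night, archive a dated report, and mail it to an administrator. The script name, report location, rule selection, and mail recipient below are only examples, not part of the product:
#!/bin/sh
# vxse_nightly.sh (example) - run selected Storage Expert rules against
# the datadg disk group, save a dated report, and mail it to root.
# Schedule from root's crontab, for example:
#   0 2 * * * /usr/local/bin/vxse_nightly.sh
PATH=$PATH:/opt/VRTS/vxse/vxvm; export PATH
REPORT=/var/tmp/vxse_datadg.`date +%Y%m%d`
vxse_redundancy -g datadg run > $REPORT 2>&1
vxse_raid5log1 -g datadg run >> $REPORT 2>&1
vxse_spares -g datadg run >> $REPORT 2>&1
mailx -s "Storage Expert report for datadg" root < $REPORT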
Running Storage Expert: Examples
• To display a description of rule vxse_raid5log1:
# vxse_raid5log1 info
vxse:vxse_raid5log1: INFO: vxse_raid5log1 - DESCRIPTION
vxse:vxse_raid5log1: INFO: ---------------------------
vxse:vxse_raid5log1: INFO: This rule checks for RAID-5 volumes which do not have an associated log
• To run vxse_raid5log1 on the disk group datadg:
# vxse_raid5log1 -g datadg run
vxse:vxse_raid5log1: INFO: vxse_raid5log1 - RESULTS
vxse:vxse_raid5log1: INFO: ---------------------------
vxse:vxse_raid5log1: VIOLATION: Disk group (datadg) RAID5 volume (raid5vol) does not have a log
Displaying a Rule Description: Example To display a description of rule vxse_raid5log1: # vxse_raid5log1 info
The output displays a brief description of the rule. Running a Rule: Example To run the rule vxse_raid5log1 on the disk group datadg: # vxse_raid5log1 -g datadg run
The output indicates that the RAID-5 volume raid5vol does not have a log, which violates VxVM best practices. In the event of system failure and disk failure in a RAID-5 volume, data can be lost or corrupted if the volume does not have a RAID-5 log.
Running Storage Expert: Examples
• To display attributes of rule vxse_spares:
# vxse_spares list
...
vxse:vxse_spares: INFO: max_disk_spare_ratio - max. percentage of spare disks in the disk group. Warn if the number of spare disks is greater than this percent
vxse:vxse_spares: INFO: min_disk_spare_ratio - min. percentage of spare disks in the disk group. Warn if the number of spare disks is less than this percent
• To display attribute default values of rule vxse_spares:
# vxse_spares check
...
vxse:vxse_spares: INFO: max_disk_spare_ratio - (20) max. percent of spare disks
vxse:vxse_spares: INFO: min_disk_spare_ratio - (10) min. percent of spare disks
Displaying Tunable Attributes of a Rule: Example Some rules compare VxVM object characteristics against a set of defined attribute values. For example, rule vxse_spares checks that the number of spare disks in a disk group is within the VxVM best practices threshold. To determine what that threshold is, you can display information about the attributes of the rule by using the list keyword. For example: # vxse_spares list
Displaying Default Attribute Values of a Rule: Example To display the default value of the attributes for rule vxse_spares, use the check keyword: # vxse_spares check
The output indicates that when you run the rule vxse_spares, you receive a warning if the number of spare disks in the disk group is less than 10 percent or greater than 20 percent.
Customizing Rule Defaults
You can run a rule against different attribute values by:
• Specifying an attribute value in the run command:
# vxse_drl1 run mirror_threshold=4g
• Running Storage Expert against a user-created defaults file:
# vxse_drl1 -d /etc/vxse.myfile run
• Modifying the Storage Expert defaults file:
1. Open the defaults file /etc/default/vxse.
2. Delete the comment symbol (#) from the line that contains the attribute you want to modify.
3. Type a new default value and save the file.
Customizing Rule Default Values
You can customize the default attribute values used by Storage Expert rules to meet the needs of your environment by using one of several methods.
• To run a rule with an attribute value other than the default, you can specify the rule attribute and its new value on the command line when you run the rule. For example, in the rule vxse_drl1, the mirror_threshold attribute is 1 GB by default. This rule issues a warning if a mirror is larger than 1 GB and does not have an attached dirty region log. To run the vxse_drl1 rule with a different mirror_threshold value of 4 GB:
# vxse_drl1 run mirror_threshold=4g
• To run Storage Expert rules against a user-created defaults file, you create a new defaults file with customized attribute values, then specify the file in the command line using the -d option. For example, to run the vxse_drl1 rule against the user-created defaults file /etc/vxse.myfile:
# vxse_drl1 -d /etc/vxse.myfile run
• To change the default value of an attribute in the Storage Expert defaults file:
a Open the defaults file /etc/default/vxse.
b Delete the comment symbol (#) from the beginning of the line that contains the attribute that you want to modify. (You can also specify values that are to be ignored by inserting a # character at the start of a line.)
c Type a new value for the attribute and save the file. When you run the rule again, the new value is used for that rule by default.
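For example, a user-created defaults file such as the /etc/vxse.myfile referenced above needs to contain only the attributes you want to override, one attribute=value pair per line. The values shown here are illustrative; check the comments in /etc/default/vxse for the exact attribute names that your VxVM version supports:
mirror_threshold=4g
large_mirror_size=10g
too_wide_stripe=12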
Storage Expert Rules: Complete Listing
This section contains a list of all Storage Expert rules, descriptions, and default attributes. Some of the concepts described in the rules are discussed later in this training. Each entry lists the rule name, followed by its description and its default attributes.
vxse_dc_failures
Checks and points out failed disks and disabled controllers.
This rule has no tunable attributes.
vxse_dg1
Checks for disk group configurations in which the disk group has become too large
max_disks_per_dg=250 Warn if a disk group has more disks than 250.
vxse_dg2
Checks for disk group configurations in which the disk group has too many or too few disk group configuration copies, and if the disk group has too many or too few disk group log copies
This rule has no tunable attributes.
vxse_dg3
Checks disk group configurations to verify that the disk group has the correct “on disk config” size
This rule has no tunable attributes.
vxse_dg4
Checks for disk groups that do not have a current version number, and which may need to be upgraded
This rule has no tunable attributes.
vxse_dg5
Checks for disk groups in which there is only one configuration copy
This rule has no tunable attributes.
vxse_dg6
Checks for disk groups that are detected by the system, but are not imported
This rule has no tunable attributes.
vxse_disk
Checks for disks that are initialized, but are not part of any disk group.
This rule has no tunable attributes.
vxse_disklog
Checks for physical disks that have more than one RAID-5 log
This rule has no tunable attributes.
vxse_drl1
Checks for large mirror volumes that do not have an associated DRL log
mirror_threshold=1g Warn if a mirror is larger than 1 GB and does not have an attached DRL log.
vxse_drl2
Checks for large mirrored volumes whose DRL is not itself mirrored
large_mirror_size=20g Warn if a mirrored volume is larger than 20 GB and its DRL is not mirrored.
vxse_host
Checks that the system hostname in the /etc/vx/volboot file matches the hostname that was assigned to the system when it was booted
This rule has no tunable attributes.
vxse_mirstripe
Checks for large mirror-stripe volumes that should be reconfigured as stripe-mirror volumes
large_mirror_size=1g Warn if a mirror-stripe volume is larger than 1 GB. nsd_threshold=8 Warn if a mirror-stripe volume has more than 8 subdisks.
vxse_raid5
Checks for RAID-5 volumes that are too narrow or too wide
too_narrow_raid5=4 Warn if actual number of RAID-5 columns is less than 4. too_wide_raid5=8 Warn if the actual number of RAID5 columns is greater than 8.
vxse_raid5log1
Checks for RAID-5 volumes that do not have an associated log
This rule has no tunable attributes.
vxse_raid5log2
Checks for recommended minimum and maximum RAID-5 log sizes
r5_max_size=1g Warn if a RAID-5 log is larger than 1 GB. r5_min_size=64m Warn if a RAID-5 log is smaller than 64 MB.
vxse_raid5log3
Checks for large RAID-5 volumes that do not have a mirrored RAID-5 log
large_vol_size=20g Warn if a RAID-5 volume with a non-mirrored RAID-5 log is larger than 20 GB.
vxse_redundancy
Checks the redundancy of volumes
volume_redundancy The value of 2 performs a mirror redundancy check. A value of 1 performs a RAID-5 redundancy check. The default value of 0 performs no redundancy check.
vxse_rootmir
Checks that all root mirrors are set up correctly
This rule has no tunable attributes.
vxse_spares
Checks that the number of spare disks in a disk group is within the VM “Best Practices” thresholds
max_disk_spare_ratio=20 Warn if the percentage of spare disks is greater than 20. min_disk_spare_ratio=10 Warn if the percentage of spare disks is less than 10.
vxse_stripes1
Checks for stripe volumes whose stripe unit is not a multiple of the default stripe unit size
default_stripeunit=8k Warn if a stripe does not have a stripe unit which is an integer multiple of 8K.
vxse_stripes2
Checks for stripe volumes that have too many or too few columns.
too_narrow_stripe=3 Warn if a striped volume has fewer columns than 3. too_wide_stripe=16 Warn if a striped volume has more columns than 16.
vxse_volplex
Checks for volumes and plexes that are in various states, such as: - Disabled plexes - Detached plexes - Stopped volumes - Disabled volumes - Disabled logs - Failed plexes - Volumes needing recovery
This rule has no tunable attributes.
Summary You should now be able to: • Resize a volume, file system, or LUN while the volume remains online. • Change the volume layout while the volume remains online. • Manage volume maintenance tasks with VEA and from the command line. • Analyze volume configurations by using the Storage Expert utility.
Summary This lesson described how to perform and monitor volume maintenance tasks using VERITAS Volume Manager (VxVM). This lesson described how to perform online administration tasks, such as resizing a volume and changing the layout of a volume, and how to analyze volume configurations with the Storage Expert utility. Next Steps The next lesson describes root disk encapsulation and upgrading. Additional Resources • VERITAS Volume Manager Administrator’s Guide This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager. • VERITAS Volume Manager User’s Guide—VERITAS Enterprise Administrator This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager. • VERITAS Volume Manager Release Notes This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.
Lab 6: Reconfiguring Volumes Online • In this lab, you create and resize volumes and change volume layouts. You also practice using the Storage Expert utility to analyze volume configurations. • Lab instructions are in Appendix A. • Lab solutions are in Appendix B.
Lab 6: Reconfiguring Volumes Online To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Lesson 7 Encapsulation and Rootability
Overview
Recovery Essentials Encapsulation and Rootability Reconfiguring Volumes Online Configuring Volumes Creating Volumes Managing Disks and Disk Groups Installation and Interfaces Virtual Objects
Introduction Overview This lesson describes the process of placing the boot disk under VxVM control. Methods for creating an alternate boot disk, removing the boot disk from VxVM control, and upgrading VxVM are covered. Importance Disk encapsulation enables you to preserve data on a disk when you place the disk under VxVM control. By encapsulating and mirroring your boot disk, you can ensure that if your boot disk is lost, the system continues to operate on the mirror. A thorough understanding of the encapsulation process is important for performing upgrades of VxVM software.
Objectives After completing this lesson, you will be able to: • Place the boot disk under VxVM control. • Create an alternate boot disk by mirroring the boot disk that is under VxVM control. • Remove the boot disk from VxVM control. • Upgrade to a new VxVM version.
Outline of Topics • Placing the Boot Disk Under VxVM Control • Creating an Alternate Boot Disk • Removing the Boot Disk from VxVM Control • Upgrading to a New VxVM Version
What Is Encapsulation?
• Encapsulation is the process of converting partitions into volumes to bring those partitions under VxVM control.
• Requirements:
– One free partition (for public and private region)
– s2 slice that represents the full disk
– 2048 sectors free at beginning or end of disk for the private region
(Figure: an encapsulated data disk gains a private region, and the existing home, eng, acct, and dist partitions become the homevol, engvol, acctvol, and distvol volumes.)
Placing the Boot Disk Under VxVM Control What Is Encapsulation? Encapsulation is a method of placing a disk under VxVM control in which the data that exists on a disk is preserved. Encapsulation converts existing partitions into volumes, which provides continued access to the data on the disk after a reboot. After a disk has been encapsulated, the disk is handled in the same way as an initialized disk. For example, suppose that a system has three partitions on the disk drive. When you encapsulate the disk to bring it under VxVM control, there will be three volumes in the disk group. Data Disk Encapsulation Requirements Disk encapsulation cannot occur unless these requirements are met: • Partition table entries must be available on the disk for the public and private regions. During encapsulation, you are prompted to select the disk layout. If you choose a CDS disk layout, then only one partition is needed. However, should encapsulation as a CDS disk fail, you can specify a sliced layout be used instead, in which case you would need two free partitions. • The disk must contain an s2 slice that represents the full disk (The s2 slice cannot contain a file system.) • 2048 sectors of unpartitioned free space, rounded up to the nearest cylinder boundary, must be available, either at the beginning or at the end of the disk.
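For example, to check a candidate data disk against these requirements before encapsulating it (the device name c1t1d0 is only an example), display its partition table:
# prtvtoc /dev/rdsk/c1t1d0s2
In the output, confirm that slice 2 covers the entire disk, that at least one partition table entry is unused, and that at least 2048 sectors at the beginning or end of the disk are not assigned to any partition.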
What Is Rootability?
• Rootability is the process of encapsulating the root file system, swap device, and other file systems on the boot disk.
• Requirements are the same as for data disk encapsulation, but the private region can be created from swap space. Partitions are mapped to subdisks that are used to create the volumes that replace the original partitions.
(Figure: an encapsulated boot disk gains a private region, and the /, /usr, /var, and swap partitions become the rootvol, usr, var, and swapvol volumes.)
What Is Rootability? Rootability, or root encapsulation, is the process of placing the root file system, swap device, and other file systems on the boot disk under VxVM control. VxVM converts existing partitions of the boot disk into VxVM volumes. The system can then mount the standard boot disk file systems (that is, /, /usr, and so on) from volumes instead of disk partitions. Boot Disk Encapsulation Requirements Boot disk encapsulation has the same requirements as data disk encapsulation, but requires two free partitions (for the public and private regions). When encapsulating the boot disk, the private region can be created from the swap area, which reduces the swap area by the size of the private region. The private region is created at the beginning of the swap area, and the swap partition begins one cylinder from its original location. When creating new boot disks, you should start the partitions on the new boot disks on the next cylinder beyond the 2048 default used for the private region.
Why Encapsulate the Boot Disk? • You should encapsulate the boot disk only if you plan to mirror the boot disk. • Benefits of mirroring the boot disk: – Enables high availability – Fixes bad blocks automatically (for reads) – Improves performance
• There is no benefit to boot disk encapsulation for its own sake. You should not encapsulate the boot disk if you do not plan to mirror the boot disk.
Why Encapsulate Root? It is highly recommended that you encapsulate and mirror the boot disk. Some of the benefits of encapsulating and mirroring root include: • High availability Encapsulating and mirroring root sets up a high availability environment for the boot disk. If the boot disk is lost, the system continues to operate on the mirror disk. • Bad block revectoring If the boot disk has bad blocks, then VxVM reads the block from the other disk and copies it back to the bad block to fix it. SCSI drives automatically fix bad blocks on writes, which is called bad block revectoring. • Improved performance By adding additional mirrors with different volume layouts, you can achieve better performance. Mirroring alone can also improve performance if the root volumes are performing more reads than writes, which is the case on many systems. When Not to Encapsulate Root If you do not plan to mirror root, then you should not encapsulate it. Encapsulation adds a level of complexity to system administration, which increases the complexity of upgrading the operating system.
Limitations of Boot Disk Encapsulation • Encapsulating the boot disk adds steps to OS upgrades. • A system cannot boot from a boot disk that spans multiple devices. • You should never grow or change the layout of boot disk volumes. These volumes map to a physical underlying partition on disk and must be contiguous.
Limitations of Boot Disk Encapsulation A system cannot boot from a boot disk that spans multiple devices. You should never expand or change the layout of boot volumes. No volume associated with an encapsulated boot disk (rootvol, usr, var, opt, swapvol, and so on) should be expanded or shrunk, because these volumes map to a physical underlying partition on the disk and must be contiguous. If you attempt to expand these volumes, the system can become unbootable if it becomes necessary to revert back to slices in order to boot the system. Expanding these volumes can also prevent a successful Solaris upgrade, and a fresh install can be required. Additionally, the upgrade_start script (used in upgrading VxVM to a new version) might fail. Note: You can add a mirror of a different layout, but the mirror is not bootable.
File System Requirements
For root, usr, var, and opt volumes:
• Use UFS file systems. (VxFS is not available until later in the boot process.)
• Use contiguous disk space. (Volumes cannot use striped, RAID-5, concatenated mirrored, or striped mirrored layouts.)
• Do not use dirty region logging on the system volumes. (You can use DRL for the opt and var volumes.)
For swap volumes:
• The first swap volume must be contiguous, and, therefore, cannot use striped or layered layouts.
• Other swap volumes can be noncontiguous and can use any layout. However, there is an implied 2-GB limit of usable swap space per device for 32-bit operating systems.
File System Requirements for Root Volumes To boot from volumes, you should follow these requirements and recommendations for the file systems on root volumes: Solaris
For the root, usr, var, and opt volumes: • Use UFS file systems: You must use UFS file systems for these volumes, because the VERITAS File System (VxFS) package is not available until later in the boot process when the scripts in /etc/rc2.d (multiuser mode) are executed. • Use contiguous disk space: These volumes must be located in a contiguous area on disk, as required by the operating system. For this reason, these volumes cannot use striped, RAID-5, concatenated mirrored, or striped mirrored layouts. • Do not use dirty region logging for root or usr: You cannot use dirty region logging (DRL) on the root and usr volumes. If you attempt to add a dirty region log to the root and usr volumes, you receive an error. Note: The opt and var volumes can use dirty region logging. • Swap Space Considerations: If you have swap defined then it needs to be contiguous disk space. The first swap volume (as listed in the /etc/vfstab file) must be contiguous and, therefore, cannot use striped or layered layouts. Additional swap volumes can be noncontiguous and can use any layout.
Note: You can add noncontiguous swap space through Volume Manager. However, Solaris automatically uses swap devices in a round-robin method, which may reduce expected performance benefits of adding striped swap volumes. For 32-bit operating systems, usable space per swap device is limited to 2 GB. For 64-bit operating systems, this limit is much higher (up to 2^63 - 1 bytes).
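For example, to see which swap devices are currently in use and to compare them with the order defined in /etc/vfstab:
# swap -l
# grep swap /etc/vfstab
Only the first swap device listed in /etc/vfstab must be contiguous; any additional swap volumes reported here can use other layouts.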
Before Encapsulating the Boot Disk
• Plan your rootability configuration. (Example: a boot disk group named sysdg, referenced through the bootdg alias, containing the encapsulated boot disk, a boot disk mirror, and spare disks.)
• Enable boot disk aliases: eeprom “use-nvramrc?=true”
• Record the layout of the partitions on the unencapsulated boot disk to save for future use.
Before Encapsulating the Boot Disk Plan your rootability configuration. bootdg is a system-wide reserved disk group name that is an alias for the disk group that contains the volumes that are used to boot the system. When you place the boot disk under VxVM control, VxVM sets bootdg to the appropriate disk group. You should never attempt to change the assigned value of bootdg; doing so may render your system unbootable. An example configuration would be to place the boot disk into a disk group named sysdg, and add at least two more disks to the disk group: one for a boot disk mirror and one as a spare disk. VxVM will set bootdg to sysdg. Enable boot disk aliases. Before encapsulating your boot disk, set the EEPROM variable use-nvramrc? to true. This enables VxVM to take advantage of boot disk aliases to identify the mirror of the boot disk if a replacement is needed. If this variable is set to false, you must determine which disks are bootable yourself. Set this variable to true as follows: eeprom “use-nvramrc?=true”
Save the layout of partitions before you encapsulate the boot disk. For example, on Solaris, you can use the prtvtoc command to record the layout of the partitions on the unencapsulated boot disk (/dev/rdsk/c0t0d0s2 in this example): # prtvtoc /dev/rdsk/c0t0d0s2
Record the output from this command for future reference.
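For example, to keep a copy of the partition layout in a file as well (the file name is only an example):
# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/c0t0d0s2.vtoc.before-encap
Keep a printed or offline copy too, because a file stored on the boot disk is not accessible if that disk fails.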
Encapsulating the Boot Disk vxdiskadm: “Encapsulate one or more disks”
Follow the prompts by specifying: • Name of the device to add • Name of the disk group to which the disk will be added • Sliced disk format (The boot disk cannot be a CDS disk.)
Encapsulating the Boot Disk: vxdiskadm You can use vxdiskadm for encapsulating data disks as well as the boot disk. To encapsulate the boot disk: 1 From the vxdiskadm main menu, select the “Encapsulate one or more disks” option. 2 When prompted, specify the disk device name of the boot disk. If you do not know the device name of the disk to be encapsulated, type list at the prompt for a complete listing of available disks. 3 When prompted, specify the name of the disk group to which the boot disk will be added. The disk group does not need to already exist. 4 When prompted, accept the default disk name and confirm that you want to encapsulate the disk. 5 If you are prompted to choose whether the disk is to be formatted as a CDS disk that is portable between different operating systems, or as a nonportable sliced disk, then you must select sliced. Only the sliced format is suitable for use with root, boot, or swap disks. 6 When prompted, select the default private region size. vxdiskadm then proceeds to encapsulate the disk. 7 A message confirms that the disk is encapsulated and states that you should reboot your system at the earliest possible opportunity.
After Boot Disk Encapsulation After boot disk encapsulation, you can view operating system-specific files to better understand the encapsulation process.
Solaris: • VTOC • /etc/system • /etc/vfstab Note: Other platform-specific information will be added when VxVM 4.0 is released on those platforms.
Viewing Encapsulated Disks To better understand encapsulation of the boot disk, you can examine operating system files for the changes made by the VxVM root encapsulation process. Solaris
After encapsulating the boot disk, if you view the VTOC, you notice that Tag 14 is used for the public region, and Tag 15 is used for the private region. The partitions for the root, swap, usr, and var partitions are still on the disk, unlike on data disks where all partitions are removed. The boot disk is a special case, and the partitions are kept to make upgrading easier.
As part of the root encapsulation process, the /etc/system file is updated to include information that tells VxVM to boot up on the encapsulated volumes:
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
VxVM also updates the /etc/vfstab file to mount volumes instead of partitions.
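As an illustration only, a root entry changes from a disk partition to the root volume; the exact device paths depend on your disk layout and VxVM version:
Before encapsulation:
/dev/dsk/c0t0d0s0   /dev/rdsk/c0t0d0s0   /   ufs   1   no   -
After encapsulation:
/dev/vx/dsk/rootvol   /dev/vx/rdsk/rootvol   /   ufs   1   no   -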
Alternate Boot Disk: Requirements • An alternate boot disk is a mirror of the entire boot disk. An alternate boot disk preserves the boot block in case the initial boot disk fails. • Creating an alternate boot disk requires: – The boot disk must be encapsulated by VxVM. – Another disk must be available with enough space to contain all of the boot disk partitions. – All disks must be in the boot disk group.
• The root mirror places the private region at the beginning of the disk. The remaining partitions are placed after the private region.
Creating an Alternate Boot Disk
Mirroring the Boot Disk
To protect against boot disk failure, you can create an alternate boot disk. An alternate boot disk is a mirror of the entire boot disk. You can use the alternate boot disk to boot the system if the primary boot disk fails.
Requirements for Mirroring the Boot Disk
• The boot disk must be encapsulated by VxVM in order to be mirrored.
• To mirror the boot disk, you must provide another disk with enough space to contain all of the root partitions.
• You can only use disks in the boot disk group for the boot disk and alternate boot disks.
The root mirror places the private region at the beginning of the disk, and the remaining partitions are placed after the private region. Each disk contains all of the data, but data is not necessarily placed at the exact same location on each disk.
Note: Whenever you create an alternate boot disk, you should always verify that the root mirror is bootable.
Why Create an Alternate Boot Disk?
Creating a mirror of a system boot disk makes the system less vulnerable to failure. If one disk fails, the system can function with the mirror. An alternate boot disk is used if the boot disk becomes unbootable due to a stale root volume, errors in VxVM header information, or hardware failure on the boot disk.
Creating an Alternate Boot Disk
VEA:
• Highlight the boot disk, and select Actions—>Mirror Disk.
• Specify the target disk to use as the alternate boot disk.
vxdiskadm: “Mirror volumes on a disk”
CLI:
To mirror the root volume only:
# vxrootmir alternate_disk
To mirror all other unmirrored, concatenated volumes on the boot disk to the alternate disk:
# vxmirror boot_disk alternate_disk
To mirror other volumes to the boot disk or other disks:
# vxassist mirror homevol alternate_disk
Creating an Alternate Boot Disk: VEA
1 Select a disk that is at least as large as the boot disk, and add the disk to the boot disk group.
2 In the main window, highlight the boot disk, then select Actions—>Mirror Disk.
3 In the Mirror Disk dialog box, verify the name of the boot disk, and specify the target disk to use as the alternate boot disk.
4 Click Yes in the Mirror Disk dialog box to complete the mirroring process.
5 After the root mirror is created, verify that the root mirror is bootable.
Creating an Alternate Boot Disk: vxdiskadm
1 Select a disk that is at least as large as the boot disk, and add the disk to the boot disk group.
2 In the vxdiskadm main menu, select the “Mirror volumes on a disk” option.
3 When prompted, specify the name of the disk containing the volumes to be mirrored (that is, the name of the boot disk).
4 When prompted, specify the name of the disk to which the boot disk will be mirrored.
5 A summary of the action is displayed, and you are prompted to confirm the operation.
6 After the root mirror is created, verify that the root mirror is bootable.
Creating an Alternate Boot Disk: CLI 1 Select a disk that is at least as large as the boot disk, and add the disk to the boot disk group. 2 To create a mirror for the root volume only, use the vxrootmir command: # vxrootmir alternate_disk where alternate_disk is the disk name assigned to the other disk. vxrootmir invokes vxbootsetup (which invokes installboot), so that the disk is partitioned and made bootable. (The process is similar to using vxmirror and vxdiskadm.) 3 To mirror all other concatenated, nonmirrored volumes on the primary boot disk to your alternate boot disk, you can use the command: # vxmirror boot_disk alternate_disk 4 Other volumes on the boot disk can be mirrored separately using vxassist. For example, if you have a /home file system on a volume homevol, you can mirror it to alternate_disk using the command: # vxassist mirror homevol alternate_disk If you do not have space for a copy of some of these file systems on your alternate boot disk, you can mirror them to other disks. You can also span or stripe these other volumes across other disks attached to your system. 5 After the root mirror is created, verify that the root mirror is bootable.
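For example, after vxrootmir completes, you can confirm that the root volume now has a second plex before you depend on the mirror (plex and disk names vary by system):
# vxprint -ht rootvol
Two plexes should be listed in the ENABLED/ACTIVE state, one on each disk. Then verify that the mirror is actually bootable by booting from its device alias, as described later in this lesson.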
Boot Disk Error Messages
Stale root volume:
vxvm: vxconfigd: Warning: Plex rootvol-01 for root volume is stale or unusable
Failed startup:
vxvm: vxconfigd: Error: System startup failed
Root plex not valid:
vxvm: vxconfigd: Error: System boot disk does not have a valid root plex
Please boot from one of the following disks:
Disk: disk01 Device: c0t1d0s0
(Alternate boot disks are listed.)
Possible Boot Disk Errors • Root plex is stale or unusable vxvm:vxconfigd: Warning: Plex rootvol-01 for root volume is stale or unusable • System startup failed vxvm:vxconfigd: ERROR: System startup failed • System boot disk does not have a valid root plex vxvm:vxconfigd: ERROR: System boot disk does not have a valid root plex Please boot from one of the following disks: Disk: diskname Device: device ... In the third message, alternate boot disks containing valid root mirrors are listed as part of the error message. Try to boot from one of the disks named in the error message. You may be able to boot using a device alias for one of the named disks. For example, use this command: ok> boot vx-diskname
Booting from an Alternate Mirror
To boot the system using an alternate boot disk after failure of the primary boot disk:
1. Set the eeprom variable use-nvramrc? to true:
ok> setenv use-nvramrc? true
ok> reset
This variable must be set to true to enable the use of alternate boot disks.
2. Check for available boot disk aliases:
ok> devalias
Output displays the name of the boot disk (vx-rootdisk) and available mirrors (vx-diskname).
3. Boot from an available boot disk alias:
ok> boot vx-diskname
Booting from an Alternate Mirror If the boot disk is encapsulated and mirrored, you can use one of its mirrors to boot the system if the primary boot disk fails. To boot the system after failure of the primary boot disk on a SPARC system: 1 Check to ensure that the eeprom variable use-nvramrc? is set to true: ok> printenv use-nvramrc? This variable must be set to true to enable the use of alternate boot disks. To set the value of use-nvramrc? to true: ok> setenv use-nvramrc? true ok> reset 2 Check for available boot disk aliases: ok> devalias The devalias command displays the names of the boot disk and root mirrors. For example: vx-rootdisk vx-diskname Mirrors of the boot disk are listed in the form vx-diskname. 3 Boot from an available boot disk alias: ok> boot vx-diskname
Unencapsulating a Boot Disk • To unencapsulate a boot disk, use vxunroot. • Requirements: – Remove all but one plex of rootvol, swapvol, usr, var, opt, and home. – You must have one disk in addition to the boot disk in the boot disk group.
• Use vxunroot when you need to: – Boot from physical system partitions. – Change the size or location of the private region on the boot disk. – Upgrade both the OS and VxVM.
• Do not use vxunroot if you are only upgrading VxVM packages, including the VEA package.
Removing the Boot Disk from VxVM Control The vxunroot Command To convert the root file systems back to being accessible directly through disk partitions instead of through volume devices, you use the vxunroot utility. Other changes that were made to ensure the booting of the system from the root volume are also removed so that the system boots with no dependency on VxVM. For vxunroot to work properly, the following conditions must be met: • All but one plex of rootvol, swapvol, usr, var, opt, and home must be removed (using vxedit or vxplex). • One disk in addition to the boot disk must exist in the boot disk group. If none of these conditions is met, the vxunroot operation fails, and volumes are not converted back to disk partitions. When to Use vxunroot Use vxunroot when you need to: • Boot from physical system partitions. • Change the size or location of the private region on the boot disk. • Upgrade both your operating system and VxVM. You do not need to use vxunroot if you are only upgrading VxVM packages, including the VEA package.
The vxunroot Command
1. Ensure that the boot disk volumes have only one plex each:
# vxprint -ht rootvol swapvol usr var
2. If boot disk volumes have more than one plex each, remove the unnecessary plexes:
# vxplex -o rm dis plex_name
3. Run the vxunroot utility:
# vxunroot
Unencapsulating the Boot Disk
To convert root volumes back to partitions:
1 Ensure that the rootvol, swapvol, usr, and var volumes have only one associated plex each. The plex must be contiguous, nonstriped, nonspanned, and nonsparse. For information about the plex, use the following command:
# vxprint -ht rootvol swapvol usr var
2 If any of these volumes have more than one associated plex, remove the unnecessary plexes using the command:
# vxplex -o rm dis plex_name
3 Run the vxunroot program using the following command:
# vxunroot
Solaris Note: This command changes the volume entries in /etc/vfstab to the underlying disk partitions for the rootvol, swapvol, usr, and var volumes. The command also modifies /etc/system and prompts for a reboot so that disk partitions are mounted instead of volumes for the root, swap, usr, and var volumes.
Notes on Upgrading VxVM
• Determine what you are upgrading: VxVM only, both VxVM and the operating system, or the operating system only.
• Follow documentation for VxVM and the operating system.
• Install appropriate patches.
• License is not required to upgrade VxVM only.
• Your existing VxVM configuration is retained.
• Upgrading VxVM does not upgrade existing disk group versions. You may need to manually upgrade each of your disk groups after a VxVM upgrade.
Upgrading to a New VxVM Version General Notes on Upgrades • Determine what you are upgrading: Before you upgrade, determine whether you need to upgrade VxVM only, both VxVM and the operating system, or the operating system only. • Follow documentation: When upgrading, always follow the operating system and VxVM release notes and other documentation to determine proper installation procedures and required patches. • Install appropriate patches: You should install appropriate patches before adding new VxVM packages. For the latest patch information, visit the VERITAS Technical Support Web site. • License is not required to upgrade VxVM only: If you are already running an earlier release of VxVM, you do not need a new license key to upgrade to VxVM release 4.0. However, you must install the new licensing package, VRTSvlic, which uses your existing licensing information. VRTSvlic recognizes keys created in the previous format, and the new utilities in the VRTSvlic package report on, test, and install keys of both formats. • Your existing VxVM configuration is retained: The upgrade procedures allow you to retain your existing VxVM configuration. After upgrading, you can resume using VxVM without running the vxinstall program. • Upgrading VxVM does not upgrade existing disk group versions: Importing a pre-4.0 VxVM disk group does not automatically upgrade the disk group version to the VxVM 4.0 level. You may need to manually upgrade each of your disk groups after a VxVM upgrade.
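For example, to check the current version of an imported disk group named datadg and then upgrade it to the latest version supported by the installed VxVM release (the disk group name is an example):
# vxdg list datadg | grep version
# vxdg upgrade datadg
Upgrade a disk group only after you are sure that you no longer need to import it on hosts running the older VxVM version.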
Upgrading VxVM Only Methods: • VxVM installation script (installvm) • Manual package upgrade • VxVM upgrade scripts – upgrade_start – upgrade_finish
Note: Upgrade procedures are documented in the VERITAS Volume Manager 4.0 Installation Guide. Follow all instructions in the installation guide when performing any upgrade. This training provides guidelines for a successful upgrade; refer to the installation documentation for detailed steps.
Upgrading Volume Manager Only
If you are already running a version of your operating system that is supported with the new version of VxVM, then you can upgrade Volume Manager only by using one of several methods:
• VxVM installation script: Use the installvm script to install the new version of VxVM. The installvm process is the easiest method of upgrading.
• Manual package upgrade: Use the operating system-specific package installation commands to install the new version of VxVM on top of your existing software. The advantage of this method is that only one reboot is required.
• VxVM upgrade scripts: Use the upgrade_start and upgrade_finish scripts to install the VxVM software. The advantage of this method is that VxVM configuration data is backed up and the boot disk is unencapsulated during the upgrade procedure. However, multiple reboots are required.
Upgrading VxVM Only: installvm
• Simply invoke the installvm script and follow instructions when prompted.
• If you are doing a multihost installation, you can avoid copying packages to each system. For example, to ensure that packages are not copied remotely when using the NFS-mountable file system $NFS_FS:
# cd /cdrom/CD_NAME
# cp -r * $NFS_FS
# cd volume_manager
# ./installvm -pkgpath $NFS_FS/volume_manager/pkgs -patchpath $NFS_FS/volume_manager/patches
• This copies the files to an NFS mounted file system that is connected to all of the systems on which you want to install the software.
Upgrading VxVM Only: installvm You can use the installvm script to upgrade VxVM with an encapsulated or unencapsulated boot disk. You simply invoke the installvm script and follow instructions when prompted. If you are doing a multihost installation and you want to avoid the performance penalty of the installation scripts copying packages from the CDROM to a disk attached to each system, you can use installvm with the format: # ./installvm -pkgpath nfs/auto-mounted filesystem
Upgrading VxVM Only: Manual Package Upgrade 1. Bring the system to single-user mode. 2. Stop the vxconfigd and vxiod daemons: # vxdctl stop # vxiod -f set 0 3. Remove the VMSA software package VRTSvmsa (optional). 4. Add the new VxVM packages using package installation commands. 5. Perform a reconfiguration reboot.
Upgrading VxVM Only: Manual Package Upgrade To upgrade Volume Manager only on an encapsulated boot disk by using an operating system-specific package installation command: 1 Bring the system to single-user mode. 2 Stop the vxconfigd and vxiod daemons. # vxdctl stop # vxiod -f set 0 3 Remove the VMSA software package. This step is optional. You should not remove the VMSA package if you still have clients running an old version of VxVM. However, remember that VMSA does not run with VxVM 3.5 and later versions of vxconfigd. 4 Add the new VxVM packages by using the operating system-specific package installation commands. You must add the new licensing package first on the command line. 5 Perform a reconfiguration reboot.
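On Solaris, for example, step 4 might look like the following; the directory on the CD-ROM and the package list vary by release, but the licensing package must be added before the VxVM package. The path reuses the CD_NAME placeholder from the installvm example and is not a literal directory name:
# cd /cdrom/CD_NAME/volume_manager/pkgs
# pkgadd -d . VRTSvlic VRTSvxvm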
Scripts Used in Upgrades
The upgrade_start and upgrade_finish scripts preserve your VxVM configuration.
upgrade_start:
• Checks system
• Converts volumes to partitions
• Preserves files
• Updates system files
• Saves upgrade information in VXVM4.0-UPGRADE
upgrade_finish:
• Corrects mistakes due to abnormal termination of upgrade_start
• Checks licenses
• Converts partitions to volumes
• Reloads drivers
• Restores system and configuration files
• Verifies VxVM installation
To check for potential problems before any upgrade, run:
# upgrade_start -check
Upgrading VxVM Only: Upgrade Scripts The upgrade_start and upgrade_finish scripts are available in the scripts directory on the VERITAS CD-ROM. These scripts preserve your VxVM configuration information while you upgrade the system. Ensure that you use the upgrade_start and upgrade_finish scripts included with VxVM 4.0 (not versions of the scripts provided with earlier versions of VxVM) when upgrading from an earlier release. Before any upgrade, you should run the upgrade_start -check command to find any problems that exist which could prevent a successful upgrade: # upgrade_start -check
This script enables you to determine if any changes are needed to your configuration before you perform an upgrade. This script reports errors, if any are found. Otherwise, it reports success and you can proceed with running the upgrade_start script.
The upgrade_start Script
The upgrade_start script prepares the previous version of VxVM for its removal:
• Checks your system for problems that may prevent a successful upgrade
• Checks to determine if you have previously run the upgrade scripts
• Verifies that VRTSvxvm is installed
• Preserves files that need to be restored after an upgrade
• Updates operating system files, such as /etc/system and /etc/vfstab
• Saves information in VXVM4.0-UPGRADE
• Converts key file systems from volumes to physical disk partitions and checks your running Solaris version
• Touches /VXVM4.0-UPGRADE/.start_runed to prevent Volume Manager from starting after reboot
The upgrade_finish Script The upgrade_finish script: • Corrects any mistakes made due to an abnormal termination of upgrade_start • Checks for appropriate licenses • Converts key file systems from physical disk partitions back to volumes • Reloads vxdmp, vxio, and vxspec drivers • Restores saved configuration files and VxVM state files • Restores operating system files, such as /etc/system and /etc/vfstab • Rebuilds the volboot file • Starts VxVM daemons • Verifies a successful installation of VxVM
Upgrading VxVM Only: Upgrade Scripts
1. Mount the VERITAS CD-ROM.
2. Run upgrade_start -check.
3. Run the upgrade_start script.
4. Reboot the system to single-user mode.
5. When the system comes up, mount the /opt partition (if it is not part of the root file system).
6. Remove the VxVM package and other related VxVM packages by using package removal commands.
7. Reboot the system to multiuser mode.
8. Verify that /opt is mounted, and then install the new VxVM packages by using package installation commands.
9. Run the upgrade_finish script.
To upgrade Volume Manager only on an encapsulated boot disk by using the upgrade scripts: 1 Mount the VERITAS CD-ROM and change to the scripts directory. 2 Run the upgrade_start -check command to find any problems that exist which could prevent a successful upgrade. 3 Run the upgrade_start script: # ./upgrade_start 4 Reboot the system to single-user mode. 5 When the system comes up, mount the /opt partition (if it is not part of the root file system). 6 Remove the VxVM package and other related VxVM packages by using an operating system-specific package removal command. Remove the VRTSvxvm package after you have removed the optional packages. Note: Do not remove the VRTSvmsa package if you will still manage other systems running older versions of VxVM. 7 Reboot the system to multiuser mode. 8 Verify that /opt is mounted, and then install the new VxVM packages by using an operating system-specific package installation command. 9 Change to the scripts directory, and run the upgrade_finish script: # ./upgrade_finish
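In step 6, for example, the Solaris package removal might look like this; remove any optional packages (such as the documentation and manual page packages, shown here only as examples) before removing VRTSvxvm:
# pkgrm VRTSvmdoc VRTSvmman
# pkgrm VRTSvxvm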
Upgrading the OS Only
To prepare:
1 Detach any boot disk mirrors.
2 Check alignment of boot disk volumes.
3 Ensure that /opt is not a symbolic link.
To upgrade:
1 Bring system to single-user mode.
2 Load VERITAS CD-ROM.
3 Check for upgrade issues.
4 Run upgrade_start.
5 Reboot to single-user mode.
6 Upgrade your operating system.
7 Reboot to single-user mode.
8 Load VERITAS CD-ROM.
9 Run upgrade_finish.
10 Reboot to multiuser mode.
Upgrading Solaris Only
Prepare for the Upgrade
1 If the boot disk is mirrored, detach the mirror.
2 Check the alignment of volumes on the boot disk to ensure that at least one plex for each of those volumes is formed from a single subdisk that begins on a cylinder boundary. The upgrade scripts automatically convert file systems on volumes back to using regular disk partitions, as necessary. If the upgrade scripts detect any problems (such as lack of cylinder alignment), an explanation of the problem is displayed, and the upgrade does not proceed.
3 If you plan to install any documentation or manual pages, ensure that the /opt directory exists, is writable, and is not a symbolic link. The volumes that are not converted by the upgrade_start script will not be available during the upgrade process. If you have a symbolic link from /opt to one of the unconverted volumes, the symbolic link will not function during the upgrade and items in /opt will not be installed.
Perform the Upgrade
1 Bring the system down to single-user mode.
2 Load and mount the VERITAS CD-ROM, and locate the scripts directory.
3 Run the upgrade_start -check command to find any problems that exist which could prevent a successful upgrade.
4 Run the upgrade_start script:
# ./upgrade_start
5 Reboot to single-user mode.
6 Upgrade your operating system. Refer to your OS installation documentation to install the operating system and any required patches.
7 Reboot to single-user mode.
8 Load and mount the VERITAS CD-ROM, and locate the scripts directory.
9 Complete the upgrade by running the upgrade_finish script:
# ./upgrade_finish
10 Reboot to multiuser mode.
Upgrading VxVM and the OS
To prepare:
1 Install license keys if needed.
2 Detach any boot disk mirrors.
3 Check alignment of boot disk volumes.
4 Ensure that /opt is not a symbolic link.
To remove old version:
1 Bring system to single-user mode.
2 Load VERITAS CD-ROM.
3 Check for upgrade issues.
4 Run upgrade_start.
5 Reboot to single-user mode.
6 Remove VxVM packages.
To install new version:
1 Reboot to single-user mode.
2 Upgrade your operating system.
3 Reboot to single-user mode.
4 Load VERITAS CD-ROM.
5 Add new licensing and VxVM packages.
6 Run upgrade_finish.
7 Perform reconfiguration reboot.
8 Add additional packages.
Upgrading VxVM and Your Operating System Prepare for the Upgrade 1 If you are upgrading VxVM from a version earlier than 3.0.2, you must obtain and install a VxVM license key. 2 If the boot disk is mirrored, detach the mirror. 3 Check the alignment of volumes on the boot disk. 4 If you plan to install any documentation or manual pages, ensure that the /opt directory exists, is writable, and is not a symbolic link. Remove the Old Packages 1 Bring the system down to single-user mode. 2 Load and mount the VERITAS CD-ROM, and locate the scripts directory. 3 Run the upgrade_start -check command to find any problems that exist which could prevent a successful upgrade. 4 Run the upgrade_start script: # ./upgrade_start 5 Reboot to single user mode. 6 Remove the old VxVM package and other related VxVM packages. Note: Do not remove the VRTSvmsa package if you still have clients running old versions of VxVM.
Upgrade the Operating System and VxVM
1 Reboot to single-user mode.
2 Upgrade your operating system. Refer to your OS installation documentation to install the operating system and any required patches.
3 Reboot to single-user mode.
4 Load and mount the VERITAS CD-ROM.
5 Locate the directory that contains the VxVM packages and add the new VxVM licensing and software packages by using an operating system-specific package installation command.
6 Locate the scripts directory, and complete the upgrade by running the upgrade_finish script:
# ./upgrade_finish
7 Perform a reconfiguration reboot.
8 Install any additional packages by using an operating system-specific package installation command.
After Upgrading
After completing the upgrade and rebooting:
1. Confirm that key VxVM processes (vxconfigd, vxnotify, and vxrelocd) are running by using the command:
# ps -ef | grep vx
2. Verify the existence of the boot disk’s volumes:
# vxprint -ht
Note: To perform an upgrade without using the upgrade scripts, you can use vxunroot to convert volumes back to partitions. For more information, see the VERITAS Volume Manager Installation Guide and visit http://support.veritas.com.
After Upgrading After completing the upgrade and rebooting, confirm the following: 1 Confirm that key VxVM processes (vxconfigd, vxnotify, and vxrelocd) are running by using the command: # ps -ef | grep vx 2 Verify the existence of the boot disk’s volumes by using vxprint: # vxprint -ht At this point, your preupgrade configuration is in effect, and any file systems previously defined on volumes are defined and mounted. Note: If you prefer to perform an upgrade without using the upgrade_start and upgrade_finish scripts, you can use the vxunroot command to convert volumes back to partitions. See the VERITAS Volume Manager Installation Guide and visit http://support.veritas.com for more information.
Upgrading VxFS To upgrade VxFS, follow this sequence: 1. Unmount any mounted VERITAS file systems. 2. Remove old VxFS packages. 3. Comment out VxFS file systems in the file system table file, then reboot to flush VxFS kernel hooks. 4. Upgrade the OS if necessary for VxFS version compatibility. 5. Add the new VxFS packages. 6. Undo any changes made to the file system table file. 7. Reboot.
Upgrading to a New VxFS Version You must uninstall any previous version of the VRTSvxfs package before installing a new version. You do not need to remove existing VERITAS file systems, but all of them must remain unmounted throughout the upgrade process. Before upgrading, ensure that the new version of VxFS is compatible with the operating system version you are running. 1 Unmount any mounted VERITAS file system. You cannot remove the VRTSvxfs package if any VERITAS file system remains mounted. 2 Remove all VxFS packages. Remove the optional packages before the VRTSvxfs package. If you are upgrading from versions VxFS 3.3.3 or earlier, then you may also need to remove the VRTSqio and VRTSqlog packages. 3 If you have VxFS file systems specified in the file system table file, comment them out, and then reboot to flush VxFS kernel hooks still in RAM to avoid possible system panics. 4 If the new version of VxFS is not compatible with the operating system version you are running, upgrade the operating system. Refer to your OS installation documentation for instructions on how to upgrade your OS. 5 Install the new version of VxFS by following standard installation procedures. Mount the VERITAS CD-ROM and add the VxFS packages of the new version using the package installation command for your operating system. 6 Undo the changes that you made to the file system table file. 7 Reboot the system to mount any VxFS file systems.
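For example, on Solaris the first two steps might look like the following; the mount point /data1 is only an example:
# mount -v | grep vxfs
# umount /data1
# pkgrm VRTSvxfs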
Summary You should now be able to: • Place the boot disk under VxVM control. • Create an alternate boot disk by mirroring the boot disk that is under VxVM control. • Remove the boot disk from VxVM control. • Upgrade to a new VxVM version.
Summary This lesson described the disk encapsulation process and how to encapsulate the boot disk on your system. Methods for creating an alternate boot disk and unencapsulating a boot disk were covered. Next Steps The next lesson introduces basic recovery operations. Additional Resources • VERITAS Volume Manager Administrator’s Guide This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager. • VERITAS Volume Manager Installation Guide This guide provides information on installing and initializing VxVM and the VERITAS Enterprise Administrator graphical user interface.
Lab 7: Encapsulation and Rootability
• In this lab, you place the boot disk under VxVM control, create a boot disk mirror, disable the boot disk, and boot up from the mirror.
• Then, you boot up again from the boot disk, break the mirror, and remove the boot disk from the boot disk group.
• Finally, you reencapsulate the boot disk and re-create the mirror.
• Lab instructions are in Appendix A.
• Lab solutions are in Appendix B.
Lab 7: Encapsulation and Rootability To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Lesson 8 Recovery Essentials
Overview
Lesson roadmap (current lesson first): Recovery Essentials; Encapsulation and Rootability; Reconfiguring Volumes Online; Configuring Volumes; Creating Volumes; Managing Disks and Disk Groups; Installation and Interfaces; Virtual Objects
Introduction Overview This lesson introduces basic recovery concepts and techniques. This lesson describes how data consistency is maintained after a system crash and how hot relocation restores redundancy to failed VxVM objects. This lesson also describes how to manage spare disks, replace a failed disk, and recover a volume. Importance VxVM protects systems from disk failures and helps you to recover from disk failures. You can use the techniques discussed in this lesson to recover from a variety of disk- and volume-related problems that may occur.
Objectives
After completing this lesson, you will be able to:
• Describe mirror resynchronization processes.
• Describe the hot-relocation process.
• Manage spare disks.
• Replace a failed disk.
• Return relocated subdisks back to their original disk.
• Recover a volume.
• Describe tasks used to protect the VxVM configuration.
Outline of Topics • Maintaining Data Consistency • Hot Relocation • Managing Spare Disks • Replacing a Disk • Unrelocating a Disk • Recovering a Volume • Protecting the VxVM Configuration
Resynchronization
Resynchronization is the process of ensuring that, after a system crash:
• All mirrors in a volume contain exactly the same data.
• Data and parity in RAID-5 volumes agree.
(The slide illustrates writes in progress at the time of a crash and asks: Did all writes complete? Do all mirrors contain the same data? If not, resynchronize.)
Types of mirror resynchronization:
• Atomic-copy resynchronization
• Read-writeback resynchronization
Maintaining Data Consistency What Is Resynchronization? Resynchronization is the process of ensuring that, after a system crash: • All mirrors in mirrored volumes contain exactly the same data. • Data and parity in RAID-5 volumes agree. Data is written to the mirrors of a volume in parallel. If a system crash occurs before all the individual writes complete, some writes may complete while other writes do not. This can cause two reads from the same region of the volume to return different results if different mirrors are used to satisfy the read request. In the case of RAID-5 volumes, it can lead to parity corruption and incorrect data reconstruction. VxVM uses volume resynchronization processes to ensure that all copies of the data match exactly. VxVM records when a volume is first written to and marks it as dirty. When a volume is closed by all processes or stopped cleanly by the administrator, all writes have been completed, and the Volume Manager removes the dirty flag for the volume. Only volumes that are marked dirty when the system reboots require resynchronization. Not all volumes require resynchronization after a system failure. Volumes that were never written or that had no active I/O when the system failure occurred do not require resynchronization.
Atomic-Copy Resynchronization
Atomic-copy resynchronization involves the sequential writing of all blocks of a volume to a plex. This type of resynchronization is used in:
• Adding a new plex (mirror)
• Reattaching a detached plex (mirror) to a volume
• Online reconfiguration operations:
– Moving a plex
– Copying a plex
– Creating a snapshot
– Moving a subdisk
Atomic-Copy Resynchronization
Atomic-copy resynchronization refers to the sequential writing of all blocks of the volume to a plex. This operation is used any time a new mirror is added to a volume, or an existing mirror is stale and must be resynchronized.
Atomic-Copy Resynchronization Process
1 The plex being copied to is set to a write-only state.
2 A read thread is started on the whole volume. (Every block is read internally.)
3 Blocks are written from the “good” plex to the stale or new plex.
Read-Writeback Resynchronization
Read-writeback resynchronization is used for volumes that were fully mirrored prior to a system failure. This type of resynchronization involves:
• Mirrors marked ACTIVE remain ACTIVE, and the volume is placed in the SYNC state.
• An internal read thread is started. Blocks are read from the plex specified in the read policy, and the data is written to the other plexes.
• Upon completion, the SYNC flag is turned off.
Read-Writeback Resynchronization
Read-writeback resynchronization is used when two or more plexes contain the same data, but there may have been outstanding writes to the volume when the system crashed. The application is responsible for ensuring that its own writes completed, so the application must repair any writes that did not complete; the responsibility of VxVM is to guarantee that the mirrors have the same data.
• A database (as an application) usually does this by writing the original data back to the disk.
• A file system checks to ensure that all of its structures are intact. The applications using the file system must do their own checking.
Read-Writeback Resynchronization Process
1 All plexes that were ACTIVE at the time of the crash have the volume’s data, and each plex is set to the ACTIVE state again, but the volume is placed in the SYNC (or NEEDSYNC) state.
2 An internal read thread is started to read the entire volume. Blocks are read from whatever plex is in the read policy and are written back to the other plexes.
3 When the resynchronization process is complete, the SYNC flag is turned off (set to ACTIVE).
User-initiated reads are also written to the other plexes in the volume but otherwise have no effect on the internal read thread.
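Although not covered on the slide, you can observe a resynchronization while it runs. A minimal sketch, using the datadg and datavol example names from earlier lessons:
# vxtask list
# vxtask monitor
# vxprint -g datadg -ht datavol
While the internal read thread is active, the volume appears in the SYNC state in the vxprint output; vxtask list and vxtask monitor show the progress of the resynchronization task.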
Impact of Resynchronization Resynchronization takes time and impacts performance. To minimize this performance impact, VxVM has the following solutions: • Dirty region logging for mirrored volumes • RAID-5 logging for RAID-5 volumes • FastResync for mirrored and snapshot volumes
Minimizing the Impact of Resynchronization The process of resynchronization can impact system performance and can take time. To minimize the performance impact of resynchronization, VxVM provides: • Dirty region logging for mirrored volumes • RAID-5 logging for RAID-5 volumes • FastResync for mirrored and snapshot volumes
Dirty Region Logging
• For mirrored volumes with logging enabled, DRL speeds plex resynchronization. Only regions that are dirty need to be resynchronized after a crash.
• VxVM selects an appropriate log size based on volume size. The log is relatively small compared to the size of the volume:
Volume Size       Default Log Size
Less than 1 GB    16K
1 GB to 4 GB      33K
4 GB to 6 GB      49K
6 GB to 9 GB      82K
9 GB to 12 GB     99K
...               ...
• If you resize a volume, the log size does not change. To resize the log, you must delete the log and add it back after resizing the volume.
Dirty Region Logging You were introduced to dirty region logging (DRL) when you created a volume with a log. This section describes how dirty region logging works. How Does DRL Work? DRL logically divides a volume into a set of consecutive regions and keeps track of the regions to which writes occur. A log is maintained that contains a status bit representing each region of the volume. For any write operation to the volume, the regions being written are marked dirty in the log before the data is written. If a write causes a log region to become dirty when it was previously clean, the log is synchronously written to disk before the write operation can occur. On system restart, VxVM recovers only those regions of the volume that are marked as dirty in the dirty region log. Log subdisks store the dirty region log of a volume that has DRL enabled. • Only one log subdisk can exist per plex. • Multiple log subdisks can be used to mirror the dirty region log. • If a plex contains a log subdisk and no data subdisks, it is called a log plex. Only a limited number of bits can be marked dirty in the log at any time. The dirty bit for a region is not cleared immediately after writing the data to the region. Instead, it remains marked as dirty until the corresponding volume region becomes the least recently used.
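As a brief, hedged illustration of enabling DRL on an existing mirrored volume (datadg and datavol are the example names used throughout this course):
# vxassist -g datadg addlog datavol logtype=drl
# vxprint -g datadg -ht datavol
# vxassist -g datadg remove log datavol
The vxprint output shows the added log plex; the last command removes one log again.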
Dirty Region Logging (illustration)
The slide depicts a mirrored volume divided into regions together with the two DRL bitmaps. Before a crash, the bits for regions with writes in progress are set in the active bitmap and the recovery bitmap is clear. After a crash, the contents of the active bitmap are combined into the recovery bitmap, which then drives resynchronization, and the active bitmap is cleared.
Dirty Region Log Size VxVM selects an appropriate dirty region log size based on the volume size. In the dirty region log: • A small number of bytes of the DRL is reserved for internal use. The remaining bytes are used for the DRL bitmap. – The bytes are divided into two bitmaps: an active bitmap and a recovery bitmap. – Each bit in the active bitmap maps to a single region of the volume. • A maximum of 2048 dirty regions per system is allowed by default. How the Bitmaps Are Used in Dirty Region Logging Both bitmaps are zeroed when the volume is started initially, after a clean shutdown. As regions transition to dirty, the log is flushed before the writes to the volume occur. If the system crashes, the active map is OR’d with the recovery map. • Mirror resynchronization is now limited to the dirty bits in the recovery map. • The active map is simultaneously reset, and normal volume I/O is permitted. Utilization of two bitmaps in this fashion allows VxVM to handle multiple system crashes.
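Because the log does not grow with the volume, a resize is typically handled by removing the log, resizing, and adding the log back so that VxVM chooses a size appropriate to the new volume size. A minimal sketch with example names and sizes:
# vxassist -g datadg remove log datavol
# vxassist -g datadg growby datavol 2g
# vxassist -g datadg addlog datavol logtype=drl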
RAID-5 Logging • For RAID-5 volumes, logging helps to prevent data corruption during recovery. • RAID-5 logging records changes to data and parity on a persistent device (log disk) before committing the changes to the RAID-5 volume. • Logs are associated with a RAID-5 volume by being attached as log plexes.
RAID-5 Logging
Dirty region logging is used for mirrored volumes only. RAID-5 volumes use RAID-5 logs to keep a copy of the data and parity currently being written. You were introduced to RAID-5 logging when you created a volume with a log.
Without logging, data not involved in any active writes can be lost or silently corrupted if both a disk in a RAID-5 volume and the system fail. If this double failure occurs, there is no way of knowing if the data being written to the data portions of the disks or the parity being written to the parity portions have actually been written.
RAID-5 logging is used to prevent corruption of data during recovery by immediately recording changes to data and parity to a log area on a persistent device (such as a disk-resident volume or nonvolatile RAM). The new data and parity are then written to disk.
Logs are associated with a RAID-5 volume by being attached as log plexes. More than one log plex can exist for each RAID-5 volume, in which case the log areas are mirrored.
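As a short, hedged illustration (r5vol is a hypothetical RAID-5 volume name), a RAID-5 log can be added to or removed from an existing volume with vxassist, which typically creates a RAID-5 log by default for a RAID-5 volume:
# vxassist -g datadg addlog r5vol
# vxprint -g datadg -ht r5vol
# vxassist -g datadg remove log r5vol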
What Is Hot Relocation?
Hot relocation: The system automatically reacts to I/O failures on redundant VxVM objects and restores redundancy to those objects by relocating affected subdisks.
Subdisks are relocated to disks designated as spare disks or to free space in the disk group.
(The slide illustrates subdisks moving from VM disks to spare disks.)
Hot Relocation What Is Hot Relocation? Hot relocation is a feature of VxVM that enables a system to automatically react to I/O failures on redundant (mirrored or RAID-5) VxVM objects and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again. Partial Disk Failure When a partial disk failure occurs (that is, a failure affecting only some subdisks on a disk), redundant data on the failed portion of the disk is relocated. Existing volumes on the unaffected portions of the disk remain accessible. With partial disk failure, the disk is not removed from VxVM control and is labeled as FAILING, rather than as FAILED. Before removing a FAILING disk for replacement, you must evacuate any remaining volumes on the disk. Note: Hot relocation is only performed for redundant (mirrored or RAID-5) subdisks on a failed disk. Nonredundant subdisks on a failed disk are not relocated, but the system administrator is notified of the failure.
Hot-Relocation Process
1. vxrelocd detects disk failure.
2. Administrator is notified by e-mail.
3. Subdisks are relocated to a spare.
4. Volume recovery is attempted.
(The slide illustrates these steps across the volumes, VM disks, and spare disks in a disk group.)
How Does Hot Relocation Work?
The hot-relocation feature is enabled by default. No system administrator action is needed to start hot relocation when a failure occurs. The vxrelocd daemon starts during system startup and monitors VxVM for failures involving disks, plexes, or RAID-5 subdisks. When a failure occurs, vxrelocd triggers a hot-relocation attempt and notifies the system administrator, through e-mail, of failures and any relocation and recovery actions.
The vxrelocd daemon is started from the S95vxvm-recover file. The argument to vxrelocd is the list of users to notify by e-mail when a relocation occurs (the default is root). To disable vxrelocd, you can place a “#” in front of the line in the S95vxvm-recover file.
A successful hot-relocation process involves:
• Failure detection: Detecting the failure of a disk, plex, or RAID-5 subdisk
• Notification: Notifying the system administrator and other designated users and identifying the affected Volume Manager objects
• Relocation: Determining which subdisks can be relocated, finding space for those subdisks in the disk group, and relocating the subdisks (The system administrator is notified of the success or failure of these actions. Hot relocation does not guarantee the same layout of data or the same performance after relocation.)
• Recovery: Initiating recovery procedures, if necessary, to restore the volumes and data (Again, the system administrator is notified of the recovery attempt.)
For more information, see the vxrelocd(1m) manual page.
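A quick, hedged way to confirm that hot relocation is active and to see who is notified; the startup file path shown is the usual Solaris location, and the additional recipient name admin1 is only an example:
# ps -ef | grep vxrelocd
# grep vxrelocd /etc/rc2.d/S95vxvm-recover
To notify an additional user, you would add that user after root on the vxrelocd line, for example vxrelocd root admin1, and restart the daemon.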
How Is Space Selected? • Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk. • If there is not enough spare disk space, a combination of spare disk space and free space is used. • If no disks have been designated as spares, VxVM uses any available free space in the disk group in which the failure occurs. • Free space that you exclude from hot relocation is not used.
How Is Space Selected for Relocation? When relocating subdisks, VxVM attempts to select a destination disk with the fewest differences from the failed disk: 1 Attempt to relocate to the same controller, target, and device as the failed drive. 2 Attempt to relocate to the same controller and target, but to a different device. 3 Attempt to relocate to the same controller, but to any target and any device. 4 Attempt to relocate to a different controller. 5 Potentially scatter the subdisks to different disks. A spare disk must be initialized and placed in a disk group as a spare before it can be used for replacement purposes. • Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk, if possible. • If no disks have been designated as spares, VxVM automatically uses any available free space in the disk group not currently on a disk used by the volume. • If there is not enough spare disk space, a combination of spare disk space and free space is used. Free space that you exclude from hot relocation is not used. In all cases, hot relocation attempts to relocate subdisks to a spare in the same disk group, which is physically closest to the failing or failed disk. When hot relocation occurs, the failed subdisk is removed from the configuration database. The disk space used by the failed subdisk is not recycled as free space.
Managing Spare Disks
VEA: Actions—>Set Disk Usage
vxdiskadm:
• “Mark a disk as a spare for a disk group”
• “Turn off the spare flag on a disk”
• “Exclude a disk from hot-relocation use”
• “Make a disk available for hot-relocation use”
CLI:
To designate a disk as a spare:
vxedit -g diskgroup set spare=on|off dm_name
To exclude or include a disk for hot relocation:
vxedit -g diskgroup set nohotuse=on|off dm_name
To force hot relocation to use only spare disks:
Add spare=only to /etc/default/vxassist
Managing Spare Disks
When you add a disk to a disk group, you can specify that the disk be added to the pool of spare disks available to the hot-relocation feature of VxVM. Any disk in the same disk group can use the spare disk. Try to provide at least one hot-relocation spare disk per disk group. While designated as a spare, a disk is not used in creating volumes unless you specifically name the disk on the command line.
Managing Spare Disks: VEA
Select: An initialized disk
Navigation path: Actions—>Set Disk Usage
Input: Turn disk usage tags on or off:
Spare: Designates the disk as a hot-relocation spare
No hot use: Excludes the disk from hot-relocation use
Reserved: Designates the disk as a reserved disk. A reserved disk is not considered part of the free space pool.
Reserved for Allocator: Designates the disk to be used for the Intelligent Storage Provisioning (ISP) feature of VxVM
Managing Spare Disks: vxdiskadm
Using vxdiskadm, you can set up the disk as a spare disk when you add a disk to a disk group. Alternatively, you can select the “Mark a disk as a spare for a disk group” option in the main menu. When prompted, enter the name of the disk to be marked as a spare.
To remove the spare designation from a disk, select the “Turn off the spare flag on a disk” option in the main menu. To exclude a disk from hot-relocation use, select the “Exclude a disk from hot-relocation use” option. If a disk was previously excluded from hot-relocation use, you can undo the exclusion and add the disk back to the hot-relocation pool by selecting the “Make a disk available for hot-relocation use” option in the main menu.
Note: A disk with the spare flag set is used only for hot relocation. Subsequent vxassist commands do not allocate a subdisk on that disk, unless you explicitly specify the disk in the argument of a vxassist command. To remove the spare designation for a disk: vxedit -g diskgroup set spare=off disk_media_name
To exclude a disk from hot-relocation use: vxedit -g diskgroup set nohotuse=on disk_media_name
To make a previously excluded disk available for hot relocation: vxedit -g diskgroup set nohotuse=off disk_media_name
You can force the hot-relocation feature to use only the disks marked as spare by adding the flag spare=only into the /etc/default/vxassist file. To display how much free space is on the spare disks in a disk group: # vxdg spare
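Putting these commands together, a minimal sketch that designates one disk as a spare, excludes another from hot relocation, and then displays the spare space in the disk group (datadg03 and datadg04 are example disk media names):
# vxedit -g datadg set spare=on datadg04
# vxedit -g datadg set nohotuse=on datadg03
# vxdg spare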
A spare disk is not the same as a reserved disk. You can reserve a set of disks for special purposes, such as to avoid general use of a particularly slow or a particularly fast disk. To reserve a disk for special purposes, you use the command: # vxedit set reserve=on diskname
After you type this command, vxassist does not allocate space from the selected disk unless that disk is specifically mentioned on the vxassist command line. For example, if disk disk03 is reserved, the command: # vxassist make vol03 20m disk03
overrides the reservation and creates a 20-MB volume on disk03. However: # vxassist make vol04 20m
does not use disk03, even if there is no free space on any other disk. To turn off reservation of a disk, you type: # vxedit set reserve=off diskname
Disk Replacement Tasks
1 Physical Replacement: Replace the corrupt disk with a new disk.
2 Logical Replacement:
• Replace the disk in VxVM.
• Start disabled volumes.
• Resynchronize mirrors.
• Resynchronize RAID-5 parity.
Replacing a Disk Disk Replacement Tasks Replacing a failed or corrupted disk involves both physically replacing the disk and then logically replacing the disk and recovering volumes in VxVM: • Disk replacement: When a disk fails, you replace the corrupt disk with a new disk. The disk used to replace the failed disk must be either an uninitialized disk or a disk in the free disk pool. The replacement disk cannot already be in a disk group. If you want to use a disk that exists in another disk group, then you must remove the disk from the disk group and place it back into the free disk pool before you can use it as the replacement disk. • Volume recovery: When a disk fails and is removed for replacement, the plex on the failed disk is disabled, until the disk is replaced. Volume recovery involves starting disabled volumes, resynchronizing mirrors, and resynchronizing RAID-5 parity. After successful recovery, the volume is available for use again. Redundant (mirrored or RAID-5) volumes can be recovered by VxVM. Nonredundant (unmirrored) volumes must be restored from backup. Note: This lesson only discusses disks that have failed completely. When hot relocation takes place, VxVM removes the disk from VxVM control and marks the disk as FAILED. Partial disk failure, that is, disks marked with a status of FAILING, is covered in another lesson.
Physically Replacing a Disk 1. Connect the new disk. 2. Ensure that the operating system recognizes the disk. 3. Get VxVM to recognize the disk: # vxdctl enable 4. Verify that VxVM recognizes the disk: # vxdisk list Note: In VEA, use Actions—>Rescan to run disk setup commands appropriate for the OS and ensure that VxVM recognizes newly attached hardware.
Adding a New Disk
1 Connect the new disk.
2 Get the operating system to recognize the disk:
Solaris: # devfsadm
         # prtvtoc /dev/dsk/device_name
HP-UX:   # ioscan -fC disk
         # insf -e
AIX:     # cfgmgr
         # lsdev -C -l device_name
Linux:   A reboot is required to recognize a new disk.
3 Get VxVM to recognize that a failed disk is now working again:
# vxdctl enable
4 Verify that VxVM recognizes the disk:
# vxdisk list
After the operating system and VxVM recognize the new disk, you can then use the disk as a replacement disk.
Note: In VEA, use the Actions—>Rescan option to run disk setup commands appropriate for the operating system. This option ensures that VxVM recognizes newly attached hardware.
Logically Replacing a Disk
VEA:
• Select the disk to be replaced.
• Select Actions—>Replace Disk.
vxdiskadm: “Replace a failed or removed disk”
CLI:
vxdg -k -g diskgroup adddisk disk_name=device_name
The -k option forces VxVM to take the disk media name of the failed disk and assign it to the new disk. Use with caution.
Example:
# vxdg -k -g datadg adddisk datadg01=c1t1d0s2
Replacing a Disk: VEA
Select: The disk to be replaced
Navigation path: Actions—>Replace Disk
Input: Select the disk to be used as the new (replacement) disk.
VxVM replaces the disk and attempts to recover volumes.
Replacing a Failed Disk: vxdiskadm
To replace a disk that has already failed or that has already been removed, you select the “Replace a failed or removed disk” option. This process creates a public and private region on the new disk and populates the private region with the disk media name of the failed disk.
Replacing a Disk: CLI
Assuming that hot relocation has already removed the failed disk, to replace a failed disk from the command line, you add the new disk in its place:
vxdg -k -g diskgroup adddisk disk_name=device_name
The -k switch forces VxVM to take the disk media name of the failed disk and assign it to the new disk. For example, if the failed disk datadg01 in the datadg disk group was removed, and you want to add the new device c1t1d0s2 as the replacement disk:
# vxdg -k -g datadg adddisk datadg01=c1t1d0s2
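Combining the physical and logical steps, a hedged end-to-end sketch for a disk that has failed completely (the device and disk media names follow the examples above and will differ on your system):
# vxdisk list
# vxdctl enable
# vxdg -k -g datadg adddisk datadg01=c1t1d0s2
# vxrecover -bs -g datadg
The first command identifies the failed disk media name, and the last command starts and recovers the affected volumes in the background.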
Unrelocating a Disk
VEA:
• Select the disk to be unrelocated.
• Select Actions—>Undo Hot Relocation.
vxdiskadm: “Unrelocate subdisks back to a disk”
CLI:
vxunreloc [-f] [-g diskgroup] [-t tasktag] [-n disk_name] orig_disk_name
• orig_disk_name: Original disk before relocation
• -n disk_name: Unrelocates to a disk other than the original
• -f: Forces unrelocation if exact offsets are not possible
Unrelocating a Disk
The vxunreloc Utility
The hot-relocation feature detects I/O failures in a subdisk, relocates the subdisk, and recovers the plex associated with the subdisk. VxVM also provides a utility that unrelocates a disk—that is, moves relocated subdisks back to their original disk. After hot relocation moves subdisks from a failed disk to other disks, you can return the relocated subdisks to their original disk locations after the original disk is repaired or replaced. Unrelocation is performed using the vxunreloc utility, which restores the system to the same configuration that existed before a disk failure caused subdisks to be relocated.
Unrelocating a Disk: VEA
Select: The original disk that contained the subdisks before hot relocation
Navigation path: Actions—>Undo Hot Relocation
Note: This option is only available after hot relocation or hot sparing has occurred.
Input: Select the disk that contained the subdisks before relocation occurred.
Note: It is not possible to return relocated subdisks to their original disks if their disk group’s relocation information has been cleared.
Unrelocating a Disk: vxdiskadm To unrelocate a disk using the vxdiskadm interface, select the “Unrelocate subdisks back to a disk” option in the main menu. When prompted, specify the disk media name of the original disk—that is, where the hot-relocated subdisks originally resided. Next, if you do not want to unrelocate the subdisks to the original disk, you can select a new destination disk. If moving subdisks to the original offsets is not possible, you can also choose the “force option” to unrelocate the subdisks to the specified disk, but not necessarily to the exact original offsets. Unrelocating a Disk: CLI vxunreloc [-f] [-g diskgroup] [-t tasktag] [-n disk_name] orig_disk_name • orig_disk_name is the disk where the relocated subdisks originally resided. • -n disk_name unrelocates to a disk other than the original disk. Use this option to specify a new disk media name. • -f is used if unrelocating to the original disk using the same offsets is not possible. This option forces unrelocation to different offsets.
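For example, after datadg01 has been repaired or replaced, the relocated subdisks can be returned as follows (datadg05 is a hypothetical alternate destination disk):
# vxunreloc -g datadg datadg01
# vxunreloc -g datadg -n datadg05 datadg01
Add -f if the original offsets are no longer available.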
Viewing Relocated Subdisks: CLI
When a subdisk is hot-relocated, its original disk media name is stored in the sd_orig_dmname field of the subdisk record files. You can search this field to find all the subdisks that originated from a failed disk using the vxprint command:
vxprint -g diskgroup -se 'sd_orig_dmname="disk_name"'
For example, to display all the subdisks that were hot-relocated from datadg01 within the datadg disk group:
# vxprint -g datadg -se 'sd_orig_dmname="datadg01"'
Recovering a Volume
VEA:
• Select the volume to be recovered.
• Select Actions—>Recover Volume.
CLI:
vxreattach [-bcr] [device_tag]
• Reattaches disks to a disk group if a disk has a transient failure, such as when a drive is turned off and then turned back on.
• -r attempts to recover stale plexes using vxrecover.
vxrecover [-bnpsvV] [-g diskgroup] [volume_name|disk_name]
# vxrecover -b -g datadg datavol
Recovering a Volume
Recovering a Volume: VEA
Select: The volume to be recovered
Navigation path: Actions—>Recover Volume
Input: When prompted, confirm that you want to recover the specified volume.
The vxreattach Command The vxreattach utility reattaches disks to a disk group and retains the same media name. This command attempts to find the name of the drive in the private region and to match it to a disk media record that is missing a disk access record. This operation may be necessary if a disk has a transient failure—for example, if a drive is turned off and then back on, or if the Volume Manager starts with some disk drivers unloaded and unloadable. vxreattach [-bcr] [device_tag] • -b performs the reattach operation in the background. • -c checks to determine if a reattach is possible. No operation is performed, but the disk group name and the disk media name at which the disk can be reattached are displayed. • -r attempts to recover stale plexes of any volumes on the failed disk by invoking vxrecover.
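For example, if a drive was powered off and then back on, a minimal sketch (the device name is only an example):
# vxreattach -c c1t1d0s2
# vxreattach -rb c1t1d0s2
The first command reports whether a reattach is possible; the second reattaches the disk in the background and recovers stale plexes.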
The vxrecover Command
To perform volume recovery operations from the command line, you use the vxrecover command. The vxrecover program performs plex attach, RAID-5 subdisk recovery, and resynchronize operations for specified volumes (volume_name), or for volumes residing on specified disks (disk_name). You can run vxrecover any time to resynchronize mirrors.
Note: The vxrecover command only works on a started volume. A started volume displays an ENABLED state in vxprint -ht. Recovery operations are started in an order that prevents two concurrent operations from involving the same disk. Operations that involve unrelated disks run in parallel.
vxrecover [-bnpsvV] [-g diskgroup] [volume_name|disk_name]
• -b performs recovery operations in the background. If used with -s, then volumes are started before recovery begins in the background.
• -n does not perform any recovery operations. If used with -s, then volumes are started, but no other actions are taken. If used with -p, then the only action of vxrecover is to print a list of startable volumes.
• -p prints the list of selected volumes that are startable.
• -s starts disabled volumes that are selected by the operation. With -s and -n, volumes are started, but no other recovery takes place.
• -v displays information about each task started by vxrecover. For recovery operations (as opposed to start operations), a completion status is printed when each task completes. The -V option displays more detailed information.
Examples After replacing the failed disk datadg01 in the datadg disk group, and adding the new disk c1t1d0s2 in its place, you can attempt to recover the volume datavol: # vxrecover -bs -g datadg datavol
To recover, in the background, any detached subdisks or plexes that resulted from replacement of the disk datadg01 in the datadg disk group: # vxrecover -b -g datadg datadg01
To monitor the operations during the recovery, you add the -v option: # vxrecover -v -g datadg datadg01
Configuration Backup and Restore
(The slide illustrates backing up the configuration of the engdg disk group and its volume vol01 to disk, and then restoring it in two stages, precommit and commit.)
Back up: vxconfigbackup diskgroup
Precommit: vxconfigrestore -p diskgroup
Commit: vxconfigrestore -c diskgroup
Protecting the VxVM Configuration The disk group configuration backup and restoration feature enables you to back up and restore all configuration data for disk groups, and for volumes that are configured within the disk groups. The vxconfigbackupd daemon monitors changes to the VxVM configuration and automatically records any configuration changes that occur. The vxconfigbackup utility is provided for backing up and restoring a VxVM configuration for a disk group. The vxconfigrestore utility is provided for restoring the configuration. The restoration process has two stages: precommit and commit. In the precommit stage, you can examine the configuration of the disk group that would be restored from the backup. The actual disk group configuration is not permanently restored until you choose to commit the changes. By default, VxVM configuration data is automatically backed up to the files: • /etc/vx/cbr/bk/diskgroup.dgid/dgid.dginfo • /etc/vx/cbr/bk/diskgroup.dgid/dgid.diskinfo • /etc/vx/cbr/bk/diskgroup.dgid/dgid.binconfig • /etc/vx/cbr/bk/diskgroup.dgid/dgid.cfgrec Configuration data from a backup enables you to reinstall private region headers of VxVM disks in a disk group, re-create a corrupted disk group configuration, or recreate a disk group and the VxVM objects within it. You can also use the
configuration data to re-create the disk group on another system. However, restoration of a disk group configuration requires that the same physical disks are used as were configured in the disk group when the backup was taken. Backing Up a Disk Group Configuration To manually back up a disk group configuration: # vxconfigbackup diskgroup
To back up all disk group configurations: # vxconfigbackup
Restoring a Disk Group Configuration To perform a precommit analysis of the state of a disk group configuration and reinstall corrupted disk headers: # vxconfigrestore -p [-l directory] diskgroup
directory is the location of the backup configuration files, if other than the default location. The disk group can be specified by either the disk group name or disk group ID. If disks have been replaced and conflicting backups exist for a disk group, you should use the disk group ID to identify the disk group in the command. To specify that disk headers not be reinstalled at the precommit stage: # vxconfigrestore -n [-l directory] diskgroup
To abandon restoration at the precommit stage: # vxconfigrestore -d [-l directory] diskgroup
To commit changes required to restore the disk group configuration: # vxconfigrestore -c [-l directory] diskgroup
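Putting these commands together, a minimal end-to-end sketch for the datadg disk group, assuming the backup files are in the default /etc/vx/cbr/bk location:
# vxconfigbackup datadg
# vxconfigrestore -p datadg
# vxconfigrestore -c datadg
Review the configuration reported at the precommit stage before committing; to abandon the restoration instead, use vxconfigrestore -d datadg.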
Summary
You should now be able to:
• Describe mirror resynchronization processes.
• Describe the hot-relocation process.
• Manage spare disks.
• Replace a failed disk.
• Return relocated subdisks back to their original disk.
• Recover a volume.
• Describe tasks used to protect the VxVM configuration.
Summary This lesson introduced basic recovery concepts and techniques. This lesson described how data consistency is maintained after a system crash and how hot relocation restores redundancy to failed VxVM objects. This lesson also described how to manage spare disks, replace a failed disk, and recover a volume. Additional Resources • VERITAS Volume Manager Administrator’s Guide This guide provides detailed information on procedures and concepts involving volume management and system administration using VERITAS Volume Manager. • VERITAS Volume Manager User’s Guide—VERITAS Enterprise Administrator This guide describes how to use the VERITAS Enterprise Administrator graphical user interface for VERITAS Volume Manager. • VERITAS Volume Manager Release Notes This document provides software version release information for VERITAS Volume Manager and VERITAS Enterprise Administrator.
Lab 8 Lab 8: Recovery Essentials • In this lab, you perform a variety of basic recovery operations. • Lab instructions are in Appendix A. • Lab solutions are in Appendix B.
Lab 8: Recovery Essentials To Begin This Lab To begin the lab, go to Appendix A, “Lab Exercises.” Lab solutions are contained in Appendix B, “Lab Solutions.”
Appendix A Lab Exercises
Lab 1: Introducing the Lab Environment
Introduction
In this lab, you are introduced to the lab environment, the systems, and disks that you will use throughout this course. You will also record some prerequisite information that will prepare you for the installation of VxVM and the labs that follow throughout this course.
Lab Environment Introduction
The instructor will describe the classroom environment, review the configuration and layout of the systems, and assign disks for you to use. The content of this activity depends on the type of classroom, hardware, and the operating system(s) deployed.
Lab Prerequisites
Record the following information to be provided by your instructor:
root password:
Host name:
My Boot Disk:
My Data Disks:
Location of VERITAS Volume Manager packages:
Location of VERITAS File System packages:
Location of Lab Scripts (if any):
Location of VERITAS Storage Foundation license keys:
Lab 2: Installation and Interfaces
Introduction
In this exercise, you add the VxVM packages, install VERITAS Volume Manager, and install VERITAS File System. You also explore the VxVM user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command line interface.
Preinstallation
1 Determine if there are any VRTS packages currently installed on your system.
2 Before installing VxVM, save the following important system files into backup files named with a “.preVM” extension. Also, save your boot disk information to a file for later use (do not store the file in /tmp). You may need the boot disk information when you bring the boot disk under VxVM control in a later lab.
3 Are any VERITAS license keys installed on your system? Check for installed licenses.
Installing VERITAS Volume Manager
1 Navigate to the directory containing the VxVM installation script. Ask your instructor for the location of the script. Using the VERITAS Volume Manager installation script, run a precheck to determine if your system meets all preinstallation requirements. If any requirements are not met, follow the instructions to take any required actions before you continue.
2 Using the VERITAS Volume Manager installation script, install and perform initial configuration of VxVM. During the installation:
– If you do not have Storage Foundation licenses on your system, install licenses when prompted. Your instructor will provide licensing information.
– Install all optional and required packages.
– Do not use enclosure-based naming.
– Do not set a default disk group.
– Start VxVM.
3 Check in /.profile to ensure that the following paths are present. Note: This may be done in the jumpstart of your system prior to this lab, but the paths may need to be added after a normal install.
# PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSob/bin:/usr/sbin
# MANPATH=$MANPATH:/opt/VRTS/man
# export PATH MANPATH
Installing VERITAS File System
1 Navigate to the directory containing the VxFS installation script. Ask your instructor for the location of the script. Using the VERITAS File System installation script, install the VERITAS File System software and documentation on your system.
2 Reboot the system.
3 Check in /.profile to ensure that the following paths are present on your system.
/opt/VRTS/bin
/opt/VRTSvxfs/sbin
Setting Up VERITAS Enterprise Administrator
1 Is the VEA server running? If not, start it.
2 Start the VEA graphical user interface.
Note: On some systems, you may need to configure the system to use the appropriate display. For example, if the display is pc1:0, before you run VEA, type:
# DISPLAY=pc1:0
# export DISPLAY
It is also important that the display itself is configured to accept connections from your client. If you get permission errors when you try to start VEA, in a terminal window on the display system, type:
xhost system or xhost +
3 Connect to your system as root. Your instructor provides you with the password.
4 Examine the VEA log file.
5 Access the Help system in VEA.
6 What disks are available to the OS?
7 Execute the Disk Scan command.
8 What commands were executed by the Disk Scan task?
9 Stop the Volume Manager’s graphical interface.
10 Create a root equivalent administrative account named admin1 for use of VEA.
11 Test the new account. After you have tested the new account, exit VEA.
Exploring vxdiskadm
1 From the command line, invoke the text-based VxVM menu interface.
2 Display information about the menu or about specific commands.
3 What disks are available to the OS?
4 Exit the vxdiskadm interface.
Accessing CLI Commands (Optional)
Note: This exercise introduces several commonly used VxVM commands. These commands and associated concepts are explained in detail throughout this course. If you have used Volume Manager before, you may already be familiar with these commands. If you are new to Volume Manager, you should start by reading the manual pages for these commands.
1 From the command line, invoke the VxVM manual pages and read about the vxassist command.
2 What vxassist command parameter creates a VxVM volume?
3 From the command line, invoke the VxVM manual pages and read about the vxdisk command.
4 What disks are available to VxVM?
5 From the command line, invoke the VxVM manual pages and read about the vxdg command.
6 How do you list locally imported disk groups?
7 From the command line, invoke the VxVM manual pages and read about the vxprint command.
More Installation Exploration (Optional)
1 When does the VxVM license expire?
2 What is the version and revision number of the installed version of VxVM?
3 What daemons are running after the system boots under VxVM control?
Lab 3: Managing Disks and Disk Groups
Introduction
In this lab, you create new disk groups, add and remove disks from disk groups, deport and import disk groups, and destroy disk groups. The first exercise uses the VEA interface. The second exercise uses the command line interface. If you have time, you can also try to perform one of these exercises by using the vxdiskadm interface. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Caution: In this lab, do not include the boot disk in any of the tasks.
Managing Disks and Disk Groups: VEA
1 Run and log on to the VEA interface.
2 View all the disk devices on the system.
3 Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg. View all the disk devices on the system.
4 Add one more disk to your disk group. Initialize the disk and view all the disk devices on the system.
5 Remove all of the disks from your disk group. What happens to your disk group?
6 Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
7 Deport your disk group. Do not give it a new owner. View all the disk devices on the system.
8 Import your datadg disk group and view all the disk devices on the system.
9 Deport datadg and assign your machine name, for example, train5, as the New Host.
10 Import the disk group and change its name to data3dg. View all the disk devices on the system. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data3dg and data4dg.
11 Deport the disk group data3dg by assigning the ownership to anotherhost. View all the disk devices on the system. Why would you do this?
12 Import data3dg. Were you successful?
13 Now import data3dg and overwrite the disk group lock. What did you have to do to import it and why?
14 Destroy data3dg. View all the disk devices on the system.
Managing Disks and Disk Groups: CLI
1 View the status of the disks on your system.
2 Add one uninitialized disk to the free disk pool and view the status of the disk devices to verify your action.
3 Remove the disk from the free disk pool and return the disk to an uninitialized state. View the status of the disk devices to verify your action.
4 Add four data disks to the free disk pool as sliced disks and view the status of the disk devices to verify your action. Note: It is important to create sliced disks and use a non-CDS disk group as specified in the instructions, so that you can practice upgrading the disk group version later in this exercise.
5 Create a non-CDS disk group data4dg with at least one drive. Verify your action.
6 Deport disk group data4dg, then import the disk group back to your machine. Verify your action.
7 Destroy the disk group data4dg. Verify your action.
8 Create a new non-CDS disk group data4dg with an older disk group version assigned to it. Verify your action.
9 Upgrade the disk group to version 60.
10 How would you check that you have upgraded the version?
11 Add two more disks to the disk group data4dg. You should now have three disks in your disk group. Verify your action.
12 Remove a disk from the disk group data4dg. Verify your action.
13 Deport disk group data4dg and assign the host name as the host name of your machine. Verify your action.
14 View the status of the disks in the deported disk group using vxdisk list device_tag. What is in the hostid field?
15 Remove a disk from data4dg. Why does this fail?
16 Import the disk group data4dg. Verify your action.
17 Try again to remove a disk from data4dg. Does it work this time?
18 Deport the disk group data4dg and do not assign a host name. Verify your action.
19 View the status of the disk in the deported disk group using vxdisk list device_tag. What is in the hostid field?
20 Uninitialize a disk that is in data4dg. Were you successful?
21 Import the disk group data4dg. Were you successful?
22 Destroy the disk group and send any initialized disks back to an uninitialized state.
Lab 4: Creating Volumes
Introduction
In this lab, you create simple concatenated volumes, striped volumes, mirrored volumes, and volumes with logs. You also practice creating a RAID-5 volume and a layered volume. Attempt to perform the first exercise using command-line interface commands. Solutions for performing tasks from the command line and using the VERITAS Enterprise Administrator (VEA) are included in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands. After each step, use the VEA interface to view the volume layout in the main window and in the Volume View window.
Setup
A minimum of four disks is required to perform this lab, not including the root disk.
Creating Volumes
1 Add four initialized disks to a disk group called datadg. Verify your action using vxdisk -o alldgs list. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg.
2 Create a 50-MB concatenated volume with one drive.
3 Display the volume layout. What names have been assigned to the plex and subdisks?
4 Remove the volume.
5 Create a 50-MB striped volume on two disks and specify which two disks to use in creating the volume. What names have been assigned to the plex and subdisks?
6 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. What do you notice about the plexes?
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use. Was the volume created?
8 Create a 20-MB striped volume with a mirror that has one less column (3) than the number of drives. Was the volume created?
9 Create the same volume specified in step 7, but without the mirror. What names have been assigned to the plex and subdisks?
10 Create a 100-MB RAID-5 volume. Set the number of columns to the number of drives in the disk group. Was the volume created? Run the command again, but use one less column. What is different about the structure?
11 Remove the volumes created in this exercise.
Creating Layered Volumes
Complete this exercise by using the VEA interface.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 First, remove any volumes that you created in the previous lab.
2 Create a 100-MB Striped Mirrored volume with no logging. What command was used to create this volume? Hint: View the task properties.
3 Create a Concatenated Mirrored volume with no logging. The size of the volume should be greater than the size of the largest disk in the disk group; for example, if your largest disk is 8 GB, then create a 10-GB volume. What command was used to create this volume?
4 View the volumes in VEA and compare the layouts.
5 View the volumes from the command line.
6 Remove all of the volumes.
Creating Volumes with User Defaults (Optional)
This optional guided practice illustrates how to use the /etc/default/vxassist and /etc/default/alt_vxassist files to create volumes with defaults specified by the user.
1 Create two files in /etc/default:
# cd /etc/default
a Using the vi editor, create a file called vxassist that includes the following:
# when mirroring create three mirrors
nmirror=3
b Using the vi editor, create a file called alt_vxassist that includes the following:
# use 256K as the default stripe unit size for
# regular volumes
stripeunit=256k
2 Use these files when creating the following volumes:
Create a 100-MB volume using layout=mirror:
# vxassist -g datadg make testvol 100m layout=mirror
Create a 100-MB, two-column stripe volume using -d alt_vxassist so that Volume Manager uses the default file:
# vxassist -g datadg -d alt_vxassist make testvol2 100m layout=stripe
3 View the layout of these volumes using VEA and by using vxprint. What do you notice?
4 Remove any vxassist default files that you created in this optional lab section. The presence of these files can impact subsequent labs where default behavior is assumed.
Lab 5: Configuring Volumes Introduction This lab provides additional practice in configuring volume attributes. In this lab, you add mirrors, logs, and file systems to existing volumes, change the volume read policy, and specify ordered allocation of storage to volumes. You also practice file system administration. Setup Before you begin this lab, ensure that any volumes created in previous labs have been removed. Create a new disk group that contains four disks only. Configuring Volume Attributes Complete this exercise by using the command line interface. If you use object names other than the ones provided, substitute the names accordingly in the commands. Solutions for performing these tasks from the command line and using VEA are described in the Lab Solutions appendix. 1
Create a 20-MB, two-column striped volume with a mirror.
2
Display the volume layout. How are the disks allocated in the volume? Which disk devices are used?
3
Remove the volume you just made, and re-create it by specifying the four disks in an order different from the original layout.
4
Display the volume layout. How are the disks allocated this time?
5
Add a mirror to the existing volume. Were you successful? Why or why not?
6
Remove one of the two mirrors, and display the volume layout.
7
Add a mirror to the existing volume, and display the volume layout.
8
Add a dirty region log to the existing volume and specify the disk to use for the DRL. Display the volume layout.
9
Change the volume read policy to round robin, and display the volume layout.
10 Create a file system for the existing volume.
11 Mount the file system at the mount point /mydirectory and add files. Verify
that the files were added to the new volume.
12 View the mount points using df -k.
Using the VEA interface, open the Volume to Disk Mapping window and display the subdisk information for each disk.
13 Unmount and remove the volume with the file system.
VERITAS File System Administration This lab ensures that you are able to use basic VERITAS File System administrative commands from the command line. Setting Up a VERITAS File System Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands. 1 Create a 500-MB striped volume named datavol in the disk group datadg and use the default number of columns and stripe unit size. 2
Create a VERITAS file system on the datavol volume using the default options.
3
Create a mount point /datamnt on which to mount the file system.
4
Mount the newly created file system on the mount point, and use all default options.
5
Using the newly created file system, create, modify, and remove files.
6
Display the content of the mount point directory, showing hidden entries, inode numbers, and block sizes of the files.
7
What is the purpose of the lost+found directory?
8
How many disk blocks are defined within the file system and are used by the file system?
9
Unmount the file system.
10 Mount and, if necessary, check the file system at boot time.
11 Verify that the mount information has been accepted.
12 Display details of the file system that were set when it was created.
13 Check the structural integrity of the file system using the default log policy.
14 Remove the volume that you created for this lab.
Defragmenting a VERITAS File System In this exercise, you monitor and defragment a file system by using the fsadm command. Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands. 1 Create a new 2-GB volume with a VxFS file system mounted on /fs_test. 2
Repeatedly copy /opt to the file system using a new target directory name each time until the file system is approximately 85 percent full. # for i in 1 2 3 > do > cp -r /opt /fs_test/opt$i > done
3
Delete all files over 10 MB in size.
4
Check the level of fragmentation in the file system.
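One way to generate the fragmentation reports is with the reporting options of the VxFS fsadm command; this is a sketch that assumes the /fs_test mount point used in this exercise:
# fsadm -F vxfs -D -E /fs_test (-D reports directory fragmentation, -E reports extent fragmentation)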
5
Repeat steps 2 and 3, using the values 4 and 5 for i in the loop. Fragmentation of both free space and directories will result.
6
Repeat step 2, using the values 6 and 7 for i. Then delete all files that are smaller than 64K to release a reasonable amount of space.
7
Defragment the file system and display the results. Run fragmentation reports both before and after the defragmentation and display summary statistics after each pass. Compare the fsadm report from step 4 with the final report from the last pass in this step.
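As a sketch of one possible invocation (again assuming the /fs_test mount point), the reorganization and reporting options of fsadm can be combined so that fragmentation reports and summary statistics are printed with each pass:
# fsadm -F vxfs -e -d -E -D -s /fs_test (-e and -d reorganize extents and directories, -E and -D print fragmentation reports, -s prints summary statistics)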
8
Unmount the file systems and remove the volumes used in this lab.
Lab 6: Reconfiguring Volumes Online Introduction In this lab, you create and resize volumes and change volume layouts. You also explore the Storage Expert utility. Setup To perform this lab, you should have at least four disks in the disk group that you are using. You can use either the VEA interface or the command line interface, whichever you prefer. The solutions for both methods are covered in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands. Note: If you are using VEA, view the properties of the related task after each step to view the underlying command that was issued. Resizing a Volume 1 If you have not already done so, remove the volumes created in the previous lab. 2
Create a 20-MB concatenated mirrored volume with a file system /myfs, and mount the volume.
3
View the layout of the volume.
4
Add data to the volume and verify that the file has been added.
5
Expand the file system and volume to 100 MB.
Changing the Volume Layout 1 Change the volume layout from its current layout (mirrored) to a nonlayered mirror-stripe with two columns and a stripe unit size of 128 sectors (64K). Monitor the progress of the relayout operation, and display the volume layout after each command that you run.
2
Verify that the file is still accessible.
3
Unmount the file system on the volume and remove the volume.
Resizing a File System Only Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands. 1 Create a 50-MB volume named reszvol in the disk group datadg by using the VERITAS Volume Manager utility vxassist. 2
Create a VERITAS file system on the volume by using the mkfs command. Specify the file system size as 40 MB.
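For example, one possible form of the command, assuming the reszvol volume in datadg and that the file system size is given in 512-byte sectors (40 MB = 81920 sectors):
# mkfs -F vxfs /dev/vx/rdsk/datadg/reszvol 81920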
3
Create a mount point /reszmnt on which to mount the file system.
4
Mount the newly created file system on the mount point /reszmnt.
5
Verify disk space using the df command. Observe that the available space is smaller than the size of the volume.
6
Expand the file system to the full size of the underlying volume using the fsadm -b newsize option.
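A minimal sketch, assuming the file system is mounted at /reszmnt and that the 50-MB volume size is expressed in 512-byte sectors (50 MB = 102400 sectors):
# fsadm -F vxfs -b 102400 /reszmnt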
7
Verify disk space using the df command.
8
Make a file on the file system mounted at /reszmnt (using mkfile), so that the free space is less than 50 percent of the total file system size.
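For example, assuming the file system is now roughly 50 MB, a single large file created with mkfile is enough to drop free space below 50 percent; the file name is arbitrary:
# mkfile 30m /reszmnt/bigfile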
9
Shrink the file system to 50 percent of its current size. What happens?
10 Unmount the file system and remove the volume.
Using the Storage Expert Utility 1 Add the directory containing the Storage Expert rules to your PATH environment variable in your .profile file. 2
Display a description of Storage Expert rule vxse_drl1. What does this rule do?
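As a sketch (the rule directory is an assumption; on many installations the Storage Expert rules are located in /opt/VRTS/vxse/vxvm), after adding that directory to your PATH, you can display the rule description with the info keyword:
# vxse_drl1 info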
3
Does Storage Expert rule vxse_drl1 have any user-settable parameters?
4
From the command line, create a 100-MB mirrored volume with no log. Create and mount a file system on the volume.
5
Run Storage Expert rule vxse_drl1 on the disk group containing the volume. What does Storage Expert report?
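One possible form of the command, assuming your disk group is named datadg:
# vxse_drl1 -g datadg run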
6
Expand the volume to a size of 1 GB.
7
Run Storage Expert rule vxse_drl1 again on the disk group containing the volume. What does Storage Expert report?
8
Add a log to the volume.
9
Run Storage Expert rule vxse_drl1 again on the disk group containing the volume. What does Storage Expert report?
10 What are the attributes and parameters that Storage Expert uses in running the vxse_drl1 rule?
11 Shrink the volume to 100 MB and remove the log.
12 Run Storage Expert rule vxse_drl1 again. When running the rule, specify that you want Storage Expert to test the mirrored volume against a mirror_threshold of 100 MB. What does Storage Expert report?
13 Unmount the file system and remove the volume used in this exercise.
Monitoring Tasks (Optional) Objective: In this advanced section of the lab, you track volume relayout processes using the vxtask command and recover from a vxrelayout crash by using VEA or from the command line. Setup: You should have at least four disks in the disk group that you are using.
1
Create a mirror-stripe volume with a size of 1 GB using the vxassist command. Assign a task tag to the task and run the vxassist command in the background.
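A sketch of one way to do this, assuming the disk group datadg and the hypothetical volume name taskvol and task tag mytask:
# vxassist -g datadg -t mytask make taskvol 1g layout=mirror-stripe &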
2
View the progress of the task.
3
Slow down the task progress rate to insert an I/O delay of 100 milliseconds. View the layout of the volume in the VEA interface.
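For example, assuming the task tag mytask from the previous step (the slow attribute inserts the requested per-I/O delay, if supported by your VxVM version):
# vxtask -l list
# vxtask set slow=100 mytask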
4
After the volume has been created, use vxassist to relayout the volume to stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the process to the above task tag.
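One possible form, again assuming the datadg disk group, the taskvol volume, and the mytask tag:
# vxassist -g datadg -t mytask relayout taskvol layout=stripe-mirror ncol=2 stripeunit=256k &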
5
In another terminal window, abort the task to simulate a crash during relayout. View the layout of the volume in the VEA interface.
6
Reverse the relayout operation. View the layout of the volume in the VEA interface.
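A sketch of the reversal, assuming datadg and the taskvol volume; the status keyword shows how far the relayout had progressed before the abort:
# vxrelayout -g datadg status taskvol
# vxrelayout -g datadg reverse taskvol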
7
Remove all of the volumes.
Lab 7: Encapsulation and Rootability Introduction In this practice, you create a boot disk mirror, disable the boot disk, and boot up from the mirror. Then you boot up again from the boot disk, break the mirror, and remove the boot disk from the boot disk group. Finally, you reencapsulate the boot disk and re-create the mirror. These tasks are performed using a combination of the VEA interface, the vxdiskadm utility, and CLI commands. Encapsulation and Root Disk Mirroring 1 Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of your boot disk group and use rootdisk as the name of your boot disk.
2
After the reboot, use vxdiskadm to add a disk that will be used for the mirror of rootdisk. If your system has two internal disks, use the second internal disk on your system for the mirror. (This is required due to the nature of the classroom configuration.) When setting up the disk, make sure that the disk layout is sliced. Use altboot as the name of your disk.
3
Next, use vxdiskadm to mirror your system disk, rootdisk, to the disk that you added, altboot.
4
After the mirroring operation is complete, verify that you now have two disks in systemdg: rootdisk and altboot, and that all volumes are mirrored. In what order are the volumes mirrored? Check to determine whether rootvol is enabled and active. Hint: Use vxprint and examine the STATE fields.
5
From the command line, set the eeprom variable to enable VxVM to create a device alias in the openboot program.
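On Solaris, this is typically done by enabling the NVRAM run commands; a sketch, assuming no other nvramrc customization is in place on your system:
# eeprom "use-nvramrc?=true"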
6
To disable the boot disk and make rootvol-01 disabled and offline, use the vxmend command. This command is used to make changes to configuration records. Here, you are using the command to place the plex in an offline state. For more information about this command, see the vxmend (1m) manual page. # vxmend -g systemdg off rootvol-01
7
Verify that rootvol-01 is now disabled and offline.
8
To change the plex to a STALE state, run the vxmend on command on rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE state.
# vxmend -g systemdg on rootvol-01 9
Reboot the system using init 6.
10 At the OK prompt, check for available boot disk aliases.
OK> devalias
Use the boot disk alias vx-altboot to boot up from the alternate boot disk. For example: OK> boot vx-altboot
11 Verify that rootvol-01 is now in the ENABLED and ACTIVE state. Note: You may need to wait a few minutes for the state to change from STALE to ACTIVE. You have successfully booted up from the mirror.
12 To boot up from the original boot disk, reboot again using init 6.
You have now booted up from the original boot disk.
13 Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt, and home (that is, remove the newer plex from each volume in systemdg).
14 Run the command to convert the root volumes back to disk partitions.
15 Shut down the system when prompted.
16 Verify that the mount points are now slices rather than volumes.
17 At the end of this lab, leave your boot disk unencapsulated and remove any other disks from systemdg.
Lab 8: Recovery Essentials Introduction In this practice, you perform a variety of basic recovery operations. Perform this lab by using the command line interface. In some of the steps, the commands are provided for you. Setup For this lab, you should have at least four disks (datadg01 through datadg04) in a disk group called datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands. Exploring Logging Behavior 1 Create two mirrored, concatenated volumes, 500 MB in size, called vollog and volnolog. 2
Add a log to the volume vollog.
3
Create a file system on both volumes.
4
Create mount points for the volumes, /vollog and /volnolog.
5
Copy /etc/vfstab to a file called origvfstab.
6
Edit /etc/vfstab so that vollog and volnolog are mounted automatically on reboot. (In the /etc/vfstab file, each entry should be separated by a tab.)
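As an example of the entry format (one tab-separated line per volume), assuming the datadg disk group and VxFS file systems:
/dev/vx/dsk/datadg/vollog /dev/vx/rdsk/datadg/vollog /vollog vxfs 2 yes -
/dev/vx/dsk/datadg/volnolog /dev/vx/rdsk/datadg/volnolog /volnolog vxfs 2 yes -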
7
Type mountall to mount the vollog and volnolog volumes.
8
As root, start an I/O process on each volume. For example: # find /usr -print | cpio -pmud /vollog & # find /usr -print | cpio -pmud /volnolog &
9
Press Stop-A. At the OK prompt, type boot.
10 After the system is running again, check the state of the volumes to ensure that neither of the volumes is in the sync/needsync mode.
11 Run the vxstat command. This utility displays statistical information about volumes and other VxVM objects. For more information on this command, see the vxstat(1m) manual page. # vxstat -g diskgroup -fab vollog volnolog
The output shows how many I/Os it took to resynchronize the mirrors. Compare the number of I/Os for each volume. What do you notice?
12 Stop the VxVM configuration daemon.
13 Create a 100-MB mirrored volume. What happens?
14 Start the VxVM configuration daemon.
15 Unmount both file systems and remove the volumes vollog and volnolog.
16 Restore your original vfstab file.
Removing a Disk from VxVM Control 1 Create a 100-MB, mirrored volume named recvol. Create and mount a file system on the volume. 2
Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume.
Disk 1: Device ______________ Disk Media Name ______________
Disk 2: Device ______________ Disk Media Name ______________
3
Remove one of the disks that is being used by the volume.
4
Confirm that the disk was removed.
5
From the command line, check that the state of one of the plexes is DISABLED and REMOVED. In VEA, the disk is shown as disconnected, because one of the plexes is unavailable.
6
Place the disk back into the disk group.
7
Check the status of the disks. What is the status of the disks?
8
Display volume information. What is the state of the plexes?
9
In VEA, what is the status of the disks? What is the status of the volume?
10 From the command line, recover the volume. During and after recovery, check the status of the plex in another command window and in VEA.
11 At the end of this lab, destroy your disk group and send your data disks back to an uninitialized state. In the next exercises, you will use sliced disks and non-CDS disk groups to practice recovery operations.
Replacing Physical Drives (Without Hot Relocation) 1 For this exercise, initialize four disks as sliced disks. Place the disks in a non-CDS disk group named datadg. Create a 100-MB mirrored volume, recvol, in the disk group, add a VxFS file system to the volume, and mount the file system at the mount point /recvol.
2
Stop vxrelocd using ps and kill, in order to stop hot relocation from taking place. Verify that the vxrelocd processes are killed before you continue. Note: There are two vxrelocd processes. You must kill both of them at the same time.
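For example (the process IDs shown by ps will differ on your system; kill both in a single command):
# ps -ef | grep vxrelocd
# kill pid1 pid2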
3
Next, you simulate disk failure by removing the public and private regions of one of the disks in the volume. In the commands, substitute the appropriate disk device name for one of the disks in use by recvol, for example c1t2d0s2. # fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2 # fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
4
An error occurs when you start I/O to the volume. You can view the error on the console or with tail -f /var/adm/messages. A summary of the failure is also mailed to root and can be viewed in /var/mail/root. Start I/O to the volume using the command: # dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/recvol &
5
When the error occurs, view the status of the disks from the command line.
6
View the status of the volume from the command line.
7
In VEA, what is the status of the disks and volume?
8
Rescan for all attached disks:
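A sketch of one way to do this (vxdctl enable forces VxVM to rescan for attached devices):
# vxdctl enable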
9
Recover the disk by replacing the private and public regions on the disk: Note: This method for recovering the disk is only used because of the way in which the disk was failed (by writing over the private and public regions). In most real-life situations, you do not need to perform this step.
10 Bring the disk back under VxVM control.
11 Check the status of the disks and the volume.
12 From the command line, recover the volume.
13 Check the status of the disks and the volume to ensure that the disk and volume are fully recovered.
14 Unmount the file system and remove the volume.
Exploring Spare Disk Behavior 1 You should have four sliced disks (datadg01 through datadg04) in the non-CDS disk group datadg. Set all disks to have the spare flag on. 2
Create a 100-MB mirrored volume called sparevol. Is the volume successfully created? Why or why not?
3
Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks.
4
Remove the volume.
5
Verify that the relocation daemon (vxrelocd) is running. If not, start it as follows: # vxrelocd root &
6
Remove the spare flags from three of the four disks.
7
Create a 100-MB concatenated mirrored volume called spare2vol.
8
Save the output of vxprint -thf to a file.
9
Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume. You are going to simulate disk failure on one of the disks. Decide which disk you are going to fail. Open a console screen.
Disk 1: Device Name ______________ Disk Media Name ______________
Disk 2: Device Name ______________ Disk Media Name ______________
10 Next, you simulate disk failure by removing the public and private regions of
one of the disks in the volume. In the commands, substitute the appropriate disk device name: # fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2 # fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2
11 An error occurs when you start I/O to the volume. You can view the error on the console or with tail -f /var/adm/messages. A summary of the failure is also mailed to root and can be viewed in /var/mail/root.
Start I/O to the volume using the command: # dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/volume_name &
12 Run vxprint -rth and compare the output to the vxprint output that you
saved earlier. What has occurred?
13 In VEA, view the disks. Notice that the disk is in the disconnected state.
14 Run vxdisk -o alldgs list. What do you notice?
15 Rescan for all attached disks.
16 In VEA, view the status of the disks and the volume.
17 View the status of the disks and the volume from the command line.
18 Recover the disk by replacing the private and public regions on the disk.
19 Bring the disk back under VxVM control and into the disk group.
20 In VEA, undo hot relocation for the disk.
21 Wait until the volume is fully recovered before continuing. Check to ensure that the disk and the volume are fully recovered.
22 Reboot and then remove the volume.
23 Turn off any spare flags on your disks that you set during this lab.
Appendix B Lab Solutions
Lab 1 Solutions: Introducing the Lab Environment Introduction In this lab, you are introduced to the lab environment, the systems, and disks that you will use throughout this course. You will also record some prerequisite information that will prepare you for the installation of VxVM and the labs that follow throughout this course. Lab Environment Introduction The instructor will describe the classroom environment, review the configuration and layout of the systems, and assign disks for you to use. The content of this activity depends on the type of classroom, hardware, and the operating system(s) deployed. Lab Prerequisites Record the following information to be provided by your instructor: root password Host name My Boot Disk: My Data Disks:
Location of VERITAS Volume Manager packages: Location of VERITAS File System packages: Location of Lab Scripts (if any): Location of VERITAS Storage Foundation license keys:
Lab 2 Solutions: Installation and Interfaces Introduction In this exercise, you add the VxVM packages, install VERITAS Volume Manager, and install VERITAS File System. You also explore the VxVM user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command line interface. Preinstallation 1 Determine if there are any VRTS packages currently installed on your system. Solaris
# pkginfo | grep -i VRTS
HP-UX
# swlist -l product | grep VRTS
AIX
# lslpp -l 'VRTS*'
2
Before installing VxVM, save the following important system files into backup files named with a “.preVM” extension. Also, save your boot disk information to a file for later use (do not store the file in /tmp). You may need the boot disk information when you bring the boot disk under VxVM control in a later lab.
Solaris
# cp /etc/system /etc/system.preVM
# cp /etc/vfstab /etc/vfstab.preVM
# prtvtoc /dev/rdsk/device_name > /etc/bootdisk.preVM
AIX
# cp /etc/filesystems /etc/filesystems.preVM
# cp /etc/vfs /etc/vfs.preVM
3
Are any VERITAS license keys installed on your system? Check for installed licenses. # vxlicrep
Installing VERITAS Volume Manager 1 Navigate to the directory containing the VxVM installation script. Ask your instructor for the location of the script. Using the VERITAS Volume Manager installation script, run a precheck to determine if your system meets all preinstallation requirements. If any requirements are not met, follow the instructions to take any required actions before you continue. # installvm -precheck system
2
Using the VERITAS Volume Manager installation script, install and perform initial configuration of VxVM. During the installation: – If you do not have Storage Foundation licenses on your system, install licenses when prompted. Your instructor will provide licensing information. – Install all optional and required packages. – Do not use enclosure-based naming. – Do not set a default disk group. – Start VxVM. # installvm
3
Check in /.profile to ensure that the following paths are present. Note: This may be done in the jumpstart of your system prior to this lab, but the paths may need to be added after a normal install. # PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSob/bin:/usr/sbin # MANPATH=$MANPATH:/opt/VRTS/man # export PATH MANPATH
Installing VERITAS File System
1 Navigate to the directory containing the VxFS installation script. Ask your instructor for the location of the script. Using the VERITAS File System installation script, install the VERITAS File System software and documentation on your system. # installfs
2 Reboot the system. # shutdown -y -i6 -g0
3 Check in /.profile to ensure that the following paths are present on your system. /opt/VRTS/bin /opt/VRTSvxfs/sbin
Setting Up VERITAS Enterprise Administrator
1 Is the VEA server running? If not, start it. # vxsvc -m (to confirm that the server is running) # vxsvc (if the server is not already running)
2
Start the VEA graphical user interface. # vea
Note: On some systems, you may need to configure the system to use the appropriate display. For example, if the display is pc1:0, before you run VEA, type: # DISPLAY=pc1:0 # export DISPLAY It is also important that the display itself is configured to accept connections from your client. If you get permission errors when you try to start VEA, in a terminal window on the display system, type: xhost system or xhost + 3
Connect to your system as root. Your instructor provides you with the password. – Hostname: (For example, train13) – Username: root – Password: (Your instructor provides the password.)
4
Examine the VEA log file. # pg /var/vx/isis/vxisis.log
5
Access the Help system in VEA. In the VEA main window, select Help—>Contents.
6
What disks are available to the OS? In the VEA object tree, expand your host and select the Disks node. Examine the Device column in the grid.
7
Execute the Disk Scan command. In the VEA object tree, select your host. Select Actions—>Rescan.
8
What commands were executed by the Disk Scan task? Click the Task tab at the bottom of the main window. Right-click “Scan for new disks” and select Properties. The commands executed are displayed.
9
Stop the Volume Manager’s graphical interface. In the VEA main window, select File—>Exit.
10 Create a root equivalent administrative account named admin1 for use of
VEA.
Solaris
Create a new administrative account named admin1:
# useradd admin1 # passwd admin1 Type a password for admin1. Modify the /etc/group file to add the vrtsadm group and specify the root and admin1 users by using the vi editor: # vi /etc/group In the file, move to the location where you want to insert the vrtsadm entry, change to insert mode by typing i, then add the line: vrtsadm::99:root,admin1 When you are finished editing, press [Esc] to leave insert mode. Then, save the file and quit: :wq HP-UX
Create a new administrative account named admin1 by using SAM or command line utilities: # useradd admin1 # passwd admin1 Type a password for admin1. Add the vrtsadm group and specify the root and admin1 users as members. Use SAM or modify the /etc/group file by using the vi editor: # vi /etc/group In the file, move to the location where you want to insert the vrtsadm entry, change to insert mode by typing i, then add the line: vrtsadm::99:root,admin1 When you are finished editing, press [Esc] to leave insert mode. Then, save the file and quit: :wq AIX
# mkgroup -A vrtsadm # useradd -m -G vrtsadm admin1 # passwd admin1 (Type the password.)
11 Test the new account. After you have tested the new account, exit VEA.
# vea Hostname: (For example, train13) User: admin1 Password: (Type the password that you created for admin1.) Select File—>Exit.
Exploring vxdiskadm 1 From the command line, invoke the text-based VxVM menu interface. # vxdiskadm 2
Display information about the menu or about specific commands. Type ? at any of the prompts within the interface.
3
What disks are available to the OS? Type list at the main menu, and then type all.
4
Exit the vxdiskadm interface. Type q at the prompts until you exit vxdiskadm.
Accessing CLI Commands (Optional) Note: This exercise introduces several commonly used VxVM commands. These commands and associated concepts are explained in detail throughout this course. If you have used Volume Manager before, you may already be familiar with these commands. If you are new to Volume Manager, you should start by reading the manual pages for these commands. 1 From the command line, invoke the VxVM manual pages and read about the vxassist command. # man vxassist 2
What vxassist command parameter creates a VxVM volume? The make parameter is used in creating a volume.
3
From the command line, invoke the VxVM manual pages and read about the vxdisk command. # man vxdisk
4
What disks are available to VxVM? # vxdisk -o alldgs list All the available disks are displayed in the list.
5
From the command line, invoke the VxVM manual pages and read about the vxdg command. # man vxdg
6
How do you list locally imported disk groups? # vxdg list
7
From the command line, invoke the VxVM manual pages and read about the vxprint command. # man vxprint
More Installation Exploration (Optional) 1 When does the VxVM license expire? # vxlicrep | more 2
What is the version and revision number of the installed version of VxVM?
Solaris
# pkginfo -l VRTSvxvm In the output, look at the Version field. HP-UX
# swlist -l product | grep -i vxvm The version is in the second column of the output. AIX
# lslpp -l VRTSvxvm In the output, look under the column named Level.
3
What daemons are running after the system boots under VxVM control? # ps -ef|grep -i vx vxconfigd, vxrelocd, vxnotify, vxcached, vxesd, vxconfigbackupd
Lab 3 Solutions: Managing Disks and Disk Groups Introduction In this lab, you create new disk groups, add and remove disks from disk groups, deport and import disk groups, and destroy disk groups. The first exercise uses the VEA interface. The second exercise uses the command line interface. If you have time, you can also try to perform one of these exercises by using the vxdiskadm interface. If you use object names other than the ones provided, substitute the names accordingly in the commands. Caution: In this lab, do not include the boot disk in any of the tasks. Managing Disks and Disk Groups: VEA 1 Run and log on to the VEA interface. # vea 2
View all the disk devices on the system. In the object tree, select the Disks node and view the disks in the grid.
3
Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg. View all the disk devices on the system. Select the Disk Groups node and select Actions—>New Disk Group. In the New Disk Group wizard, do not select a disk group organization principle. Type a name for the disk group, select a disk to be placed in the disk group, and click Add. Click Next, confirm your selection, and click Finish.
4
Add one more disk to your disk group. Initialize the disk and view all the disk devices on the system. Select an unused disk and select Actions—>Add Disk to Disk Group. In the Add Disk to Disk Group Wizard, select the disk group name, and verify or change the list of disks under Selected disks. Click Next, confirm your selection, and click Finish.
5
Remove all of the disks from your disk group. What happens to your disk group?
Select a disk that is in your disk group, and select Actions—>Remove Disk from Disk Group. In the Remove Disk dialog box, click Add All to select all disks in the disk group for removal, and click OK. All disks are returned to an uninitialized state, and the disk group is destroyed. 6
Create a new disk group by adding a disk from the free disk pool, or an uninitialized disk, to a new disk group. Initialize the disk (if it is uninitialized) and name the new disk group datadg. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg. View all the disk devices on the system. Select the Disk Groups node and select Actions—>New Disk Group. In the New Disk Group Wizard, type a name for the disk group, select a disk to be placed in the disk group, and click Add. Click Next, confirm your selection, and click Finish.
7
Deport your disk group. Do not give it a new owner. View all the disk devices on the system. Select the disk group and select Actions—>Deport Disk Group. Confirm your request when prompted in the Deport Disk Group dialog box.
8
Import your datadg disk group and view all the disk devices on the system. Select the disk group and select Actions—>Import Disk Group. In the Import Disk Group dialog box, click OK.
9
Deport datadg and assign your machine name, for example, train5, as the New Host. Select the disk group and select Actions—>Deport Disk Group. Confirm your request. In the Deport Disk Group dialog box, type your machine name in the New Host field and click OK.
10 Import the disk group and change its name to data3dg. View all the disk
devices on the system. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data3dg and data4dg. Select the disk group and select Actions—>Import Disk Group. Confirm your request. In the Import Disk Group dialog box, type data3dg in the New Name field, and click OK.
11 Deport the disk group data3dg by assigning the ownership to anotherhost. View all the disk devices on the system. Why would you do
this? Select the disk group and select Actions—>Deport Disk Group. Confirm your request. In the Deport Disk Group dialog box, type anotherhost in the New Host field. In the list of disks, the status of the disk is displayed as Foreign. You would do this to ensure that the disks are not imported accidentally.
12 Import data3dg. Were you successful?
Select the disk group and select Actions—>Import Disk Group. In the Import Disk Group dialog box, click OK. This operation should fail, because data3dg belongs to another host. 13 Now import data3dg and overwrite the disk group lock. What did you have
to do to import it and why? Select the disk group and select Actions—>Import Disk Group. In the Import Disk Group dialog box, mark the Clear host ID check box, and click OK.
14 Destroy data3dg. View all the disk devices on the system.
Select the disk group and select Actions—>Destroy Disk Group. Confirm the operation when prompted. Managing Disks and Disk Groups: CLI 1 View the status of the disks on your system. # vxdisk -o alldgs list or # vxdisk -s list 2
Add one uninitialized disk to the free disk pool and view the status of the disk devices to verify your action. # vxdisksetup -i device_tag # vxdisk -o alldgs list
3
Remove the disk from the free disk pool and return the disk to an uninitialized state. View the status of the disk devices to verify your action. # vxdiskunsetup -C device_tag # vxdisk -o alldgs list
4
Add four data disks to the free disk pool as sliced disks and view the status of the disk devices to verify your action. Note: It is important to create sliced disks and use a non-CDS disk group as specified in the instructions, so that you can practice upgrading the disk group version later in this exercise. # vxdisksetup -i device_tag format=sliced # vxdisksetup -i device_tag format=sliced # vxdisksetup -i device_tag format=sliced # vxdisksetup -i device_tag format=sliced # vxdisk -o alldgs list
5
Create a non-CDS disk group data4dg with at least one drive. Verify your action. # vxdg init diskgroup data4dg01=device_tag cds=off # vxdisk -o alldgs list
6
Deport disk group data4dg, then import the disk group back to your machine. Verify your action. # vxdg deport diskgroup # vxdg import diskgroup # vxdisk -o alldgs list
7
Destroy the disk group data4dg. Verify your action. # vxdg destroy diskgroup # vxdisk -o alldgs list
8
Create a new non-CDS disk group data4dg with an older disk group version assigned to it. Verify your action. # vxdg -T 20 init diskgroup data4dg01=device_tag cds=off # vxdisk -o alldgs list
9
Upgrade the disk group to version 60. # vxdg -T 60 upgrade diskgroup
10 How would you check that you have upgraded the version?
# vxdg list diskgroup 11 Add two more disks to the disk group data4dg. You should now have three
disks in your disk group. Verify your action. # vxdg -g diskgroup adddisk data4dg02=device_tag # vxdg -g diskgroup adddisk data4dg03=device_tag # vxdisk -o alldgs list
12 Remove a disk from the disk group data4dg. Verify your action.
# vxdg -g diskgroup rmdisk data4dg01 # vxdisk -o alldgs list 13 Deport disk group data4dg and assign the host name as the host name of your
machine. Verify your action. # vxdg -h host_name deport diskgroup # vxdisk -o alldgs list
14 View the status of the disks in the deported disk group using vxdisk list device_tag. What is in the hostid field?
# vxdisk list device_tag The hostid is the name of your machine. 15 Remove a disk from data4dg. Why does this fail?
# vxdg -g diskgroup rmdisk data4dg03 The operation fails, because you are trying to remove a disk from a deported disk group. 16 Import the disk group data4dg. Verify your action.
# vxdg import diskgroup # vxdisk -o alldgs list 17 Try again to remove a disk from data4dg. Does it work this time?
# vxdg -g diskgroup rmdisk data4dg03 The operation is successful, because the disk group is imported. 18 Deport the disk group data4dg and do not assign a host name. Verify your
action. # vxdg deport diskgroup # vxdisk -o alldgs list
19 View the status of the disk in the deported disk group using vxdisk list device_tag. What is in the hostid field?
# vxdisk list device_tag The host id is now empty. 20 Uninitialize a disk that is in data4dg. Were you successful?
# vxdiskunsetup device_tag This operation should be successful.
21 Import the disk group data4dg. Were you successful?
# vxdg import diskgroup This operation fails if there are no disks left in the disk group. 22 Destroy the disk group and send any initialized disks back to an uninitialized state. # vxdg destroy diskgroup # vxdiskunsetup device_tag
Lab 4 Solutions: Creating Volumes Introduction In this lab, you create simple concatenated volumes, striped volumes, mirrored volumes, and volumes with logs. You also practice creating a RAID-5 volume and a layered volume. Attempt to perform the first exercise using command-line interface commands. Solutions for performing tasks from the command line and using the VERITAS Enterprise Administrator (VEA) are included in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands. After each step, use the VEA interface to view the volume layout in the main window and in the Volume View window. Setup A minimum of four disks is required to perform this lab, not including the root disk. Creating Volumes: CLI 1 Add four initialized disks to a disk group called datadg. Verify your action using vxdisk -o alldgs list. Note: If you are sharing a disk array, each participant should select a different disk group name, such as data1dg and data2dg. Create a new disk group and add disks: # vxdg init diskgroup datadg01=device_tag datadg02=device_tag To add each additional disk to the disk group: # vxdg –g diskgroup adddisk disk_name=device_tag 2
Create a 50-MB concatenated volume with one drive. # vxassist -g diskgroup make vol01 50m
3
Display the volume layout. What names have been assigned to the plex and subdisks? To view the assigned names, view the volume using: # vxprint -g diskgroup -thr | more
4
Remove the volume. # vxedit -g diskgroup -rf rm vol01
5
Create a 50-MB striped volume on two disks and specify which two disks to use in creating the volume. # vxassist -g diskgroup make vol02 50m layout=stripe datadg01 datadg02 What names have been assigned to the plex and subdisks? To view the assigned names, view the volume using: # vxprint -g diskgroup -thf | more
6
Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. # vxassist -g diskgroup make vol03 20m layout=mirror-stripe ncol=2 stripeunit=128k What do you notice about the plexes? View the volume using vxprint -g diskgroup -thf | more. Notice that you now have a second plex.
7
Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use. # vxassist -g diskgroup make vol04 20m layout=mirror-stripe ncol=2 stripeunit=128k !datadg03 Was the volume created? This operation should fail, because there are not enough disks available in the disk group. A two-column striped mirror requires at least four disks.
8
Create a 20-MB striped volume with a mirror that has one less column (3) than the number of drives. # vxassist -g diskgroup -b make vol04 20m layout=mirror-stripe ncol=3 datadg01 datadg02 datadg03 Was the volume created? Again, this operation should fail, because there are not enough disks available in the disk group. At least six disks are required for this type of volume configuration.
9
Create the same volume specified in step 8, but without the mirror. # vxassist -g diskgroup -b make vol05 20m layout=stripe ncol=3 datadg01 datadg02 datadg03 What names have been assigned to the plex and subdisks? To view the assigned names, view the volume using: # vxprint -g diskgroup -thr | more
10 Create a 100-MB RAID-5 volume. Set the number of columns to the number
of drives in the disk group. # vxassist -g diskgroup make vol06 100m layout=raid5 ncol=4 datadg01 datadg02 datadg03 datadg04 Was the volume created? This operation should fail, because when you create a RAID-5 volume, a RAID-5 log is created by default. Therefore, at least five disks are required for this volume configuration. Run the command again, but use one less column. # vxassist -g diskgroup make vol06 100m layout=raid5 ncol=3 datadg01 datadg02 datadg03 datadg04 What is different about the structure? View the volume using vxprint -g diskgroup -thf | more. Notice that you now have a log plex. 11 Remove the volumes created in this exercise.
For each volume: # vxedit -g diskgroup -rf rm volume_name Creating Volumes: VEA Solutions 1 Add four initialized disks to a disk group called datadg. Verify your action in the main window. Create a new disk group and add disks: Select a disk, and select Actions—>New Disk Group. In the New Disk Group wizard, specify the disk group name, select the disks you want to use from the Available disks list, and click Add. Click Next, confirm your selection, and click Finish. 2
Create a 50-MB concatenated volume with one drive. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type the name of the volume, and specify a size of 50 MB. Verify that the Concatenated layout is selected in the Layout region. Complete the wizard by accepting all remaining defaults to create the volume.
3
Display the volume layout. Notice the naming convention of the plex and subdisk.
Select the volume in the object tree, and select Actions—>Volume View. In the Volumes window, click the Expand button. Compare the information in the Volumes window to the information under the Mirrors, Logs, and Subdisks tabs in the right pane of the main window. 4
Remove the volume. Select the volume, and select Actions—>Delete Volume. In the Delete Volume dialog box, click Yes.
5
Create a 50-MB striped volume on two disks, and specify which two disks to use in creating the volume. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, select “Manually select disks for use by this volume.” Move two disks into the Included box, and then click Next. Type the name of the volume, and specify a size of 50 MB. Select the Striped option in the Layout region. Verify that the number of columns is 2. Complete the wizard by accepting all remaining defaults to create the volume.
View the volume. Select the volume, and select Actions—>Volume View. Close the Volumes window when you are satisfied. 6
Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type the name of the volume, and specify a size of 20 MB. Select the Striped option in the Layout region. Verify that the number of columns is 2. Set the Stripe unit size to 256 (sectors), or 128K. Mark the Mirrored check box in the Mirror Info region. Complete the wizard by accepting all remaining defaults to create the volume. View the volume. Notice that you now have a second plex. Select the volume, and select Actions—>Volume View. Close the Volumes window when you are satisfied.
7
Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, select “Manually select disks for use by this volume.” Move one disk into the Excluded box, and then click Next. Type the name of the volume, and specify a size of 20 MB. Select the Striped option in the Layout region. Verify that the number of columns is 2. Set the Stripe unit size to 256 (sectors), or 128K. Mark the Mirrored check box in the Mirror Info region. Complete the wizard by accepting all remaining defaults to create the volume. Was the volume created? This operation should fail, because there are not enough disks available in the disk group. A two-column striped mirror requires at least four disks.
8
Create a 20-MB striped volume with a mirror with one less column than number of drives. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type the name of the volume, and specify a size of 20 MB. Select the Striped option in the Layout region. Change the number of columns to 3. Mark the Mirrored check box in the Mirror Info region. You receive an error and are not able to complete the wizard. Was the volume created? Again, this operation should fail, because there are not enough disks available in the disk group. At least six disks are required for this type of volume configuration.
9
Create the same volume specified in step 8, but without the mirror. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type the name of the volume, and specify a size of 20 MB. Select the Striped option in the Layout region. Change the number of columns to 3. Complete the wizard by accepting all remaining defaults to create the volume. Was the volume created? Yes, the volume is created this time.
10 Create a 100-MB RAID-5 volume. Set the number of columns to the number
of drives in the disk group. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type the name of the volume, and a size of 100 MB. Select the RAID-5 option in the Layout region. Change the number of columns to 4. You receive an error and are not able to complete the wizard. Was the volume created? This operation should fail, because when you create a RAID-5 volume, a RAID-5 log is created by default. Therefore, at least five disks are required for this volume configuration. Run the command again, but use one less column. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type the name of the volume, and a size of 100 MB. Select the RAID-5 option in the Layout region. Verify that the number of columns is 3. Complete the wizard by accepting all remaining defaults to create the volume. Was the volume created? Yes, the volume is created this time. 11 Delete all volumes from the disk group.
For each volume, select the volume and select Actions—>Delete Volume. Click Yes to delete the volume. Creating Layered Volumes Complete this exercise by using the VEA interface. Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1
First, remove any volumes that you created in the previous lab. To remove a volume, highlight a volume in the main window, and select Actions—>Remove Volume.
2
Create a 100-MB Striped Mirrored volume with no logging.
Select a disk group in the main window. Select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type a volume name, specify a volume size of 100 MB, and select a Striped Mirrored layout. Complete the wizard by accepting all remaining defaults to create the volume. What command was used to create this volume? Hint: View the task properties. Click the Tasks tab at the bottom of the screen. In the Tasks tab, rightclick the latest Create Volume tasks and select Properties. The command issued is displayed in the Commands Executed field. 3
Create a Concatenated Mirrored volume with no logging. The size of the volume should be greater than the size of the largest disk in the disk group; for example, if your largest disk is 8 GB, then create a 10-GB volume. Select a disk group, and select Actions—>New Volume. In the New Volume wizard, let VxVM decide which disks to use. Type a volume name, an appropriate volume size, and select a Concatenated Mirrored layout. Complete the wizard by accepting all remaining defaults to create the volume. What command was used to create this volume? Click the Tasks tab at the bottom of the screen. In the Tasks tab, rightclick the latest Create Volume tasks and select Properties. The command issued is displayed in the Commands Executed field.
4
View the volumes in VEA and compare the layouts. Highlight the disk group and select Actions—>Volume View. Click the Expand button in the Volumes window. You can also highlight each volume in the object tree and view information in the tabs in the right pane. Notice the information on the Mirrors, Logs, and Subdisks tabs.
5
View the volumes from the command line. # vxprint -rth volume_name
6
Remove all of the volumes. To remove a volume, use the command: # vxedit -g diskgroup -rf rm volume_name
Creating Volumes with User Defaults (Optional) This optional guided practice illustrates how to use the /etc/default/vxassist and /etc/default/alt_vxassist files to create volumes with defaults specified by the user. 1
2
Create two files in /etc/default: # cd /etc/default a
Using the vi editor, create a file called vxassist that includes the following: # when mirroring create three mirrors nmirror=3
b
Using the vi editor, create a file called alt_vxassist that includes the following: # use 256K as the default stripe unit size for # regular volumes stripeunit=256k
Use these files when creating the following volumes: Create a 100-MB volume using layout=mirror: # vxassist -g datadg make testvol 100m layout=mirror Create a 100-MB, two-column stripe volume using -d alt_vxassist so that Volume Manager uses the default file: # vxassist -g datadg -d alt_vxassist make testvol2 100m layout=stripe
3
View the layout of these volumes using VEA and by using vxprint. What do you notice? – The first volume should show three plexes rather than the standard two. – The second volume should show a stripe size of 256K instead of the standard 64K.
4
Remove any vxassist default files that you created in this optional lab section. The presence of these files can impact subsequent labs where default behavior is assumed.
Lab 5 Solutions: Configuring Volumes Introduction This lab provides additional practice in configuring volume attributes. In this lab, you add mirrors, logs, and file systems to existing volumes, change the volume read policy, and specify ordered allocation of storage to volumes. You also practice file system administration. Setup Before you begin this lab, ensure that any volumes created in previous labs have been removed. Create a new disk group that contains four disks only. Configuring Volume Attributes: CLI Complete this exercise by using the command line interface. If you use object names other than the ones provided, substitute the names accordingly in the commands. Solutions for performing these tasks from the command line and using VEA are described in the Lab Solutions appendix. 1
Create a 20-MB, two-column striped volume with a mirror. # vxassist -g diskgroup make volume_name 20m layout=mirror-stripe ncol=2
2
Display the volume layout. How are the disks allocated in the volume? Which disk devices are used? # vxprint -htr Notice which two disks are allocated to the first plex and which two disks are allocated to the second plex and record your observation.
3
Remove the volume you just made, and re-create it by specifying the four disks in an order different from the original layout. # vxassist -g diskgroup remove volume volume_name # vxassist -g diskgroup -o ordered make volume_name 20m layout=mirror-stripe ncol=2 datadg04 datadg03 datadg02 datadg01
4
Display the volume layout. How are the disks allocated this time? # vxprint -htr The plexes are now allocated in the order specified on the command line.
5
Add a mirror to the existing volume. # vxassist -g diskgroup mirror volume_name Were you successful? Why or why not?
The original volume already occupied all four disks in the disk group. To add another mirror requires two extra disks. 6
Remove one of the two mirrors, and display the volume layout. # vxplex -g diskgroup -o rm dis plex_name # vxprint -rth
7
Add a mirror to the existing volume, and display the volume layout. # vxassist -g diskgroup mirror volume_name # vxprint -rth
8
Add a dirty region log to the existing volume and specify the disk to use for the DRL. Display the volume layout. # vxassist -g diskgroup addlog volume_name logtype=drl disk_name # vxprint -rth
9
Change the volume read policy to round robin, and display the volume layout. # vxvol -g diskgroup rdpol round volume_name # vxprint -rth
10 Create a file system for the existing volume.
To create a VxFS file system: # mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume_name Or, use an OS-specific command to create a different file system type.
11 Mount the file system at the mount point /mydirectory and add files. Verify that the files were added to the new volume. Create a mount point: # mkdir /mydirectory Mount the file system: # mount -F vxfs /dev/vx/dsk/diskgroup/volume_name /mydirectory
12 View the mount points using df -k.
Using the VEA interface, open the Volume to Disk Mapping window and display the subdisk information for each disk. Select the disk group, and select Actions—>Disk/Volume Map. In the Volume to Disk Mapping window, click the triangle to the left of each disk name to view the subdisks. 13 Unmount and remove the volume with the file system.
Unmount the file system: # umount /mydirectory Remove the volume: # vxassist -g diskgroup remove volume volume_name
Configuring Volume Attributes: VEA Solutions
1 Create a 20-MB, two-column striped volume with a mirror. Highlight a disk group and select Actions—>New Volume. Complete the New Volume wizard.
2
Display the volume layout. How are the disks allocated in the volume? Which disk devices are used? Highlight the volume and click each of the tabs in the right pane and notice the information under the Mirrors, Logs, and Subdisks tabs. Select Actions—>Volume View, click the Expand button, and compare the information to the information in the main window.
3
Remove the volume you just made, and re-create it by specifying the four disks in order of highest target first (for example, datadg04, datadg03, datadg02, datadg01, where datadg04=c1t15d0, datadg03=c1t14d0, and so on). When you create the volume, select “Manually select disks to use for this volume.” Move the disks into the Included box in the desired order, mark the Ordered check box, click Next, and click Finish.
4
Display the volume layout. How are the disks allocated this time? Highlight the volume and click each of the tabs in the right pane. Notice the information in the Mirrors, Logs, and Subdisks tabs. Select Actions—>Volume View, click the Expand button, and compare the information to the information in the main window.
5
Remove a mirror and update the layout display. What happened? Highlight the volume, and click the Mirrors tab in the right pane. Right-click a plex, and select Actions—>Remove Mirror. In the Remove Mirror dialog box, click Yes. Only the selected plex is removed.
6
Add a mirror to the existing volume and show the layout. Highlight the volume to be mirrored, and select Actions—>Mirror—> Add. Complete the Add Mirror dialog box and click OK.
7
Add a dirty region log to the existing volume, specify the target disk for the log, and then show the layout. Highlight the volume, and select Actions—>Log—>Add. Complete the Add Log dialog box, specify a target disk for the log, and click OK. Highlight the volume and click the Logs tab.
8
Change the volume read policy to round robin. Highlight the volume, and select Actions—>Set Volume Usage. Select Round robin and click OK.
9
Create a file system for the existing volume. Highlight the volume, and select Actions—>File System—>New File System. In the New File System dialog box, specify a mount point for the volume, and click OK.
10 Add files to the new volume. Verify that the files were added to the new
volume. After adding files to the file system, you can verify that files were added by displaying file system information. Expand the File Systems node in the object tree, and right-click the file system in the right pane, and select Properties. Using the VEA interface, open the Volume to Disk Mapping window and display the subdisk information for each disk. Highlight the disk group and select Actions—>Disk/Volume Map. 11 Unmount and remove the volume with the file system.
Highlight the volume, and select Actions—>Delete Volume. In the Delete Volume dialog box, click Yes. In the Unmount File System dialog box, click Yes.

VERITAS File System Administration
This lab ensures that you are able to use basic VERITAS File System administrative commands from the command line.

Setting Up a VERITAS File System
Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.
1 Create a 500-MB striped volume named datavol in the disk group datadg and use the default number of columns and stripe unit size.
# vxassist -g diskgroup make datavol 500m layout=stripe
2 Create a VERITAS file system on the datavol volume using the default options.
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/datavol
3 Create a mount point /datamnt on which to mount the file system.
# mkdir /datamnt
4 Mount the newly created file system on the mount point, and use all default options.
# mount -F vxfs /dev/vx/dsk/diskgroup/datavol /datamnt
5 Using the newly created file system, create, modify, and remove files.
# cd /datamnt
# cp /etc/r* .
# touch file1 file2
# mkfile 64b file3
# vi newfile (Enter some content into the new file and save the file.)
# rm reboot
6 Display the content of the mount point directory, showing hidden entries, inode numbers, and block sizes of the files.
# ls -alis
7 What is the purpose of the lost+found directory?
The lost+found directory holds files that fsck recovers and reconnects when their directory entries have been lost.
8 How many disk blocks are defined within the file system and are used by the file system?
# df
# df -k
# du -s .
9 Unmount the file system.
# cd /
# umount /datamnt
10 Mount and, if necessary, check the file system at boot time.
# vi /etc/vfstab
(In vi: G moves to the last line, o opens a new line for the entry; when finished, press Esc and type :wq to save and quit.)
In the /etc/vfstab file, add the following information:
device to mount: /dev/vx/dsk/diskgroup/datavol
device to fsck: /dev/vx/rdsk/diskgroup/datavol
mount point: /datamnt
FS Type: vxfs
fsck pass: 2
mount at boot: yes
mount options: -
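Assembled as a single tab-separated entry (using the standard Solaris vfstab field order: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, mount options), the line would look approximately like this:
/dev/vx/dsk/diskgroup/datavol  /dev/vx/rdsk/diskgroup/datavol  /datamnt  vxfs  2  yes  -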
11 Verify that the mount information has been accepted.
# mount -a 12 Display details of the file system that were set when it was created. # fstyp -v /dev/vx/dsk/diskgroup/datavol 13 Check the structural integrity of the file system using the default log policy. # umount /datamnt # fsck -F vxfs /dev/vx/dsk/diskgroup/datavol 14 Remove the volume that you created for this lab. # vxassist -g diskgroup remove volume datavol
Defragmenting a VERITAS File System
In this exercise, you monitor and defragment a file system by using the fsadm command. Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.
1 Create a new 2-GB volume with a VxFS file system mounted on /fs_test.
# vxassist -g diskgroup make volume 2g layout=stripe
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume
# mkdir /fs_test
# mount -F vxfs /dev/vx/dsk/diskgroup/volume /fs_test
2
Repeatedly copy /opt to the file system using a new target directory name each time until the file system is approximately 85 percent full. # for i in 1 2 3 > do > cp -r /opt /fs_test/opt$i > done
3
Delete all files over 10 MB in size. # find /fs_test -size +20480b -exec rm {} \;
4
Check the level of fragmentation in the file system. # fsadm -D -E /fs_test
5
Repeat steps two and three using the values 4 and 5 for i in the loop. Fragmentation of both free space and directories will result.
6
Repeat step two using the values 6 and 7 for i. Then delete all files that are smaller than 64K to release a reasonable amount of space.
# find /fs_test -size -64k -exec rm {} \;
7
Defragment the file system and display the results. Run fragmentation reports both before and after the defragmentation and display summary statistics after each pass. Compare the fsadm report from step 4 with the final report from the last pass in this step. # fsadm -e -E -d -D -s /fs_test
8
Unmount the file systems and remove the volumes used in this lab. # umount mount_point # vxassist -g diskgroup remove volume volume_name
Lab 6 Solutions: Reconfiguring Volumes Online

Introduction
In this lab, you create and resize volumes and change volume layouts. You also explore the Storage Expert utility.

Setup
To perform this lab, you should have at least four disks in the disk group that you are using. You can use either the VEA interface or the command line interface, whichever you prefer. The solutions for both methods are covered in the Lab Solutions appendix. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Note: If you are using VEA, view the properties of the related task after each step to view the underlying command that was issued.

Resizing a Volume
1 If you have not already done so, remove the volumes created in the previous lab.
VEA: For each volume in your disk group, highlight the volume, and select Actions—>Delete Volume.
CLI:
# umount /filesystem
# vxedit -g diskgroup -rf rm volume_name
2 Create a 20-MB concatenated mirrored volume with a file system /myfs, and mount the volume.
VEA: Highlight the disk group, and select Actions—>New Volume. Specify a volume name, the size, a concatenated layout, and select mirrored. Ensure that "Enable logging" is not checked. Add a VxFS file system and set a mount point.
CLI:
# vxassist -g diskgroup make volume_name 20m layout=mirror
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume_name
# mkdir /myfs
# mount -F vxfs /dev/vx/dsk/diskgroup/volume_name /myfs
3
View the layout of the volume. VEA: Highlight the volume and click each of the tabs in the right pane to display information about Mirrors, Logs, and Subdisks. You can also select Actions—>Volume View, click the Expand button, and compare the information to the main window. CLI: # vxprint -rth
4
Add data to the volume and verify that the file has been added.
# echo "hello myfs" > /myfs/hello
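To confirm that the data was written, read the file back:
# cat /myfs/hello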
5
Expand the file system and volume to 100 MB.
VEA: Highlight the volume and select Actions—>Resize Volume. In the Resize Volume dialog box, specify 100 MB in the "New volume size" field, and click OK.
CLI:
# vxresize -g diskgroup volume_name 100m
Changing the Volume Layout
1 Change the volume layout from its current layout (mirrored) to a nonlayered mirror-stripe with two columns and a stripe unit size of 128 sectors (64K). Monitor the progress of the relayout operation, and display the volume layout after each command that you run.
VEA: Highlight the volume and select Actions—>Change Layout. In the Change Volume Layout dialog box, select a Striped layout, specify two columns, and click OK. To monitor the progress of the relayout, the Relayout status monitor window is automatically displayed when you start the relayout operation. When you view the task properties of the relayout operation, notice that two commands are issued:
# vxassist -t taskid -g diskgroup relayout volume_name layout=mirror-stripe nmirror=2 ncol=2 stripeunit=128
# vxassist -g diskgroup convert volume_name layout=mirror-stripe
CLI:
To begin the relayout operation:
# vxassist -g diskgroup relayout volume_name layout=mirror-stripe ncol=2 stripeunit=128
To monitor the progress of the task, run:
# vxtask monitor
Run vxprint to display the volume layout. Notice that a layered layout is created:
# vxprint -rth
Recall that when you relayout a volume to a striped layout, a layered layout is created first; you must then use vxassist convert to complete the conversion to a nonlayered mirror-stripe:
# vxassist -g diskgroup convert volume_name layout=mirror-stripe
Run vxprint to confirm the resulting layout. Notice that the volume is now a nonlayered volume:
# vxprint -rth
2
Verify that the file is still accessible. # cat /myfs/hello
3
Unmount the file system on the volume and remove the volume.
VEA: Highlight the volume, and select Actions—>Delete Volume. In the Delete Volume dialog box, click Yes. In the Unmount File System dialog box, click Yes.
CLI:
# umount /filesystem
# vxedit -g diskgroup -rf rm volume_name
Resizing a File System Only
Remove any volumes created in previous labs. Ensure that the external disks on your system are in a disk group named datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.
1 Create a 50-MB volume named reszvol in the disk group datadg by using the VERITAS Volume Manager utility vxassist.
# vxassist -g datadg make reszvol 50m
2 Create a VERITAS file system on the volume by using the mkfs command. Specify the file system size as 40 MB.
# mkfs -F vxfs /dev/vx/rdsk/datadg/reszvol 40m
3 Create a mount point /reszmnt on which to mount the file system.
# mkdir /reszmnt 4 Mount the newly created file system on the mount point /reszmnt. # mount -F vxfs /dev/vx/dsk/datadg/reszvol /reszmnt 5 Verify disk space using the df command. Observe that the available space is smaller than the size of the volume. # df -k 6 Expand the file system to the full size of the underlying volume using the fsadm -b newsize option. # fsadm -b 50m -r /dev/vx/rdsk/datadg/reszvol /reszmnt 7 Verify disk space using the df command. # df -k 8 Make a file on the file system mounted at /reszmnt (using mkfile), so that the free space is less than 50 percent of the total file system size. # mkfile 25m /reszmnt/myfile 9 Shrink the file system to 50 percent of its current size. What happens? # fsadm -b 25m -r /dev/vx/rdsk/datadg/reszvol /reszmnt The command fails. You cannot shrink the file system because blocks are currently in use. 10 Unmount the file system and remove the volume. # umount /reszmnt # vxassist -g datadg remove volume reszvol
Using the Storage Expert Utility
1 Add the directory containing the Storage Expert rules to your PATH environment variable in your .profile file.
# PATH=$PATH:/opt/VRTS/vxse/vxvm
# export PATH
2
Display a description of Storage Expert rule vxse_drl1. What does this rule do? # vxse_drl1 info This rule checks for large mirrored volumes that do not have an associated log.
3
Does Storage Expert rule vxse_drl1 have any user-settable parameters? # vxse_drl1 list This rule does not have any user-settable parameters.
4
From the command line, create a 100-MB mirrored volume with no log. Create and mount a file system on the volume.
# vxassist -g diskgroup make volume_name 100m layout=mirror
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume_name
# mkdir /sefs
# mount -F vxfs /dev/vx/dsk/diskgroup/volume_name /sefs
5
Run Storage Expert rule vxse_drl1 on the disk group containing the volume. What does Storage Expert report? # vxse_drl1 -g diskgroup run Storage Expert reports information; the mirrored volume is skipped, since the volume is less than the size of volumes tested by the rule.
6
Expand the volume to a size of 1 GB. # vxresize -g diskgroup datavol 1g
7
Run Storage Expert rule vxse_drl1 again on the disk group containing the volume. What does Storage Expert report? # vxse_drl1 -g diskgroup run Storage Expert reports a violation, since the large mirrored volume does not have a log.
8
Add a log to the volume. # vxassist -g diskgroup addlog datavol
9
Run Storage Expert rule vxse_drl1 again on the disk group containing the volume. What does Storage Expert report? # vxse_drl1 -g diskgroup run Storage Expert reports that the volume passes the test, since the large mirrored volume now has a log.
10 What are the attributes and parameters that Storage Expert uses in running the vxse_drl1 rule?
# vxse_drl1 list
The attribute is mirror_threshold. Storage Expert warns if a mirrored volume is larger than this size and does not have a log.
# vxse_drl1 check
The default value of mirror_threshold is 1 GB.
11 Shrink the volume to 100 MB and remove the log.
# vxresize -g diskgroup datavol 100m # vxassist -g diskgroup remove log datavol
12 Run Storage Expert rule vxse_drl1 again. When running the rule, specify
that you want Storage Expert to test the mirrored volume against a mirror_threshold of 100 MB. What does Storage Expert report? # vxse_drl1 -g diskgroup run mirror_threshold=100m This step demonstrates how to specify different attribute values from the command line. Because you set the mirror_threshold parameter to 100 MB, Storage Expert reports a violation.
13 Unmount the file system and remove the volume used in this exercise.
# umount /sefs
# vxedit -g diskgroup -rf rm volume_name
Monitoring Tasks (Optional)
Objective: In this advanced section of the lab, you track volume relayout processes using the vxtask command and recover from a vxrelayout crash by using VEA or from the command line.
Setup: You should have at least four disks in the disk group that you are using.
1
Create a mirror-stripe volume with a size of 1 GB using the vxassist command. Assign a task tag to the task and run the vxassist command in the background.
VEA: Highlight a disk group and select Actions—>New Volume. Specify a volume name, the size, a striped layout, and select mirrored. Ensure that "No layered volumes" is checked. Add a VxFS file system and create a mount point. Note: You cannot assign a task tag when using VEA.
CLI:
# vxassist -g diskgroup -b -t task_name make volume_name 1g layout=mirror-stripe
2
View the progress of the task. VEA: Click the Tasks tab at the bottom of the main window to display the task and the percent complete. CLI: # vxtask list task_name or # vxtask monitor
3
Slow down the task progress rate to insert an I/O delay of 100 milliseconds. VEA: Right-click the task in the Tasks tab, and select Throttle Task. Specify 100 as the Throttling value, and click OK. CLI: # vxtask set slow=100 task_name View the layout of the volume in the VEA interface.
4
After the volume has been created, use vxassist to relayout the volume to stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the process to the above task tag.
VEA: Highlight the volume and select Actions—>Change Layout. In the Change Volume Layout dialog box, select a Striped Mirrored layout. Change the stripe unit size value to 512.
CLI:
# vxassist -g diskgroup -t task_name relayout volume_name layout=stripe-mirror stripeunit=256k ncol=2
5
In another terminal window, abort the task to simulate a crash during relayout. VEA: In the Relayout status monitor window, click Abort. CLI: # vxtask abort task_name View the layout of the volume in the VEA interface.
6
Reverse the relayout operation.
VEA: In the Relayout status monitor window, click Reverse.
CLI:
# vxrelayout -g diskgroup reverse volume_name
View the layout of the volume in the VEA interface.
7
Remove all of the volumes. VEA: Highlight the volume, select Actions—>Delete Volume, and click Yes.
CLI:
# vxedit -g diskgroup -rf rm volume_name
Lab 7 Solutions: Encapsulation and Rootability

Introduction
In this practice, you create a boot disk mirror, disable the boot disk, and boot up from the mirror. Then you boot up again from the boot disk, break the mirror, and remove the boot disk from the boot disk group. Finally, you reencapsulate the boot disk and re-create the mirror. These tasks are performed using a combination of the VEA interface, the vxdiskadm utility, and CLI commands.

Encapsulation and Root Disk Mirroring
1 Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of your boot disk group and use rootdisk as the name of your boot disk.
Select the vxdiskadm option, "Encapsulate one or more disks," and follow the steps to encapsulate your system disk. Select the system disk as the disk to encapsulate. Add the system disk to a disk group named systemdg. Specify the name of the disk as rootdisk. Shut down and reboot when prompted.
2
After the reboot, use vxdiskadm to add a disk that will be used for the mirror of rootdisk. If your system has two internal disks, use the second internal disk on your system for the mirror. (This is required due to the nature of the classroom configuration.) When setting up the disk, make sure that the disk layout is sliced. Use altboot as the name of your disk. Select the vxdiskadm option, “Add or initialize one or more disks,” and follow the steps to add a disk to the systemdg disk group. Select the second internal disk as the device to add. Add the disk to the systemdg disk group. Specify a sliced format when prompted. Specify the name of the disk as altboot.
3
Next, use vxdiskadm to mirror your system disk, rootdisk, to the disk that you added, altboot. Select the vxdiskadm option, “Mirror volumes on a disk,” and follow the steps to mirror the volumes. Specify the disk containing the volumes to be mirrored as rootdisk. Specify the destination disk as altboot.
4
After the mirroring operation is complete, verify that you now have two disks in systemdg (rootdisk and altboot) and that all volumes are mirrored.
# vxprint -g systemdg -htr
In what order are the volumes mirrored? Alphabetical order.
Check to determine whether rootvol is enabled and active. Hint: Use vxprint and examine the STATE fields.
# vxprint -thf The rootvol should be in the ENABLED and ACTIVE state, and you should also see two plexes for each of the volumes in systemdg. 5
From the command line, set the eeprom variable to enable VxVM to create a device alias in the openboot program. # eeprom use-nvramrc?=true
6
To disable the boot disk and make rootvol-01 disabled and offline, use the vxmend command. This command is used to make changes to configuration records. Here, you are using the command to place the plex in an offline state. For more information about this command, see the vxmend (1m) manual page. # vxmend -g systemdg off rootvol-01
7
Verify that rootvol-01 is now disabled and offline. # vxprint -thf
8
To change the plex to a STALE state, run the vxmend on command on rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE state. # vxmend -g systemdg on rootvol-01 # vxprint -thf
9
Reboot the system using init 6. # init 6
10 At the OK prompt, check for available boot disk aliases.
OK> devalias
Use the boot disk alias vx-altboot to boot up from the alternate boot disk. For example: OK> boot vx-altboot 11 Verify that rootvol-01 is now in the ENABLED and ACTIVE state.
Note: You may need to wait a few minutes for the state to change from STALE to ACTIVE. # vxprint -thf You have successfully booted up from the mirror. 12 To boot up from the original boot disk, reboot again using init 6.
# init 6
You have now booted up from the original boot disk. 13 Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt, and home (that is, remove the newer plex from each volume in systemdg.)
For each volume in systemdg, remove all of the newly created mirrors. More specifically, for each volume, two plexes are displayed, and you should remove the newer (-02) plexes from each volume. To remove a mirror, highlight a volume and select Actions—>Mirror—>Remove. 14 Run the command to convert the root volumes back to disk partitions.
# vxunroot 15 Shut down the system when prompted. 16 Verify that the mount points are now slices rather than volumes.
# df -k 17 At the end of this lab, leave your boot disk unencapsulated and remove any
other disks from systemdg.
Lab 8 Solutions: Recovery Essentials

Introduction
In this practice, you perform a variety of basic recovery operations. Perform this lab by using the command line interface. In some of the steps, the commands are provided for you.

Setup
For this lab, you should have at least four disks (datadg01 through datadg04) in a disk group called datadg. If you use object names other than the ones provided, substitute the names accordingly in the commands.

Exploring Logging Behavior
1 Create two mirrored, concatenated volumes, 500 MB in size, called vollog and volnolog.
# vxassist -g diskgroup make vollog 500m layout=mirror
# vxassist -g diskgroup make volnolog 500m layout=mirror
2
Add a log to the volume vollog. # vxassist -g diskgroup addlog vollog
3
Create a file system on both volumes. # mkfs -F vxfs /dev/vx/rdsk/diskgroup/volnolog # mkfs -F vxfs /dev/vx/rdsk/diskgroup/vollog
4
Create mount points for the volumes, /vollog and /volnolog. # mkdir /vollog # mkdir /volnolog
5
Copy /etc/vfstab to a file called origvfstab. # cp /etc/vfstab /origvfstab
6
Edit /etc/vfstab so that vollog and volnolog are mounted automatically on reboot. (In the /etc/vfstab file, each entry should be separated by a tab.)
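The two added entries would look approximately like the following (tab-separated fields; substitute your actual disk group name for diskgroup):
/dev/vx/dsk/diskgroup/vollog  /dev/vx/rdsk/diskgroup/vollog  /vollog  vxfs  2  yes  -
/dev/vx/dsk/diskgroup/volnolog  /dev/vx/rdsk/diskgroup/volnolog  /volnolog  vxfs  2  yes  -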
7
Type mountall to mount the vollog and volnolog volumes. # mountall
8
As root, start an I/O process on each volume. For example:
# find /usr -print | cpio -pmud /vollog &
# find /usr -print | cpio -pmud /volnolog &
9
Press Stop-A. At the OK prompt, type boot.
OK> boot
10 After the system is running again, check the state of the volumes to ensure that neither of the volumes is in the sync/needsync mode.
# vxprint -thf vollog volnolog 11 Run the vxstat command. This utility displays statistical information about
volumes and other VxVM objects. For more information on this command, see the vxstat (1m) manual page. # vxstat -g diskgroup -fab vollog volnolog The output shows how many I/Os it took to resynchronize the mirrors. Compare the number of I/Os for each volume. What do you notice? You should notice that fewer I/O operations were required to resynchronize vollog. The log keeps track of data that needs to be resynchronized.
12 Stop the VxVM configuration daemon.
# vxdctl stop 13 Create a 100-MB mirrored volume. What happens?
# vxassist -g diskgroup make testvol 100m layout=mirror The task fails, because the configuration daemon is not running. 14 Start the VxVM configuration daemon.
# vxconfigd 15 Unmount both file systems and remove the volumes vollog and volnolog.
# umount /vollog # umount /volnolog # vxedit -g diskgroup -rf rm vollog volnolog 16 Restore your original vfstab file.
# cp /origvfstab /etc/vfstab
Removing a Disk from VxVM Control
1 Create a 100-MB, mirrored volume named recvol. Create and mount a file system on the volume.
# vxassist -g datadg make recvol 100m layout=mirror
# mkfs -F vxfs /dev/vx/rdsk/datadg/recvol
# mkdir /recvol # mount -F vxfs /dev/vx/dsk/datadg/recvol /recvol 2
Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume.
# vxprint -thf
For example, the volume recvol uses datadg02 and datadg04:
         Device      Disk Media Name
Disk 1   c1t2d0s2    datadg02
Disk 2   c1t3d0s2    datadg04
3
Remove one of the disks that is being used by the volume. # vxdg -g datadg -k rmdisk datadg02
4
Confirm that the disk was removed. # vxdisk -o alldgs list
5
From the command line, check that the state of one of the plexes is DISABLED and REMOVED.
# vxprint -thf
In VEA, the disk is shown as disconnected, because one of the plexes is unavailable.
6
Replace the disk back into the disk group. # vxdg -g datadg -k adddisk datadg02=c1t2d0
7
Check the status of the disks. What is the status of the disks? # vxdisk -o alldgs list The status of the disks is ONLINE.
8
Display volume information. What is the state of the plexes? # vxprint -thf The plex you removed is marked RECOVER.
9
In VEA, what is the status of the disks? What is the status of the volume? The disk is reconnected and shows that the disk contains a volume that is recoverable. Select the volume in the left pane, and click the Mirrors tab in the right pane. The plex is marked recoverable.
10 From the command line, recover the volume. During and after recovery, check
the status of the plex in another command window and in VEA. # vxrecover In VEA, the status of the plex changes to Recovering, and eventually to Attached. With vxprint, the status of the plex changes to STALE and eventually to ACTIVE. 11 At the end of this lab, destroy your disk group and send your data disks back to
an uninitialized state. In the next exercises, you will use sliced disks and non-CDS disk groups to practice recovery operations.

Replacing Physical Drives (Without Hot Relocation)
1 For this exercise, initialize four disks as sliced disks. Place the disks in a non-CDS disk group named datadg. Create a 100-MB mirrored volume, recvol, in the disk group, add a VxFS file system to the volume, and mount the file system at the mount point /recvol.
For each disk:
# vxdisksetup -i device_tag format=sliced
Initialize a non-CDS disk group using one of the disks:
# vxdg init diskgroup disk_name=device_tag cds=off
Add the other three sliced disks to the non-CDS disk group:
# vxdg -g diskgroup adddisk disk_name=device_tag
Create a mirrored volume and create and mount a file system:
# vxassist -g diskgroup make recvol 100m layout=mirror
# mkfs -F vxfs /dev/vx/rdsk/diskgroup/recvol
# mkdir /recvol
# mount -F vxfs /dev/vx/dsk/diskgroup/recvol /recvol
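As a concrete illustration only, with hypothetical devices c1t1d0 through c1t4d0 and disk names datadg01 through datadg04, the setup might look like this:
# vxdisksetup -i c1t1d0 format=sliced
(repeat vxdisksetup for c1t2d0, c1t3d0, and c1t4d0)
# vxdg init datadg datadg01=c1t1d0 cds=off
# vxdg -g datadg adddisk datadg02=c1t2d0
# vxdg -g datadg adddisk datadg03=c1t3d0
# vxdg -g datadg adddisk datadg04=c1t4d0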
2
Stop vxrelocd using ps and kill, in order to stop hot relocation from taking place. Verify that the vxrelocd processes are killed before you continue. # ps -e | grep vx # kill -9 pid1 pid2 # ps -e | grep vx Note: There are two vxrelocd processes. You must kill both of them at the same time.
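As a convenience only, the two PIDs can be captured and killed in a single step; this sketch assumes the daemon name appears as vxrelocd in the ps output:
# kill -9 `ps -e | grep vxrelocd | awk '{print $1}'`
# ps -e | grep vx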
3
Next, you simulate disk failure by removing the public and private regions of one of the disks in the volume. In the commands, substitute the appropriate disk device name for one of the disks in use by recvol, for example c1t2d0s2.
# fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2 # fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2 4
An error occurs when you start I/O to the volume. You can view the error on the console or in tail -f /var/adm/messages. A summary of the mail can be viewed in /var/mail/root. Start I/O to the volume using the command:
# dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/recvol &
5
When the error occurs, view the status of the disks from the command line. # vxdisk -o alldgs list The physical device is no longer associated with the disk media name and the disk group.
6
View the status of the volume from the command line. # vxprint -thf The plex displays a status of DISABLED NODEVICE.
7
In VEA, what is the status of the disks and volume?
The disk is shown as disconnected, and the volume has a disconnected plex.
8
Rescan for all attached disks: # vxdctl enable
9
Recover the disk by replacing the private and public regions on the disk:
# vxdisksetup -i c1t2d0 format=sliced
Note: This method for recovering the disk is used only because of the way the disk was failed (by writing over the private and public regions). In most real-life situations, you do not need to perform this step.
10 Bring the disk back under VxVM control:
# vxdg -g diskgroup -k adddisk disk_name=device_tag 11 Check the status of the disks and the volume.
# vxdisk -o alldgs list # vxprint -thf 12 From the command line, recover the volume.
# vxrecover
13 Check the status of the disks and the volume to ensure that the disk and volume
are fully recovered. # vxdisk -o alldgs list # vxprint -thf 14 Unmount the file system and remove the volume.
# umount /recvol # vxassist -g diskgroup remove volume recvol
Exploring Spare Disk Behavior
1 You should have four sliced disks (datadg01 through datadg04) in the non-CDS disk group datadg. Set all disks to have the spare flag on.
# vxedit -g diskgroup set spare=on datadg01
# vxedit -g diskgroup set spare=on datadg02
# vxedit -g diskgroup set spare=on datadg03
# vxedit -g diskgroup set spare=on datadg04
2
Create a 100-MB mirrored volume called sparevol. # vxassist -g diskgroup make sparevol 100m layout=mirror Is the volume successfully created? Why or why not? No, the volume is not created, and you receive the error: cannot allocate space for size block volume The volume is not created, because all disks are set as spares, and vxassist and VEA do not find enough free space to create the volume.
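Instead of repeating the vxedit command for each disk in step 1, a short shell loop (the same style as the copy loop used earlier in these labs) sets the spare flag on all four disks:
# for d in datadg01 datadg02 datadg03 datadg04
> do
> vxedit -g diskgroup set spare=on $d
> done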
3
Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks. # vxassist -g diskgroup make sparevol 100m layout=mirror datadg03 datadg04 Notice that VxVM overrides its default and applies the two spare disks to the volume, because the two disks were specified by the administrator.
4
Remove the volume. # vxedit -g diskgroup -rf rm sparevol
5
Verify that the relocation daemon (vxrelocd) is running. If not, start it as follows: # vxrelocd root &
6
Remove the spare flags from three of the four disks.
# vxedit -g diskgroup set spare=off datadg01 # vxedit -g diskgroup set spare=off datadg02 # vxedit -g diskgroup set spare=off datadg03 7
Create a 100-MB concatenated mirrored volume called spare2vol. # vxassist -g diskgroup make spare2vol 100m layout=mirror
8
Save the output of vxprint -thf to a file. # vxprint -thf > savedvxprint
9
Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume. You are going to simulate disk failure on one of the disks. Decide which disk you are going to fail. Open a console screen. For example, the volume spare2vol uses datadg02 and datadg04:
         Device Name   Disk Media Name
Disk 1   c1t2d0s2      datadg02
Disk 2   c1t3d0s2      datadg04
10 Next, you simulate disk failure by removing the public and private regions of
one of the disks in the volume. In the commands, substitute the appropriate disk device name: # fmthard -d 3:0:0:0:0 /dev/rdsk/c1t2d0s2 # fmthard -d 4:0:0:0:0 /dev/rdsk/c1t2d0s2 11 An error occurs when you start I/O to the volume. You can view the error on the console or in tail -f /var/adm/messages. A summary of the mail can be viewed in /var/mail/root.
Start I/O to the volume using the command: # dd if=/dev/zero of=/dev/vx/rdsk/diskgroup/volume_name &
12 Run vxprint -rth and compare the output to the vxprint output that you
saved earlier. What has occurred? Hot relocation has taken place. The failed disk has a status of NODEVICE. VxVM has relocated the mirror of the failed disk onto the designated spare disk.
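One convenient way to compare is to diff the saved output (the savedvxprint file created in step 8) against the current state:
# vxprint -thf | diff savedvxprint -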
13 In VEA, view the disks. Notice that the disk is in the disconnected state.
14 Run vxdisk -o alldgs list. What do you notice?
This disk is displayed as a failed disk. 15 Rescan for all attached disks.
# vxdctl enable 16 In VEA, view the status of the disks and the volume.
Highlight the volume and click each of the tabs in the right pane. You can also select Actions—>Volume View and Actions—>Disk View to view status information. 17 View the status of the disks and the volume from the command line.
# vxdisk -o alldgs list # vxprint -thf 18 Recover the disk by replacing the private and public regions on the disk.
# vxdisksetup -i c1t2d0 format=sliced 19 Bring the disk back under VxVM control and into the disk group.
# vxdg -g diskgroup -k adddisk datadg02=c1t2d0 20 In VEA, undo hot relocation for the disk.
Right-click the disk group and select Undo Hot Relocation. In the dialog box, select the disk for which you want to undo hot relocation and click OK. After the task has completed, the alert on the disk group should be removed. Alternatively, from the command line, run: # vxunreloc -g datadg datadg02 21 Wait until the volume is fully recovered before continuing. Check to ensure
that the disk and the volume are fully recovered. # vxdisk -o alldgs list # vxprint -thf 22 Reboot and then remove the volume.
# vxedit -g diskgroup -rf rm spare2vol 23 Turn off any spare flags from your disks that you set during this lab.
# vxedit -g diskgroup set spare=off datadg04
Appendix C VxVM/VxFS Command Reference
VxVM Command Quick Reference
This section contains some frequently used commands and options described in the VERITAS Volume Manager training. For more information on specific commands, see the VERITAS Volume Manager manual pages.
Disk Operations
Task
Command
Initialize disk
vxdisksetup -i device (CDS disk) vxdisksetup -i device format=sliced (sliced disk) or vxdiskadm option, “Add or initialize one or more disks”
Uninitialize disk
vxdiskunsetup device
List disks
vxdisk -o alldgs list
List disk header
vxdisk -g diskgroup list diskname|device
Evacuate a disk
vxevac -g diskgroup from_disk to_disk
Rename a disk
vxedit -g diskgroup rename oldname newname
Set a disk as a spare
vxedit -g diskgroup set spare=on|off diskname
Unrelocate a disk
vxunreloc -g diskgroup original_diskname
Disk Group Operations
Task
Command
Create disk group
vxdg init diskgroup diskname=device (CDS disk group) vxdg init diskgroup diskname=device cds=off (non-CDS disk group)
Add disk to disk group
vxdg -g diskgroup adddisk diskname=device
Deport disk group
vxdg deport diskgroup
Import disk group
vxdg import diskgroup
Destroy disk group
vxdg destroy diskgroup
List disk groups
vxdg list
List specific disk group details
vxdg list diskgroup
Remove disk from disk group
vxdg -g diskgroup rmdisk diskname
Upgrade disk group version
vxdg [-T version] upgrade diskgroup
Move an object between disk groups
vxdg move sourcedg targetdg object...
Split objects between disk groups
vxdg split sourcedg targetdg object...
Join disk groups
vxdg join sourcedg targetdg
List objects affected by a disk group move operation
vxdg listmove sourcedg targetdg object...
Display bootdg
vxdg bootdg
Display defaultdg
vxdg defaultdg
Set defaultdg
vxdctl defaultdg diskgroup
Manually back up the disk group configuration
vxconfigbackup diskgroup
Perform precommit analysis of a restore
vxconfigrestore -p diskgroup
Restore the disk group configuration
vxconfigrestore -c [-l directory] diskgroup
Subdisk Operations
Task
Command
Create a subdisk
vxmake -g diskgroup sd subdiskname diskname offset length
Remove a subdisk
vxedit -g diskgroup rm subdisk_name
Display subdisk info
vxprint -st vxprint -l subdisk_name
Associate a subdisk to a plex
vxsd assoc plex_name subdisk_name
Dissociate a subdisk
vxsd dis subdisk_name
Plex Operations
Task
Command
Create a plex
vxmake -g diskgroup plex plex_name sd=subdisk_name,…
Associate a plex (to a volume)
vxplex -g diskgroup att vol_name plex_name
Dissociate a plex
vxplex dis plex_name
Remove a plex
vxedit -g diskgroup rm plex_name
List all plexes
vxprint -lp
Detach a plex
vxplex -g diskgroup det plex_name
Attach a plex
vxplex -g diskgroup att vol_name plex_name
Volume Operations
Task
Command
Create a volume
vxassist -g diskgroup make vol_name size layout=format diskname or vxmake -g diskgroup -U fsgen vol vol_name len=size plex plex_name
Remove a volume
vxedit -g diskgroup -rf rm vol_name or vxassist -g diskgroup remove volume vol_name
Display a volume
vxprint -g diskgroup -vt vol_name vxprint -g diskgroup -l vol_name
Change volume attributes
vxedit -g diskgroup set attribute=value vol_name vxvol -g diskgroup set attribute=value vol_name
Resize a volume
vxassist -g diskgroup growto vol_name new_length vxassist -g diskgroup growby vol_name length_change vxassist -g diskgroup shrinkto vol_name new_length vxassist -g diskgroup shrinkby vol_name length_change vxresize -g diskgroup vol_name [+|-]length
Resize a dynamic LUN
vxdisk -g diskgroup resize disk_name length=attribute
Change volume read policy
vxvol -g diskgroup rdpol round vol_name
vxvol -g diskgroup rdpol prefer vol_name preferred_plex_name
vxvol -g diskgroup rdpol select vol_name
Start/Stop volumes
vxvol start vol_name
Start all volumes
vxvol startall
Start all volumes in a dg
vxvol -g diskgroup startall
Stop a volume
vxvol stop vol_name
Stop all volumes
vxvol stopall
Recover a volume
vxrecover -sn vol_name
List unstartable volumes
vxinfo [vol_name]
Mirror an existing plex
vxassist -g diskgroup mirror vol_name or vxmake -g diskgroup plex plex_name sd=subdisk_name vxplex -g diskgroup att vol_name plex_name
Relayout a volume
vxassist -g diskgroup relayout vol_name layout=new_layout [attributes...]
To or from a layered layout
vxassist -g diskgroup convert vol_name layout=new_layout [attributes...]
Add a log to a volume
vxassist -g diskgroup addlog vol_name
Create and mount a VxFS file system on a volume
mkfs -F vxfs /dev/vx/rdsk/diskgroup/vol_name mkdir /mount_point mount -F vxfs /dev/vx/dsk/diskgroup/vol_name /mount_point
Run a Storage Expert rule
rule_name -g diskgroup run
Display rule description
rule_name info
Display rule attributes
rule_name list
Display default attributes
rule_name check
DMP, DDL, and Task Management
Task
Command
Manage tasks
vxtask list vxtask monitor
Manage device discovery layer (DDL)
Discover new devices
vxdisk scandisks new
List supported disk arrays
vxddladm listsupport
Exclude support for an array
vxddladm excludearray libname=library
vxddladm excludearray vid=vid pid=pid
Reinclude support
vxddladm includearray libname=library
vxddladm includearray vid=vid pid=pid
List excluded arrays
vxddladm listexclude
List supported JBODs
vxddladm listjbod
Add/remove JBOD support
vxddladm addjbod vid=vid pid=pid
vxddladm rmjbod vid=vid pid=pid
Add a foreign device
vxddladm addforeign blockdir=path chardir=path
Manage dynamic multipathing (DMP)
List controllers on system
vxdmpadm listctlr all
Display subpaths
vxdmpadm getsubpaths ctlr=ctlr
Display DMP nodes
vxdmpadm getdmpnode nodename=nodename
Enable/disable I/O to controller
vxdmpadm enable ctlr=ctlr
vxdmpadm disable ctlr=ctlr
Display enclosure attributes
vxdmpadm listenclosure all
Rename an enclosure
vxdmpadm setattr enclosure orig_name name=new_name
Enable statistics gathering
vxdmpadm iostat start
Reset statistics counters
vxdmpadm iostat reset
Display stats for all paths
vxdmpadm iostat show all
Change the I/O policy
vxdmpadm setattr enclosure enc_name iopolicy=policy
Set path attributes
vxdmpadm setattr path path_name pathtype=type
Using VxVM Commands: Examples
Initialize disks c1t0d0, c1t1d0, c1t2d0, c2t0d0, c2t1d0, and c2t2d0:
# vxdisksetup -i c1t0d0
# vxdisksetup -i c1t1d0
# vxdisksetup -i c1t2d0
# vxdisksetup -i c2t0d0
# vxdisksetup -i c2t1d0
# vxdisksetup -i c2t2d0
Create a disk group named datadg and add the six disks: # vxdg init datadg datadg01=c1t0d0 datadg02=c1t1d0 datadg03=c1t2d0 # vxdg -g datadg adddisk datadg04=c2t0d0 datadg05=c2t1d0 datadg06=c2t2d0
Using a top-down technique, create a RAID-5 volume named datavol of size 2 GB on the six disks (5 + log). Also, create and mount a UFS file system on the volume: # vxassist -g datadg make datavol 2g layout=raid5 datadg01 datadg02 datadg03 datadg04 datadg05 datadg06 # newfs /dev/vx/rdsk/datadg/datavol # mkdir /datamnt # mount /dev/vx/dsk/datadg/datavol /datamnt
Remove the volume datavol: # umount /datamnt # vxvol stop datavol # vxedit -g datadg -r rm datavol
Using a bottom-up technique, create a two-way mirrored volume named datavol02 of size 1 GB using disks datadg01 and datadg04:
• 1 GB = 2097152 sectors
• Subdisks should be cylinder aligned.
• If disk uses 1520 sectors/cylinder, subdisk size = 2097600 sectors.
# vxmake -g datadg sd sd01 datadg01,0,2097600
# vxmake -g datadg sd sd02 datadg04,0,2097600
# vxmake -g datadg plex plex01 sd=sd01:0/0
# vxmake -g datadg plex plex02 sd=sd02:0/0
# vxmake -g datadg -U fsgen vol datavol02 plex=plex01,plex02
# vxvol start datavol02
Change the permissions of the volume so that dba is the owner and dbgroup is the group: # vxedit set user=dba group=dbgroup mode=0744 datavol02
Destroy the volume and remove the disks from the disk group datadg. Also, remove disks from Volume Manager control: # vxedit -g datadg -rf rm datavol02 # vxdg -g datadg rmdisk datadg01 datadg02 datadg03 datadg04 datadg05 # vxdg deport datadg # vxdiskunsetup c1t1d0 # vxdiskunsetup c1t2d0 # vxdiskunsetup c1t3d0...
Advanced vxmake Operation: Create a three-way striped volume: # vxmake -g acctdg sd sd01 acctdg01,0,1520000 # vxmake -g acctdg sd sd02 acctdg02,0,1520000 # vxmake -g acctdg sd sd03 acctdg03,0,1520000 # vxmake -g acctdg plex plex1 layout=stripe ncolumn=3 stwidth=64k sd=sd01:0/0,sd02:1/0,sd03:2/0 # vxmake -g acctdg -U fsgen vol datavol05 plex=plex1 # vxvol -g acctdg start datavol05
Advanced vxmake Operation: Create a RAID 0+1 volume with a DRL Log: # vxmake -g acctdg sd sd01 acctdg01,0,194560 # vxmake -g acctdg sd sd02 acctdg02,0,194560 # vxmake -g acctdg sd sd03 acctdg03,0,194560 # vxmake -g acctdg sd sd04 acctdg04,0,194560 # vxmake -g acctdg sd logsd acctdg01,194560,2 # vxmake -g acctdg plex plex1 layout=stripe ncolumn=2 stwidth=64k sd=sd01:0/0,sd02:1/0 # vxmake -g acctdg plex plex2 layout=stripe ncolumn=2 stwidth=64k sd=sd03:0/0,sd04:1/0 # vxmake -g acctdg plex logplex log_sd=logsd # vxmake -g acctdg -U fsgen vol datavol06 plex=plex1,plex2,logplex # vxvol -g acctdg start datavol06
VxFS Command Quick Reference
This section contains some VxFS commands and examples. For more information on specific commands, see the VERITAS File System manual pages.
Setting Up a File System
Task
Command
Create a VERITAS file system
mkfs [fstype] [generic_options] [-o specific_options] special [size]
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
Options
-o N  Check VxFS structure without writing to device.
-o version=n  Create VxFS with different layout version.
-o bsize=size  Create VxFS with a specific block size. size is the block size in bytes.
-o logsize=size  Create VxFS with a specific logging area size. size is the number of file system blocks to be used for the intent log.
Mount a VERITAS file system
mount [fstype] [generic_options] [-r] [-o specific_options] special mount_point # mount -F vxfs /dev/vx/dsk/datadg/datavol /mydata
List mounted file systems
mount -v
List mounted file systems in the file system table format
mount -p
Unmount a mounted file system
umount special|mount_point # umount /mnt
Unmount all mounted file systems
umount -a
Determine the file system type
fstyp [-v] special # fstyp /dev/dsk/c0t6d0s0
Report free disk blocks and inodes
df [-F vxfs] [generic_options] [-o s] [special|mount] # df -F vxfs /mnt
Check the consistency of and repair a file system
fsck [-F vxfs] [generic_options] [-y|Y] [-n|N] special # fsck -F vxfs /dev/vx/rdsk/datadg/datavol
Online Administration
Task
Command
Resize a VERITAS file system
fsadm [-b newsize] [-r rawdev] mount_point # /usr/lib/fs/vxfs/fsadm -b 1024000 -r /dev/vx/rdsk/datadg/datavol /mnt vxresize [-bsx] [-F vxfs] [-g diskgroup] [-t tasktag] volume new_length [medianame] # vxresize -F vxfs -g datadg datavol 5g Note: The vxresize command automatically resizes the underlying volume. The fsadm command does not.
Dump a file system
vxdump [options] mount_point
Restore a file system
vxrestore [options] mount_point
Upgrade the VxFS layout
vxupgrade [-n new_version] [-r rawdev] mount_point # vxupgrade -n 6 /mnt
Display layout version number
vxupgrade mount_point
Convert a file system to VxFS
vxfsconvert [-s size] [-efnNvyY] special # vxfsconvert /dev/vx/rdsk/datadg/datavol
Activate file change log
fcladm on mount_point
Benchmarking
Task
Command
Create different combinations of I/O workloads
vxbench -w workload [options] filename . . . # vxbench -w write -i iosize=8,iocount=131072 /mnt/testfile01 # vxbench -w rand_write -i iosize=8,iocount=131072, maxfilesize=1048576 /mnt/testfile01
List vxbench command options
vxbench -h
Workloads
read, write, rand_read, rand_write, rand_mixed, mmap_read, mmap_write
Options
-h  Display help
-P  Use processes and threads (default)
-p  Use processes
-t  Use threads
-m  Lock I/O buffers in memory
-s  Print summary results
-v  Print per-thread results
-k  Print throughput in kbytes/sec
-M  Print throughput in mbytes/sec
-i  Specify suboptions
-i Suboptions
nrep=n  Repeat I/O loop n times
nthreads=n  Specify the number of threads
iosize=n  Specify I/O size (in kilobytes)
fsync  Perform an fsync on the file
remove  Remove each file after the test
iocount=n  Specify the number of I/Os
reserveonly  Only reserve space for the file
maxfilesiz=n  Max offset for random I/O
randseed=n  Seed value for random number generator
rdpct=n  Read percentage of the job mix
Managing Extents
Task
Command
List file names and inode information
ff [-F vxfs] [generic_options] [-o s] special
Generate path names from inode numbers for a VxFS file system
ncheck [-F vxfs] [generic_options] [-o options] special
Set extent attributes
setext [-e extent_size] [-f flags] [-r reservation] file
Options
-e  Specify a fixed extent size.
-r  Preallocate, or reserve, space for a file.
-f  Set allocation flags.
Flags
align  Align extents to the start of allocation units.
chgsize  Add the reservation into the file.
contig  Allocate the reservation contiguously.
noextend  File may not be extended after reservation is used.
noreserve  Space reserved is allocated only until the close of the file, and then is freed.
trim  Reservation is reduced to the current file size after the last close.
Display extent attributes
getext [-f] [-s] file . . .
Options
-f  Do not print the filename.
-s  Do not print output for files without fixed extent sizes or reservations.
Use extent-aware versions of standard commands, such as mv, cp, and cpio
-e warn  Warns if extent information cannot be preserved (default).
-e force  Causes a copy to fail if extent information cannot be preserved.
-e ignore  Does not try to preserve extent information.
Defragmenting a File System
Task
Command
Report on directory fragmentation
fsadm -D mount_point
Report on extent fragmentation
fsadm -E [-l largesize] mount_point
Defragment, or reorganize, a file system
fsadm [-d] [-D] [-e] [-E] [-s] [-v] [-l largesize] [-a days] [-t time] [-p passes] [-r rawdev] mount_point
# fsadm -edED /mnt1
Options
-d  Reorganize directories.
-a  Move aged files to the end of the directory. Default is 14 days.
-e  Reorganize extents.
-D  Report on directory fragmentation.
-E  Report on extent fragmentation.
-v  Report reorganization activity in verbose mode.
-l  Size of a file that is considered large. Default is 64 blocks.
-t  Maximum length of time to run, in seconds.
-p  Maximum number of passes to run. Default is five passes.
-s  Print a summary of activity at the end of each pass.
-r  Pathname of the raw device to read to determine file layout and fragmentation. Used when fsadm cannot determine the raw device.
Reorganize a file system to support files > 2 GB
# fsadm -o largefiles /mnt1
Intent Logging
Task
Command
Check the consistency of and repair a VERITAS file system. By default the fsck utility replays the intent log instead of doing a full structural file system check.
fsck [-F vxfs] [generic_options] [-y|Y] [-n|N] [-o full,nolog] [-o p] special # fsck -F vxfs /dev/vx/rdsk/datadg/datavol
Perform a full file system check without the intent log
# fsck -F vxfs -o full,nolog special # fsck -F vxfs -o full,nolog /dev/vx/rdsk/datadg/datavol
Options
-m  Checks, but does not repair, a file system before mounting.
-n|N  Assumes a response of no to all prompts by fsck.
-V  Echoes the command line, but does not execute.
-y|Y  Assumes a response of yes to all prompts by fsck.
-o full  Perform a full file system check after log replay.
-o nolog  Do not perform log replay.
-o p  Check two file systems in parallel.
Resize the intent log
# fsadm -F vxfs -o log=size[,logdev=device] mount_point
Alter default logging behavior
mount [-F vxfs] [generic_options] [-o specific_options] special mount_point
mount -F vxfs -o tmplog /dev/vx/dsk/datadg/datavol /mnt
Options
-o log  All structural changes are logged.
-o delaylog  Default option in which some logging is delayed.
-o tmplog  Intent logging is almost always delayed.
-o blkclear  Guarantees that storage is initialized before allocation.
-o logiosize=size  Sets a specific I/O size to be used for logging.
I/O Types and Cache Advisories
Task
Command
Alter the way in which VxFS handles buffered I/O operations
mount -F vxfs [generic_options] -o mincache=suboption special mount_point mount -F vxfs -o mincache=closesync /dev/vx/dsk/datadg/datavol /mnt Options mincache=closesync mincache=direct mincache=dsync mincache=unbuffered mincache=tmpcache
Alter the way in which VxFS handles I/O requests for files opened with the O_SYNC and O_DSYNC flags
mount -F vxfs [generic_options] -o convosync=suboption special mount_point mount -F vxfs -o convosync=closesync /dev/vx/dsk/datadg/datavol /mnt Options convosync=closesync convosync=direct convosync=dsync convosync=unbuffered convosync=delay
File System Tuning
Task
Command
Set tuning parameters for mounted file systems
vxtunefs [-ps] [-f filename] [-o parameter=value] [{mount_point | block_special}]...
# vxtunefs -o write_pref_io=32768 /mnt
Options
-f filename  Specifies a parameters file other than the default /etc/vx/tunefstab
-p  Prints tuning parameters
-s  Sets new tuning parameters
Tuning Parameters
read_ahead  Enables enhanced read ahead to detect patterns.
read_pref_io  Preferred read request size. Default is 64K.
read_nstream  Desired number of parallel read requests to have outstanding at one time. Default is 1.
write_pref_io  Preferred write request size. Default is 64K.
write_nstream  Desired number of parallel write requests to have outstanding at one time. Default is 1.
discovered_direct_iosz  I/O requests larger than this value are handled as discovered direct I/O. Default is 256K.
hsm_write_prealloc  Improves performance when using HSM applications with VxFS.
initial_extent_size  Default initial extent size, in file system blocks.
max_direct_iosz  Maximum size of a direct I/O request.
max_diskq  Maximum disk queue generated by a single file. Default is 1M.
max_seqio_extent_size  Maximum size of an extent. Default is 2048 file system blocks.
qio_cache_enable  Enables or disables caching on Quick I/O for Databases files. Default is disabled. To enable caching, you set qio_cache_enable=1.
write_throttle  Limits dirty pages per file that a file system generates before flushing pages to disk.
Display current tuning parameters
vxtunefs mount_point # vxtunefs /mnt
Set read-ahead size
Use vxtunefs to set the tuning parameters read_pref_io and read_nstream. Read-ahead size = (read_pref_io x read_nstream)
Set write-behind size
Use vxtunefs to set the tuning parameters write_pref_io and write_nstream. Write-behind size = (write_pref_io x write_nstream)
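As a worked illustration of these formulas (the parameter values are examples only), setting read_pref_io=65536 and read_nstream=4 yields a read-ahead size of 65536 x 4 = 256K:
# vxtunefs -o read_pref_io=65536 /mnt
# vxtunefs -o read_nstream=4 /mnt
# vxtunefs /mnt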
Controlling Users
Task
Command
Create a quotas files
touch /mount_point/quotas touch /mount_point/quotas.grp
Turn on quotas for a mounted file system
vxquotaon [-u|-g] mount_point # vxquotaon -u /mnt
Mount a file system and turn on quotas at the same time
mount -F vxfs -o quota|usrquota|grpquota special mount_point # mount -F vxfs -o quota /dev/dsk/c0t5d0s2 /mnt
Invoke the quota editor
vxedquota username|UID|groupname|GID # vxedquota rsmith
Modify the quota time limit
vxedquota -t
View quotas for a user
vxquota -v username|groupname # vxquota -v rsmith
Display summary of quotas and disk usage
vxrepquota mount_point # vxrepquota /mnt
Display a summary of ownership and usage
vxquot mount_point # vxquot /mnt
Turn off quotas for a mounted file system
vxquotaoff [-u|-g] mount_point # vxquotaoff /mnt
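Putting the quota commands together, a typical sequence for enabling and checking user quotas on a mounted file system might look like the following (the mount point and user name reuse the examples above; the limits themselves are set interactively in the editor invoked by vxedquota):
# touch /mnt/quotas
# vxquotaon -u /mnt
# vxedquota rsmith
# vxquota -v rsmith
# vxrepquota /mnt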
Set or modify an ACL for a file
setfacl [-r] -s acl_entries file
setfacl [-r] -md acl_entries file
setfacl [-r] -f acl_file file
# setfacl -m user:bob:r-- myfile
# setfacl -d user:scott myfile
# setfacl -s user::rwx,group::r--,user:maria:r--,mask:rw-,other:--- myfile
Options
-s   Set an ACL for a file.
-m   Add new or modify ACL entries for a file.
-d   Remove an ACL entry for a user.
Elements in an ACL Entry: entry_type:[uid|gid]:permissions
entry_type    Entry type: user, group, other, or mask.
uid|gid       User or group name or identification number.
permissions   Read, write, and/or execute, indicated by rwx.
Display ACL entries for a file
getfacl filename
# getfacl myfile
Copy existing ACL entries from one file to another file
getfacl file1 | setfacl -f - file2
# getfacl myfile | setfacl -f - newfile
QuickLog
Create a volume or format a disk partition to contain the QuickLog device
vxassist -g diskgroup make qlog_volume size [vxvm_disk] # vxassist -g datadg make qvol01 32m
Build the QuickLog volume layout
qlogmk -g diskgroup vxlog[x] qlog_volume # qlogmk -g datadg vxlog1 qvol01
Enable a QuickLog device
mount -F vxfs -o qlog= special mount_point
# mount -F vxfs -o qlog= /dev/vx/dsk/datadg/datavol /mnt
or
qlogenable [qlog_device] mount_point
# qlogenable /mnt
Disable logging by QuickLog without unmounting a VERITAS File System
qlogdisable mount_point # qlogdisable /mnt
Detach a QuickLog volume from its QuickLog device
qlogrm qlog_volume # qlogrm qvol01
Remove the QuickLog volume from the underlying VxVM volume
vxedit -g diskgroup -rf rm qlog_volume # vxedit -g datadg -rf rm qvol01
Display status of QuickLog devices, QuickLog volumes, and VxFS file systems
qlogprint
Print statistical data for QuickLog devices, QuickLog volumes, and VxFS file systems
qlogstat [-dvf] [-l qlogdev] [-i interval] [-c count]
Options
-d            Report statistics for all QuickLog devices only.
-v            Report statistics for all QuickLog volumes only.
-f            Report statistics for all logged VxFS file systems only.
-l qlogdev    Report statistics for a specified QuickLog device only.
-i interval   Print the change in statistics after every interval seconds. Default is 10 seconds.
-c count      Stop after printing the statistics count times. Default is 1.
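Taken together, a typical QuickLog setup might look like the following sequence, which reuses the disk group, volume, and mount point names from the examples above:
# vxassist -g datadg make qvol01 32m
# qlogmk -g datadg vxlog1 qvol01
# mount -F vxfs -o qlog= /dev/vx/dsk/datadg/datavol /mnt
# qlogstat -d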
Quick I/O
Enable Quick I/O at mount time
mount -F vxfs -o qio special mount_point
Disable Quick I/O
mount -F vxfs -o noqio special mount_point
Treat a file as a raw character device
filename::cdev:vxfs:
mydbfile::cdev:vxfs:
Create a Quick I/O file through a symbolic link
qiomkfile [-h [headersize]] [-a] [-s size] [-e|-r size] file
# qiomkfile -s 100m /database/dbfile
Options
-h   For Oracle database files. Creates a file with additional space allocated for the Oracle header.
-s   Preallocates space for a file.
-e   For Oracle database files. Extends the file by a specified amount to allow Oracle tablespace resizing.
-r   For Oracle database files. Increases the file to a specified size to allow Oracle tablespace resizing.
-a   Creates a symbolic link with an absolute pathname. Default behavior creates relative pathnames.
Obtain Quick I/O statistics
qiostat [-i interval] [-c count] [-l] [-r] file...
# qiostat -i 5 /database/dbfile
Options
-c count      Stop after printing statistics count times.
-i interval   Print updated I/O statistics after every interval seconds.
-l            Print the statistics in long format. Also prints the caching statistics when Cached Quick I/O is enabled.
-r            Reset statistics instead of printing them.
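For example, a preallocated Quick I/O file can be created and then monitored as follows (the path reuses the example above; the database accesses the file through the symbolic link that qiomkfile creates, which resolves to the ::cdev:vxfs: interface described earlier):
# qiomkfile -s 100m /database/dbfile
# qiostat -i 5 -c 3 /database/dbfile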
Enable Cached Quick I/O for all files in a file system
vxtunefs -s -o qio_cache_enable=1 mount_point # vxtunefs -s -o qio_cache_enable=1 /oradata
Disable Cached Quick I/O for a file
qioadmin -S filename=OFF mount_point # qioadmin -S /oradata/sal/hist.dat=OFF /oradata
Appendix D VxVM/VxFS 3.5 to 4.0 Differences Quick Reference
VxVM and VxFS 3.5 to 4.0 Differences Quick Reference
This section lists the major differences between VxVM/VxFS 3.5 and VxVM/VxFS 4.0.
New product naming
Foundation Suite has been renamed as Storage Foundation; other related products (DBE, DBEAC, SPFSHA) are also renamed.
New product packaging and licensing bundles
There are new licensing bundles, including Storage Foundation QuickStart, Standard, and Enterprise.
New software packages
Several new software packages have been added to support the new VM and FS features. These include VRTSalloc, VRTSap, VRTScpi, VRTSddlpr, VRTSfppm, VRTSperl, VRTStep.
New installation scripts
New installation scripts have been added to simplify installation of multiple VERITAS products as well as individual VERITAS products. The installer script available in 3.5 is still available and is the easiest way to install multiple VERITAS products from within one menu system. New individual product installation scripts have been added (installvm, installfs, installsf) that simplify installation of individual products. The vxinstall script is still included, but it has been considerably reduced in the scope of what it sets up.
Simplified upgrade procedures
You can now upgrade a product through the installation scripts (installvm, installfs, installsf).
Display/Change default disk layout attributes
A new task is available in the vxdiskadm menu interface that enables you to change or display default disk layout attributes, such as default private region size, and the offset of the private and public regions.
Dynamic LUN expansion
You can resize a LUN while preserving the existing data. You do not have to stop using the volume, so you can proceed without disruption to the application. VxVM can keep access open at all times to all volumes using the device that is being resized.
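The resize itself is performed with the vxdisk resize operation. A hypothetical invocation is shown below; the disk name and length value are examples only, and the exact attribute syntax should be confirmed against the vxdisk(1M) manual page:
# vxdisk -g datadg resize datadg01 length=8g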
Removal of rootdg requirement
The rootdg disk group is no longer required for VxVM to function. The vxinstall script no longer prompts you to set up rootdg.
Reserved disk groups
There are three disk group names that are reserved and cannot be used to name any disk groups that you create: bootdg, defaultdg, and nodg.
Configuration Backup and Restore
Using the disk group configuration backup and restoration (CBR) functionality, you can back up and restore all configuration data for VxVM disk groups and VxVM objects, such as volumes, that are configured within the disk groups. After the disk group configuration has been restored and the volumes enabled, the user data in the public region is available again without the need to restore it from backup media.
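CBR is driven by the vxconfigbackup and vxconfigrestore utilities. A sketch of a backup followed by a staged restore is shown below; the disk group name is an example, and the invocation syntax and the precommit/commit options should be verified against the vxconfigbackup(1M) and vxconfigrestore(1M) manual pages:
# vxconfigbackup datadg
# vxconfigrestore -p datadg
# vxconfigrestore -c datadg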
DMP enhancements
There are several new DMP commands that support features such as partial device discovery (the ability to exclude/include specific devices during a rescan); changing the I/O policy for balancing I/O across multiple paths; setting enclosure attributes; and adding foreign devices. These new features are available through the command line as well as in VEA.
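For illustration, the following vxdmpadm commands show the kind of operations described above; the controller and enclosure names and the I/O policy value are examples, and the exact attribute names should be checked against the vxdmpadm(1M) manual page:
# vxdmpadm listenclosure all
# vxdmpadm getsubpaths ctlr=c2
# vxdmpadm setattr enclosure enc0 iopolicy=round-robin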
Storage Expert
The VERITAS Storage Expert is a new utility designed to help you monitor volumes and locate poor volume configurations. The utility provides advice on how to improve volume configurations.
SDS to VxVM conversion
A new utility is available that enables you to convert objects under SDS control to VxVM control. There are many restrictions on this utility in this release; for example, it is only supported for Solaris 8 environments.
VxFS layout version 6
This new VxFS disk layout is implemented to support the new VxFS features in this release.
Larger file/file system support
VxFS 4.0 enables the creation of files and file systems up to 8 exabytes in size.
File change log
VxFS 4.0 includes a new file change log that tracks changes to files and directories of a file system. This log saves time when used by applications that typically scan an entire file system searching for modifications since a prior scan.
Intent log resizing
You can dynamically increase or decrease the size of the intent log.
New VEA functionality
In 4.0, more VxVM and VxFS functions can be performed through VEA, including administering storage checkpoints, managing the file change log, managing QuickLog devices, and new DMP functions.
DCO volume versioning
The internal layout of the DCO volume changed in VxVM 4.0 to support new features, such as full-sized and space-optimized instant snapshots.
Instant volume snapshots
VxVM 4.0 introduces instant snapshots that offer advantages such as immediate availability for use, quick refreshment, and easier configuration and administration. Instant snapshots can be full-sized, space-optimized, or cascaded.
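A minimal sketch of creating a full-sized instant snapshot is shown below; the volume names are examples, and the attribute syntax for vxsnap make should be confirmed against the vxsnap(1M) manual page:
# vxsnap -g datadg prepare datavol
# vxsnap -g datadg make source=datavol/newvol=snapvol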
File system restore from storage checkpoints
Storage checkpoints can now be used by backup/restore applications to restore individual files or an entire file system.
Storage checkpoint quotas
Storage checkpoint quotas are a new feature that helps you manage the space used by checkpoints by setting limits on the number of blocks used by a primary fileset and all of its related storage checkpoints.
Volume sets
The volume set feature is provided to support the multi-device enhancement to VERITAS File System. Volume sets allow several volumes to be represented by a single logical object. A volume set is a container for multiple different volumes. Each volume can have its own geometry. This feature allows file systems to make the best use of the different performance and availability characteristics of the underlying volumes.
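As a sketch of the idea (the object names are hypothetical, and the exact vxvset operands should be confirmed against the vxvset(1M) manual page), a volume set can be built from existing volumes like this:
# vxvset -g datadg make myvset datavol
# vxvset -g datadg addvol myvset datavol2
# vxvset -g datadg list myvset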
Multivolume file systems
Using multivolume support, a single file system can be created over multiple volumes, each volume having its own properties. Benefits of this include online hierarchical storage management (HSM), raw volume encapsulation, storage checkpoint separation, and metadata separation.
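Continuing the hypothetical volume set from the previous sketch, a multivolume file system is created by running mkfs over the volume set device rather than over a single volume (assuming disk layout version 6):
# mkfs -F vxfs /dev/vx/rdsk/datadg/myvset
# mount -F vxfs /dev/vx/dsk/datadg/myvset /mnt1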
Quality of Storage Service
The VxFS Quality of Storage Service (QoSS) feature enables the mapping of more than one device into a single file system. Using different devices can enhance performance for applications that access specific types of files by managing where the types of files are located.
Cross-platform data sharing
Cross-platform data sharing (CDS) provides you with the ability to share data concurrently across heterogeneous systems. With CDS, all systems have direct access to the physical devices that hold the data, enabling them to retrieve and store data or move objects between machines that are running different operating systems.
Intelligent storage provisioning
The intelligent storage provisioning (ISP) service is an alternative to the traditional method of creating and managing volumes. ISP aids you in managing large sets of storage by providing an allocation engine that chooses which storage to use based on the capabilities that you specify for the volumes to be created. ISP creates volumes from available storage with the capabilities that you specify by consulting the externally defined rule base for creating volumes and comparing it to the properties of the storage that is available.
Glossary A access control list (ACL) A list of users or groups who have access privileges to a specified file. A file may have its own ACL or may share an ACL with other files. ACLs allow detailed access permissions for multiple users and groups. active/active disk arrays This type of multipathed disk array enables you to access a disk in the disk array through all the paths to the disk simultaneously. active/passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. agent A process that manages predefined VERITAS Cluster Server (VCS) resource types. Agents bring resources online, take resources offline, and monitor resources to report any state changes to VCS. When an agent is started, it obtains configuration information from VCS and periodically monitors the resources and updates VCS with the resource status. AIX coexistence label Data on disk that identifies the disk to the AIX volume manager (LVM) as being controlled by VxVM. The contents have no relation to VxVM ID Blocks. alert An indication that an error or failure has occurred on an object on the system. When an object fails or experiences an error, an alert icon appears. alert icon An icon that indicates that an error or failure has occurred on an object on the system. Alert icons usually appear in the status area of the VEA main window and on the affected object’s group icon.
allocation unit A basic structural component of VxFS. The VxFS Version 4 and later file system layout divides the entire file system space into fixed size allocation units. The first allocation unit starts at block zero, and all allocation units are a fixed length of 32K blocks. application volume A volume created by the intelligent storage provisioning (ISP) feature of VERITAS Volume Manager (VxVM). associate The process of establishing a relationship between VxVM objects; for example, a subdisk that has been created and defined as having a starting point within a plex is referred to as being associated with that plex. associated plex A plex associated with a volume. associated subdisk A subdisk associated with a plex. asynchronous writes A delayed write in which the data is written to a page in the system’s page cache, but is not written to disk before the write returns to the caller. This improves performance, but carries the risk of data loss if the system crashes before the data is flushed to disk. atomic operation An operation that either succeeds completely or fails and leaves everything as it was before the operation was started. If the operation succeeds, all aspects of the operation take effect at once and the intermediate states of change are invisible. If any aspect of the operation fails, then the operation aborts without leaving partial changes. attached A state in which a VxVM object is both associated with another object and enabled for use. Glossary-1
attribute Allows the properties of a LUN to be defined in an arbitrary conceptual space, such as a manufacturer or location.
button A window control that the user clicks to initiate a task or display another object (such as a window or menu).
B
C
back-rev disk group A disk group created using a version of VxVM released prior to the release of CDS. Adding CDS functionality rolls over to the latest disk group version number.
capability A feature that is provided by a volume. For example, a volume may exhibit capabilities, such as performance and reliability to various degrees. Applies to the ISP feature of VxVM.
block The minimum unit of data transfer to or from a disk or array.
CDS disk A disk whose contents and attributes are such that the disk can be used for CDS as part of a CDS disk group. In contrast, a non-CDS disk can neither be used for CDS nor be part of a CDS disk group.
Block-Level Incremental Backup (BLI Backup) A VERITAS backup capability that does not store and retrieve entire files. Instead, only the data blocks that have changed since the previous backup are backed up. boot disk A disk used for booting purposes. This disk may be under VxVM control for some operating systems. boot disk group A disk group that contains the disks from which the system may be booted. bootdg A reserved disk group name that is an alias for the name of the boot disk group. browse dialog box A dialog box that is used to view and/or select existing objects on the system. Most browse dialog boxes consist of a tree and grid. buffered I/O During a read or write operation, data usually goes through an intermediate file system buffer before being copied between the user buffer and disk. If the same data is repeatedly read or written, this file system buffer acts as a cache, which can improve performance. See direct I/O and unbuffered I/O.
CDS disk group A VxVM disk group whose contents and attributes are such that the disk group can be used to provide for cross-platform data sharing. In contrast, a non-CDS disk group (that is, a back-rev disk group or a current-rev disk group) cannot be used for cross-platform data sharing. A CDS disk group can only contain CDS disks. CFS VERITAS Cluster File System. check box A control button used to select optional settings. A check mark usually indicates that a check box is selected. children Objects that belong to an object group. clean node shutdown The ability of a node to leave the cluster gracefully when all access to shared volumes has ceased. clone pool A storage pool that contains one or more full-sized instant volume snapshots of volumes within a data pool. Applies to the ISP feature of VxVM. cluster A set of host machines (nodes) that share a set of disks.
cluster file system A VxFS file system mounted on a selected volume in cluster (shared) mode. cluster manager An externally provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership. cluster mounted file system A shared file system that enables multiple hosts to mount and perform file operations on the same file. A cluster mount requires a shared storage device that can be accessed by other cluster mounts of the same file system. Writes to the shared device can be performed concurrently from any host on which the cluster file system is mounted. To be a cluster mount, a file system must be mounted using the mount -o cluster option. See local mounted file system. cluster-shareable disk group A disk group in which the disks are shared by multiple hosts (also referred to as a shared disk group).
configuration database A set of records containing detailed information on existing VxVM objects (such as disk and volume attributes). A single copy of a configuration database is called a configuration copy. contiguous file A file in which data blocks are physically adjacent on the underlying media. Cross-platform Data Sharing (CDS) Sharing data between heterogeneous systems (such as SUN and HP), where each system has direct access to the physical devices used to hold the data and understands the data on the physical device. current-rev disk group A disk group created using a version of VxVM providing CDS functionality; however, the CDS attribute is not set. If the CDS attribute is set for the disk group, the disk group is called a CDS disk group. CVM The cluster functionality of VERITAS VxVM.
D
column A set of one or more subdisks within a striped plex. Striping is achieved by allocating data alternately and evenly across the columns within a plex.
data blocks Blocks that contain the actual data belonging to files and directories.
command log A log file that contains a history of VEA tasks performed in the current session and previous sessions. Each task is listed with the task originator, the start/finish times, the task status, and the low-level commands used to perform the task.
data change object (DCO) A VxVM object that is used to manage information about the FastResync maps in the DCO log volume. Both a DCO object and a DCO log volume must be associated with a volume to implement Persistent FastResync on that volume.
concatenation A layout style characterized by subdisks that are arranged sequentially and contiguously.
data pool The first storage pool that is created within a disk group. Applies to the ISP feature of VxVM.
configuration copy A single copy of a configuration database.
data stripe This represents the usable data portion of a stripe and is equal to the stripe minus the parity region.
data synchronous writes A form of synchronous I/O that writes the file data to disk before the write returns, but only marks the inode for later update. If the file size changes, the inode will be written before the write returns. In this mode, the file data is guaranteed to be on the disk before the write returns, but the inode modification times may be lost if the system crashes. DCO log volume A special volume that is used to hold Persistent FastResync change maps. defragmentation Reorganizing data on disk to keep file data blocks physically adjacent so as to reduce access times. detached A state in which a VxVM object is associated with another object, but not enabled for use.
device name The device name or address used to access a physical disk, such as c0t0d0s2 on Solaris, c0t0d0 on HP-UX, hdisk1 on AIX, and hda on Linux. In a SAN environment, it is more convenient to use enclosure-based naming, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk’s number within the enclosure, separated by an underscore (for example, enc0_2). The term disk access name can also be used to refer to a device name. dialog box A window in which the user submits information to VxVM. Dialog boxes can contain selectable buttons and fields that accept information.
dirty region logging The procedure by which the VxVM monitors and logs modifications to a plex. A bitmap of changed regions is kept in an associated subdisk called a log subdisk. disabled path A path to a disk that is not available for I/O. A path can be disabled due to real hardware failures or if the user has used the vxdmpadm disable command on that controller. discovered direct I/O Discovered Direct I/O behavior is similar to direct I/O and has the same alignment constraints, except writes that allocate storage or extend the file size do not require writing the inode changes before returning to the application.
Device Discovery Layer (DDL) A facility of VxVM for discovering disk attributes needed for VxVM DMP operation.
direct extent An extent that is referenced directly by an inode.
direct I/O An unbuffered form of I/O that bypasses the file system’s buffering of data. With direct I/O, the file system transfers data directly between the disk and the user-supplied buffer. See buffered I/O and unbuffered I/O.
disk A collection of read/write data blocks that are indexed and can be accessed fairly quickly. Each disk has a universally unique identifier. disk access name The name used to access a physical disk. The c#t#d#s# syntax identifies the controller, target address, disk, and partition. The term device name can also be used to refer to the disk access name. disk access records Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by the VxVM in deciding how to access and manipulate the disk that is defined by the disk access record.
disk array A collection of disks logically arranged into an object. Arrays tend to provide benefits, such as redundancy or improved performance. disk array serial number The serial number of the disk array; it is usually printed on the disk array cabinet or can be obtained by issuing a vendor-specific SCSI command to the disks on the disk array. This number is used by the DMP subsystem to uniquely identify a disk array. disk controller The controller (HBA) connected to the host or the disk array that is represented as the parent node of the disk by the operating system; it is called the disk controller by the multipathing subsystem of VxVM.
For example, if a disk is represented by the device name: /devices/sbus@1f,0/ QLGC,isp@2,10000/sd@8,0:c then the disk controller for the disk sd@8,0:c is: QLGC,isp@2,10000 This controller (HBA) is connected to the host. disk enclosure An intelligent disk array that usually has a backplane with a built-in Fibre Channel loop, and which permits hot-swapping of disks. disk group A collection of disks that are under VxVM control and share a common configuration. A disk group configuration is a set of records containing detailed information on existing VxVM objects (such as disk and volume attributes) and their relationships. Each disk group has an administrator-assigned name and an internally defined unique ID.
disk group ID A unique identifier used to identify a disk group. disk ID A universally unique identifier that is given to each disk and can be used to identify the disk, even if it is moved. disk media name A logical or administrative name chosen for the disk, such as disk03. The term disk name is also used to refer to the disk media name. disk media record A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name. disk name A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03. The term disk media name is also used to refer to a disk name. dissociate The process by which any link that exists between two VxVM objects is removed. For example, dissociating a subdisk from a plex removes the subdisk from the plex and adds the subdisk to the free space pool. dissociated plex A plex dissociated from a volume. dissociated subdisk A subdisk dissociated from a plex. distributed lock manager A lock manager that runs on different systems and ensures consistent access to distributed resources. dock To separate or attach the main window and a subwindow. Dynamic Multipathing (DMP) The method that VxVM uses to manage two or more hardware paths directing I/O to a single drive.
E
Fibre Channel A collective name for the fiber optic technology that is commonly used to set up a Storage Area Network (SAN).
enabled path A path to a disk that is available for I/O. encapsulation A process that converts existing partitions on a specified disk to volumes. If any partitions contain file systems, the file system table entries are modified so that the file systems are mounted on volumes instead. Encapsulation is not applicable on some systems. enclosure A disk array. enclosure-based naming An alternative disk naming method, beneficial in a SAN environment, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk’s number within the enclosure, separated by an underscore (for example, enc0_2). extent A group of contiguous file system data blocks that are treated as a unit. An extent is defined by a starting block and a length. extent attributes The extent allocation policies associated with a file. external quotas file A quotas file (named quotas) must exist in the root directory of a file system for quota-related commands to work. See internal quotas file and quotas file.
F fabric mode disk A disk device that is accessible on a Storage Area Network (SAN) through a Fibre Channel switch. FastResync A fast resynchronization feature that is used to perform quick and efficient resynchronization of stale mirrors, and to increase the efficiency of the snapshot mechanism.
file system A collection of files organized together into a structure. The UNIX file system is a hierarchical structure consisting of directories and files. file system block The fundamental minimum size of allocation in a file system. This is equivalent to the ufs fragment size. file system snapshot An exact copy of a mounted file system at a specific point in time. Used to perform online backups. fileset A collection of files within a file system. fixed extent size An extent attribute associated with overriding the default allocation policy of the file system. fragmentation The ongoing process on an active file system in which the file system is spread further and further along the disk, leaving unused gaps or fragments between areas that are in use. This leads to degraded performance because the file system has fewer options when assigning a file to an extent. free disk pool Disks that are under VxVM control, but that do not belong to a disk group. free space An area of a disk under VxVM control that is not allocated to any subdisk or reserved for use by any other VxVM object. free subdisk A subdisk that is not associated with any plex and has an empty putil[0] field.
G
gap A disk region that does not contain VxVM objects (subdisks). GB Gigabyte (2^30 bytes or 1024 megabytes). graphical view A window that displays a graphical view of objects. In VEA, the graphical views include the Object View window and the Volume Layout Details window. grid A tabular display of objects and their properties. The grid lists VxVM objects, disks, controllers, or file systems. The grid displays objects that belong to the group icon that is currently selected in the object tree. The grid is dynamic and constantly updates its contents to reflect changes to objects. group icon The icon that represents a specific object group. GUI Graphical User Interface.
H hard limit The hard limit is an absolute limit on system resources for individual users for file and data block usage on a file system. See quotas. host A machine or system.
hostid A string that identifies a host to VxVM. The hostid for a host is stored in its volboot file, and is used in defining ownership of disks and disk groups. hot relocation A technique of automatically restoring redundancy and access to mirrored and RAID-5 volumes when a disk fails. This is done by relocating the affected subdisks to disks designated as spares and/or free space in the same disk group.
hot swap Refers to devices that can be removed from, or inserted into, a system without first turning off the power supply to the system. HP-UX coexistence label Data on disk that identifies the disk to the HP volume manager (LVM) as being controlled by VxVM. The contents of this label are identical to the contents of the VxVM ID block.
I I/O clustering The grouping of multiple I/O operations to achieve better performance. indirect address extent An extent that contains references to other extents, as opposed to file data itself. A single indirect address extent references indirect data extents. A double indirect address extent references single indirect address extents. indirect data extent An extent that contains file data and is referenced through an indirect address extent. initiating node The node on which the system administrator is running a utility that requests a change to VxVM objects. This node initiates a volume reconfiguration. inode A unique identifier for each file within a file system, which also contains metadata associated with that file. inode allocation unit A group of consecutive blocks that contain inode allocation information for a given fileset. This information is in the form of a resource summary and a free inode map. Intelligent Storage Provisioning (ISP) ISP enables you to organize and manage your physical storage by creating application volumes. ISP creates volumes from available storage with the required
capabilities that you specify. To achieve this, ISP selects storage by referring to the templates for creating volumes. intent The intent of an ISP application volume is a conceptualization of its purpose as defined by its characteristics and implemented by a template. intent logging A logging scheme that records pending changes to the file system structure. These changes are recorded in a circular intent log file. internal quotas file VxFS maintains an internal quotas file for its internal usage. The internal quotas file maintains counts of blocks and inodes used by each user. See external quotas file and quotas.
local mounted file system A file system mounted on a single host. The single host mediates all file system writes to storage from other clients. To be a local mount, a file system cannot be mounted using the mount –o cluster option. See cluster mounted file system. log plex A plex used to store a RAID-5 log. The term log plex may also be used to refer to a dirty region logging plex. log subdisk A subdisk that is used to store a dirty region log. LUN Logical Unit Number. Each disk in an array has a LUN. Disk partitions may also be assigned a LUN.
M
J JBOD The common name for an unintelligent disk array which may, or may not, support the hot-swapping of disks. The name is derived from “just a bunch of disks.”
K K Kilobyte (210 bytes or 1024 bytes).
L large file A file larger than 2 gigabytes. VxFS supports files up to two terabytes in size. large file system A file system more than 2 gigabytes in size. VxFS supports file systems up to 32 terabytes in size. latency For file systems, this typically refers to the amount of time it takes a given file system operation to return to the user. launch To start a task or open a window.
main window The main VEA window. This window contains a tree and grid that display volumes, disks, and other objects on the system. The main window also has a menu bar and a toolbar. master node A node that is designated by the software as the “master” node. Any node is capable of being the master node. The master node coordinates certain VxVM operations. mastering node The node to which a disk is attached. This is also known as a disk owner. MB Megabyte (2^20 bytes or 1024 kilobytes). menu A list of options or tasks. A menu item is selected by pointing to the item and clicking the mouse. menu bar A bar that contains a set of menus for the current window. The menu bar is typically placed across the top of a window. metadata Structural data describing the attributes of files on a disk.
mirror A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each mirror is one copy of the volume with which the mirror is associated. The terms mirror and plex can be used synonymously. mirroring A layout technique that mirrors the contents of a volume onto multiple plexes. Each plex duplicates the data stored on the volume, but the plexes themselves may have different layouts. multipathing Where there are multiple physical access paths to a disk connected to a system, the disk is called multipathed. Any software residing on the host (for example, the DMP driver) that hides this fact from the user is said to provide multipathing functionality. multivolume file system A single file system that has been created over multiple volumes, with each volume having its own properties.
N node In an object tree, a node is an element attached to the tree. In a cluster environment, a node is a host machine in a cluster. node abort A situation where a node leaves a cluster (on an emergency basis) without attempting to stop ongoing operations. node join The process through which a node joins a cluster and gains access to shared disks. nonpersistent FastResync A form of FastResync that cannot preserve its maps across reboots of the system because it stores its change map in memory.
O object An entity that is defined to and recognized internally by VxVM. The VxVM objects are: volume, plex, subdisk, disk, and disk group. There are actually two types of disk objects—one type for the physical aspect of the disk and the other for the logical aspect. object group A group of objects of the same type. Each object group has a group icon and a group name. In VxVM, object groups include disk groups, disks, volumes, controllers, free disk pool disks, uninitialized disks, and file systems. object location table (OLT) The information needed to locate important file system structural elements. The OLT is written to a fixed location on the underlying media (or disk). object location table replica A copy of the OLT in case of data corruption. The OLT replica is written to a fixed location on the underlying media (or disk). object tree A dynamic hierarchical display of VxVM objects and other objects on the system. Each node in the tree represents a group of objects of the same type. Object View Window A window that displays a graphical view of the volumes, disks, and other objects in a particular disk group. The objects displayed in this window are automatically updated when object properties change. This window can display detailed or basic information about volumes and disks.
P page file A fixed-size block of virtual address space that can be mapped onto any of the physical addresses available on a system.
parity A calculated value that can be used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is also calculated by performing an exclusive OR (XOR) procedure on data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be recreated from the remaining data and the parity. parity stripe unit A RAID-5 volume storage region that contains parity information. The data contained in the parity stripe unit can be used to help reconstruct regions of a RAID-5 volume that are missing because of I/O or disk failures. partition The standard division of a physical disk device, as supported directly by the operating system and disk drives. path When a disk is connected to a host, the path to the disk consists of the Host Bus Adapter (HBA) on the host, the SCSI or fiber cable connector and the controller on the disk or disk array. These components constitute a path to a disk. A failure on any of these results in DMP trying to shift all I/Os for that disk onto the remaining (alternate) paths. pathgroup In case of disks that are not multipathed by vxdmp, VxVM will see each path as a disk. In such cases, all paths to the disk can be grouped. This way only one of the paths from the group is made visible to VxVM. persistent FastResync A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO log volume on disk.
persistent state logging A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging. physical disk The underlying storage device, which may or may not be under VxVM control. platform block Data placed in sector 0, which contains OS-specific data for a variety of platforms that require its presence for proper interaction with each of those platforms. The platform block allows a disk to masquerade as if it was initialized by each of the specific platforms. plex A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each plex is one copy of the volume with which the plex is associated. preallocation The preallocation of space for a file so that disk blocks will physically be part of a file before they are needed. Enabling an application to preallocate space for a file guarantees that a specified amount of space will be available for that file, even if the file system is otherwise out of space. primary fileset A fileset that contains the files that are visible and accessible to users. primary path In active/passive type disk arrays, a disk can be bound to one particular controller on the disk array or owned by a controller. The disk can then be accessed using the path through this particular controller. private disk group A disk group in which the disks are accessed by only one specific host.
private region A region of a physical disk used to store private, structured VxVM information. The private region contains a disk header, a table of contents, and a configuration database.
The table of contents maps the contents of the disk. The disk header contains a disk ID. All data in the private region is duplicated for extra reliability. properties window A window that displays detailed information about a selected object. public region A region of a physical disk managed by VxVM that contains available space and is used for allocating subdisks.
Q Quick I/O file A regular VxFS file that is accessed using the ::cdev:vxfs: extension. Quick I/O for Databases Quick I/O is a VERITAS File System feature that improves database performance by minimizing read/write locking and eliminating double buffering of data. This allows online transactions to be processed at speeds equivalent to that of using raw disk devices, while keeping the administrative benefits of file systems. QuickLog VERITAS QuickLog is a high performance mechanism for receiving and storing intent log information for VxFS file systems. QuickLog increases performance by exporting intent log information to a separate physical volume. quotas Quota limits on system resources for individual users for file and data block usage on a file system. See hard limit and soft limit.
quotas file The quotas commands read and write the external quotas file to get or change usage limits. When quotas are turned on, the quota limits are copied from the external quotas file to the internal quotas file. See external quotas file, internal quotas file, and quotas.
R radio buttons A set of buttons used to select optional settings. Only one radio button in the set can be selected at any given time. These buttons toggle on or off. RAID A Redundant Array of Independent Disks (RAID) is a disk array set up with part of the combined storage capacity used for storing duplicate information about the data stored in that array. This makes it possible to regenerate the data if a disk failure occurs. read-writeback mode A recovery mode in which each read operation recovers plex consistency for the region covered by the read. Plex consistency is recovered by reading data from blocks of one plex and writing the data to all other writable plexes. reservation An extent attribute associated with preallocating space for a file. root disk The disk containing the root file system. This disk may be under VxVM control. root disk group In versions of VxVM prior to 4.0, a special private disk group had to exist on the system. The root disk group was always named rootdg. This requirement does not apply to VxVM 4.x and higher. root file system The initial file system mounted as part of the UNIX kernel startup sequence.
root partition The disk region on which the root file system resides. root volume The VxVM volume that contains the root file system, if such a volume is designated by the system configuration.
shared disk group A disk group in which the disks are shared by multiple hosts (also referred to as a cluster-shareable disk group). shared VM disk A VxVM disk that belongs to a shared disk group.
rootability The ability to place the root file system and the swap device under VxVM control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure.
shared volume A volume that belongs to a shared disk group and is open on more than one node at the same time. Shortcut menu A context-sensitive menu that only appears when you click a specific object or area.
rule A statement written in the VERITAS ISP language that specifies how a volume is to be created.
slave node A node that is not designated as a master node.
S
slice The standard division of a logical disk device. The terms partition and slice are sometimes used synonymously.
scroll bar A sliding control that is used to display different portions of the contents of a window. Search window The VEA search tool. The Search window provides a set of search options that can be used to search for objects on the system.
snapped file system A file system whose exact image has been used to create a snapshot file system.
secondary path In active/passive type disk arrays, the paths to a disk other than the primary path are called secondary paths. A disk is supposed to be accessed only through the primary path until it fails, after which ownership of the disk is transferred to one of the secondary paths. sector A unit of size, which can vary between systems. A sector is commonly 512 bytes. sector size Sector size is an attribute of a disk drive (or SCSI LUN for an arraytype device) that is set when the drive is formatted. Sectors are the smallest addressable unit of storage on the drive, and are the units in which the device performs I/O.
snapshot A point-in-time copy of a volume (volume snapshot) or a file system (file system snapshot).
snapshot file system An exact copy of a mounted file system at a specific point in time. Used to do online backups. See file system snapshot. soft limit The soft limit is lower than a hard limit. The soft limit can be exceeded for a limited time. There are separate time limits for files and blocks. See hard limit and quota. spanning A layout technique that permits a volume (and its file system or database) that is too large to fit on a single disk to span across multiple physical disks. sparse plex A plex that is not as long as the volume or that has holes (regions of the plex that do not have a backing subdisk).
splitter A bar that separates two panes of a window (such as the object tree and the grid). A splitter can be used to adjust the sizes of the panes. status area An area of the main window that displays an alert icon when an object fails or experiences some other error. Storage Area Network (SAN) A networking paradigm that provides easily reconfigurable connectivity between any subset of computers, disk storage and interconnecting hardware such as switches, hubs and bridges. storage checkpoint A facility that provides a consistent and stable view of a file system or database image and keeps track of modified data blocks since the last checkpoint. storage pool A policy-based container within a disk group in VxVM, for use by ISP, that contains LUNs and volumes. storage pool definition A grouping of template sets that defines the characteristics of a storage pool. Applies to the ISP feature of VxVM. storage pool policy Defines how a storage pool behaves when more storage is required, and when you try to create volumes whose capabilities are not permitted by the current templates. Applies to the ISP feature of VxVM. storage pool set A bundled definition of the capabilities of a data pool and its clone pools. Applies to the ISP feature of VxVM. stripe A set of stripe units that occupy the same positions across a series of columns. stripe size The sum of the stripe unit sizes that compose a single stripe across all columns being striped.
stripe unit Equally sized areas that are allocated alternately on the subdisks (within columns) of each striped plex. In an array, this is a set of logically contiguous blocks that exist on each disk before allocations are made from the next disk in the array. A stripe unit may also be referred to as a stripe element. stripe unit size The size of each stripe unit. The default stripe unit size is 32 sectors (16K). A stripe unit size has also historically been referred to as a stripe width. striping A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex. structural fileset A special fileset that stores the structural elements of a VxFS file system in the form of structural files. These files define the structure of the file system and are visible only when using utilities such as the file system debugger. subdisk A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes. super-block A block containing critical information about the file system such as the file system type, layout, and size. The VxFS super-block is always located 8192 bytes from the beginning of the file system and is 8192 bytes long. swap area A disk region used to hold copies of memory pages swapped out by the system pager process. swap volume A VxVM volume that is configured for use as a swap area. synchronous writes A form of synchronous I/O that writes the file data to disk, updates the inode times, and writes
the updated inode to disk. When the write returns to the caller, both the data and the inode have been written to disk.
U
T
UFS The UNIX file system; derived from the 4.2 Berkeley Fast File System.
task properties window A window that displays detailed information about a task listed in the Task Request Monitor window. Task Request Monitor A window that displays a history of tasks performed in the current VEA session. Each task is listed with the task originator, the task status, and the start/finish times for the task. TB Terabyte (2^40 bytes or 1024 gigabytes). template A meaningful collection of ISP rules that provide a capability for a volume. Also known as a volume template. template set Consists of related capabilities and templates that have been collected together for convenience to create ISP volumes. throughput For file systems, this typically refers to the number of I/O operations in a given unit of time. toolbar A set of buttons used to access VEA windows. These include another main window, a task request monitor, an alert monitor, a search window, and a customize window. transaction A set of configuration changes that succeed or fail as a group, rather than individually. Transactions are used internally to maintain consistent configurations. tree A dynamic hierarchical display of objects on the system. Each node in the tree represents a group of objects of the same type.
ufs The UNIX file system type. Used as a parameter in some commands.
unbuffered I/O I/O that bypasses the file system cache to increase I/O performance. This is similar to direct I/O, except when a file is extended. For direct I/O, the inode is written to disk synchronously; for unbuffered I/O, the inode update is delayed. See buffered I/O and direct I/O. uninitialized disks Disks that are not under VxVM control. user template Consists of related capabilities and templates that have been collected together for convenience for creating ISP application volumes.
V VCS VERITAS Cluster Server. VEA VERITAS Enterprise Administrator graphical user interface. VM disk A disk that is both under VxVM control and assigned to a disk group. VM disks are sometimes referred to as Volume Manager disks or simply disks. In the graphical user interface, VM disks are represented iconically as cylinders labeled D. VMSA Volume Manager Storage Administrator, an earlier version of the VxVM GUI used prior to VxVM version 3.5.
volboot file A small file that is used
to store the host ID of the system on which VxVM is installed and the values of bootdg and defaultdg.
volume A virtual disk or entity that is made up of portions of one or more physical disks. A volume represents an addressable range of disk blocks used by applications such as file systems or databases. A volume is a collection of from one to 32 plexes. volume configuration device The volume configuration device (/dev/vx/config) is the interface through which all configuration changes to the volume device driver are performed. volume device driver The driver that forms the virtual disk drive between the application and the physical device driver level. The volume device driver is accessed through a virtual disk device node whose character device nodes appear in /dev/vx/rdsk, and whose block device nodes appear in /dev/vx/dsk.
vxconfigd The VxVM configuration daemon, which is responsible for making changes to the VxVM configuration. This daemon must be running before VxVM operations can be performed. vxfs The VERITAS File System type. Used as a parameter in some commands. VxFS VERITAS File System. VxVM VERITAS Volume Manager. VxVM ID block Data on disk that indicates the disk is under VxVM control. The VxVM ID Block provides dynamic VxVM private region location, GUID, and other information.
volume event log The volume event log device (/dev/vx/event) is the interface through which volume driver events are reported to the utilities. Volume Layout Window A window that displays a graphical view of a volume and its components. The objects displayed in this window are not automatically updated when the volume’s properties change. volume set A volume set allows several volumes to be treated as a single object with one logical I/O interface. Applies to the ISP feature of VxVM. volume template A meaningful collection of ISP rules that provide a capability for a volume. Also known as a template. Volume to Disk Mapping Window A window that displays a tabular view of volumes and their underlying disks. This window can also display details such as the subdisks and gaps on each disk.
Index
Files and Directories
A
/dev/vx/dsk 3-9 /dev/vx/rdsk 3-9 /etc/default/fs 5-29 /etc/default/vxassist 4-17 /etc/default/vxdisk 3-12 /etc/default/vxencap 3-12 /etc/default/vxse 6-40 /etc/filesystems 4-15, 5-26, 5-27 /etc/fs/vxfs 5-28 /etc/fstab 4-15, 5-27 /etc/group 2-44 /etc/init.d/isisd 2-43 /etc/rc2.d/S50isisd 2-41 /etc/system 2-13, 7-20 /etc/vfs 5-29 /etc/vfstab 3-43, 4-15, 5-27, 7-20 /etc/vx/elm 2-9 /etc/vx/isis/Registry 2-45 /etc/vx/licenses/lic 2-9 /lost+found 5-39 /opt/VRTS/bin 5-28 /opt/VRTS/install/logs 2-21 /opt/VRTS/man 2-37 /opt/VRTS/vxse/vxvm 6-36 /opt/VRTSob/bin 2-39 /opt/VRTSob/bin/vxsvc 2-43 /opt/VRTSvxfs/sbin 5-28 /sbin 5-28 /sbin/fs 5-28 /sbin/init.d/isisd 2-43 /stand/system 2-13 /usr/lib/fs/vxfs 5-28 /usr/sbin 5-28 /var/adm/vx/veacmdlog 2-34, 6-24 /var/vx/isis/command.log 2-34 /var/vx/isis/vxisis.lock 2-43 /var/vx/isis/vxisis.log 2-43
aborting online relayout 6-20 ACTIVE 8-6 adding a disk to a disk group 3-13 address-length pair 5-36 AIX supported versions 2-4 AIX disk 1-5 AIX physical volume 1-5 allocation units 5-40 alternate boot disk creating 7-14 creating in CLI 7-16 creating in VEA 7-15 creating in vxdiskadm 7-15 reasons for creating 7-14 alternate mirror booting 7-18 array 1-10 atomic-copy resynchronization 8-5 autoconfig 3-25 autoimport 3-25, 3-37
B backing up the VxVM configuration 8-23 Bad Block Relocation Area 1-5 bad block revectoring 7-6 BBRA 1-5 blkclear mount option 5-54 block clustering 5-35 block device file 5-25 block-based allocation 5-35 boot block 1-4 boot disk 1-4, 7-5 creating an alternate 7-14 creating an alternate in CLI 7-16 creating an alternate in VEA 7-15 creating an alternate in vxdiskadm 7-15 encapsulating in vxdiskadm 7-11 Index-1
mirroring 7-14 mirroring in vxdiskadm 7-15 mirroring requirements 7-14 reasons for creating an alternate 7-14 unencapsulating 7-19 boot disk encapsulation 7-5 effect on /etc/system 7-12 effect on /etc/vfstab 7-13 file system requirements 7-8 planning 7-10 using vxdiskadm 7-11 viewing 7-12 boot disk errors 7-17 boot disk failure protecting against 7-14 boot mirror verification 7-18 bootdg 2-18, 3-9, 3-45, 7-10 booting from alternate mirror 7-18 bsize 5-31
C
CDS 1-13 CDS disk 1-14 CDS disk layout 1-13 cfgmgr 8-17 CFS 2-15 chfs 5-27 CLI 2-29 CLI commands in VEA 2-33 cluster 2-15 cluster environment 3-37 Cluster File System 2-15, 2-16 cluster functionality 2-15 cluster group 3-15 cluster management 3-6 Cluster Volume Manager licensing 2-16 col_switch 5-21, 5-23 column 4-5 command line interface 2-29, 2-36 command log file 2-34, 6-24 complete plex 1-16
concatenated 4-12 Concatenated Mirrored 4-13, 4-39, 4-41 concatenated volume 1-19, 4-4 creating 4-17 concatenation 1-19 advantages 4-8 disadvantages 4-8 concat-mirror 4-39 configuration backup and restoration 8-23 Console/Task History 2-31 controller 1-8 creating a layered volume 4-32 creating a volume 4-10 CLI 4-16 creating an alternate boot disk 7-14 crfs 5-26 cron 5-48 cross-platform data sharing 1-13
D
data change object 4-31 data consistency maintaining 8-4 data redundancy 1-20 databases on file systems 2-15 default disk group 2-18 defaultdg 2-18, 3-9 defragmentation scheduling 5-48 defragmenting a file system 5-46 defragmenting directories 5-47 defragmenting extents 5-46 delaylog mount option 5-54 deporting a disk group and renaming 3-33 CLI 3-35 to new host 3-33 VEA 3-34 vxdiskadm 3-35 destroying a disk group 3-45 CLI 3-45 VEA 3-45 devalias 7-18
devfsadm 8-17 device file 5-41 device naming enclosure-based 3-5 traditional 3-4 device naming scheme selecting 3-7 device naming schemes 3-4 device node 4-10 device path 3-24 devicetag 3-24 df 5-34, 5-44 directories defragmenting 5-47 directory fragmentation 5-42 dirty region log bitmaps 8-9 dirty region log size 8-9 dirty region logging 5-9, 7-8, 8-7, 8-8 disk adding in CLI 3-17 adding new 8-17 adding to a disk group in VEA 3-16 AIX 1-5 configuring for VxVM 3-11 creating sliced 3-27 displaying summary information 3-24 evacuating in CLI 3-29 evacuating in VEA 3-29 evacuating in vxdiskadm 3-29 HP-UX 1-5 Linux 1-6 managing spares in CLI 8-15 managing spares in vxdiskadm 8-14 naming 1-8 non-CDS 3-27 offlining 3-35 recognizing by operating system 8-17 removing 3-28 removing in VEA 3-30 renaming in CLI 3-32 renaming in VEA 3-32 replacing 8-16 replacing failed in vxdiskadm 8-18 replacing in CLI 8-18 replacing in VEA 8-18 reserving 8-15 setting up as spare in VEA 8-14
Solaris 1-4 uninitializing 3-31 unrelocating 8-19 unrelocating in CLI 8-20 unrelocating in VEA 8-19 unrelocating in vxdiskadm 8-20 viewing in CLI 3-23 viewing in VEA 3-19, 3-20 viewing information about 3-19 disk access name 3-12 disk access record 1-16, 3-12 disk array 1-10 multipathed 1-10 disk device naming 3-4 disk encapsulation data disks 7-4 disk enclosure 2-17 disk failure partial 8-11 disk flags 3-25 disk group adding a disk in CLI 3-17 adding a disk in VEA 3-16 adding disks to 3-13 clearing host locks 3-36 creating 3-13 creating a non-CDS 3-27 creating in CLI 3-17 creating in VEA 3-15 creating in vxdiskadm 3-17 definition 1-15 deporting 3-33 deporting in CLI 3-35 deporting in VEA 3-34 deporting in vxdiskadm 3-35 destroying 3-45 destroying in CLI 3-45 destroying in VEA 3-45 displaying deported 3-26 displaying free space in 3-26 displaying properties for 3-26 displaying reserved 3-9 forcing an import 3-37, 3-39 forcing an import in VEA 3-38 high availability 1-15, 3-8 importing 3-36 importing and renaming 3-36 importing and renaming in VEA 3-38
importing as temporary in CLI 3-39 importing as temporary in VEA 3-38 importing in CLI 3-39 importing in VEA 3-38 importing in vxdiskadm 3-39 moving between systems 3-41 moving in CLI 3-42 moving in VEA 3-41 moving in vxdiskadm 3-42 non-CDS 3-27 purpose 1-15, 3-8 removing a disk 3-28 renaming 3-43 renaming in CLI 3-44 renaming in VEA 3-43 reserved names 3-9 setting the default 3-10 shared 3-15 temporary import 3-37 upgrading the version 3-48 upgrading the version in CLI 3-49 disk group configuration 1-15 backup and restore 8-24 disk group ID 3-24 disk group import 1-12 disk group properties viewing 3-22 Disk Group Properties window 3-22 disk group split and join licensing 2-16 disk group versions 3-46 supported features 3-47 disk ID 3-24 disk initialization 3-11 disk label 1-4 disk layout 1-13 changing 3-12 disk media name 1-15, 3-12, 3-13 default 1-16 disk name 3-24 disk naming 3-13 AIX 1-9 enclosure-based 2-17 HP-UX 1-8 Linux 1-9 Solaris 1-8 disk properties 3-21
disk replacement 8-16 disk spanning 1-19 disk status Deported 3-20 Disconnected 3-20 External 3-20 Free 3-19 Imported 3-19 Not Setup 3-19 online 3-23 online invalid 3-23 disk structure 1-4 Disk View window 2-31, 3-20, 4-24 disk-naming scheme changing 3-7 disks adding to a disk group 3-13 displaying detailed information 3-24 evacuating data 3-29 removing in CLI 3-31 removing in vxdiskadm 3-30 renaming 3-32 uninitialized 3-11 DMP 1-12 DRL 8-8 dynamic LUN resizing 6-13 dynamic multipathing 1-12, 2-17, 3-6
E eeprom 7-10, 7-18 encapsulating root benefits 7-6 limitations 7-6 encapsulation 3-11, 7-4 effect on /etc/system 7-12 requirements for boot disk 7-5 requirements for data disk 7-4 root disk 7-5 unencapsulating a boot disk 7-19 enclosure 2-17, 3-7 enclosure-based naming 2-17, 3-5 benefits 3-6 evacuating a disk 3-29 CLI 3-29
VEA 3-29 vxdiskadm 3-29 evaluation license 2-6 exclusive OR 4-7 EXT2 5-28 EXT3 5-28 Extended File System 5-28 extended partition 1-6 extent 5-36 extent allocation unit state file 5-41 extent allocation unit summary file 5-41 extent fragmentation 5-43 extent-based allocation 5-35, 5-36 benefits 5-37 extents defragmenting 5-46
F fabric mode disks 3-7 FAILED 8-16 FAILING 8-16 FastResync 2-14, 4-13, 8-7 licensing 2-16 favorite host adding 2-42 removing 2-42 FCL 5-57 fdisk 1-6 Fibre Channel 2-17, 3-7 file change log 5-57 compared to intent log 5-57 file system adding in VEA 5-24 adding to a volume 4-14, 5-24 adding to a volume in CLI 5-25 allocation units 5-40 consistency checking 5-50 defragmenting 5-46 file change log 5-57 fragmentation 5-42 fragmentation reports 5-44 fragmentation types 5-42 intent log 5-49 intent log resizing 5-52
log I/O size 5-56 logging and performance 5-55 logging options 5-53 maximum sizes 5-32 mounting at boot 5-27 mounting in VEA 5-24 resizing 6-4, 6-11 resizing in VEA 6-8 resizing methods 6-6 structural components of 5-40 structural files 5-40 unmounting 5-33 unmounting in VEA 5-24 upgrading the layout 5-38 file system free space identifying 5-34 file system layout version displaying 5-39 file system type 4-14, 5-34 fileset header file 5-41 flags disk 3-25 FlashSnap 2-14 format 1-5 fragmentation 5-42 controlling 5-42 directory 5-42 extent 5-43 free disk pool 3-11 free extent map file 5-41 free space pool 3-12 fsadm 5-39, 5-42, 5-44, 6-11 fsck 5-49, 5-50 fsck pass 4-15 fstyp 5-34
G grid 2-31 group name 3-24
H help information in VEA 2-35 HFS 5-28
Hierarchical File System 5-28 high availability 1-12, 2-15, 3-40, 7-6 host removing favorites 2-42 host ID clearing at import 3-38 host locks clearing 3-36 hostid 2-6, 3-24 hot relocation 1-12 definition 8-11 failure detection 8-12 notification 8-12 process 8-12 recovery 8-12 selecting space 8-13 unrelocating a disk 8-19 HP-UX supported versions 2-4 HP-UX disk 1-5
I I/O rate controlling 6-31 I/O size controlling 6-31 imported 3-25 importing a disk group 3-36 and clearing host locks 3-36 and renaming 3-36 CLI 3-39 forcing 3-37 forcing in CLI 3-39 temporarily 3-37 VEA 3-38 vxdiskadm 3-39 initialize zero 4-13 inode 5-36 inode allocation unit file 5-41 inode list file 5-41 insf 8-17 installation manually installing VxVM 2-24 installation log file 2-21
installation menu 2-20 installation scripts 2-22 installboot 7-16 installer 2-19 installfs 2-19, 2-22 installing VEA 2-39 installing VxVM 2-19 initial configuration 2-28 manually on AIX 2-25 manually on HP-UX 2-24 manually on Linux 2-25 manually on Solaris 2-24 package space requirements 2-12 verifying on AIX 2-26 verifying on HP-UX 2-26 verifying on Linux 2-27 verifying on Solaris 2-26 verifying package installation 2-26 vxinstall 2-28 installp 2-19, 2-25 installsf 2-19, 2-22 installvm 2-19, 2-22 and upgrades 7-22 Intelligent Storage Provisioning 3-15, 4-31 intent log resizing 5-52 intent log replay parallel 5-51 intent logging 5-49 interfaces 2-29 command line interface 2-29 VERITAS Enterprise Administrator 2-29 vxdiskadm 2-29 iodelay 6-31 ioscan 8-17
J JFS 1-6, 5-28 JFS2 5-28 Journaled File System 5-28 journaling 5-49
K kernel issues and VxFS 2-13
L label file 5-41 largesize 5-44 layered volume 1-19, 1-20, 4-32 advantages 4-35 creating 4-32 creating in CLI 4-41 creating in VEA 4-41 disadvantages 4-36 preventing creation 4-13 viewing in CLI 4-42 viewing in VEA 4-42 layered volume layouts 4-37 license key 2-6 adding 2-9 viewing 2-9 licensing 2-6 for optional features 2-16 for upgrades 7-21 generating a license key 2-8 management utilities 2-10 obtaining a license key 2-6 viewing license keys 2-9 vLicense 2-8 Linux supported versions 2-4 Linux disk 1-6 listing installed packages 2-26 load balancing 1-12, 4-8 location code 1-9 log adding in CLI 5-12 adding in VEA 5-11 removing in CLI 5-12 removing in VEA 5-11 log file 5-41 log mount option 5-54 log plex 1-17, 8-8 log subdisks 8-8 logdisk 5-21
logging 4-13, 5-9, 8-8 and VxFS performance 5-55 dirty region logging 8-8 for mirrored volumes 5-9 RAID-5 5-10, 8-10 logging options for a file system 5-53 logical unit number 1-8 Logical Volume Manager 1-5 logiosize 5-56 logsize 5-31 logtype 4-21, 5-12 lsdev 8-17 lsfs 5-27 lslpp 2-27 LUN 1-8 and resizing VxVM structures 6-13 LVM 1-5
M man 2-37 manual pages 2-37 menu bar 2-31 metadata 5-35 mirror adding 5-4 adding in CLI 5-5 adding in VEA 5-5 booting from alternate 7-18 removing 5-6 removing by disk 5-7 removing by mirror 5-7 removing by quantity 5-7 removing in CLI 5-7 removing in VEA 5-7 mirror=ctlr 5-17 mirror=disk 5-17 mirror=enclr 5-17 mirror=target 5-17 mirror-concat 4-37 mirror-concat layout 1-19 mirrored volume 1-20, 4-6 creating 4-20, 4-21
mirroring 1-20 across controllers 5-18 across enclosures 5-19 advantages 4-9 disadvantages 4-9 enhanced 4-32 mirroring a volume 4-13 mirroring the boot disk 7-14 errors 7-17 requirements 7-14 vxdiskadm 7-15 mirrors adding 4-20 mirror-stripe 4-38 mirror-stripe layout 1-19, 4-33 mkdir 5-25 mkfs 5-25 mkfs options 5-30 bsize 5-31 largefiles 5-30 logsize 5-31 N 5-30 version 5-30 model 2-7 mount 5-25, 5-33, 5-53 mount at boot 4-15 CLI 5-27 mount options blkclear 5-53, 5-54 delaylog 5-53, 5-54 log 5-53, 5-54 tmplog 5-53, 5-54 mount point 4-14 mounting a file system VEA 5-24 mounting all file systems 5-33 moving a disk vxdiskadm 6-13 moving a disk group 3-41 CLI 3-42 VEA 3-41 vxdiskadm 3-42 multipathed disk array 1-10
N naming disk devices 3-4 naming disks defaults 3-13 ncheck 5-41 ncol 4-18 NEEDSYNC 8-6 New Volume wizard 4-11 newfs 5-25 nlog 4-21, 5-12 nmirror 4-20 node 2-15 nodg 2-18, 3-9 nolog 5-21 non-CDS disk group 3-27 noraid5log 5-21 nostripe 4-17
O Object Data Manager 1-9 object location table 5-40 object location table file 5-41 Object Properties window 2-31 object tree 2-31 off-host processing 2-15 offlining disks 3-35 online invalid status 3-23 online ready 3-25 online relayout 6-14 aborting 6-20 and log plexes 6-18 and sparse plexes 6-18 and volume length 6-18 and volume snapshots 6-18 continuing 6-20 in CLI 6-21 limitations 6-18 monitoring 6-20 pausing 6-20 process 6-16 reversing 6-18, 6-20 supported transformations 6-15
temporary storage space 6-17 online status 3-23 operating system versions 2-4 opt 7-7, 7-8 optional features for VxVM and VxFS 2-16 optional VxFS features 2-15 ordered allocation 5-20 concatenating columns 5-23 order of columns 5-22 order of mirrors 5-22 specifying in CLI 5-21 specifying in VEA 5-21 storage classes 5-23 ordered option 5-21 organization principle 3-15
P
packages 2-12 listing 2-26 space requirements 2-12 parallel log replay 5-51 parent task 6-26 parity 1-20, 4-7 partial disk failure 8-11 partition 1-5, 1-8 partitions after encapsulation 7-12 PATH 5-28, A-17, B-33 pausing online relayout 6-20 physical disk naming 1-8 Physical Volume Reserved Area 1-5 pkgadd 2-19, 2-24 pkginfo 2-26 plex 1-16, 4-6 definition 1-16 log 1-17 naming 1-16 sparse 1-17 types 1-17 plex name default 1-16 Preferences window 2-32
preferred plex read policy 5-13 primary partition 1-6 private 3-25 private region 1-13, 3-11 private region size 1-13 AIX 1-13 HP-UX 1-13 Linux 1-13 Solaris 1-13 projection 4-24 prtvtoc 7-10, 8-17 PTID 6-26 public region 1-13, 1-15 PVRA 1-5
Q
Quick I/O 2-15, 2-16 QuickLog 2-16, 5-56 quotas files 5-41
R RAID 1-18 RAID levels 1-18 RAID-0 1-19 RAID-0+1 1-19 RAID-1 1-19 RAID-1+0 1-19 RAID-5 1-19, 4-13 advantages 4-9 disadvantages 4-9 logging 5-10 RAID-5 column 4-7 default size 4-13 RAID-5 log 8-10 RAID-5 logging 8-7 RAID-5 volume 1-20, 4-7 creating 4-19 raw device file 5-25 read policy 5-13 changing in CLI 5-14 changing in VEA 5-14 preferred plex 5-13
round robin 5-13 selected plex 5-13 read-writeback resynchronization 8-6 recovering a volume 8-21 VEA 8-21 redundancy 1-20 registry settings modifying 2-45 relayout 6-14 aborting 6-20 pausing 6-20 resuming 6-20 reversing 6-20 Relayout Status Monitor window 6-20, 6-24 relocated subdisks viewing 8-20 relocating subdisks 8-13 removing a disk 3-28 CLI 3-31 VEA 3-30 vxdiskadm 3-30 removing a volume 4-43 CLI 4-43 VEA 4-43 renaming a disk 3-32 CLI 3-32 VEA 3-32 renaming a disk group 3-43 CLI 3-44 VEA 3-43 replacing a disk 8-16 CLI 8-18 VEA 8-18 replacing a failed disk vxdiskadm 8-18 replicated volume group 4-31 Rescan option 8-17 resilience 1-20 resilience level changing 6-23 resilient volume 1-20 resizing a dynamic LUN 6-13 resizing a file system 6-11 resizing a volume 6-4 VEA 6-8
with vxassist 6-10 with vxresize 6-9 resizing a volume and file system 6-6 resizing a volume with a file system 6-4 response file 2-21 resynchronization 8-4 atomic-copy 8-5 read-writeback 8-6 revectoring 7-6 reversing online relayout 6-20 rlink 4-31 rmfs 5-27 root 7-8, 7-12 root encapsulation 7-5 benefits 7-6 limitations 7-7 root plex errors 7-17 rootability 7-5 rootdg 2-18 rootvol 7-7 round robin read policy 5-13 rpm 2-19, 2-25, 2-27
S S95vxvm-recover 8-12 SAN 2-16, 2-17 SAN management 3-6 SANPoint Control QuickStart 2-16 scratch pad 6-16 security for VEA 2-44 selected plex read policy 5-13 shared 3-25 size of a volume 4-12 slice 1-5, 1-8 sliced disk 1-14, 3-27 slow attribute 6-32 snap object 4-31 software packages 2-11 Solaris supported versions 2-4
Solaris disk 1-4 space requirements 2-12 spanning 1-12 spare disks managing 8-14 managing in CLI 8-15 managing in VEA 8-14 managing in vxdiskadm 8-14 sparse plex 1-17 starting all volumes after renaming a disk group 3-44 status area 2-31 storage allocating for volumes 5-15 storage area network 2-16, 2-17 storage attributes specifying for volumes 5-15 specifying in CLI 5-16 specifying in VEA 5-16 storage cache 4-31 Storage Checkpoints licensing 2-16 Storage Expert 6-33 customizing default attributes 6-40 examples 6-38 list of rules 6-41 prerequisites 6-36 rule output 6-37 rule syntax 6-36 rules 6-33 rules engine 6-33 types of rules 6-34 stripe unit 4-5, 4-7 default size 4-12 striped 4-12 Striped Mirrored 4-13, 4-40, 4-41 striped volume 1-20, 4-5 creating 4-18 stripe-mirror 4-40 stripe-mirror layout 4-34 stripeunit 4-18 striping 1-19 advantages 4-8 disadvantages 4-9 subdisk 1-16
definition 1-16 subdisk name default 1-16 subvolume 4-32 summary file 2-21 superblock 5-40 support for VxVM 2-5 swap 7-5, 7-12 swapvol 7-7 swinstall 2-19, 2-24 swlist 2-26 SYNC 8-6
T tag 14 7-12 tag 15 7-12 target 1-8 task controlling 6-29 controlling progress rate 6-31 displaying information about 6-26 monitoring 6-28 slowing 6-32 Task History window 2-33, 6-24 task throttling 6-32 TASKID 6-26 tasks aborting 2-33 accessing through VEA 2-32 clearing history 2-33 managing 6-24 managing in VEA 6-24 pausing 2-33 resuming 2-33 throttling 2-33 technical support for VxVM 2-5 temporary storage area 6-17 throttling a task 6-32 tmplog mount option 5-54 toolbar 2-31 true mirror 4-6 true mirroring 1-20 type 3-24
U UFS 5-28, 7-8 allocation 5-35 uname 2-6 uninitialized disks 3-11 UNIX File System 5-28 unmounting a file system 5-33 VEA 5-24 unrelocating a disk 8-19 CLI 8-20 VEA 8-19 vxdiskadm 8-20 upgrade_finish 7-25 upgrade_start 6-7, 7-7, 7-25 upgrading Solaris operating system 7-28 VxVM and Solaris 7-30 VxVM software only 7-22 upgrading a disk group CLI 3-49 VEA 3-48 upgrading a disk group version 3-46 upgrading the file system layout 5-38 upgrading VxFS 7-33 upgrading VxVM 7-21 with installvm 7-23 upgrading VxVM manually 7-24 upgrading VxVM with scripts 7-25 use-nvramrc? 7-18 user interfaces 2-29 usr 7-7, 7-8, 7-12
V var 7-7, 7-8, 7-12 VEA 2-29 aborting a task 2-33 accessing tasks 2-32 adding a disk to a disk group 3-16 adding a file system to a volume 5-24 adding a log 5-11 adding a mirror 5-5 changing volume layout 6-19 changing volume read policy 5-14
clearing task history 2-33 command log file 2-34, 6-24 confirming server startup 2-43 controlling user access 2-44 creating a disk group 3-15 creating a layered volume 4-41 creating a volume 4-11 creating an alternate boot disk 7-15 deporting a disk group 3-34 destroying a disk group 3-45 disk properties 3-21 disk view 2-31 Disk View window 4-24 displaying the version 2-43 displaying volumes 4-23 evacuating a disk 3-29 help information 2-35 importing a disk group 3-38 installing 2-39 installing client on Windows 2-40 installing the server and client 2-39 main window 2-31 managing spare disks 8-14 menu bar 2-31 modifying registry settings 2-45 monitoring events and tasks 2-43 mounting a file system 5-24 moving a disk group 3-41 multiple host support 2-30 multiple views of objects 2-30 object properties 2-31 object tree 2-31 pausing a task 2-33 recovering a volume 8-21 remote administration 2-30 removing a disk 3-30 removing a mirror 5-7 removing a volume 4-43 renaming a disk 3-32 renaming a disk group 3-43 replacing a disk 8-18 resizing a volume 6-8 resuming a task 2-33 scanning disks 8-17 security 2-30 setting preferences 2-32 specifying ordered allocation 5-21 starting 2-41 starting the client 2-41 status area 2-31
stopping the server 2-43 Task History window 2-33, 6-24 Task Properties window 2-33 throttling a task 6-32 throttling tasks 2-33 toolbar 2-31 unmounting a file system 5-24 unrelocating a disk 8-19 upgrading a disk group version 3-48 viewing a layered volume 4-42 viewing CLI commands 2-33 viewing disk group properties 3-22 viewing disk information 3-19 viewing disks 3-20 Volume Layout window 4-27 Volume Properties window 4-28 Volume to Disk Mapping window 4-26 volume to disk mapping window 2-31 volume view 2-31 Volume View window 4-25 VERITAS Cluster File System 2-15 VERITAS Cluster Server 2-15 VERITAS Enterprise Administrator 2-29, 2-30 VERITAS FastResync 2-14 VERITAS File System 5-28 VERITAS FlashSnap 2-14 VERITAS Quick I/O for Databases 2-15 VERITAS QuickLog 2-16 VERITAS SANPoint Control QuickStart 2-16 VERITAS Storage Expert 6-33 VERITAS Volume Manager 2-15 VERITAS Volume Replicator 2-14, 4-13, 4-31 versioning and disk groups 3-46 VGDA 1-5 VGRA 1-5 virtual storage objects 1-11 vLicense 2-8 VMSA 2-39 vol_subdisk_num 1-16 volboot 3-9 volume 1-11, 3-12 accessing 1-11 adding a file system 4-14 adding a file system in CLI 5-25
adding a file system in VEA 5-24 adding a log in CLI 5-12 adding a log in VEA 5-11 adding a mirror 5-4 adding a mirror in CLI 5-5 adding a mirror in VEA 5-5 analyzing with Storage Expert 6-33 Concatenated Mirrored 4-39 creating 4-10 creating a layered volume 4-32 creating in CLI 4-16 creating in VEA 4-11 creating layered in CLI 4-41 creating layered in VEA 4-41 creating mirrored and logged 4-21 creating on specific disks 5-18 definition 1-11, 1-17 disk requirements 4-10 estimating expansion 4-22 estimating size 4-22 excluding storage from 5-18 expanding the size 6-4 layered layouts 4-37 logging 4-13 managing tasks 6-24 mirroring 4-13 mirroring across controllers 5-18 mirroring across enclosures 5-19 online relayout 6-14 ordered allocation in CLI 5-21 ordered allocation in VEA 5-21 recovering 8-21 recovering in VEA 8-21 reducing the size 6-4 removing 4-43 removing a log in CLI 5-12 removing a mirror 5-6 removing a mirror in CLI 5-7 removing a mirror in VEA 5-7 removing in CLI 4-43 removing in VEA 4-43 resizing 6-4 resizing in VEA 6-8 resizing methods 6-6 resizing with vxassist 6-10 resizing with vxresize 6-9 specifying ordered allocation 5-20 specifying storage attributes in CLI 5-16 specifying storage attributes in VEA 5-16 starting manually 3-39
Striped Mirrored 4-40 viewing layered in CLI 4-42 viewing layered in VEA 4-42 volume attributes 4-11 Volume Group Descriptor Area 1-5 Volume Group Reserved Area 1-5 volume layout 1-19 changing in CLI 6-21 changing in VEA 6-19 changing online 6-14 concatenated 1-19 displaying 4-23 displaying in CLI 4-29 layered 1-20 mirrored 1-20 RAID-5 1-20 selecting 4-4 striped 1-20 Volume Layout window 4-27 Volume Manager control 1-13 Volume Manager disk 1-15 naming 1-16 Volume Manager Support Operations 2-29, 2-38 volume name default 1-17 Volume Properties window 4-28 volume read policy 5-13 changing 5-13 changing in CLI 5-14 changing in VEA 5-14 volume recovery 8-16 volume replication licensing 2-16 Volume Replicator 2-15 volume set 5-56 volume size 4-12 volume table of contents 1-4 volume tasks managing in VEA 6-24 Volume to Disk Mapping window 2-31, 4-26 Volume View window 2-31, 4-25 volumes allocating storage for 5-15 starting after disk group renaming 3-44
vrtsadm 2-41, 2-44 vrtsadm group 2-44 VRTSalloc 2-12 VRTSap 2-13 VRTScpi 2-12 VRTSddlpr 2-12 VRTSfppm 2-13 VRTSfsdoc 2-13 VRTSfspro 2-12 VRTSlic 2-9, 2-10 VRTSmuob 2-12 VRTSob 2-12 VRTSobadmin 2-12 VRTSobgui 2-12 VRTSobgui.msi 2-40 VRTSperl 2-12 VRTSqio 2-15 VRTSqlog 2-15 VRTSspc 2-16 VRTSspcq 2-16 VRTStep 2-13 VRTSvlic 2-9, 2-10, 2-12 VRTSvmdoc 2-12 VRTSvmman 2-12 VRTSvmpro 2-12 VRTSvxfs 2-13 VRTSvxvm 2-12, 2-14 VTOC 1-4, 7-12 VVR 2-14 vxassist 4-16, 6-6, 6-10 vxassist addlog 5-12 vxassist convert 6-21, 6-23 vxassist growby 6-10 vxassist growto 6-10 vxassist make 4-16, 4-41, 5-16 vxassist maxgrow 4-22 vxassist maxsize 4-22 vxassist mirror 5-5, 7-16 vxassist relayout 6-21, 6-22 vxassist remove log 5-12 vxassist remove mirror 5-7
vxassist remove volume 4-43 vxassist shrinkby 6-10 vxassist shrinkto 6-10 vxbootsetup 7-16 vxconfigbackup 8-23 vxconfigbackupd 8-23 vxconfigd 3-6, 3-7, 7-24, 7-32 vxconfigrestore 8-23 vxdctl defaultdg 3-10 vxdctl enable 3-12, 3-15, 8-17 vxdctl stop 7-24 vxdg adddisk 3-18, 8-18 vxdg bootdg 3-9 vxdg defaultdg 3-9 vxdg deport 3-35, 3-42 vxdg destroy 3-45 vxdg import 3-39, 3-42 vxdg init 3-17, 3-27 vxdg list 3-49 vxdg rmdisk 3-31 vxdg upgrade 3-49 vxdisk list 3-17, 3-23, 3-24, 8-17 vxdisk resize 6-13 vxdiskadm 2-29, 2-38, 3-7, 3-12 creating a disk group 3-17 creating an alternate boot disk 7-15 deporting a disk group 3-35 encapsulating the boot disk 7-11 evacuating a disk 3-29 importing a disk group 3-39 managing spare disks 8-14 moving a disk group 3-42 removing a disk 3-30 replacing a failed disk 8-18 starting 2-38 unrelocating a disk 8-20 vxdisksetup 3-27 vxdiskunsetup 3-31 vxedit 5-8 vxedit rename 3-32 vxedit rm 4-43, 5-8 vxedit set nohotuse 8-15 vxedit set reserve 8-15
vxedit set spare 8-15 vxevac 3-29 VxFS 5-28, 7-8 allocation 5-35, 5-36 allocation units 5-40 and fragmentation 5-42 and logging 5-55 command locations 5-28 command syntax 5-29 comparison to UFS 5-35 defragmenting 5-46 file change log 5-57 file system layout 5-30 file system switchout mechanisms 5-29 file system type 5-34 fragmentation reports 5-44 fragmentation types 5-42 identifying free space 5-34 intent log 5-49 intent log resizing 5-52 log I/O size 5-56 logging options 5-53 maintaining consistency 5-50 maximum sizes 5-32 multivolume support 5-56 optional features 2-15 resizing 6-4, 6-11 resizing in VEA 6-8 software packages 2-13 space requirements 2-13 structural components 5-40 structural files 5-40 unmounting 5-33 upgrading 7-33 upgrading the layout 5-38 using by default 5-29 VxFS versions 2-4 vxinstall 2-19, 2-28 vxiod 7-24 vxlicinst 2-9 vxlicrep 2-9 vxmake 4-35 vxmirror 7-16 vxnotify 7-32 vxplex 5-8, 7-20 vxplex dis 5-8 vxprint 3-49, 4-29, 4-42, 7-20, 7-32, 8-20
options 4-30 vxreattach 8-21 vxrecover 8-22 vxregctl 2-46 vxrelayout 6-25, 6-30 vxrelayout reverse 6-30 vxrelayout start 6-30 vxrelayout status 6-30 vxrelocd 7-32, 8-12 vxresize 6-6, 6-9 vxrootmir 7-16 VxSE 6-33 vxse_dc_failures 6-41 vxse_dg1 6-41 vxse_dg2 6-41 vxse_dg3 6-41 vxse_dg4 6-41 vxse_dg5 6-41 vxse_dg6 6-41 vxse_disk 6-41 vxse_disklog 6-41 vxse_drl1 6-42 vxse_drl2 6-42 vxse_host 6-42 vxse_mirstripe 6-42 vxse_raid5 6-42 vxse_raid5log1 6-42 vxse_raid5log2 6-42 vxse_raid5log3 6-42 vxse_redundancy 6-43 vxse_rootmir 6-43 vxse_spares 6-43 vxse_stripes1 6-43 vxse_stripes2 6-43 vxse_volplex 6-43 vxsvc 2-43 vxtask 6-25, 6-29 vxtask abort 6-29 vxtask list 6-26, 6-27 vxtask monitor 6-28 vxtask pause 6-29
vxtask resume 6-29 vxunreloc 8-19, 8-20 vxunroot 7-19, 7-20, 7-32 vxupgrade 5-39 VxVM configuration backup 8-23 installation methods 2-19 upgrading 7-21 upgrading manually 7-24 upgrading with installvm 7-23 upgrading with scripts 7-25 user interfaces 2-29 versions 2-4 VxVM configuration daemon 3-12 VxVM software packages 2-12 vxvol 3-44 vxvol rdpol prefer 5-14 vxvol rdpol round 5-14 vxvol rdpol select 5-14 vxvol startall 3-39 vxvol stopall 3-35
X XOR 1-20, 4-7