SuSE Linux Path Failover

HDLM

Pros
• HDS Support • Well tested • Stable • Standard interface • Many algorithms • Path Health Checking • Auto Failback • Recognises 9500V Default-Ctlr • Autoscan • Supports Emulex and Qlogic

Cons
• No SAN Boot • Costs money • Very kernel-dependent (dot-level) • Distribution-dependent • Processor-dependent • No source code • Not up to date

MDADM / RAIDTOOLS

Pros
• Free of charge • Updates from the Internet • SAN Boot • Kernel-independent • Distribution-independent • Processor-independent • Supports Emulex and Qlogic

Cons
• Only Active/Passive • Manual configuration • No Auto Failback • No Path Health Checking • LVM filter needed as of Kernel 2.6 • Loss of Linux support with manual driver updates

Notes
• Currently favored by SuSE • Supported in SLES 9 SP1 (kernel 2.6.5-7.139), GA end of July • Later no kernel dot-level dependence
• mdadm and raidtools both use the MD device driver • mdadm offers the better interface • Auto Failback is scriptable (see the sketch below)
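"Scriptable" here means a small watcher loop is enough. A minimal sketch, assuming a multipath MD set /dev/md0 and that a failed path leg shows up with an "(F)" flag in /proc/mdstat (device name and polling interval are examples):

#!/bin/sh
# Auto-failback watcher sketch: re-add MD multipath legs that were marked faulty
MD=/dev/md0
while true
do
    # Collect member names like "sdc1" from /proc/mdstat entries flagged "(F)"
    for DEV in $(awk '/\(F\)/ { for (i = 1; i <= NF; i++)
            if ($i ~ /\(F\)/) { split($i, a, "["); print a[1] } }' /proc/mdstat)
    do
        mdadm $MD --remove /dev/$DEV   # drop the faulty leg
        mdadm $MD --add /dev/$DEV      # re-add it; succeeds once the path is healthy
    done
    sleep 60
done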

MULTIPATH TOOLS

Pros
• SuSE/NOVELL support as of SLES 9 SP2 for HDS storage • Free of charge • Updates from the Internet • SAN Boot • Several algorithms • Kernel-independent • Distribution-independent • Processor-independent • Autoscan • Supports Emulex and Qlogic • Stable

Cons
• SuSE support only as of SLES 9 SP2 • Badly documented • Errors in documentation • Reboot needed after partitioning • Problems with partitioned devices • No field experience yet

Qlogic Driver Failover

Pros
• SAN Boot • Several algorithms • Kernel-independent • Distribution-independent • Processor-independent • Autoscan • Auto Failback • LVM transparency • OS-transparent • Very stable

Cons
• Only Active/Passive • Path priority only with SANSurfer • Only Qlogic HBAs • Static load-balancing parameters do not work • Stable path recognition only possible after a reboot

Basic Qlogic Setup SLES 9

Module parameters

• The Qlogic driver is installed and started automatically
• Module parameters:
  # vi /etc/modprobe.conf.local
    add: options qla2xxx qlport_down_retry=1 ql2xfailover=0 ql2xretrycount=5 ql2xplogiabsentdevice=1
• In YAST / Hardware / Harddisk-Controller / Qlogic, deactivate "Load module in initrd"
• Start the module automatically from the RAMdisk at boot:
  # vi /etc/sysconfig/kernel
    INITRD_MODULES="mptscsih reiserfs qla2xxx qla2300"
  # mkinitrd

# modinfo -p qla2300        -> show possible parameters
# cat /proc/scsi/qla2300/1  -> show active parameters

SLES 8

• The Qlogic module is installed, but not started automatically
• Module parameters:
  # vi /etc/modules.conf
    add: options qla2300 qlport_down_retry=1
• Manual start (check modules.conf):
  # depmod -a
  # modprobe qla2300
• Automatic start (check modules.conf):
  # vi /etc/init.d/boot.local
    add: modprobe qla2300 qlport_down_retry=1
• Automatic start from the RAMDisk (check modules.conf):
  # vi /etc/sysconfig/kernel
    INITRD_MODULES="mptscsih reiserfs qla2300"
  # mk_initrd
  # reboot
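To confirm that the module actually loaded, checks like these can be used (adapter numbering varies per host):

# lsmod | grep qla2300       -> the module is listed when loaded
# ls /proc/scsi/qla2300      -> one entry per adapter instance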

Scanning

# echo "scsi-qlascan" > /proc/scsi/driver-name/adapter-id
  Check "/var/log/messages" for "RESCAN"
# cat /proc/scsi/scsi
# echo "scsi add-single-device 0 1 2 3" > /proc/scsi/scsi
  -> scsi mid layer re-scans; "0 1 2 3" = "HOST CHANNEL ID LUN"
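The same /proc/scsi/scsi interface can also drop a stale device before re-scanning (the address quadruple is an example):

# echo "scsi remove-single-device 0 1 2 3" > /proc/scsi/scsi
  -> removes the device at HOST CHANNEL ID LUN "0 1 2 3"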

LVM

• MDs and LVM VGs are started automatically at reboot on SLES 8 and 9 only if the Qlogic driver is loaded from the RAMDisk. Otherwise the MDs and VGs have to be started manually (see the reboot commands under "LVM Setup").

HDLM

Installation/Configuration HDLM for Kernel 2.4

Check the HDLM Release Notes for supported kernel versions:
# uname -a
# rpm -q k_deflt   or   # rpm -q k_smp
# mkdir /etc/opt/DynamicLinkManager
# mount /media/cdrom            (License CD)
# cp /media/cdrom/*.plk /var/tmp/hdlm_license
  OR
# echo "A8GPQRS3CDEIJK012C73" > /etc/opt/DynamicLinkManager/dlm.lic_key
# umount /media/cdrom
# mount /media/cdrom            (Program CD)
# cd /media/cdrom
# ./installhdlm
a) # insmod sddlmadrv
b) # insmod sddlmfdrv
c) # /etc/init.d/DLMManager start
d) # /opt/DynamicLinkManager/bin/dlmcfgmgr -r

Instead of a) to d), a reboot is better.
# /opt/DynamicLinkManager/bin/dlnkmgr set -afb on -intvl 5
  -> set Auto Failback to 5 min
# /opt/DynamicLinkManager/bin/dlnkmgr set -pchk on -intvl 5
  -> set Path Health Checking to 5 min
# /opt/DynamicLinkManager/bin/dlnkmgr set -ellv 2
  -> set the log level to 2, otherwise too many entries
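The settings can be verified with the view operation (exact output differs between HDLM versions):

# /opt/DynamicLinkManager/bin/dlnkmgr view -sys
  -> shows the Auto Failback, Path Health Checking and log level settings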

LVM Setup

HDLM

# fdisk /dev/sddlmad   -> set the Linux partition ID to 0x83
# vi /etc/raidtab
raiddev /dev/md0
        raid-level              linear
        chunk-size              32
        nr-raid-disks           1
        persistent-superblock   1
        device                  /dev/sddlmad1
        raid-disk               0
# mkraid -R /dev/md0
# vgscan
# pvcreate /dev/md0
# vgcreate vg01 /dev/md0
# vgchange -an vg01
# raidstop /dev/md0
# raidstart /dev/md0
# vgchange -ay vg01
# lvcreate -L 1G -n lvol1 vg01
# mkfs -t ext3 /dev/vg01/lvol1
# mount /dev/vg01/lvol1 /mnt/fs1

Do this after a SLES 8 reboot if the Qlogic module is not in the RAMDisk:
# raidstart /dev/md0
# vgscan
# vgchange -ay vg01
# mount /dev/vg01/lvol1 /mnt/fs1

Administration

HDLM

# dlmcfgmgr -v
HDevName        Management    Device      Host  Channel  Target  Lun
/dev/sddlmaa    configured    /dev/sdc    0     0        0       2
                              /dev/sdl    1     0        1       2
/dev/sddlmac    configured    /dev/sdk    1     0        1       1
                              /dev/sdb    0     0        0       1

# dlmcfgmgr -r                   -> reconfigure after a LUN add
# dlmcfgmgr -u all               -> check after a LUN delete
# dlmcfgmgr -o <device> | all    -> exclude
# dlmcfgmgr -i <device> | all    -> include

# cd /opt/DynamicLinkManager/bin
# ./dlnkmgr view -drv
PathID  HDevName  Device    LDEV
000000  sddlmaa   /dev/sdc  9970/9980.50118.0D0F
000001  sddlmab   /dev/sdd  9970/9980.50118.0D10
000002  sddlmaa   /dev/sde  9970/9980.50118.0D0F
000003  sddlmab   /dev/sdf  9970/9980.50118.0D10

# cat /proc/mdstat               -> check that the MD devices for the LVM are active
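The per-path status can be listed with the view operation as well (columns differ slightly between HDLM versions):

# ./dlnkmgr view -path
  -> shows PathID, path status (Online/Offline) and the owning controller for every path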

HDLM

Deinstallation

# umount X
# vgchange -an vgX
# raidstop /dev/mdX
# rpm -e HDLM
# rpm -e HDLMhelp-en
# reboot

MDADM

Installation

Contained in the RedHat and SuSE distributions. Updates on the Internet:
- http://www.cse.unsw.edu.au/~neilb/source/mdadm/
- download mdadm-1.9.0.tgz

# gzip -d mdadm-1.9.0.tgz
# tar xvf mdadm-1.9.0.tar
# cd mdadm-1.9.0
# make
# make install
# mdadm -V
mdadm - v1.9.0 - 04 February 2005

LVM2 (Kernel 2.6) needs the following filter settings in "/etc/lvm/lvm.conf":
filter = [ "a|/dev/md.*|", "r/.*/" ]

MDADM

Configuration

Display the paths by using the HORCM inqraid command:
# ls /dev/sd* | inqraid -CLI | grep CL
sdc  CL1-A    266  124  -  s/s/ss  0000  5:00-00  DF600F
sdd  CL1-A    266  125  -  s/s/ss  0000  5:01-00  DF600F
sde  CL1-A    266  126  -  s/s/ss  0000  5:01-00  DF600F
sdf  CL1-A    266  127  -  s/s/ss  0000  5:01-00  DF600F
sdh  CL1-A  50118  769  -  s/s/ss  9973  5:01-03  OPEN-9
sdi  CL1-A  50118  770  -  s/s/ss  9973  5:01-03  OPEN-9
sdj  CL1-A  50118  771  -  s/s/ss  9973  5:01-03  OPEN-9
sdl  CL2-A  50118  769  -  s/s/ss  9973  5:01-03  OPEN-9
sdm  CL2-A  50118  770  -  s/s/ss  9973  5:01-03  OPEN-9
sdn  CL2-A  50118  771  -  s/s/ss  9973  5:01-03  OPEN-9
sdo  CL2-A    266  124  -  s/s/ss  0000  5:00-00  DF600F
sdp  CL2-A    266  125  -  s/s/ss  0000  5:01-00  DF600F
sdq  CL2-A    266  126  -  s/s/ss  0000  5:01-00  DF600F
sdr  CL2-A    266  127  -  s/s/ss  0000  5:01-00  DF600F

# fdisk /dev/sdc   -> set up partition 1 with ID 0xfd
# fdisk /dev/sdo   -> read and save is enough to set up the alternate path of /dev/sdc
# mdadm --create --verbose /dev/md0 --level=multipath --raid-devices=2 /dev/sdc1 /dev/sdo1
  -> create the RAID
# echo 'DEVICE /dev/sd*1' > /etc/mdadm.conf
# mdadm --detail --scan | grep UUID >> /etc/mdadm.conf
  -> create /etc/mdadm.conf; scan filter on partition 1
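After the create, /proc/mdstat should show the set with the multipath personality (illustrative output; the block count and path order will differ):

# cat /proc/mdstat
Personalities : [multipath]
md0 : active multipath sdc1[0] sdo1[1]
      <blocks> blocks [2/2] [UU]        ([UU] = both paths up)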

MDADM

Administration

# mdadm --stop --scan                              -> stop all software RAIDs
# mdadm --assemble /dev/md0                        -> start /dev/md0; the /etc/mdadm.conf entry has to exist
# mdadm --assemble /dev/md0 /dev/sdg1 /dev/sdk1    -> start without an /etc/mdadm.conf entry
# mdadm --zero-superblock /dev/sdg1                -> erase the superblock = erase the RAID
# mdadm /dev/md0 --remove /dev/sdc1                -> delete a faulty path
# mdadm /dev/md0 --add /dev/sdc1                   -> reactivate a deleted path
# cat /proc/mdstat                                 -> status
# mdadm --detail /dev/md0                          -> show status
# detect_multipath                                 -> tool to discover the paths; only works for Lightning/USP
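A path failure can also be simulated in software before pulling any cables (mdadm's --fail operation marks a leg faulty; the device names are examples):

# mdadm /dev/md0 --fail /dev/sdc1      -> mark the path faulty; I/O moves to the other leg
# mdadm /dev/md0 --remove /dev/sdc1    -> remove the faulty leg
# mdadm /dev/md0 --add /dev/sdc1       -> re-add it once the path is healthy again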

MULTIPATH TOOLS

General

• Supplied with the RedHat and SuSE distributions
• Tested with SuSE Enterprise Server 9 SP2
• Updates and information on the Internet: http://christophe.varoqui.free.fr
• SuSE support for HDS DF400, DF500 and DF600 as of SLES 9 SP2
• Set up /etc/multipath.conf to control Lightning and USP Active/Active
• Partitions are NOT supported; they are only recognised after a reboot

MULTIPATH TOOLS

Activation 1

• For Qlogic 2xxx adapters, set the following parameters in "/etc/modprobe.conf.local":
  # vi /etc/modprobe.conf.local
    add: options qla2xxx qlport_down_retry=1 ql2xfailover=0 ql2xretrycount=5 ql2xplogiabsentdevice=1
• The Qlogic driver must be loaded from the RAMDisk:
  # vi /etc/sysconfig/kernel
    add: INITRD_MODULES="mptscsih reiserfs qla2xxx qla2300"
  # mk_initrd
• Lilo needs to be recreated if you use it:
  # lilo
• For LVM2 the following filter settings are necessary:
  # vi /etc/lvm/lvm.conf
    change to: filter = [ "a|/dev/disk/by-name/.*|", "r|.*|" ]
    change to: types  = [ "device-mapper", 253 ]
• In HotPlug this change needs to be made:
  # vi /etc/sysconfig/hotplug
    change to: HOTPLUG_USE_SUBFS=no

MULTIPATH TOOLS

Activation 2

• You need to update /etc/multipath.conf if you need support for new LUNs (e.g. for USP OPEN-V or Command Devices):

devnode_blacklist {
        devnode cciss
        devnode fd
        devnode hd
        devnode md
        devnode sr
        devnode scd
        devnode st
        devnode ram
        devnode raw
        devnode loop
        devnode sda        # internal boot disk
}
devices {
        device {
                vendor                  "HITACHI "
                product                 "DF600F          "
                path_grouping_policy    failover
                path_checker            tur
        }
        device {
                vendor                  "HITACHI "
                product                 "OPEN-9          "
                path_grouping_policy    multibus
                path_checker            tur
        }
}

MULTIPATH TOOLS

Activation 3

• Start Multipath:
  # /etc/init.d/boot.multipath start
  # /etc/init.d/multipathd start
• Activate automatically during boot:
  # insserv boot.multipath multipathd
  Note: you may have to activate other things with the Runlevel Editor
  (boot.scsidev, boot.udev, boot.device-mapper, boot.lvm, ...)
• Create the virtual devices:
  # multipath -v2 -d    -> shows all paths, nothing is activated
  # multipath           -> creates the virtual devices in /dev/disk/by-name
• There is a bug (bugzilla.novell.com #102937): you have no access to partitioned devices after a reboot. The reason is that boot.multipath is started earlier than the hotplug manager. As a workaround, move the hotplug manager to runlevel B: YAST / System / Runlevel Editor / Expert Mode / set Hotplug to runlevel "B" only

MULTIPATH TOOLS

Administration

• Show the path status:
  # multipath -l
• Delete all paths and virtual devices (do not do this online!):
  # multipath -F
• Check that the multipath daemon is still running:
  # /etc/init.d/multipathd status
• Switch the daemon on and off:
  # chkconfig multipathd on/off
• Show Device Mapper devices:
  # dmsetup ls
• Show UDEV information:
  # udevinfo -d
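For orientation, the path status output looks roughly like this for a failover group (illustrative; the exact layout varies with the multipath-tools version):

# multipath -l
<alias> (<wwid>)
[size=10 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:0:2 sdc 8:32  [active][ready]
\_ round-robin 0 [enabled]
 \_ 1:0:1:2 sdl 8:176 [active][ready]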

MULTIPATH TOOLS

Prioritizer for Thunder

With an Active/Passive system like the Thunder you usually use only the first path; the second HBA is standby. If you want to do static load balancing you can use Matthias' prioritizer. Copy the Linux HORCM command "inqraid" and the shell script "pp_HDS_ODD_EVEN.sh" to "/sbin/" and set the file rights. This prioritizer must be added to "/etc/multipath.conf" under "prio_callout". After this change, and after deleting and recreating the paths with "multipath -F" and "multipath", you can use "multipath -l" to see which paths will be used (indicated by "best").

MULTIPATH TOOLS

Prioritizer Shell Script for Thunder

#! /bin/sh
PATH=/bin:/usr/bin:/sbin:/usr/sbin
# multipathd passes the path device's "major:minor" as the first argument
MINOR_MAJOR=$1
MAJOR=$(echo $MINOR_MAJOR | awk -F : '{print $1}')
MINOR=$(echo $MINOR_MAJOR | awk -F : '{print $2}')
# Find the /dev/sd* node with this major/minor number
ls -l /dev/sd* | grep $MAJOR | grep $MINOR | {
while read LINE
do
    MIN=$(echo $LINE | awk '{print $6}')
    MAJ=$(echo $LINE | awk '{print $5}' | awk -F , '{print $1}')
    if [ "$MINOR" = "$MIN" ] && [ "$MAJOR" = "$MAJ" ]
    then
        DEVICE=$(echo $LINE | awk '{print $10}')
        # Some ls output formats put the device name in field 9 instead of 10
        BOOTSHIFT=$(echo $LINE | awk '{print $9}' | awk -F / '{print $2}')
        if [ "$BOOTSHIFT" = "dev" ]
        then
            DEVICE=$(echo $LINE | awk '{print $9}')
        fi
        break
    fi
done
# Ask the array via inqraid which controller (CL1/CL2) and LDEV this path uses
CTRL=$(inqraid -CLI $DEVICE | sed -n '2,$p' | awk '{print $2}' | awk -F - '{print $1}')
LDEV=$(inqraid -CLI $DEVICE | sed -n '2,$p' | awk '{print $4}')
# Prefer even LDEVs on CL1 and odd LDEVs on CL2 (priority 1 = preferred path)
if [ "$CTRL" = "CL1" ] && [ "$(($LDEV%2))" = "1" ]
then
    echo 0
    exit 0
fi
if [ "$CTRL" = "CL1" ] && [ "$(($LDEV%2))" = "0" ]
then
    echo 1
    exit 0
fi
if [ "$CTRL" = "CL2" ] && [ "$(($LDEV%2))" = "1" ]
then
    echo 1
    exit 0
fi
if [ "$CTRL" = "CL2" ] && [ "$(($LDEV%2))" = "0" ]
then
    echo 0
    exit 0
fi
exit 1
}
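The callout can also be run by hand to see which priority a path would get (the major:minor pair below is an example; take real values from "multipath -l"):

# /sbin/pp_HDS_ODD_EVEN.sh 8:32
  -> prints the priority (0 or 1) for the path device with major:minor 8:32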

MULTIPATH TOOLS

Prioritizer Shell Script for Thunder: /etc/multipath.conf changes

# cat /etc/multipath.conf
devnode_blacklist {
        devnode cciss
        devnode fd
        devnode hd
        devnode md
        devnode sr
        devnode scd
        devnode st
        devnode ram
        devnode raw
        devnode loop
        devnode sda        # internal boot disk
}
devices {
        device {
                vendor                  "HITACHI "
                product                 "DF600F          "
                path_grouping_policy    failover
                prio_callout            "/sbin/pp_HDS_ODD_EVEN.sh %d"
                path_checker            tur
        }
        device {
                vendor                  "HITACHI "
                product                 "OPEN-9          "
                path_grouping_policy    multibus
                path_checker            tur
        }
}

MULTIPATH TOOLS

Deactivate

• Delete all paths and virtual devices (do not do this online!):
  # multipath -F
• Remove from the boot sequence:
  # insserv -r boot.multipath multipathd
• Stop the daemon:
  # chkconfig multipathd off

Qlogic Driver Failover

Storage Configuration for SLES 9 SP 2

Lightning / USP:

- Host Mode "Standard"
- "dmesg" shows this as an "XP device" (HP sponsored?)
- The driver recognises multipathing even with different WWNNs

Thunder 9500V:

- Host Connection Mode 1 "Standard"
- Host Connection Mode 2 "Same Node Name Mode"
- The driver recognises multipathing only if each storage port has the same WWNN

Qlogic Driver Failover

Watch it!

• Multipath Tools must be deactivated.
• In YAST / Hardware / Hard-Disk-Controller / Qlogic, deactivate "Load module to initrd" and delete the module parameters.
• Remove the Qlogic driver parameters from "/etc/modprobe.conf".
• The parameter "ql2xlbType=1" should activate static load balancing for all LUNs. Unfortunately it only works from the SANSurfer GUI.
• Only loading the modules (qla2xxx, qla2300) from the RAMDisk runs fast enough during boot for an automatic volume group activation and file check to take place.
• SANSurfer GUI configurations are not passed on to the RAMDisk.
• The SANSurfer CLI cannot change multipath settings.

Qlogic Driver Failover

Server Configuration for SLES 9 SP2 without SANSurfer GUI/CLI

Set the module parameters:
# vi /etc/modprobe.conf.local
  -> add: options qla2xxx qlport_down_retry=1 ql2xretrycount=5 ql2xfailover=1 ql2xlbType=1 ql2xplogiabsentdevice=1
Stop the modules:
# modprobe -r qla2300
# modprobe -r qla2xxx
Create the module dependencies:
# depmod -a
Start the module after boot:
# modprobe qla2300
Start the modules automatically after boot:
# vi /etc/sysconfig/kernel
  -> edit: MODULES_LOADED_ON_BOOT="qla2xxx qla2300"
Start the modules automatically when booting from the RAMdisk:
# vi /etc/sysconfig/kernel
  -> edit: INITRD_MODULES="mptscsih reiserfs qla2xxx qla2300"
# mkinitrd
Check and mount the LVM filesystem at boot (RAMDisk only):
# vi /etc/fstab
  -> add: /dev/vg01/lv01  /mnt/fs01  ext3  defaults  0 2

Qlogic Driver Failover

Driver Check

# ls /proc/scsi/qla2xxx
# cat /proc/scsi/qla2xxx/adapter-id
  -> now also shows "Driver version 8.00.02-fo"
# tail -f /var/log/messages
  -> shows activation and deactivation of paths
# modinfo qla2300
# modinfo qla2xxx
# modinfo -p qla2xxx
  -> shows all possible settings with explanations

Qlogic Driver Failover

SANSurfer GUI Setup

• Download "sansurfer2.0.30b17_linux_install.bin" from www.qlogic.com
  # chmod 777 sansurfer2.0.30b17_linux_install.bin   -> turn on execute rights
• In X, click on the binary to start the installation
• Choose "ALL GUIs and ALL Agents"
• Choose "Enable QLogic Failover Configuration"
• Start the SANSurfer client with a click on "/opt/Qlogic_Corporation/SANsurfer/SANsurfer"
• Connect to "localhost"
• The password is "config"
• Configure both HBAs (Point-to-Point, 2 Gbit/sec, Failover, ...)
• The menu "Configure - LUNs - Load Balance - All LUNs" activates static load balancing
• Reboot
• SANSurfer adds the line "ConfigRequired=1" to "/etc/modprobe.conf.local"
• All settings are then found in the file "/etc/qla2xxx.conf"

NOTE:

The settings do not go to the RAMdisk!

TIP:

You can administer Linux from a Windows SANSurfer Client via LAN.
