
Restore LVM from multipath LUNs after full OS restore

Discovery

Check the Fibre Channel status

In the example below, host0 is showing problems and we cannot detect LUNs on that path. The symptom is visible in the port type and speed, which both report "Unknown". In this case the Fibre Channel cable was defective and swapping it fixed the issue, but it could have been any hardware or driver problem.

root@lxmq1063:~# systool -c fc_host -v


Class = "fc_host"
Class Device = "host0"
Class Device path = "/sys/class/fc_host/host0"
fabric_name = "0x2000001b3216e0e8"
issue_lip = <store method only>
node_name = "0x2000001b3216e0e8"
port_id = "0x000000"
port_name = "0x2100001b3216e0e8"
port_state = "Online"
port_type = "Unknown"
speed = "Unknown"
supported_classes = "Class 3"
supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
symbolic_name = "QLE2560 FW:v5.03.02 DVR:v8.03.01.04.05.05-k"
system_hostname = ""
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent = <store method only>
Device = "host0"
Device path = "/sys/devices/pci0000:00/0000:00:04.0/0000:13:00.0/host0"
ct =
edc = <store method only>
els =
fw_dump =
nvram = "ISP "
optrom_ctl = <store method only>
optrom =
reset = <store method only>
sfp = ""
uevent = <store method only>
vpd = "¦0"


Class Device = "host1"
Class Device path = "/sys/class/fc_host/host1"
fabric_name = "0x2000001b3281d19b"
issue_lip = <store method only>
node_name = "0x2000001b3281d19b"
port_id = "0x0000e8"
port_name = "0x2100001b3281d19b"
port_state = "Online"
port_type = "LPort (private loop)"
speed = "4 Gbit"
supported_classes = "Class 3"
supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
symbolic_name = "QLE2560 FW:v5.03.02 DVR:v8.03.01.04.05.05-k"
system_hostname = ""
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent = <store method only>
Device = "host1"
Device path = "/sys/devices/pci0000:00/0000:00:06.0/0000:17:00.0/host1"
ct =
edc = <store method only>
els =
fw_dump =
nvram = "ISP "
optrom_ctl = <store method only>
optrom =
sfp = ""
uevent = <store method only>
vpd = "¦0"
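
After fixing the faulty hardware, you can ask the kernel to rediscover the path without rebooting. A minimal sketch using the issue_lip attribute shown in the output above, plus a SCSI bus rescan (adjust host0 to the affected HBA):

root@lxmq1063:~# echo "1" > /sys/class/fc_host/host0/issue_lip
root@lxmq1063:~# echo "- - -" > /sys/class/scsi_host/host0/scan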

 

Detect if LUNs are visible on the server:

If at least one Fibre Channel or iSCSI path is working, you should see your LUNs with fdisk -l or lsscsi:

fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
31

fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-'
Disk /dev/cciss/c0d0: 513.6 GB, 513618945024 bytes *** local RAID 5 volume ***
Disk /dev/sda: 268.4 GB, 268435456000 bytes
Disk /dev/sdb: 268.4 GB, 268435456000 bytes
Disk /dev/sdc: 268.4 GB, 268435456000 bytes
Disk /dev/sdd: 268.4 GB, 268435456000 bytes
Disk /dev/sde: 214.7 GB, 214748364800 bytes
Disk /dev/sdf: 214.7 GB, 214748364800 bytes
Disk /dev/sdg: 214.7 GB, 214748364800 bytes
Disk /dev/sdh: 214.7 GB, 214748364800 bytes
Disk /dev/sdi: 214.7 GB, 214748364800 bytes
Disk /dev/sdj: 214.7 GB, 214748364800 bytes
Disk /dev/sdk: 214.7 GB, 214748364800 bytes
Disk /dev/sdl: 214.7 GB, 214748364800 bytes
Disk /dev/sdm: 107.3 GB, 107374182400 bytes
Disk /dev/sdn: 107.3 GB, 107374182400 bytes
Disk /dev/sdo: 107.3 GB, 107374182400 bytes
Disk /dev/sdp: 107.3 GB, 107374182400 bytes
Disk /dev/sdq: 268.4 GB, 268435456000 bytes
Disk /dev/sdr: 268.4 GB, 268435456000 bytes
Disk /dev/sds: 107.3 GB, 107374182400 bytes
Disk /dev/sdt: 107.3 GB, 107374182400 bytes
Disk /dev/sdu: 53.6 GB, 53687091200 bytes
Disk /dev/sdv: 53.6 GB, 53687091200 bytes
Disk /dev/sdw: 53.6 GB, 53687091200 bytes
Disk /dev/sdx: 53.6 GB, 53687091200 bytes
Disk /dev/sdy: 53.6 GB, 53687091200 bytes
Disk /dev/sdz: 53.6 GB, 53687091200 bytes
Disk /dev/sdaa: 268.4 GB, 268435456000 bytes
Disk /dev/sdab: 268.4 GB, 268435456000 bytes
Disk /dev/sdac: 858.9 GB, 858993459200 bytes
Disk /dev/sdad: 214.7 GB, 214748364800 bytes

root@lxmq1063:~# lsscsi
[1:0:0:8] disk HITACHI DF600F 0000 /dev/sda
[1:0:0:9] disk HITACHI DF600F 0000 /dev/sdb
[1:0:0:10] disk HITACHI DF600F 0000 /dev/sdc
[1:0:0:11] disk HITACHI DF600F 0000 /dev/sdd
[1:0:0:12] disk HITACHI DF600F 0000 /dev/sde
[1:0:0:13] disk HITACHI DF600F 0000 /dev/sdf
[1:0:0:14] disk HITACHI DF600F 0000 /dev/sdg
[1:0:0:15] disk HITACHI DF600F 0000 /dev/sdh
[1:0:0:16] disk HITACHI DF600F 0000 /dev/sdi
[1:0:0:17] disk HITACHI DF600F 0000 /dev/sdj
[1:0:0:18] disk HITACHI DF600F 0000 /dev/sdk
[1:0:0:19] disk HITACHI DF600F 0000 /dev/sdl
[1:0:0:20] disk HITACHI DF600F 0000 /dev/sdm
[1:0:0:21] disk HITACHI DF600F 0000 /dev/sdn
[1:0:0:22] disk HITACHI DF600F 0000 /dev/sdo
[1:0:0:23] disk HITACHI DF600F 0000 /dev/sdp
[1:0:0:24] disk HITACHI DF600F 0000 /dev/sdq
[1:0:0:25] disk HITACHI DF600F 0000 /dev/sdr
[1:0:0:26] disk HITACHI DF600F 0000 /dev/sds
[1:0:0:27] disk HITACHI DF600F 0000 /dev/sdt
[1:0:0:28] disk HITACHI DF600F 0000 /dev/sdu
[1:0:0:29] disk HITACHI DF600F 0000 /dev/sdv
[1:0:0:30] disk HITACHI DF600F 0000 /dev/sdw
[1:0:0:31] disk HITACHI DF600F 0000 /dev/sdx
[1:0:0:32] disk HITACHI DF600F 0000 /dev/sdy
[1:0:0:33] disk HITACHI DF600F 0000 /dev/sdz
[1:0:0:34] disk HITACHI DF600F 0000 /dev/sdaa
[1:0:0:35] disk HITACHI DF600F 0000 /dev/sdab
[1:0:0:36] disk HITACHI DF600F 0000 /dev/sdac
[1:0:0:37] disk HITACHI DF600F 0000 /dev/sdad

 

Check the status of the multipathd daemon

First check that the daemon is running, then issue multipath -ll to list the LUNs that multipath sees. If it prints nothing, multipath failed to recognize any device to handle.

root@lxmq1063:~# service multipathd status
multipathd (pid 4349) is running...

root@lxmq1063:~# multipath -ll

This is usually because /etc/multipath.conf contains a blacklist entry that filters out the devices we are trying to handle.

root@lxmq1060:~# cat /etc/multipath.conf | egrep -v "^\s*$|^;|^\s*#"
blacklist {
devnode "*"
}
defaults {
user_friendly_names yes
}
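
To confirm that the blacklist is what is hiding the devices, you can run multipath in verbose mode; paths rejected by the blacklist are reported explicitly (a quick check, not required for the fix):

root@lxmq1063:~# multipath -v3 2>/dev/null | grep -i blacklist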

Removing the blacklist and adding a device configuration block makes multipath detect the LUNs and create the mpath devices in /dev/mapper/. Restart the daemon after each edit.

root@lxmq1063:~# cat /etc/multipath.conf | egrep -v "^\s*$|^;|^\s*#"
defaults {
user_friendly_names yes
}
devices {
device {
vendor "(HITACHI|HP)"
product "OPEN-.*"
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
features "0"
hardware_handler "0"
path_grouping_policy multibus
failback immediate
rr_weight uniform
no_path_retry 18
rr_min_io 1000
path_checker tur
}
}
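
You can verify the getuid_callout by hand; on RHEL 5 the following prints the WWID that multipath will use to group paths (newer releases use /lib/udev/scsi_id --whitelisted --device=/dev/sda instead):

root@lxmq1063:~# /sbin/scsi_id -g -u -s /block/sda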

root@lxmq1063:~# service multipathd restart

Once the daemon has restarted, issue the multipath command to scan for new devices and create the /dev/mapper/mpathN multipath devices:

root@lxmq1063:~# multipath -v2

This should bring the LUNs under the control of device-mapper multipath. Check the multipath -ll output to confirm.
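
If the new maps still do not appear, stale device-mapper state from earlier attempts may be in the way; flushing the unused maps before re-running the scan can help (both are standard multipath flags):

root@lxmq1063:~# multipath -F && multipath -v2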

root@lxmq1063:/etc# multipath -ll
mpath2 (360060e80102ab09005119e910000002f) dm-1 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:35 sdab 65:176 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:35 sdbf 67:144 [active][ready]
mpath23 (360060e80102ab09005119e910000001a) dm-22 HITACHI,DF600F
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:26 sdaw 67:0 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:26 sds 65:32 [active][ready]
mpath1 (360060e80102ab09005119e910000002e) dm-0 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:34 sdbe 67:128 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:34 sdaa 65:160 [active][ready]
mpath22 (360060e80102ab09005119e9100000019) dm-21 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:25 sdr 65:16 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:25 sdav 66:240 [active][ready]
mpath19 (360060e80102ab09005119e9100000016) dm-18 HITACHI,DF600F
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:22 sdas 66:192 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:22 sdo 8:224 [active][ready]
mpath21 (360060e80102ab09005119e9100000018) dm-20 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:24 sdau 66:224 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:24 sdq 65:0 [active][ready]
mpath18 (360060e80102ab09005119e9100000015) dm-17 HITACHI,DF600F
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:21 sdn 8:208 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:21 sdar 66:176 [active][ready]
mpath20 (360060e80102ab09005119e9100000017) dm-19 HITACHI,DF600F
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:23 sdp 8:240 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:23 sdat 66:208 [active][ready]
mpath17 (360060e80102ab09005119e9100000014) dm-16 HITACHI,DF600F
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:20 sdaq 66:160 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:20 sdm 8:192 [active][ready]
mpath16 (360060e80102ab09005119e9100000013) dm-15 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:19 sdl 8:176 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:19 sdap 66:144 [active][ready]
mpath9 (360060e80102ab09005119e910000000c) dm-8 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:12 sdai 66:32 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:12 sde 8:64 [active][ready]
mpath15 (360060e80102ab09005119e9100000012) dm-14 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:18 sdao 66:128 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:18 sdk 8:160 [active][ready]
mpath8 (360060e80102ab09005119e910000000b) dm-7 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:11 sdd 8:48 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:11 sdah 66:16 [active][ready]
mpath29 (360060e80102ab09005119e9100000020) dm-28 HITACHI,DF600F
[size=50G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:32 sdbc 67:96 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:32 sdy 65:128 [active][ready]
mpath14 (360060e80102ab09005119e9100000011) dm-13 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:17 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:17 sdan 66:112 [active][ready]
mpath7 (360060e80102ab09005119e910000000a) dm-6 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:10 sdag 66:0 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:10 sdc 8:32 [active][ready]
mpath28 (360060e80102ab09005119e910000001f) dm-27 HITACHI,DF600F
[size=50G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:31 sdx 65:112 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:31 sdbb 67:80 [active][ready]
mpath13 (360060e80102ab09005119e9100000010) dm-12 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:16 sdam 66:96 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:16 sdi 8:128 [active][ready]
mpath30 (360060e80102ab09005119e9100000021) dm-29 HITACHI,DF600F
[size=50G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:33 sdz 65:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:33 sdbd 67:112 [active][ready]
mpath6 (360060e80102ab09005119e9100000009) dm-5 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:9 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:9 sdaf 65:240 [active][ready]
mpath27 (360060e80102ab09005119e910000001e) dm-26 HITACHI,DF600F
[size=50G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:30 sdba 67:64 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:30 sdw 65:96 [active][ready]
mpath12 (360060e80102ab09005119e910000000f) dm-11 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:15 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:15 sdal 66:80 [active][ready]
mpath5 (360060e80102ab09005119e9100000008) dm-4 HITACHI,DF600F
[size=250G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:8 sdae 65:224 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:8 sda 8:0 [active][ready]
mpath26 (360060e80102ab09005119e910000001d) dm-25 HITACHI,DF600F
[size=50G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:29 sdv 65:80 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:29 sdaz 67:48 [active][ready]
mpath11 (360060e80102ab09005119e910000000e) dm-10 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:14 sdak 66:64 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:14 sdg 8:96 [active][ready]
mpath4 (360060e80102ab09005119e9100000031) dm-3 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:37 sdad 65:208 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:37 sdbh 67:176 [active][ready]
mpath25 (360060e80102ab09005119e910000001c) dm-24 HITACHI,DF600F
[size=50G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:28 sday 67:32 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:28 sdu 65:64 [active][ready]
mpath10 (360060e80102ab09005119e910000000d) dm-9 HITACHI,DF600F
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:13 sdf 8:80 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:13 sdaj 66:48 [active][ready]
mpath3 (360060e80102ab09005119e9100000030) dm-2 HITACHI,DF600F
[size=800G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:36 sdbg 67:160 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 0:0:0:36 sdac 65:192 [active][ready]
mpath24 (360060e80102ab09005119e910000001b) dm-23 HITACHI,DF600F
[size=100G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:27 sdt 65:48 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:27 sdax 67:16 [active][ready]

If partitions exist on the multipath devices but are not being listed, run kpartx against the affected multipath devices to create the partition maps. The following generic command creates maps for all multipath devices present on the system:

root@lxmq1063:/etc# /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a -p p"
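
To create the partition maps for a single device instead, run kpartx directly against it; for example (mpath3 is just an illustration, use the affected map name):

root@lxmq1063:/etc# /sbin/kpartx -a -p p /dev/mapper/mpath3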

 

Configuring lvm.conf to replace the multipath.conf blacklist

/etc/lvm/lvm.conf configuration:

Since we removed the blacklist from /etc/multipath.conf, the filtering now has to happen somewhere else: in /etc/lvm/lvm.conf.


root@lxmq1063:/etc# cat /etc/lvm/lvm.conf | egrep -v "^\s*$|^;|^\s*#"
devices {
dir = "/dev"
scan = [ "/dev" ]
preferred_names = [ ]
filter = [ "a|sddlm*|", "r|/dev/sd|" ]
cache_dir = "/etc/lvm/cache"
cache_file_prefix = ""
write_cache_state = 1
types = [ "sddlmfdrv", 16 ]
sysfs_scan = 1
md_component_detection = 1
md_chunk_alignment = 1
ignore_suspended_devices = 0
}
log {
verbose = 0
syslog = 1
overwrite = 0
level = 0
indent = 1
command_names = 0
prefix = " "
}
backup {
backup = 1
backup_dir = "/etc/lvm/backup"
archive = 1
archive_dir = "/etc/lvm/archive"
retain_min = 10
retain_days = 30
}
shell {
history_size = 100
}
global {
umask = 077
test = 0
units = "h"
activation = 1
proc = "/proc"
locking_type = 1
fallback_to_clustered_locking = 1
fallback_to_local_locking = 1
locking_dir = "/var/lock/lvm"
}
activation {
missing_stripe_filler = "error"
reserved_stack = 256
reserved_memory = 8192
process_priority = -18
mirror_region_size = 512
readahead = "auto"
mirror_log_fault_policy = "allocate"
mirror_device_fault_policy = "remove"
}
dmeventd {
snapshot_library = "libdevmapper-event-lvm2snapshot.so"
}

Make the following modifications to /etc/lvm/lvm.conf:

root@lxmq1063:/etc/lvm# diff lvm.conf.orig lvm.conf
29c29
< # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
---
> preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
54c54
< filter = [ "a|sddlm*|", "r|/dev/sd|" ]
---
> filter = [ "a|/dev/mapper/mpath.*|", "a|sddlm*|", "r|/dev/sd|" ]
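
After editing the filter, you can check which block devices LVM is now willing to scan; devices rejected by the filter simply do not show up in the listing:

root@lxmq1063:/etc/lvm# lvmdiskscan | grep mpath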

 

Rebuilding the initrd

When adding new hardware to a system, after changing configuration files used very early in the boot process, or after changing options on a kernel module, it may be necessary to rebuild the initial ramdisk (also known as initrd or initramfs) so that it includes the proper kernel modules, files, and configuration directives.

Rebuilding the initrd (RHEL 3, 4, 5)

It is recommended you make a backup copy of the initrd in case the new version has an unexpected problem:

root@lxmq1063:/etc/lvm# cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.$(date +%m-%d-%H%M%S).bak

Now build the initrd:

root@lxmq1063:/etc/lvm# mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)
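
You can verify that the multipath bits made it into the new image; on RHEL 5 the initrd is a gzip-compressed cpio archive, so its contents can be listed with:

root@lxmq1063:/etc/lvm# zcat /boot/initrd-$(uname -r).img | cpio -it | grep -i multipath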

 

Rebuilding the initramfs (RHEL 6, 7)

It is recommended you make a backup copy of the initramfs in case the new version has an unexpected problem:

root@lxmq1063:/etc/lvm# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak

Now rebuild the initramfs for the current kernel version:

root@lxmq1063:/etc/lvm# dracut -f -v
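
With dracut-based releases, lsinitrd lists the contents of the current kernel's initramfs, which makes it easy to confirm that the multipath configuration was included:

root@lxmq1063:/etc/lvm# lsinitrd | grep -i multipath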

 

Working with backups (All RHEL Versions)

As mentioned previously, it is recommended that you keep a backup of the previous initrd in case something goes wrong with the new one. If desired, you can create a separate entry in /boot/grub/grub.conf for the backup initial ramdisk image, so the old version can be chosen at boot time without restoring the backup. This example configuration allows selecting either the new or the old initial ramdisk image from the GRUB menu:

root@lxmq1063:~# cat /boot/grub/grub.conf
# GRUB configuration file for Intel-based systems
# Used to boot from disk after system installation
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
 
title Red Hat Enterprise Linux Server (2.6.18-194.26.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.26.1.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet audit=1
initrd /initrd-2.6.18-194.26.1.el5.img
 
# Backup initrd
title Red Hat Enterprise Linux 5 w/ old initrd (2.6.18-194.26.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-194.26.1.el5 ro root=LABEL=/
initrd /initrd-2.6.18-194.26.1.el5.img.bak
title Red Hat Enterprise Linux Server (2.6.18-128.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet audit=1
initrd /initrd-2.6.18-128.el5.img

 

Check the status of LVM

At this point the mpath devices are detected and you can start recovering your LVM. Notice that LVM does not yet detect our data volume groups or logical volumes:

root@lxmq1063:/etc# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [444.84 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [33.38 GB] inherit

 

1. Scan for physical volumes

root@lxmq1063:/etc# pvscan
PV /dev/mpath/mpath20p1 VG datavg1 lvm2 [100.00 GB / 0 free]
PV /dev/mpath/mpath9p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath10p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath11p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath12p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath13p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath14p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath15p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath16p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath17p1 VG datavg1 lvm2 [100.00 GB / 0 free]
PV /dev/mpath/mpath18p1 VG datavg1 lvm2 [100.00 GB / 0 free]
PV /dev/mpath/mpath19p1 VG datavg1 lvm2 [100.00 GB / 0 free]
PV /dev/mpath/mpath23p1 VG datavg1 lvm2 [100.00 GB / 0 free]
PV /dev/mpath/mpath24p1 VG datavg1 lvm2 [100.00 GB / 0 free]
PV /dev/mpath/mpath25p1 VG datavg1 lvm2 [50.00 GB / 0 free]
PV /dev/mpath/mpath26p1 VG datavg1 lvm2 [50.00 GB / 0 free]
PV /dev/mpath/mpath27p1 VG datavg1 lvm2 [50.00 GB / 0 free]
PV /dev/mpath/mpath28p1 VG datavg1 lvm2 [50.00 GB / 0 free]
PV /dev/mpath/mpath29p1 VG datavg1 lvm2 [50.00 GB / 0 free]
PV /dev/mpath/mpath30p1 VG datavg1 lvm2 [50.00 GB / 0 free]
PV /dev/mpath/mpath3p1 VG datavg1 lvm2 [800.00 GB / 2.15 GB free]
PV /dev/mpath/mpath4p1 VG datavg1 lvm2 [200.00 GB / 0 free]
PV /dev/mpath/mpath5p1 VG datavg2 lvm2 [250.00 GB / 0 free]
PV /dev/mpath/mpath6p1 VG datavg2 lvm2 [250.00 GB / 0 free]
PV /dev/mpath/mpath7p1 VG datavg2 lvm2 [250.00 GB / 0 free]
PV /dev/mpath/mpath8p1 VG datavg2 lvm2 [250.00 GB / 0 free]
PV /dev/mpath/mpath21p1 VG datavg2 lvm2 [250.00 GB / 0 free]
PV /dev/mpath/mpath22p1 VG datavg2 lvm2 [250.00 GB / 0 free]
PV /dev/mpath/mpath1p1 VG datavg2 lvm2 [250.00 GB / 0 free]
PV /dev/mpath/mpath2p1 VG datavg2 lvm2 [250.00 GB / 964.00 MB free]
PV /dev/cciss/c0d0p2 VG VolGroup00 lvm2 [478.22 GB / 0 free]
Total: 31 [5.84 TB] / in use: 31 [5.84 TB] / in no VG: 0 [0 ]

 

2. Scan for volume groups

root@lxmq1063:/etc# vgscan
Reading all physical volumes. This may take a while...
Found volume group "datavg2" using metadata type lvm2
Found volume group "datavg1" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2

 

3. Scan for logical volumes

root@lxmq1063:/etc# lvscan
inactive '/dev/datavg2/oraarchivelv' [1.95 TB] inherit
inactive '/dev/datavg1/sysadminlv' [256.00 MB] inherit
inactive '/dev/datavg1/orau01lv' [769.34 GB] inherit
inactive '/dev/datavg1/orau02lv' [2.64 TB] inherit
inactive '/dev/datavg1/oracle_1120' [20.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol00' [444.84 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [33.38 GB] inherit

Notice that all our data logical volumes are inactive. We now need to activate them:

root@lxmq1063:/etc# lvchange -ay /dev/datavg2/oraarchivelv
root@lxmq1063:/etc# lvchange -ay /dev/datavg1/sysadminlv
root@lxmq1063:/etc# lvchange -ay /dev/datavg1/orau01lv
root@lxmq1063:/etc# lvchange -ay /dev/datavg1/orau02lv
root@lxmq1063:/etc# lvchange -ay /dev/datavg1/oracle_1120
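
Alternatively, vgchange can activate every logical volume in a volume group in one shot:

root@lxmq1063:/etc# vgchange -ay datavg1 datavg2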

root@lxmq1063:~# lvscan -v && lvs
Finding all logical volumes
ACTIVE '/dev/datavg1/sysadminlv' [256.00 MB] inherit
ACTIVE '/dev/datavg1/orau01lv' [769.34 GB] inherit
ACTIVE '/dev/datavg1/orau02lv' [2.64 TB] inherit
ACTIVE '/dev/datavg1/oracle_1120' [20.00 GB] inherit
ACTIVE '/dev/datavg2/oraarchivelv' [1.95 TB] inherit
ACTIVE '/dev/VolGroup00/LogVol00' [444.84 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [33.38 GB] inherit
LV           VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert
LogVol00     VolGroup00 -wi-ao 444.84G
LogVol01     VolGroup00 -wi-ao  33.38G
oracle_1120  datavg1    -wi-a-  20.00G
orau01lv     datavg1    -wi-a- 769.34G
orau02lv     datavg1    -wi-a-   2.64T
sysadminlv   datavg1    -wi-a- 256.00M
oraarchivelv datavg2    -wi-a-   1.95T
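
With all logical volumes active, the filesystems on them can be checked and mounted. Assuming the mount points survived the restore in /etc/fstab (fsck -n is a read-only check; the LV name here is just an example from the output above):

root@lxmq1063:~# fsck -n /dev/datavg1/oracle_1120
root@lxmq1063:~# mount -a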

 

Sources:

https://access.redhat.com/solutions/47894
https://access.redhat.com/solutions/1958
https://access.redhat.com/support/cases/#/case/01529861
