
Discovering new Drives or LUNs on Linux without rebooting

Some discovery commands for the system

When we run a live system, one of our tasks is to add capacity without impacting production. This is often done by adding storage, either via a SAN interface or by adding a new hard drive (physical or virtual) to the server.

Unfortunately, once we add new storage, it is not necessarily detected automatically by the operating system. Fortunately, many methods exist to manually trigger discovery of the newly available storage without requiring a reboot.

This page is my cheat-sheet reminder for when I have to perform such tasks:

Find the mapping for SCSI devices
# cat /proc/scsi/sg/device_hdr /proc/scsi/sg/devices

 host chan id lun type opens qdepth busy online
 1    0    0  0   0    1     32     0    1
 1    0    0  1   0    1     32     0    1
 1    0    0  2   0    1     32     0    1
 1    0    0  3   0    1     32     0    1
Find out how many disks are visible
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l

3
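The counting pipeline above can be wrapped in a tiny helper so the same count can be taken before and after a rescan (a sketch; the function name count_disks is my own, the pipeline is the one above):

```shell
# Count the "Disk ..." header lines from `fdisk -l` output, excluding
# device-mapper (dm-) entries so multipath devices are not counted twice.
# Reads fdisk output on stdin.
count_disks() {
    egrep '^Disk' | egrep -v 'dm-' | wc -l
}

# On a live system:
#   fdisk -l 2>/dev/null | count_disks
```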
# lsscsi

[1:0:0:0] disk HITACHI OPEN-V 7303 /dev/sda
[2:0:0:0] disk HITACHI OPEN-V 7303 /dev/sdb
[3:0:0:0] cd/dvd _NEC DVD_RW ND-3500AG 2.06 /dev/sr0
[4:0:6:0] tape HP Ultrium 2-SCSI F63D /dev/st0


Find details about each configured host bus adapter

# systool -c fc_host -v

Class = "fc_host"
Class Device = "host1"
Class Device path = "/sys/class/fc_host/host1"
fabric_name = "0x1000000533cd73cd"
issue_lip = <store method only>
node_name = "0x500143802422ab0d"
port_id = "0x011200"
port_name = "0x500143802422ab0c"
port_state = "Online"
port_type = "NPort (fabric via point-to-point)"
speed = "8 Gbit"
supported_classes = "Class 3"
supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
symbolic_name = "HPAJ764A FW:v5.06.03 DVR:v8.03.07.15.05.09-k"
system_hostname = ""
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent = <store method only>

Device = "host1"
Device path = "/sys/devices/pci0000:00/0000:00:06.0/0000:17:00.0/host1"
edc = <store method only>
fw_dump =
nvram = "ISP "
optrom_ctl = <store method only>
optrom =
reset = <store method only>
sfp = ""
uevent = <store method only>
vpd = "?$"
Class Device = "host2"
Class Device path = "/sys/class/fc_host/host2"
fabric_name = "0x1000000533f3f947"
issue_lip = <store method only>
node_name = "0x500143802422ab0f"
port_id = "0x011200"
port_name = "0x500143802422ab0e"
port_state = "Online"
port_type = "NPort (fabric via point-to-point)"
speed = "8 Gbit"
supported_classes = "Class 3"
supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
symbolic_name = "HPAJ764A FW:v5.06.03 DVR:v8.03.07.15.05.09-k"
system_hostname = ""
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent = <store method only>

Device = "host2"
Device path = "/sys/devices/pci0000:00/0000:00:06.0/0000:17:00.1/host2"
edc = <store method only>
fw_dump =
nvram = "ISP "
optrom_ctl = <store method only>
optrom =
reset = <store method only>
sfp = ""
uevent = <store method only>
vpd = "?$"
Find out how many SCSI controllers are present

# ls -l /sys/class/scsi_host

drwxr-xr-x 2 root root 0 Sep 30 03:04 host0
drwxr-xr-x 2 root root 0 Sep 30 14:25 host1
drwxr-xr-x 2 root root 0 Sep 30 14:25 host2
drwxr-xr-x 2 root root 0 Sep 30 03:04 host3
drwxr-xr-x 2 root root 0 Sep 30 03:04 host4
Find out how many host bus adapters are configured


# ls /sys/class/fc_host


host1 host2


Current multipath situation -- rerun this after the "Scan HBA for new devices" steps to see if anything new was detected


# ls -altr /dev/mapper/mpath*

brw-rw---- 1 root disk 253, 0 May 26 07:50 /dev/mapper/mpath1
brw-rw---- 1 root disk 253, 1 May 26 07:50 /dev/mapper/mpath1p1
Show multipath info

# multipath -l


mpath1 (360060e80132dab0050202dab00001800) dm-0 HITACHI,OPEN-V
[size=600G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:0 sda 8:0  [active][undef]
 \_ 2:0:0:0 sdb 8:16 [active][undef]
Show LVM Physical Volumes

# pvs


PV VG Fmt Attr PSize PFree
/dev/cciss/c0d0p2 rootvg lvm2 a-- 409.66G 4.31G
/dev/mapper/mpath1p1 datavg lvm2 a-- 598.09G 7.27G
Show LVM Volume Groups

# vgs


VG #PV #LV #SN Attr VSize VFree
datavg 1 1 0 wz--n- 598.09G 7.27G
rootvg 1 18 0 wz--n- 409.66G 4.31G


Scan HBA for new devices

Scan for new LUNs (Loop Initialization Protocol (LIP))

Reference: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/scanning-storage-interconnects.html

This is asynchronous and will log its results in /var/log/messages (follow with tail -f /var/log/messages).
Also, we might want to keep track of how many drives are detected before scanning:

 watch -n 1 "fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-'"
 for i in `seq 1 2`; do echo "1" > /sys/class/fc_host/host${i}/issue_lip; done
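Since /sys/class/fc_host already tells us which FC hosts exist, the loop can discover them instead of hard-coding host numbers. A sketch; issue_lip_all and its optional directory argument (there only so the loop can be exercised against a fake tree) are my own:

```shell
# Issue a LIP on every Fibre Channel host found in sysfs, rather than
# assuming specific host numbers. The optional argument overrides the
# sysfs directory for dry-run testing.
issue_lip_all() {
    dir="${1:-/sys/class/fc_host}"
    for host in "$dir"/host*; do
        [ -w "$host/issue_lip" ] || continue   # skip hosts without the attribute
        echo "Issuing LIP on ${host##*/}"
        echo 1 > "$host/issue_lip"
    done
}

# On a live system:
#   issue_lip_all
```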


Scan for new SCSI disks

Reference: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/adding_storage-device-or-path.html

Also, we might want to keep track of how many drives are detected with fdisk before scanning:

 watch -n 1 "fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-'"
 for i in `seq 0 4`; do echo "- - -" > /sys/class/scsi_host/host${i}/scan; done
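The same pattern works for the SCSI host rescan: loop over whatever /sys/class/scsi_host contains instead of a fixed seq range. A sketch; rescan_scsi_hosts and its test-only directory argument are my own:

```shell
# Rescan every SCSI host for new devices. "- - -" means wildcard
# channel, target and LUN. The optional argument overrides the sysfs
# directory for dry-run testing.
rescan_scsi_hosts() {
    dir="${1:-/sys/class/scsi_host}"
    for host in "$dir"/host*; do
        [ -w "$host/scan" ] || continue   # skip hosts without a scan attribute
        echo "Rescanning ${host##*/}"
        echo "- - -" > "$host/scan"
    done
}

# On a live system:
#   rescan_scsi_hosts
```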


Configure the newly discovered disk

Create a partition on the disk via fdisk (alternative to parted)

# fdisk /dev/mapper/[newdevicename]
 n,p,1,t,1,8e,w
n = new partition
p = primary partition
1 = partition number
t = Set partition type
1 = partition number
8e = Linux LVM
w = write partition table

Create a new partition on the disk via parted (alternative to fdisk)

# parted /dev/mapper/[newdevicename]
 mklabel msdos
 yes
 mkpart primary ext3 0 -0
 set 1 lvm on
 quit
Another way with parted (alternative)
# parted /dev/mapper/[newdevicename] mklabel gpt
# parted -a optimal /dev/mapper/[newdevicename] mkpart primary ext4 '0%' '100%'
# parted /dev/mapper/[newdevicename] set 1 lvm on
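The three parted calls above can be collected into one helper using parted's script mode (-s, no prompts). A dry-run sketch: the function name and the RUN guard are mine, and by default it only prints the commands:

```shell
# Label the device GPT, create one full-size partition, and flag it for
# LVM, all via parted script mode. Dry run by default: RUN defaults to
# "echo" so the commands are only printed; set RUN= to execute for real.
prepare_lvm_partition() {
    dev="$1"
    run="${RUN:-echo}"
    $run parted -s "$dev" mklabel gpt
    $run parted -s -a optimal "$dev" mkpart primary ext4 0% 100%
    $run parted -s "$dev" set 1 lvm on
}

# Example (prints the commands only):
#   prepare_lvm_partition /dev/mapper/newdevicename
```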


Validate that our partitions have been created

List all partitions on the device

# parted /dev/mapper/[newdevicename] print
# ls -altr /dev/mapper/

Create our LVM structure

Create our LVM Physical Volume

# pvcreate /dev/mapper/[newdevicename]

If you want to extend your existing Volume Group by adding the new Physical Volume (alternative to vgcreate)

# vgextend [nameofVG] /dev/mapper/[newdevicename]

Or, if you want to create a new Volume Group (alternative to vgextend)

# vgcreate [VolumeGroupName] [physicaldevice] [physicaldevice2]

If you want to extend your Logical Volume and, at the same time, its file system (alternative to lvcreate)

# lvextend -r -L +100G /dev/mapper/datavg-stlv82   # /install/storix Logical Volume
# lvextend -r -l +100%FREE /dev/mapper/datavg-stlv02

Or, if you want to create a new Logical Volume (alternative to lvextend)

# lvcreate -l 20 -n logical_vol1 vol_grp1
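Putting the LVM steps together end to end (a dry-run sketch: the function name, the RUN guard, and the device/VG/LV names are placeholders of mine):

```shell
# Take a freshly discovered multipath device all the way to a ready
# file system: PV, VG, LV, then mkfs. Dry run by default: RUN defaults
# to "echo" so the commands are only printed; set RUN= to execute.
new_disk_to_lvm() {
    dev="$1"; vg="$2"; lv="$3"
    run="${RUN:-echo}"
    $run pvcreate "$dev"
    $run vgcreate "$vg" "$dev"
    $run lvcreate -l 100%FREE -n "$lv" "$vg"
    $run mkfs.ext4 "/dev/mapper/${vg}-${lv}"
}

# Example (prints the commands only):
#   new_disk_to_lvm /dev/mapper/mpath2 datavg2 datalv
```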
