Increase an LSI MegaRAID RAID0 array with LVM on top


BIG data is the new bacon, but to run the new BIG-data-driven businesses we need a lot of storage, and better than a lot of storage is a lot of super fast storage 😉

To achieve this dream the best solution is a RAID0 (striping) array with super fast SSDs. As the storage needs grow with the business, additional disks need to be added to the RAID0 array, making it BIGGER and faster… which is really nice… but not very easy, and it can be risky if you mess things up.

Here is a tutorial for an LSI MegaRAID hardware RAID controller using the storcli utility to grow an existing RAID array, with no downtime after the drives are added and without moving or restoring any data.

Right now we have one big and fast LVM volume made from 4x960GB SSDs in a RAID0 array, a total of 3839GB. We need more storage, so we added another 2x960GB SSDs; below are the steps taken to use the new storage space.
Disclaimer: this tutorial has been tested on three different systems without downtime or data loss; however, we do not guarantee that it will work the same for you or that you are not going to lose data. As always, the smart thing is to back up your data and practice on a non-production system first.
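Before touching anything, it can help to record what the current layout looks like so you have something to compare against later. Nothing below is specific to the RAID controller; these are standard Linux and LVM listing commands:

lsblk      # block devices and partitions as the kernel sees them
pvs        # LVM physical volumes
vgs        # LVM volume groups
lvs        # LVM logical volumes
df -h      # mounted filesystems and their current sizes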

I. Increase the size and number of disks of a RAID0 array on the LSI controller using the storcli utility:

We cd into the storcli location:

cd /opt/MegaRAID/storcli/

We look for the new disks:

./storcli64 /call show all

We look for the two newly installed disks (output limited to the relevant data):

Physical Drives = 16
PD LIST :
=======
------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
------------------------------------------------------------------------------
..............................................................................
8:8 18 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
8:9 19 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
8:10 20 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
8:11 21 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
..............................................................................
8:14 24 UGood - 893.75 GB SATA SSD N N 512B SDLFGC7R-960G-1HST U
8:15 25 UGood - 893.75 GB SATA SSD N N 512B SDLFGC7R-960G-1HST U
------------------------------------------------------------------------------

The new disks are at 8:14 and 8:15 and are in the "UGood" (unconfigured-good) state. (If the disks are in any other state, you have to put them in this state first; see additional info here: http://pleiades.ucsc.edu/doc/lsi/MegaRAID_SAS_Software_User_Guide.pdf)
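For example, a drive reported as unconfigured-bad can usually be returned to unconfigured-good with a command along these lines (enclosure 8, slot 14 here; adjust to your own EID:Slt values, and see the user guide above for foreign configurations or other states):

./storcli64 /c0 /e8 /s14 set good force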

We look for the virtual drive (the RAID array we want to extend; output limited to the relevant data):

./storcli64 /c0 /vall show all

In our case this is it:

/c0/v1 :
======
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
1/1 RAID0 Optl RW Yes NRWBD - ON 3.491 TB RAID0-B
----------------------------------------------------------------
PDs for VD 1 :
============
--------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
--------------------------------------------------------------------------
8:8 18 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
8:9 19 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
8:10 20 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
8:11 21 Onln 1 893.75 GB SATA SSD N N 512B SDLFOCAR-960G-1HST U
--------------------------------------------------------------------------

We run the following command to migrate the array from a 4-disk to a 6-disk RAID0 array:

./storcli64 /c0 /v1 start migrate type=raid0 option=add drives=8:14-15

We can monitor the status of the migration using:

./storcli64 /c0 /v1 show migrate
VD Operation Status :
===================
-------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-------------------------------------------------------
1 Migrate 0 In progress 14 Hours 4 Minutes
-------------------------------------------------------
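The migration runs in the background while the array stays online. If you prefer not to re-run the status command by hand, it can be wrapped in watch (refreshing every 60 seconds here):

watch -n 60 ./storcli64 /c0 /v1 show migrate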

II. Create a new partition on the storage made available by the two new disks. We are going to use parted for this task (fdisk or cfdisk can also be used). There is also the possibility to unmount the current partition and grow it by removing and recreating it across the entire device, but that is riskier and has the disadvantage of having to unmount the storage, so it cannot be done live. Since we have an LVM setup on top, we went for the simpler and safer option:

But first, we need to re-scan the SCSI bus so that the kernel sees the new storage available on the disk without a reboot:

List the installed SCSI buses we need to re-scan:

ls /sys/class/scsi_device/
0:0:36:0 0:0:37:0 0:2:0:0 0:2:1:0

Run the following command on each of the buses listed above (change the bus numbers to match your system):

echo 1 > /sys/class/scsi_device/0\:0\:36\:0/device/rescan
echo 1 > /sys/class/scsi_device/0\:0\:37\:0/device/rescan
echo 1 > /sys/class/scsi_device/0\:2\:0\:0/device/rescan
echo 1 > /sys/class/scsi_device/0\:2\:1\:0/device/rescan
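The same rescan as a one-liner, covering every SCSI bus present on the system regardless of its numbering:

for dev in /sys/class/scsi_device/*/device/rescan; do echo 1 > "$dev"; done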

Enter parted:

parted

Select the device backing the RAID0 array:

(parted) select /dev/sdb

List the free space on the device (the Error and Warning messages are expected since the device has grown; just answer Fix to both :p )

(parted) print free
Error: The backup GPT table is not at the end of the disk, as it should be. This might mean that another operating system believes the disk is smaller. Fix,
by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? F
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 3748659200 blocks) or continue
with the current setting?
Fix/Ignore? F
Model: AVAGO MR9361-8i (scsi)
Disk /dev/sdb: 5758GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 3839GB 3839GB Linux LVM lvm
3839GB 5758GB 1919GB Free Space

Create a new partition:

(parted) mkpart
Partition name? []?
File system type? [ext2]?
Start? 3839GB
End? 5758GB

List the new configuration of the disk:

(parted) print free
Model: AVAGO MR9361-8i (scsi)
Disk /dev/sdb: 5758GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 3839GB 3839GB Linux LVM lvm
3839GB 3839GB 16.9kB Free Space
2 3839GB 5758GB 1919GB
5758GB 5758GB 1032kB Free Space

Set the partition flag to lvm:

(parted) set 2 lvm
New state? [on]/off? on

List the partitions:

(parted) print
Model: AVAGO MR9361-8i (scsi)
Disk /dev/sdb: 5758GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot
Number Start End Size File system Name Flags
1 1049kB 3839GB 3839GB primary lvm
2 3839GB 5758GB 1919GB xfs lvm
(parted) quit
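parted normally notifies the kernel about the new partition on its own. If /dev/sdb2 does not show up after quitting parted, you can ask for a re-read of the partition table and verify, for example:

partprobe /dev/sdb
lsblk /dev/sdb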

III. Extend the LVM mount point with the new partition:

Listing the current Physical volumes:

[root@xxxxx ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name fastVolume
PV Size 3.49 TiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 915199
Free PE 0
Allocated PE 915199
PV UUID Lq8Rgq-2JU2-4JXu-p6a7-OcRV-EYvJ-T9jgoj

Creating a new Physical Volume:

[root@xxxxx ~]# pvcreate /dev/sdb2
WARNING: xfs signature detected on /dev/sdb2 at offset 0. Wipe it? [y/n]: y
Wiping xfs signature on /dev/sdb2.
Physical volume "/dev/sdb2" successfully created.
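A quick sanity check that LVM now knows about both physical volumes on the array:

pvs /dev/sdb1 /dev/sdb2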

Listing current Volume Groups:

[root@xxxxx ~]# vgdisplay
--- Volume group ---
VG Name fastVolume
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.49 TiB
PE Size 4.00 MiB
Total PE 915199
Alloc PE / Size 915199 / 3.49 TiB
Free PE / Size 0 / 0
VG UUID ogdQOR-tQf2-Vlvp-IGIA-6LyL-MTdx-iUEKmJ

Add the new PV to the existing VG:

[root@xxxxx ~]# vgextend fastVolume /dev/sdb2
Volume group "fastVolume" successfully extended

Now we can see the new free storage in the volume group:

[root@xxxxx ~]# vgdisplay
--- Volume group ---
VG Name fastVolume
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size <5.24 TiB
PE Size 4.00 MiB
Total PE 1372798
Alloc PE / Size 915199 / 3.49 TiB
Free PE / Size 457599 / <1.75 TiB
VG UUID ogdQOR-tQf2-Vlvp-IGIA-6LyL-MTdx-iUEKmJ
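The free extents now belong to the volume group, but the mounted logical volume has not been grown yet. That final step depends on your LV name and filesystem; here is a minimal sketch, where fastLV and /fastmount are placeholders (check lvdisplay and df -h for the real names) and XFS is assumed for the mounted filesystem:

lvextend -l +100%FREE /dev/fastVolume/fastLV   # give all free extents to the LV
xfs_growfs /fastmount                          # grow an XFS filesystem online
# for ext4 instead: resize2fs /dev/fastVolume/fastLV

Alternatively, lvextend -r can extend the LV and grow the filesystem in one step.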

With that, the new disks are part of the already mounted location and usable. I hope this has helped.

Additional info can be found here:

MegaRAID SAS Software User Guide: http://pleiades.ucsc.edu/doc/lsi/MegaRAID_SAS_Software_User_Guide.pdf

https://communities.vmware.com/thread/492752

https://ma.ttias.be/increase-a-vmware-disk-size-vmdk-formatted-as-linux-lvm-without-rebooting/

https://www.thomas-krenn.com/en/wiki/StorCLI#Incorporating_an_improperly_removed_device
