Increasing an LSI MegaRAID RAID0 array with LVM on top


BIG data is the new bacon, but to run the new BIG-data-driven businesses we need a lot of storage, and better than a lot of storage is a lot of super-fast storage ;)

To achieve this dream, the best solution is a RAID0 (striping) array of super-fast SSDs. As storage needs grow with the business, additional disks need to be added to the RAID0 array, making it BIGGER and faster… which is really nice… but not very easy, and it may be risky if you mess things up.

Here is a tutorial for an LSI MegaRAID hardware RAID controller, using the storcli utility to grow an existing RAID array with no additional downtime after the drives are added, and without moving or restoring data.

We have one big and fast LVM volume made from 4x 960GB SSDs in a RAID0 array, a total of 3839GB. We need more storage, so we added another 2x 960GB SSDs; below are the steps taken to use the new storage space.
Disclaimer: this tutorial has been tested on three different systems without downtime or data loss; however, we do not guarantee that it will work the same for you or that you are not going to lose data. As always, the smart thing to do is back up your data and practice on a non-production system first.

I. Increase the size and number of disks of a RAID0 array on an LSI controller using the storcli utility:

We cd into storcli location:
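The install path below is the usual default for the MegaRAID packages on Linux; it is an assumption, so adjust it (and the binary name, storcli vs. storcli64) to your system:

```shell
# Typical install location of the storcli utility on Linux;
# your path and binary name (storcli vs. storcli64) may differ.
cd /opt/MegaRAID/storcli
```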

We look for the new disks:

We look for the 2 new installed disks (output limited to relevant data):
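A sketch of the drive listing, assuming the controller is /c0 (verify the controller number first with a plain `show`):

```shell
# List all physical drives on controller 0 (all enclosures, all slots).
# /c0 is an assumption -- run "./storcli64 show" first to find your controller.
./storcli64 /c0 /eall /sall show
```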

The new disks are under 8:14 and 8:15 (enclosure:slot) and are in state “Unconfigured-Good”. (If the disks are in any other state, you first have to bring them into this state; see the additional info links at the end of this post.)
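If a drive shows up in another state, storcli can usually flip it to Unconfigured-Good; a sketch for one of our slots (adjust enclosure/slot to your output):

```shell
# Example only: bring drive in enclosure 8, slot 14 to Unconfigured-Good.
# "force" may be needed if the drive carries foreign/JBOD metadata.
./storcli64 /c0 /e8 /s14 set good force
```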

We look for the virtual drive (the RAID array that we want to extend; output limited to relevant data):

In our case this is it:
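The virtual drive listing can be pulled like this; note the VD number of your RAID0 array (we assume /v0 below, check your own output):

```shell
# Show all virtual drives on controller 0 and note the VD number
# of the RAID0 array to be extended (assumed to be 0 in this post).
./storcli64 /c0 /vall show
```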

We run the following command to migrate the array from a 4-disk to a 6-disk RAID0 array:
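A sketch of the migration command, assuming VD 0 on controller 0 and the new drives at 8:14 and 8:15 (substitute your own IDs):

```shell
# Online capacity expansion: keep RAID0 but add drives 8:14 and 8:15 to VD 0.
# The array stays online while the controller restripes the data.
./storcli64 /c0 /v0 start migrate type=raid0 option=add drives=8:14-15
```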

We can monitor the status of the migration using:
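Progress can be polled with the matching `show migrate` command (same assumed controller/VD numbers as above):

```shell
# Report migration progress for VD 0; re-run until it reports the
# migration as completed before touching the partition table.
./storcli64 /c0 /v0 show migrate
```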

II. Create a new partition from the new storage made available by the two new disks. We are going to use parted for this task (fdisk or cfdisk can also be used). There is also the possibility to unmount the current partition and grow it by removing and recreating it across the entire device, but this is riskier and has the disadvantage that the storage must be unmounted, so it cannot be done live. Since we have an LVM setup on top, we went for the simpler and safer option:

But first, we need to re-scan the SCSI bus so that the kernel sees the new storage available on the disk without a reboot:

List the installed SCSI buses we need to re-scan:

Run the following command on all the buses listed above (change the bus to your system findings):
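A sketch of the rescan, assuming hosts host0, host1, … and the array appearing as /dev/sdb (check lsblk for your device name):

```shell
# List the SCSI hosts present on the system:
ls /sys/class/scsi_host/
# Trigger a rescan on each host found above (repeat for host1, host2, ...):
echo "- - -" > /sys/class/scsi_host/host0/scan
# If the block device still reports the old size, re-read its capacity
# directly (sdb is an assumption -- use your array's device):
echo 1 > /sys/block/sdb/device/rescan
```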

Enter parted:

Select the device for the raid0 array:
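Assuming the RAID0 array shows up as /dev/sdb (again, verify with lsblk before proceeding), the two steps above are simply:

```shell
# Start parted directly on the device backing the RAID0 array.
# /dev/sdb is what the virtual drive appears as on our system.
parted /dev/sdb
```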

List free space on the device. (The ERROR and WARNING messages are expected, because the device has grown underneath the GPT table. Just let parted fix it :p )
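Inside parted this is one command; answering Fix moves the backup GPT header to the new end of the device:

```shell
# Inside parted: show the partition table plus free space.
# parted will warn that not all space is usable -- answer "Fix"
# so the backup GPT header is relocated to the new end of the device.
(parted) print free
```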

Create a new partition:

List the new configuration of the disk:

Set the partition flag to lvm:

List partitions:
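The partition steps above can be sketched as a short parted session. The start/end values are illustrative for 2x 960GB of new space, and the partition number 2 is an assumption; use the exact boundaries and numbering from your own `print free` output:

```shell
# Create a new partition in the free space reported by "print free".
# 3839GB/5759GB are illustrative boundaries -- use your own output.
(parted) mkpart primary 3839GB 5759GB
# Mark the new partition (number 2 here -- confirm with "print") for LVM:
(parted) set 2 lvm on
# Verify the final layout, then leave parted:
(parted) print
(parted) quit
```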

III. Extending an LVM mount point with the new partition:

Listing the current Physical volumes:
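A quick way to list them (pvdisplay gives the verbose version):

```shell
# Show the existing physical volumes and which volume group each belongs to:
pvs
```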

Creating a new Physical Volume:
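Assuming the new partition came up as /dev/sdb2 (partition 2 on the array device in our layout):

```shell
# Initialize the new partition as an LVM physical volume.
# /dev/sdb2 is an assumption from our partition layout -- check lsblk.
pvcreate /dev/sdb2
```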

Listing current Volume Groups:
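The short listing (vgdisplay for the verbose one):

```shell
# Show the existing volume groups with their current size and free space:
vgs
```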

Add the new PV to the existing VG:
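A sketch with a placeholder VG name; use the name from your vgs output:

```shell
# Extend the existing volume group with the new physical volume.
# "vg_data" is a placeholder -- substitute your own VG name.
vgextend vg_data /dev/sdb2
```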

We extend the Logical Volume over the new free space, grow the filesystem, and now we can see the new free storage on the mount point:
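A sketch of the final step, with placeholder VG/LV names from our setup; `-r` makes lvextend resize the filesystem in the same step, which works online for ext4 and xfs:

```shell
# Grow the logical volume into all free space in the VG and resize the
# filesystem in one step (-r). VG/LV names are placeholders -- use lvs
# to find yours.
lvextend -l +100%FREE -r /dev/vg_data/lv_storage
# Confirm the extra space is visible on the mount point:
df -h
```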

The new disks are now part of the already-mounted location and usable, all without unmounting anything. I hope this has helped.

Additional info can be found here:

