
MDADM - RAID1 expansion

Posted on June 3, 2022
WARNING: Throughout this note it is essential to identify your disks, your partitions, your mounts, and the construction of the RAID. Your backups must be completed, tested, and operational before starting such an operation. Your disaster recovery plan must be established and ready. Do not copy any command without understanding it, and make sure you can anticipate its consequences.

Today I'm writing a tutorial for sysadmins who want to grow a RAID level 1 array managed by mdadm on a bare-metal server. This article was written on the Debian GNU/Linux distribution, without a graphical interface.

Context of this tutorial

We will take as an example a hypothetical RAID1 /dev/md0 composed of 2 disks (/dev/sda and /dev/sdb) of 1TB (MBR partition table) that we want to replace with 2 disks of 3TB (GPT partition table).

If your motherboard has hot-swap functionality and it is enabled on the SATA ports, there is no need to plan a server shutdown.

As mentioned at the top of this note, it is important to keep track of your disks, partitions, mounts, and RAID construction. To do this, you can use various tools such as:

cat /proc/mdstat
mdadm --detail /dev/md0
cat /etc/fstab
df -h
mount
blkid
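
For reference, on a healthy two-disk RAID1 the first of these commands shows something like this (device names and block counts here are illustrative, not taken from a real machine):

cat /proc/mdstat
---
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      976630464 blocks super 1.2 [2/2] [UU]

unused devices: <none>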

A - Checking GRUB

Before starting, we check that GRUB is installed on both disks. To do this, run a GRUB installation on all the disks in your array. For our example, we will do:

grub-install /dev/sda
grub-install /dev/sdb
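
If you want a quick sanity check that a boot loader really landed in each disk's first sector, one illustrative way (not part of the original procedure) is to inspect the sector with file, which should report something like "DOS/MBR boot sector":

dd if=/dev/sda bs=512 count=1 2>/dev/null | file -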

B - Removing the first disk

First we need to remove a disk from the RAID1 array managed by mdadm. To do this, we provoke a failure on the disk; otherwise it cannot be removed cleanly, as it will still be in use.

Mark the first disk as faulty:

mdadm --manage /dev/md0 --set-faulty /dev/sda1

We check that the state of the RAID shows [U_] instead of [UU] with:

cat /proc/mdstat

And the detail of the RAID array with:

mdadm --detail /dev/md0
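
Note that marking the disk faulty does not detach it from the array by itself. Once it shows as faulty, remove it cleanly (the same /dev/sda1 member as above):

mdadm --manage /dev/md0 --remove /dev/sda1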

Your first disk is now detached from the RAID array. Verify that your system boots from the remaining disk, then shut down the machine to unplug the 1TB drive and insert the 3TB drive in its place (or do it with hot swapping).

C - Partitioning

With your new, larger disk inserted and the server restarted, you will need to partition it.

Disks with an MBR partition table can be managed with the fdisk utility. For disks with a GPT partition table, it is necessary to use gdisk.

In our example we use GPT, and I do the following partitioning:

gdisk -l /dev/sdb
---
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02  BIOS boot partition
   2            4096          208895   100.0 MiB   EF00  EFI System
   3          208896         8597503   4.0 GiB     FD00  Linux RAID
   4         8597504      5860533134   2.7 TiB     FD00  Linux RAID
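
If you prefer to script this layout instead of answering gdisk's interactive prompts, the same table can be reproduced with sgdisk (shipped in Debian's gdisk package). This is a sketch, not part of the original procedure, assuming /dev/sdb is the blank new disk:

sgdisk --zap-all /dev/sdb   # destroys any existing partition table on /dev/sdb
sgdisk -n 1:2048:4095 -t 1:EF02 -c 1:"BIOS boot partition" /dev/sdb
sgdisk -n 2:4096:208895 -t 2:EF00 -c 2:"EFI System" /dev/sdb
sgdisk -n 3:208896:8597503 -t 3:FD00 -c 3:"Linux RAID" /dev/sdb
sgdisk -n 4:8597504:5860533134 -t 4:FD00 -c 4:"Linux RAID" /dev/sdb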

D - Notes on UEFI and BIOS

There are two main types of partition table: MBR and GPT.

As a reminder, the BIOS and UEFI (its replacement) are the intermediaries between the hardware and the operating system: it is the BIOS or UEFI that allows your machine to boot and launch your OS.

As far as partitioning is concerned, please note that a BIOS motherboard boots from the first sector of a disk and, on a GPT disk, needs a small BIOS boot partition (gdisk code EF02) to hold GRUB, while a UEFI motherboard boots GPT disks through an EFI System partition (gdisk code EF00).

In the case of this tutorial, we assume that we have old hardware and a BIOS motherboard that does not support UEFI.

I prefer to future-proof things by creating both an EF02 partition and an EF00 partition (which we will not use here, of course). Thus, in case of hardware failure in the future, you will be able to reintegrate these disks on a UEFI motherboard, because the partition will already be in place and you will only have to install the GRUB package in its UEFI version. Very convenient!

E - Reintegration into the array

We will reintegrate the disk into the RAID1 array so that the data from the remaining 1TB disk synchronizes onto this new 3TB disk.

mdadm --manage /dev/md0 --add /dev/sdbX

We let the synchronization happen. You can follow it with the usual command:

cat /proc/mdstat
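
If you would rather not re-run that command by hand, you can follow the rebuild live with watch (from Debian's procps package), refreshing every 5 seconds and highlighting what changes:

watch -d -n 5 cat /proc/mdstat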

F - GRUB installation

WARNING: the simple command grub-install /dev/sdb will not work here; you will not get an error, BUT the system will still fail to boot.

Since the disk has been changed, GRUB must be installed on it with the command:

grub-install --recheck /dev/sdb

G - Installation of the second disk

Once the first synchronization is complete, repeat steps B, C, E and F for the second disk. A shortcut for the partitioning step is sketched below.
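
A sketch of that shortcut, assuming /dev/sdb is the disk partitioned in step C and /dev/sda is the new blank disk (careful: with sgdisk -R the target comes first and the source last), followed by a GUID randomization so the two disks do not share identifiers:

sgdisk -R /dev/sda /dev/sdb   # copy the partition table of /dev/sdb onto /dev/sda
sgdisk -G /dev/sda            # randomize disk and partition GUIDs on the copy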

H - RAID and filesystem extension

Once the two disks are in place and the RAID sync is complete, the extra space is not yet available.

It is necessary to expand the RAID with the following command:

mdadm --grow /dev/md0 --size=max

Finally, it is necessary to extend the file system (in our case, ext4):

resize2fs /dev/md0
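
To confirm that both the array and the filesystem picked up the new size, re-check with:

mdadm --detail /dev/md0 | grep 'Array Size'
df -h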

If you use LVM, resize it as well; a sketch follows below.
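
If /dev/md0 is an LVM physical volume rather than a bare ext4 filesystem, the chain is: grow the PV, then the logical volume, then the filesystem on top of it. A minimal sketch with hypothetical volume group and logical volume names (vg0 and root):

pvresize /dev/md0                     # grow the physical volume to the new md0 size
lvextend -l +100%FREE /dev/vg0/root   # give all free extents to the LV (hypothetical names)
resize2fs /dev/vg0/root               # grow the ext4 filesystem inside the LV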

You can now let your RAID finish rebuilding, and the 3TB of space is now usable 🎉.
