This article explains how to rebuild a software RAID after replacing a defective hard disk.

Attention

This guide applies only to Dedicated Servers that use UEFI as the interface between the hardware and the operating system. If your Dedicated Server uses BIOS as the interface between the hardware and the operating system, see the following article for information about how to rebuild the software RAID:

Rebuilding Software RAID (Linux/Dedicated Server with BIOS)

Check if a Dedicated Server uses UEFI or BIOS

To check whether your server uses BIOS or UEFI as the interface between the hardware and the operating system, issue the following command:

[root@localhost ~]# [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS
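
If the directory /sys/firmware/efi exists, the server was booted via UEFI and the command prints UEFI; otherwise it prints BIOS. On a UEFI server, the output looks like this:

[root@localhost ~]# [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS
UEFI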

More information about UEFI

For more information about UEFI, see the following article:

General Information About UEFI

Important information about partitioning your dedicated server

On Dedicated Servers that are managed in the Cloud Panel, as of 10/20/2021, only one partition is set up during setup and when the operating system is reinstalled. On Dedicated Servers that were set up before this date and on Dedicated Servers that are purchased as part of a Server Power Deal, the operating system images are equipped with the Logical Volume Manager (LVM). The Logical Volume Manager places a logical layer between the file system and the partitions of the disks used. This makes it possible to create a file system that spans multiple partitions and/or disks. In this way, for example, the storage space of several partitions or disks can be combined.
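
On such an LVM-based image, the additional logical layer is visible in the output of lsblk. The following is an illustrative excerpt only, assuming LVM on top of one of the RAID devices; the volume group and logical volume names used here (vg00, lv00) are placeholders, and the sizes depend on the image used:

[root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0  1.8T  0 disk
└─sda4            8:4    0  1.8T  0 part
  └─md4           9:4    0  1.8T  0 raid1
    └─vg00-lv00 253:0    0  1.8T  0 lvm   /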

Determining the information needed to rebuild the software RAID

List existing hard disks and partitions

To list the existing disks and partitions, do the following:

  • Log in to the server with your root account.

  • To list the existing disks and partitions, enter the command fdisk -l. fdisk is a command-line program for partitioning disks. With this program, you can view, create, or delete partitions. An example of the output is shown after this list.

    [root@localhost ~]# fdisk -l

  • Note the existing disks, partitions and the paths of the swap files.
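
For the example configuration used in the rest of this guide, the output of fdisk -l contains an entry similar to the following for each disk. This is an illustrative, shortened excerpt; the start and end sector columns are omitted here because they depend on the image used:

Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disklabel type: gpt

Device      Size  Type
/dev/sda1   511M  EFI System
/dev/sda2   1.4G  Linux RAID
/dev/sda3   3.7G  Linux swap
/dev/sda4   1.8T  Linux RAID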

Please Note

After the hard disk has been replaced, it may be recognized as sdc. This always happens when the hard disk is exchanged via hot swap. In this case, only a reboot will cause the hard disk to be recognized as sda or sdb again.
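
To check which device names are currently assigned, you can, for example, list only the disks themselves together with their serial numbers (assuming the lsblk version on your system supports the SERIAL column):

[root@host ~]# lsblk -d -o NAME,SIZE,SERIAL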

Getting the Mount Points

  • To view the mount points of the devices and partitions you are using, enter the following command:

    [root@localhost ~]# lsblk

    The following information is then displayed, for example:

    [root@2A2E3A1 ~]# lsblk
    NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
    sda       8:0    0  1.8T  0 disk
    ├─sda1    8:1    0  511M  0 part
    │ └─md1   9:1    0  511M  0 raid1 /boot/efi
    ├─sda2    8:2    0  1.4G  0 part
    │ └─md2   9:2    0  1.4G  0 raid1 /boot
    ├─sda3    8:3    0  3.7G  0 part  [SWAP]
    └─sda4    8:4    0  1.8T  0 part
      └─md4   9:4    0  1.8T  0 raid1 /
    sdb       8:16   0  1.8T  0 disk
    ├─sdb1    8:17   0  511M  0 part
    │ └─md1   9:1    0  511M  0 raid1 /boot/efi
    ├─sdb2    8:18   0  1.4G  0 part
    │ └─md2   9:2    0  1.4G  0 raid1 /boot
    ├─sdb3    8:19   0  3.7G  0 part  [SWAP]
    └─sdb4    8:20   0  1.8T  0 part
      └─md4   9:4    0  1.8T  0 raid1 /

  • Note the devices and partitions and their mount points.
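
If needed, you can also display the file system types and UUIDs of the partitions and RAID devices, for example to compare them with the entries in /etc/fstab:

[root@host ~]# blkid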

Example Scenario

This guide assumes the following configuration:

[root@2A2E3A1 ~]# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sdb4[2] sda4[0]
      1947653952 blocks super 1.0 [2/1] [U_]
 
md2 : active raid1 sdb2[2] sda2[0]
      1428416 blocks super 1.0 [2/2] [UU]
 
md1 : active raid1 sdb1[1] sda1[0]
      523200 blocks [2/2] [UU]
 
unused devices: <none>

In this example, there are three arrays:

/dev/md1 with mount point /boot/efi
/dev/md2 with mount point /boot
/dev/md4 with mount point /

In addition, there are two swap partitions that are not part of the RAID. In this example, they are sda3 and sdb3.
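
In the mdstat output shown above, [2/1] [U_] indicates that /dev/md4 is running in degraded mode with only one of its two mirror members active. To display the state of an array and its members in more detail, you can also query it directly, for example:

[root@2A2E3A1 ~]# mdadm --detail /dev/md4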

Restoring the RAID

The rest of the procedure depends on whether hard disk 1 (sda) or hard disk 2 (sdb) was replaced:
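
If you are unsure which device name belongs to the physical disk that was replaced, the serial numbers can help. For example, if the smartmontools package is installed, smartctl prints the model and serial number of a disk:

[root@host ~]# smartctl -i /dev/sda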

Hard disk 1 (sda) was replaced

If hard disk 1 (sda) was replaced, first check whether it was recognized correctly; a reboot may be necessary (see the note above). Then boot the server into the rescue system and perform the steps listed below.

  • First copy the partition table of the intact hard disk (sdb) to the new (empty) hard disk:

    [root@host ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda

    If necessary, use the --force option.

  • Add the partitions to the RAID:

    [root@host ~]# mdadm /dev/md1 -a /dev/sda1
    [root@host ~]# mdadm /dev/md2 -a /dev/sda2
    [root@host ~]# mdadm /dev/md4 -a /dev/sda4

    You can now follow the rebuild of the RAID with cat /proc/mdstat.

  • Then mount the partitions under /mnt:

    [root@host ~]# mount /dev/md4 /mnt
    [root@host ~]# mount /dev/md2 /mnt/boot
    [root@host ~]# mount /dev/md1 /mnt/boot/efi

  • After mounting the partitions, change into the chroot environment and install the GRUB bootloader (if grub-install fails because system directories such as /dev are missing, see the note on bind mounts at the end of this section):

    [root@host ~]# chroot /mnt
    [root@host ~]# grub-install --efi-directory=/boot/efi /dev/sda

  • Exit the chroot environment with the exit command and unmount all partitions again:

    [root@host ~]# umount -a

    Wait until the rebuild process is complete and then boot the server back into the normal system.

  • Finally, enable the swap partition using the following commands. Note that mkswap assigns a new UUID to the partition; if /etc/fstab references the swap partition by UUID, you may need to update the entry:

    [root@host ~]# mkswap /dev/sda3
    [root@host ~]# swapon -a
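
Note on the chroot environment: depending on the rescue system, grub-install may need access to /dev, /proc, and /sys inside the chroot. If the installation of the bootloader fails, you can bind-mount these directories before changing into the chroot environment (a sketch under that assumption):

    [root@host ~]# mount --bind /dev /mnt/dev
    [root@host ~]# mount --bind /proc /mnt/proc
    [root@host ~]# mount --bind /sys /mnt/sys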

Hard disk 2 (sdb) was replaced

If hard disk 2 (sdb) has been replaced, proceed as follows:

  • Reboot the server so that hard disk 2 (sdb) is recognized correctly.

  • In the local system, copy the partition table of the intact hard disk (sda) to the new (empty) hard disk (an alternative for copying GPT partition tables is described at the end of this section):

    [root@host ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb

    If necessary, use the --force option.

  • Add the partitions to the RAID:

    [root@host ~]# mdadm /dev/md1 -a /dev/sdb1
    [root@host ~]# mdadm /dev/md2 -a /dev/sdb2
    [root@host ~]# mdadm /dev/md4 -a /dev/sdb4

    You can now follow the rebuild of the RAID with cat /proc/mdstat.

  • Finally, enable the swap partition using the following commands:

    [root@host ~]# mkswap /dev/sdb3
    [root@host ~]# swapon -a
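
Note on copying GPT partition tables: because UEFI servers use GPT, the copy with sfdisk shown above requires a sufficiently recent version of util-linux. If the sfdisk on your system does not support GPT, the partition table can alternatively be copied with sgdisk from the gdisk package (a sketch under that assumption; note that the source disk is the last argument):

    [root@host ~]# sgdisk -R /dev/sdb /dev/sda
    [root@host ~]# sgdisk -G /dev/sdb

The second command assigns new random GUIDs to the copied partition table so that both disks do not use identical identifiers. If hard disk 1 (sda) was replaced instead, swap the device names accordingly.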