I have installed Debian with software RAID1 (and installed Proxmox on it) on two 256GB SSDs, and I now want to move to two 500GB SSDs. How do I proceed?
Edit: the RAID in question is the one holding the OS.
Replace one disk and let the RAID rebuild. Do the same with the other disk. Then do an mdadm grow, followed by fdisk / LVM / filesystem resize steps depending on your setup. Don't forget to install a bootloader on each new disk you put in.
Making a new array and migrating data is for chumps.
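In command form, that flow could look roughly like this, assuming a single array /dev/md0 mirrored across sda1/sdb1 with ext4 directly on top (the device names and filesystem are assumptions; check `lsblk` and `/proc/mdstat` for your actual layout):

```bash
# Mark the old member failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

# ...physically swap in a 500GB disk, create a (bigger) partition on it,
# then add it back and wait for the rebuild to finish:
mdadm /dev/md0 --add /dev/sda1
watch cat /proc/mdstat

# Repeat for the second disk, then grow the array into the new space
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0        # or grow LVM first, if that's in the stack
```

Plus a bootloader install on each new disk, as noted above.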
Oh, so easy?! If the RAID rebuilds the disk, why should I install a bootloader? Isn’t it already in the rebuilt disk? By the way…hemmm…how do I install the bootloader? 😣
Yes, it's really that easy. RAID in Linux is usually set up at the partition level, not on the whole device. The bootloader resides in the first few blocks of the disk, before your partitions, and isn't included in the RAID.
Use grub-install on the new disk device, e.g. /dev/sda.
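For a classic BIOS/MBR setup it's really just that one command; a small sketch, assuming the freshly added disk is /dev/sdb (an assumption, adjust to your device):

```bash
# Legacy/BIOS boot: write GRUB into the boot blocks of the new disk
grub-install /dev/sdb
update-grub

# On UEFI systems the bootloader lives on the ESP partition instead,
# so the new disk needs its own ESP (and grub-install --target=x86_64-efi)
```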
Got it, thanks a lot!
If you have enough drive bays, I'd probably shut down the server, live boot into any Linux distro without mounting the drives, then use `dd` to copy from the 1st 256GB to the 1st 500GB and from the 2nd 256GB to the 2nd 500GB, then boot the system and use `resize2fs` to expand the file system to fill the partition.

Since RAID1 is just a mirror, the more adventurous type might say you can just hot swap one drive, let it rebuild, then hot swap the other, let it rebuild again, and then expand the file system, all online and live. Given it is only 256GB of data max, on a pair of SSDs, it shouldn't take too long, but I'm more inclined to do it safely.
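A minimal sketch of that dd route from a live environment, assuming the old disks appear as /dev/sda and /dev/sdb and the new ones as /dev/sdc and /dev/sdd (device names are assumptions; verify with `lsblk` before copying anything):

```bash
# Nothing on these disks may be mounted while copying
dd if=/dev/sda of=/dev/sdc bs=64M status=progress conv=fsync
dd if=/dev/sdb of=/dev/sdd bs=64M status=progress conv=fsync

# With GPT, the backup partition table is now stranded mid-disk;
# move it to the end of the larger disks:
sgdisk -e /dev/sdc
sgdisk -e /dev/sdd
```

Note that `resize2fs` alone can only fill whatever space the md array already exposes; after the copy you'd still have to enlarge the partition and run `mdadm --grow /dev/md0 --size=max` before growing the filesystem.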
I've recently done this on one of my machines; these were the steps, as far as I remember (a rough command sketch follows below):
- backup!
- add the new disks to the machine
- partition them like the smaller disks
- add the new disks to the array(s) (as members, not just spares)
- take care of bootloader things while the raid is syncing
- your raid should have 4 devices in sync now
- fail and remove the smaller disks from the raid one by one
- resize the partitions on the bigger disks to maximum
- grow the MD raid to the new size
If you use LUKS and/or LVM, some additional steps will be needed to grow those layers as well.
If your system can hotplug disks there’s a chance to pull this off with little to no downtime. :)
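A rough sketch of those steps, assuming a single array /dev/md0 made of /dev/sda1 and /dev/sdb1, with the new disks at /dev/sdc and /dev/sdd (all names are assumptions; adapt to your layout):

```bash
# Copy the old partition layout onto the new disks
sfdisk -d /dev/sda | sfdisk /dev/sdc
sfdisk -d /dev/sda | sfdisk /dev/sdd

# Add the new partitions and promote them to full members (4-way mirror)
mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4
watch cat /proc/mdstat          # handle the bootloader while this syncs

# Once everything is in sync, drop the small disks and shrink back to 2
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm --grow /dev/md0 --raid-devices=2

# Enlarge the partitions on the big disks (parted/growpart), then
mdadm --grow /dev/md0 --size=max
```

For the LUKS/LVM layers, `cryptsetup resize`, `pvresize /dev/md0` and `lvextend -r` would be the usual follow-ups.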
You create a new raid array with the two new disks and move the data there? I fear you’ll have to be more specific about what doesn’t add up for you…
I have the whole OS on that RAID. Should I just create the new MD and copy everything there? I guess that I need to copy data when the OS is shut down (so with another PC), correct?
In your shoes I'd do just that (booting from a USB stick and creating/mounting the appropriate partitions on the new drives)… but you might find resilvering or resizing partitions easier if you are more familiar with those operations than I am.
It must be said that actually copying the files, rather than working with block devices, lets you switch to a different filesystem (or take advantage of updates/optimizations recently introduced by your filesystem) or use different mount options (e.g. add compression), and should in theory lead to better performance (e.g. less fragmentation). In a homelab the performance difference will probably be unnoticeable anyway, so… just go with the method you are most comfortable with :)
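A sketch of the copy approach from a live USB, assuming the new disks are /dev/sdc and /dev/sdd with one big partition each, the old array is /dev/md0, and ext4 as the target filesystem (all of these are assumptions; the fstab/mdadm.conf UUIDs and the initramfs are the fiddly parts):

```bash
# Build a fresh mirror on the new disks and put a filesystem on it
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mkfs.ext4 /dev/md1

# Copy everything across, preserving hardlinks, ACLs and xattrs
mkdir -p /mnt/old /mnt/new
mount /dev/md0 /mnt/old
mount /dev/md1 /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/

# Fix /etc/fstab and /etc/mdadm/mdadm.conf on the copy for the new UUIDs
# (blkid, mdadm --detail --scan), then chroot and reinstall the boot bits
for d in dev proc sys; do mount --bind /$d /mnt/new/$d; done
chroot /mnt/new update-initramfs -u
chroot /mnt/new grub-install /dev/sdc
chroot /mnt/new grub-install /dev/sdd
```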
Thanks!