Hi.
After a series of hardware upgrades, I finally have 2x 2TB SSDs for data on my NAS! However, they are still configured with ~900GiB data partitions, and today it is time to expand them!
Initial state
The current disk layout is:
# fdisk -l /dev/sda
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: PNY CS900 2TB SS
Sector size (logical/physical): 512 bytes / 512 bytes
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 499711 497664 243M 83 Linux
/dev/sda2 499712 43468799 42969088 20.5G 83 Linux
/dev/sda3 43468800 1918322687 1874853888 894G 83 Linux
# fdisk -l /dev/sdb
Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Sector size (logical/physical): 512 bytes / 512 bytes
Device Start End Sectors Size Type
/dev/sdb1 2048 1001471 999424 488M Linux filesystem
/dev/sdb2 1001472 98656255 97654784 46.6G Linux filesystem
/dev/sdb3 98656256 1973510143 1874853888 894G Linux filesystem
Reading the output above, the data partitions can be found at /dev/sd*3, both sized at 894GiB with the exact same sector size (512 bytes) and sector count (1874853888).
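As a quick sanity check (the fdisk output already shows it, but it’s nice to confirm), blockdev can report the size of each partition in 512-byte sectors, and both numbers should match:
# blockdev --getsz /dev/sda3
# blockdev --getsz /dev/sdb3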
Both partitions are LUKS encrypted, and I’m using key files to avoid typing passphrases all the time. They are declared in /etc/crypttab so they are unlocked automatically on boot:
# cat /etc/crypttab | grep data
data1 /dev/sda3 /etc/cryptkey-cs900 luks,discard,timeout=10
data3 /dev/sdb3 /etc/cryptkey-evo870 luks,discard,timeout=10
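For reference, setting up a key file like that is roughly a matter of generating some random bytes, locking down the permissions and enrolling it as an extra LUKS key. A minimal sketch (not part of today’s work, and cryptsetup will prompt for an existing passphrase):
# dd if=/dev/urandom of=/etc/cryptkey-cs900 bs=512 count=8
# chmod 0400 /etc/cryptkey-cs900
# cryptsetup luksAddKey /dev/sda3 /etc/cryptkey-cs900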
And finally, both encrypted partitions are used to create a single BTRFS filesystem in RAID1 mode:
# btrfs filesystem show /data
Label: none uuid: 079c6185-4a18-4f8f-8a62-bb741aabb758
Total devices 2 FS bytes used 661GiB
devid 1 size 894GiB used 661GiB path /dev/mapper/data1
devid 2 size 894GiB used 661GiB path /dev/mapper/data3
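For context, a two-device RAID1 filesystem like this can be created with something along these lines (just a sketch, not something I had to do today):
# mkfs.btrfs -d raid1 -m raid1 /dev/mapper/data1 /dev/mapper/data3
# mount /dev/mapper/data1 /data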
The challenge
The disks have between 920GiB and 946GiB of free space that I want to reclaim. For consistency, I’ll resize them both to reclaim 920GiB, keeping the number of sectors identical.
Note: BTRFS’ RAID1 does not need identical sizes for each partition; it keeps allocating chunks onto its devices as needed, and you could use uneven disk sizes (e.g. 1x1TB + 2x2TB) in RAID1 mode. As long as it can write each chunk to two different devices, it doesn’t matter which ones they are. Read more.
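If you’re curious how the chunks are spread across devices, btrfs can break the allocation down per device:
# btrfs filesystem usage /data
# btrfs device usage /data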
So I am dealing with three layers: disk partitions, LUKS and BTRFS. I’ll have to resize them all, starting from the outermost layer and working inwards: partition first, then LUKS container, then BTRFS device.
Step 0: backup
Make sure you have backups, and you’re fairly confident you can restore your data from them.
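If your backup setup happens to be BTRFS-based, one option is a read-only snapshot shipped to another disk with send/receive. A minimal sketch, with made-up paths:
# mkdir -p /data/.snapshots
# btrfs subvolume snapshot -r /data /data/.snapshots/pre-resize
# btrfs send /data/.snapshots/pre-resize | btrfs receive /mnt/backup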
Step 1: resize the partitions
I didn’t spend much time investigating whether an online resize was possible; I went ahead and booted into the rescue mode I had in GRUB.
I used cfdisk, but any partition manager should be able to do the job. The only important thing (for me) was to keep the same number of sectors for both partitions. I first resized the disk with less free space and rebooted into the system to make sure it was still working. Had I screwed it up, I could have recovered by dd-ing the second disk back onto the first. Thankfully it wasn’t needed.
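If you’d rather script it than drive cfdisk interactively, sfdisk can grow a partition in place. Something like this should be roughly equivalent: the ", +" input keeps the start sector and grows partition 3 to the maximum, and you could pass an explicit size in sectors instead to keep both disks identical (double-check against your own layout first):
# echo ", +" | sfdisk -N 3 /dev/sda
# echo ", +" | sfdisk -N 3 /dev/sdb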
Once both partitions were resized, I booted back into the regular mode.
Step 2: resize the LUKS container
This step was fairly straightforward: I ran cryptsetup resize once for each LUKS container. By default, it expands the container to fill the entire underlying partition.
# cryptsetup --key-file=/etc/cryptkey-cs900 resize data1
# cryptsetup --key-file=/etc/cryptkey-evo870 resize data3
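To double-check that the mappings picked up the new size, cryptsetup status reports the size of each mapping in 512-byte sectors:
# cryptsetup status data1
# cryptsetup status data3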
Step 3: resize the BTRFS devices
Also fairly simple: I ran btrfs filesystem resize. The caveat is that you have to run it once for each device in your filesystem (the devid column in the btrfs filesystem show output).
# btrfs filesystem resize 1:max /data
# btrfs filesystem resize 2:max /data
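The resize happens online, no remount needed; a plain df (or the btrfs filesystem show output below) reflects the extra space right away:
# df -h /data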
Conclusion
My system can now store 1.77TiB of data, and I’ve already put it to use.
# btrfs filesystem show /data
Label: none uuid: 079c6185-4a18-4f8f-8a62-bb741aabb758
Total devices 2 FS bytes used 1.16TiB
devid 1 size 1.77TiB used 1.17TiB path /dev/mapper/data1
devid 2 size 1.77TiB used 1.17TiB path /dev/mapper/data3
Thank you.