Cloning a Drive in Linux via Commands

Revision as of 21:24, 22 February 2026 by Root

It's been a long journey. A lot of software and methods have been tried over the years. All of them work, but with varying degrees of difficulty.

Wait, let's back up (so to speak) and lay out the objective: a Bare Metal Recovery Methodology that allows for a complete backup of an entire Rocky Linux OS, taken from a 'live / running' Source Drive, which can then be restored onto a Destination (or Target) Drive.

Here we go...

Premise and Circumstances

None of this works unless the drive structure has been set up properly. In this instance, things are about as plain and simple as it gets: a single GPT Partitioned Drive with the ROOT File System as EXT4 on an LVM Partition (it has to be on an LVM Partition to allow for 'SnapShots').

Below is the Example Drive (lsblk output);

nvme1n1                     259:2    0 238.5G  0 disk 
├─nvme1n1p1                 259:3    0     1G  0 part /boot/efi
├─nvme1n1p2                 259:4    0     4G  0 part /boot
├─nvme1n1p3                 259:5    0   128G  0 part 
│ └─VG.NVMe.P3-LV.ROOT      253:0    0    64G  0 lvm  /
├─nvme1n1p4                 259:6    0    80G  0 part 
│ └─VG.NVMe.P4-LV.Storage   253:1    0    60G  0 lvm  /mnt/NVMEx1
└─nvme1n1p5                 259:7    0    20G  0 part [SWAP]

And just for the below examples, /dev/sdc will be the Destination Drive. Be really careful to identify the Destination Drive correctly, because all the restoration steps are destructive to whatever is on that drive.

The Steps

Step(s) 1 - BackUp

  • BackUp the Source Drive Structure: sgdisk --backup=nvme1n1.bak /dev/nvme1n1
    • Optional, just FYI for 'human readable' information instead of the above binary .bak file: sgdisk --print /dev/nvme1n1 > nvme1n1.txt
  • BackUp each of the Partitions;
    • nvme1n1p1: dd if=/dev/nvme1n1p1 of=/BackUps/nvme1n1p1.img bs=64M iflag=fullblock status=progress conv=noerror,sync,fsync
    • nvme1n1p2: dd if=/dev/nvme1n1p2 of=/BackUps/nvme1n1p2.img bs=64M iflag=fullblock status=progress conv=noerror,sync,fsync
    • nvme1n1p3;
      • LVM Configuration: dd if=/dev/nvme1n1p3 of=/BackUps/nvme1n1p3.LVM bs=512 count=2048 iflag=fullblock status=progress conv=noerror,sync
      • Create a Snapshot: lvcreate --snapshot -l 100%FREE --name "LV.ROOT.SnapShot" "/dev/VG.NVMe.P3/LV.ROOT"
      • File System: dd if=/dev/VG.NVMe.P3/LV.ROOT.SnapShot bs=64M iflag=fullblock status=progress conv=noerror,sync | pigz -1 -c > /BackUps/LV.ROOT.SnapShot.gz
      • Delete the SnapShot: lvremove -f /dev/VG.NVMe.P3/LV.ROOT.SnapShot
    • nvme1n1p4;
      • LVM Configuration: dd if=/dev/nvme1n1p4 of=/BackUps/nvme1n1p4.LVM bs=512 count=2048 iflag=fullblock status=progress conv=noerror,sync
      • Create a Snapshot: lvcreate --snapshot -l 100%FREE --name "LV.Storage.SnapShot" "/dev/VG.NVMe.P4/LV.Storage"
      • File System: dd if=/dev/VG.NVMe.P4/LV.Storage.SnapShot bs=64M iflag=fullblock status=progress conv=noerror,sync | pigz -1 -c > /BackUps/LV.Storage.SnapShot.gz
      • Delete the SnapShot: lvremove -f /dev/VG.NVMe.P4/LV.Storage.SnapShot
    • nvme1n1p5: dd if=/dev/nvme1n1p5 of=/BackUps/nvme1n1p5.img bs=64M iflag=fullblock status=progress conv=noerror,sync,fsync
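The backup step above can be sketched as a single script. The device, VG/LV names, and the /BackUps destination are the example values from this page, not universal ones (the .bak file is placed in /BackUps here, a choice, not a requirement); with DRY_RUN=1 (the default) it only prints the commands it would run. A sketch, not a tested tool.

```shell
#!/usr/bin/env bash
# Sketch of Step 1. DRY_RUN=1 (default) prints each command instead of
# running it; set DRY_RUN=0 to execute for real. p4 follows the same
# pattern as p3 and is omitted for brevity.
DRY_RUN=${DRY_RUN:-1}
PLAN=()

run() {
  if [ "$DRY_RUN" = 1 ]; then
    printf '%s\n' "$*"   # show the command
    PLAN+=("$*")         # and record it for inspection
  else
    eval "$*"            # execute (pipes/redirects need eval)
  fi
}

SRC=/dev/nvme1n1   # Source Drive (example value)
OUT=/BackUps       # backup destination (example value)

# Drive structure: binary backup plus a human-readable listing
run "sgdisk --backup=$OUT/nvme1n1.bak $SRC"
run "sgdisk --print $SRC > $OUT/nvme1n1.txt"

# Plain partitions: EFI (p1), /boot (p2), swap (p5)
for p in p1 p2 p5; do
  run "dd if=$SRC$p of=$OUT/nvme1n1$p.img bs=64M iflag=fullblock status=progress conv=noerror,sync,fsync"
done

# LVM partition p3: save the metadata area, then dd a snapshot of the LV
run "dd if=${SRC}p3 of=$OUT/nvme1n1p3.LVM bs=512 count=2048 iflag=fullblock conv=noerror,sync"
run "lvcreate --snapshot -l 100%FREE --name LV.ROOT.SnapShot /dev/VG.NVMe.P3/LV.ROOT"
run "dd if=/dev/VG.NVMe.P3/LV.ROOT.SnapShot bs=64M iflag=fullblock conv=noerror,sync | pigz -1 -c > $OUT/LV.ROOT.SnapShot.gz"
run "lvremove -f /dev/VG.NVMe.P3/LV.ROOT.SnapShot"
```

Run it once with DRY_RUN=1 and read the printed plan before setting DRY_RUN=0.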

Step(s) 2 - Restoration

  • Restore to the Destination Drive: sgdisk --load-backup=nvme1n1.bak /dev/sdc
    • ...but first, if it isn't a new Drive: wipefs -a /dev/sdc
    • ...and remember to use the same size or larger drive. This is the message when restoring to a larger drive: "Creating new GPT entries in memory. Warning! Current disk size doesn't match that of the backup! Adjusting sizes to match, but subsequent problems are possible! The operation has completed successfully."
  • Re-Read Drive Information on the Destination System: partprobe /dev/sdc
    • And if this error occurs: "Not all of the space available to /dev/sdc appears to be used, you can fix the GPT to use all of the space (an extra 2014 blocks) OR continue with the current setting?", it is probably caused by a mismatch between the Source and Destination Drives' storage capacities. Easy to fix: run parted /dev/sdc, then type print, then Fix, then quit.
  • Confirmation (should see the exact Partition layout as the Source Drive, but of course with no File System Data): lsblk
  • Restore each of the Partitions;
    • nvme1n1p1: dd if=/BackUps/nvme1n1p1.img of=/dev/sdc1 bs=64M status=progress conv=fsync
    • nvme1n1p2: dd if=/BackUps/nvme1n1p2.img of=/dev/sdc2 bs=64M status=progress conv=fsync
    • nvme1n1p3;
      • Restore the LVM Configuration: dd if=/BackUps/nvme1n1p3.LVM of=/dev/sdc3 bs=512 count=2048 iflag=fullblock status=progress conv=fsync
      • NOTE: At this point, neither PARTPROBE, PVSCAN, VGSCAN, nor LVSCAN will make the restored LVM Partition appear in the LSBLK output; worse, some LVM Commands will actually show it as Active. What does work is: vgchange --refresh VG.NVMe.P3, which may also mount it. There isn't any data there yet, so if it does mount, unmount it: umount /mnt/sdc3 (or whatever the Mount Name is). And this may need to be run too, to clear out 'bad information': pvscan --cache
      • Restore the File System: pigz -dc /BackUps/LV.ROOT.SnapShot.gz | dd of=/dev/VG.NVMe.P3/LV.ROOT bs=64M iflag=fullblock oflag=direct status=progress conv=fsync
      • Make sure the File System is OK: fsck.ext4 -f /dev/VG.NVMe.P3/LV.ROOT (see the 'Check everything' step below, as VGCHANGE may need to be run again first)
    • nvme1n1p4;
      • Restore the LVM Configuration: dd if=/BackUps/nvme1n1p4.LVM of=/dev/sdc4 bs=512 count=2048 iflag=fullblock status=progress conv=fsync
      • NOTE: At this point, neither PARTPROBE, PVSCAN, VGSCAN, nor LVSCAN will make the restored LVM Partition appear in the LSBLK output; worse, some LVM Commands will actually show it as Active. What does work is: vgchange --refresh VG.NVMe.P4, which may also mount it. There isn't any data there yet, so if it does mount, unmount it: umount /mnt/sdc4 (or whatever the Mount Name is). And this may need to be run too, to clear out 'bad information': pvscan --cache
      • Restore the File System: pigz -dc /BackUps/LV.Storage.SnapShot.gz | dd of=/dev/VG.NVMe.P4/LV.Storage bs=64M iflag=fullblock oflag=direct status=progress conv=fsync
      • Make sure the File System is OK: fsck.ext4 -f /dev/VG.NVMe.P4/LV.Storage (see the 'Check everything' step below, as VGCHANGE may need to be run again first)
    • nvme1n1p5: dd if=/BackUps/nvme1n1p5.img of=/dev/sdc5 bs=64M status=progress conv=fsync
    • Check everything: lsblk
      • If the LVM information isn't displayed, run the vgchange commands again; running lsblk again should then show them;
        • vgchange --refresh VG.NVMe.P3
        • vgchange --refresh VG.NVMe.P4
    • Before the 'new' Drive is installed, check the /etc/fstab file and make sure all the Drives it references will actually be present; comment out any that won't be.
    • And finally, run this just to make sure everything is written to the Destination Drive (mostly for the sake of USB mounted Drives): sync
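Likewise, the restoration step can be sketched as one script mirroring the commands above. /dev/sdc, /BackUps, and the VG/LV names are the example values from this page; with DRY_RUN=1 (the default) it only prints the plan. Double-check the Destination Drive before running anything for real, since every command here is destructive to it.

```shell
#!/usr/bin/env bash
# Sketch of Step 2. DRY_RUN=1 (default) prints each command instead of
# running it. p4 (VG.NVMe.P4 / LV.Storage) follows the same pattern as p3.
DRY_RUN=${DRY_RUN:-1}
PLAN=()

run() {
  if [ "$DRY_RUN" = 1 ]; then
    printf '%s\n' "$*"
    PLAN+=("$*")
  else
    eval "$*"
  fi
}

DST=/dev/sdc   # Destination Drive -- everything on it will be overwritten
IN=/BackUps    # where the Step 1 images live

run "wipefs -a $DST"                              # only if the Drive isn't new
run "sgdisk --load-backup=$IN/nvme1n1.bak $DST"   # restore the GPT layout
run "partprobe $DST"                              # re-read the partition table

# Plain partitions
run "dd if=$IN/nvme1n1p1.img of=${DST}1 bs=64M status=progress conv=fsync"
run "dd if=$IN/nvme1n1p2.img of=${DST}2 bs=64M status=progress conv=fsync"
run "dd if=$IN/nvme1n1p5.img of=${DST}5 bs=64M status=progress conv=fsync"

# LVM partition p3: metadata area, refresh, file system, check
run "dd if=$IN/nvme1n1p3.LVM of=${DST}3 bs=512 count=2048 iflag=fullblock status=progress conv=fsync"
run "vgchange --refresh VG.NVMe.P3"
run "pigz -dc $IN/LV.ROOT.SnapShot.gz | dd of=/dev/VG.NVMe.P3/LV.ROOT bs=64M iflag=fullblock oflag=direct status=progress conv=fsync"
run "fsck.ext4 -f /dev/VG.NVMe.P3/LV.ROOT"

run "sync"   # flush everything to the Destination Drive
```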

Scripting

Of course all this will be scripted in the future. So basically Acronis for Linux.

And for the Critics

nvme1n1p1 and nvme1n1p2: Yes it would be great if a SnapShot could be taken of these too, but as far as research and experiments have shown, UEFI will not tolerate that, so it won't work. Plus, realistically, what's gonna change on those Partitions in the brief time DD is running for them? Hint: NOTHING!

SWAP: Yes, the SWAP Partition could just be recreated instead, and yes, it won't be consistent when cloned with DD, but that makes no difference at all, because when the cloned system boots, the SWAP Partition is essentially reset. It's an interesting question which is faster: running the commands to create a new SWAP Partition with the same parameters, UUID, etc., or running the DD restore command. The former is likely faster than the latter.
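For comparison, recreating the SWAP Partition instead of dd-restoring it would look roughly like the following. The device names are the example values from this page, and the commands are shown as a printed plan rather than executed, since they need the real devices and root. Keeping the original UUID is what lets /etc/fstab keep working unchanged.

```shell
# Sketch: recreate swap on the Destination with the Source's UUID instead
# of dd-copying it. Printed as a plan, not executed (needs real devices).
SRC_SWAP=/dev/nvme1n1p5   # Source swap partition (example value)
DST_SWAP=/dev/sdc5        # Destination swap partition (example value)

PLAN="UUID=\$(blkid -s UUID -o value $SRC_SWAP)   # read the old UUID
mkswap -U \"\$UUID\" $DST_SWAP                    # make new swap, same UUID
swapon $DST_SWAP                                  # optional: enable it now"

printf '%s\n' "$PLAN"
```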