Cloning a Drive in Linux via Commands
Latest revision as of 18:10, 23 February 2026

It's been a long journey. A lot of software and methods have been tried over the years. All of them work, but with varying degrees of difficulty.

Wait, let's back up (so to speak) and lay out the objective: A Bare Metal Recovery Methodology that allows for the complete backup of an entire Rocky Linux OS based on a 'live / running' Source Drive that can then be restored on a Destination or Target Drive.

Here we go...

Premise and Circumstances

None of this works unless the drive structure has been set up properly. In this instance, things are about as plain and simple as it gets. A single GPT Partitioned Drive with the ROOT File System as EXT4 on an LVM Partition (has to be on an LVM Partition to allow for 'SnapShots').

Below is the Example Drive;

nvme1n1                     259:2    0 238.5G  0 disk 
├─nvme1n1p1                 259:3    0     1G  0 part /boot/efi
├─nvme1n1p2                 259:4    0     4G  0 part /boot
├─nvme1n1p3                 259:5    0   128G  0 part 
│ └─VG.NVMe.P3-LV.ROOT      253:0    0    64G  0 lvm  /
├─nvme1n1p4                 259:6    0    80G  0 part 
│ └─VG.NVMe.P4-LV.Storage   253:1    0    60G  0 lvm  /mnt/NVMEx1
└─nvme1n1p5                 259:7    0    20G  0 part [SWAP]

And just for the below examples, /dev/sdc will be the Destination Drive. Be really careful to distinguish the Source Drive from the Destination Drive before running anything, because the restoration steps destroy whatever is on the Destination Drive.

Before

Again, this assumes the same size or larger drive. See the Resizing Drives Section at the end for information on expanding LVs and File Systems AND the appropriate time to do those resizings.

The Steps

Step(s) 1 - BackUp

  • BackUp the Source Drive Structure: sgdisk --backup=nvme1n1.bak /dev/nvme1n1
    • Optional, just FYI for 'human readable' information instead of the above binary .bak file: sgdisk --print /dev/nvme1n1 > nvme1n1.txt
  • BackUp each of the Partitions;
    • nvme1n1p1: dd if=/dev/nvme1n1p1 of=/BackUps/nvme1n1p1.img bs=64M iflag=fullblock status=progress conv=noerror,sync,fsync
    • nvme1n1p2: dd if=/dev/nvme1n1p2 of=/BackUps/nvme1n1p2.img bs=64M iflag=fullblock status=progress conv=noerror,sync,fsync
    • nvme1n1p3;
      • LVM Configuration: dd if=/dev/nvme1n1p3 of=/BackUps/nvme1n1p3.LVM bs=512 count=2048 iflag=fullblock status=progress conv=noerror,sync
      • Create a Snapshot: lvcreate --snapshot -l 100%FREE --name "LV.ROOT.SnapShot" "/dev/VG.NVMe.P3/LV.ROOT"
      • File System: dd if=/dev/VG.NVMe.P3/LV.ROOT.SnapShot bs=64M iflag=fullblock status=progress conv=noerror,sync | pigz -1 -c > /BackUps/LV.ROOT.SnapShot.gz
      • Delete the SnapShot: lvremove -f /dev/VG.NVMe.P3/LV.ROOT.SnapShot
    • nvme1n1p4;
      • LVM Configuration: dd if=/dev/nvme1n1p4 of=/BackUps/nvme1n1p4.LVM bs=512 count=2048 iflag=fullblock status=progress conv=noerror,sync
      • Create a Snapshot: lvcreate --snapshot -l 100%FREE --name "LV.Storage.SnapShot" "/dev/VG.NVMe.P4/LV.Storage"
      • File System: dd if=/dev/VG.NVMe.P4/LV.Storage.SnapShot bs=64M iflag=fullblock status=progress conv=noerror,sync | pigz -1 -c > /BackUps/LV.Storage.SnapShot.gz
      • Delete the SnapShot: lvremove -f /dev/VG.NVMe.P4/LV.Storage.SnapShot
    • nvme1n1p5: dd if=/dev/nvme1n1p5 of=/BackUps/nvme1n1p5.img bs=64M iflag=fullblock status=progress conv=noerror,sync,fsync
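
The backup steps above can be gathered into one script. This is a rough sketch, not a tested tool: the device, VG, and LV names are the article's example values, /BackUps is assumed to already exist, and the run wrapper defaults to a dry run (it only prints the commands) until DRY_RUN=0 is set after double-checking every device name.

```shell
#!/bin/sh
# Sketch of Step(s) 1 using the article's example layout.
# Dry run by default: commands are printed, not executed.
SRC=nvme1n1
DST_DIR=/BackUps
DD_OPTS="bs=64M iflag=fullblock status=progress conv=noerror,sync"

run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

# Drive structure: binary backup plus a human-readable copy
run sgdisk --backup=$DST_DIR/$SRC.bak /dev/$SRC
run sh -c "sgdisk --print /dev/$SRC > $DST_DIR/$SRC.txt"

# Plain partitions (EFI, /boot, SWAP)
for p in p1 p2 p5; do
  run dd if=/dev/$SRC$p of=$DST_DIR/$SRC$p.img $DD_OPTS,fsync
done

# LVM partitions: save the LVM header, then image a snapshot of each LV
backup_lv() {  # backup_lv <partition> <vg> <lv>
  run dd if=/dev/$SRC$1 of=$DST_DIR/$SRC$1.LVM bs=512 count=2048 iflag=fullblock status=progress conv=noerror,sync
  run lvcreate --snapshot -l 100%FREE --name "$3.SnapShot" "/dev/$2/$3"
  run sh -c "dd if=/dev/$2/$3.SnapShot $DD_OPTS | pigz -1 -c > $DST_DIR/$3.SnapShot.gz"
  run lvremove -f "/dev/$2/$3.SnapShot"
}
backup_lv p3 VG.NVMe.P3 LV.ROOT
backup_lv p4 VG.NVMe.P4 LV.Storage
```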

Step(s) 2 - Restoration

  • Restore to the Destination Drive: sgdisk --load-backup=nvme1n1.bak /dev/sdc
    • ...but first, if it isn't a new Drive: wipefs -a /dev/sdc
    • ...and remember, use the same size or larger drive (this is the message when using a larger drive): "Creating new GPT entries in memory. Warning! Current disk size doesn't match that of the backup! Adjusting sizes to match, but subsequent problems are possible! The operation has completed successfully."
  • Re-Read Drive Information on the Destination System: partprobe /dev/sdc
    • And if this error occurs: "Not all of the space available to /dev/sdc appears to be used, you can fix the GPT to use all of the space (an extra 2014 blocks) OR continue with the current setting?", it is probably because of a mismatch between the Source and Destination Drives' storage capacities. Easy to fix with: parted /dev/sdc, then type: print, then type: Fix, then type: quit
  • Confirmation (should see the exact Partition layout as the Source Drive, but of course with no File System Data): lsblk
  • Restore each of the Partitions;
    • nvme1n1p1: dd if=nvme1n1p1.img of=/dev/sdc1 bs=64M status=progress conv=fsync
    • nvme1n1p2: dd if=nvme1n1p2.img of=/dev/sdc2 bs=64M status=progress conv=fsync
    • nvme1n1p3;
      • Restore the LVM Configuration: dd if=nvme1n1p3.LVM of=/dev/sdc3 bs=512 count=2048 iflag=fullblock status=progress conv=fsync
      • NOTE: At this point, neither PARTPROBE, PVSCAN, VGSCAN, nor LVSCAN will make the restored LVM Partition appear in LSBLK output. Even worse, some LVM Commands will actually show it as Active. What does work is: vgchange --refresh VG.NVMe.P3, which may even mount it. But there isn't any data there yet, so if it does mount: umount /mnt/sdc3 (or whatever the Mount Name is). And pvscan --cache may need to be run too, to clear out 'bad information'.
      • Restore the File System: pigz -dc LV.ROOT.SnapShot.gz | dd of=/dev/VG.NVMe.P3/LV.ROOT bs=64M iflag=fullblock oflag=direct status=progress conv=fsync
      • Make sure the File System is OK: fsck.ext4 -f /dev/VG.NVMe.P3/LV.ROOT (see the Check everything item below, as VGCHANGE may need to be run again first)
    • nvme1n1p4;
      • Restore the LVM Configuration: dd if=nvme1n1p4.LVM of=/dev/sdc4 bs=512 count=2048 iflag=fullblock status=progress conv=fsync
      • NOTE: At this point, neither PARTPROBE, PVSCAN, VGSCAN, nor LVSCAN will make the restored LVM Partition appear in LSBLK output. Even worse, some LVM Commands will actually show it as Active. What does work is: vgchange --refresh VG.NVMe.P4, which may even mount it. But there isn't any data there yet, so if it does mount: umount /mnt/sdc4 (or whatever the Mount Name is). And pvscan --cache may need to be run too, to clear out 'bad information'.
      • Restore the File System: pigz -dc LV.Storage.SnapShot.gz | dd of=/dev/VG.NVMe.P4/LV.Storage bs=64M iflag=fullblock oflag=direct status=progress conv=fsync
      • Make sure the File System is OK: fsck.ext4 -f /dev/VG.NVMe.P4/LV.Storage (see the Check everything item below, as VGCHANGE may need to be run again first)
    • nvme1n1p5: dd if=nvme1n1p5.img of=/dev/sdc5 bs=64M status=progress conv=fsync
    • Check everything: lsblk
      • If the LVM information isn't displayed, run the vgchange commands again; a subsequent lsblk should then show it;
        • vgchange --refresh VG.NVMe.P3
        • vgchange --refresh VG.NVMe.P4
    • Before the 'new' Drive is put into service, check its /etc/fstab file and confirm all the other expected Drives are actually present; comment out any entries that aren't.
    • And finally, run this just to make sure everything is written to the Destination Drive (mostly for the sake of USB mounted Drives): sync
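
The restoration steps can be sketched the same way. Again a hedged sketch with the article's example names (/dev/sdc as Destination, backups in /BackUps), dry run by default so nothing is written until DRY_RUN=0 is set on the correct machine.

```shell
#!/bin/sh
# Sketch of Step(s) 2, restoring to /dev/sdc from /BackUps.
# Dry run by default: commands are printed, not executed.
DST=/dev/sdc
BAK=/BackUps

run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

run wipefs -a $DST                              # only if it isn't a new drive
run sgdisk --load-backup=$BAK/nvme1n1.bak $DST  # restore the drive structure
run partprobe $DST
run lsblk                                       # layout should match the Source

# Plain partitions
run dd if=$BAK/nvme1n1p1.img of=${DST}1 bs=64M status=progress conv=fsync
run dd if=$BAK/nvme1n1p2.img of=${DST}2 bs=64M status=progress conv=fsync
run dd if=$BAK/nvme1n1p5.img of=${DST}5 bs=64M status=progress conv=fsync

# LVM partitions: LVM header first, refresh the VG, then the filesystem image
# (if the refresh auto-mounts the still-empty LV, umount it before the dd)
restore_lv() {  # restore_lv <part#> <vg> <lv>
  run dd if=$BAK/nvme1n1p$1.LVM of=$DST$1 bs=512 count=2048 iflag=fullblock status=progress conv=fsync
  run vgchange --refresh "$2"
  run sh -c "pigz -dc $BAK/$3.SnapShot.gz | dd of=/dev/$2/$3 bs=64M iflag=fullblock oflag=direct status=progress conv=fsync"
  run fsck.ext4 -f "/dev/$2/$3"
}
restore_lv 3 VG.NVMe.P3 LV.ROOT
restore_lv 4 VG.NVMe.P4 LV.Storage
run sync
```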

Scripting

Of course all this will be scripted in the future. So basically Acronis for Linux.

Resizing Drives

Use CFDISK to resize a Partition

Use lvextend -l +100%FREE /dev/VG.NVMe.P4/LV.Storage or lvreduce -L 300G /dev/VG.NVMe.P4/LV.Storage to enlarge or reduce the size of an LV (when reducing, shrink the File System first)

Use resize2fs /dev/VG.NVMe.P4/LV.Storage to expand a File System
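
Put together, a grow operation runs in that order: Partition, LV, then File System. The sketch below uses the article's example LV and defaults to a dry run (commands are printed, not executed).

```shell
#!/bin/sh
# Hypothetical grow sequence for the article's example LV.
# Dry run by default: commands are printed, not executed.
LV=/dev/VG.NVMe.P4/LV.Storage

run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

run cfdisk /dev/sdc                # 1. enlarge the Partition (interactive)
run lvextend -l +100%FREE "$LV"    # 2. grow the LV into the new space
run resize2fs "$LV"                # 3. grow the File System to fill the LV
# Shrinking runs in the opposite order: resize2fs to the smaller size
# first, then lvreduce -L 300G "$LV", then shrink the Partition.
```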

Renaming a VG or LV

vgrename OldName NewName

...but first: vgchange -an VG.NVMe.P3 (-ay enables it again)

And then change the name in a bunch of different places like: /etc/kernel/cmdline, /etc/fstab, /etc/default/grub/, /boot/loader/entries/WhatEverEntryName, /boot/grub2/grub.cfg (only if on the booted OS: sudo grub2-mkconfig -o /boot/grub2/grub.cfg, otherwise edit the dang file)
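
As a sketch, the rename sequence looks like this (VG.NVMe.New is a made-up example name, the config-file edits still have to be done by hand, and it defaults to a dry run):

```shell
#!/bin/sh
# Hypothetical VG rename; dry run by default.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

run vgchange -an VG.NVMe.P3           # deactivate the VG first
run vgrename VG.NVMe.P3 VG.NVMe.New   # VG.NVMe.New is a made-up name
run vgchange -ay VG.NVMe.New          # re-activate under the new name
# Now update the old name in /etc/kernel/cmdline, /etc/fstab,
# /etc/default/grub, /boot/loader/entries/WhatEverEntryName, and
# (only on the booted OS) regenerate grub.cfg:
run grub2-mkconfig -o /boot/grub2/grub.cfg
```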

IDs for Drives

Tune2fs for the File System UUID: tune2fs -U WhatEverNewUUID /dev/WhatEverDeviceOrLV (or tune2fs -U random to generate one)

PARTUUID: sfdisk --disk-id /dev/sda 0x40dae42c (remember, the 0x prefix has to precede whatever value is to be changed, and the partition number will be tacked onto the end. I.e., if a Partition's PARTUUID is currently 55fa144a-01 and the desired value is 40dae42c, put 0x, that's a zero, in front of 40dae42c, and the -01 will be tacked onto the end of it when viewing with blkid)
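
A sketch of both ID changes together (all the ID values below are made-up examples; dry run by default):

```shell
#!/bin/sh
# Hypothetical ID changes; the ID values here are made-up examples.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

# File System UUID (what UUID= in /etc/fstab matches), via tune2fs:
run tune2fs -U 40dae42c-1111-2222-3333-444444444444 /dev/VG.NVMe.P4/LV.Storage

# Disk id, from which blkid derives PARTUUIDs like 40dae42c-01:
run sfdisk --disk-id /dev/sda 0x40dae42c
run blkid    # confirm the new values
```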

Drive Creation and LVM (Logical Volume Management)

cfdisk

pvcreate /dev/nvme2n1p1

vgcreate VG.NVMEx1.P1 /dev/nvme2n1p1

lvcreate -L 460G -n LV.Storage VG.NVMEx1.P1

mkfs.ext4 -L Storage_NVMEx1 -v /dev/VG.NVMEx1.P1/LV.Storage

Confirm the results with: pvs, vgs, lvs AND pvdisplay, vgdisplay, lvdisplay
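
The creation sequence above, gathered into one sketch (same example names; dry run by default):

```shell
#!/bin/sh
# Sketch of the drive creation sequence; dry run by default.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

run cfdisk /dev/nvme2n1                           # partition the drive (interactive)
run pvcreate /dev/nvme2n1p1                       # turn the Partition into a PV
run vgcreate VG.NVMEx1.P1 /dev/nvme2n1p1          # build a VG on that PV
run lvcreate -L 460G -n LV.Storage VG.NVMEx1.P1   # carve out an LV
run mkfs.ext4 -L Storage_NVMEx1 -v /dev/VG.NVMEx1.P1/LV.Storage
run pvs; run vgs; run lvs                         # short status views
```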

And for the Critics

nvme1n1p1 and nvme1n1p2: Yes, it would be great if a SnapShot could be taken of these too, but as far as research and experiments have shown, UEFI will not tolerate that, so it won't work. Plus, realistically, what's gonna change on those Partitions in the brief time DD is running for them? Hint: NOTHING!

SWAP: Yes, the SWAP Partition could just be recreated, and it won't be consistent, etc. when cloned with DD, but it doesn't make a difference at all because when the cloned system boots, the SWAP Partition is essentially reset. An interesting question is which is faster: running the various commands to create a new SWAP Partition with the same parameters, UUID, etc., VS running the DD restore command; the former is probably faster than the latter.