So after a reboot, the motherboard of my proxmox server died. Or rather, it refused to boot into the BIOS. It refused to show the boot menu. And it refused to show any output when plugged into a screen.
I did not have spare hardware to just move the proxmox install over, so here are a few rough notes on how I recovered a few files to kickstart a temporary solution on a raspberry pi, using a nixos machine that did not already have zfs installed: from inside a .raw image, from a docker container, from inside a proxmox backup, from two drives in a zfs mirror.
Help, I don’t know how to mount zfs drives on nixos
I added this to make nixos understand zfs.
boot.supportedFilesystems = [ "cifs" "zfs" ];
It failed because zfs requires networking.hostId to be set.
cat /etc/machine-id
So I added this, containing the first 8 characters of the machine-id:
networking.hostId = "61433039";
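If you'd rather not count characters by hand, the first 8 characters can be pulled out directly (any 8 hex characters work, it just has to stay stable):
head -c 8 /etc/machine-id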
The zfs and zstd packages were necessary to have the zpool and zstd commands available:
environment.systemPackages = with pkgs; [
  zfs
  zstd
];
Add the zfs pool to hardware-configuration.nix. For fsType "zfs" you can just put the pool name in device, and it will be mounted automatically:
fileSystems."/fast" = {
  device = "fast";
  fsType = "zfs";
  options = [ "ro" ];
};
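I went with the fileSystems entry, but NixOS also has an option to import whole extra pools at boot; something like this should work as well:
boot.zfs.extraPools = [ "fast" ];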
Rebuild, or reboot to test that it imports automatically. In my case it did not: my nixos machine dropped into a recovery shell because the pool could not be imported during boot.
I just rebooted the machine, chose the previous nixos generation from the boot menu, and tried again from there. God I love that feature! 😍
The first time around it did not import the pool automatically: since it came from another machine, it needed -f to force the import. So I did this manually:
> sudo zpool import -f fast
It worked!
> zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
fast                     688G  2.49T   128K  /fast
fast/backup             67.4G   433G  67.4G  /fast/backup
fast/iso                7.61G  42.4G  7.61G  /fast/iso
fast/shared              545G   991G   545G  /fast/shared
fast/subvol-100-disk-0   343M  7.67G   343M  /fast/subvol-100-disk-0
fast/subvol-101-disk-0   877M  7.14G   877M  /fast/subvol-101-disk-0
fast/vm-102-disk-0      66.0G  2.51T  43.6G  -
fast/vm-200-disk-0      1.16G  2.49T  40.5M  -
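Since these disks came straight out of a dead machine, it's probably worth a quick health check before trusting them. zpool status shows whether the mirror is degraded, and an optional scrub verifies the data (it takes a while on big drives):
zpool status fast
zpool scrub fast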
Then find your files
ls /fast/backup
Help, my files are in a proxmox vm backup
I found them, but they were in a proxmox vm backup file.
├── vzdump-qemu-102-2025_02_16-21_30_07.log
├── vzdump-qemu-102-2025_02_16-21_30_07.vma.zst
└── vzdump-qemu-102-2025_02_16-21_30_07.vma.zst.notes
Create a place to hold them
cd ../somewhere/..
mkdir pve
mv vzdump-qemu-102-2025_02_16-21_30_07.* pve
cd pve
unzstd vzdump-qemu-102-2025_02_16-21_30_07.vma.zst
This decompressed the backup into proxmox's custom vma backup format:
vzdump-qemu-102-2025_02_16-21_30_07.vma
Reading it requires proxmox's vma tool, which is not available on nixos. You could use docker, like this:
> vim Dockerfile
FROM debian:bullseye
WORKDIR /tmp
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y wget zstd libglib2.0-0 libiscsi7 librbd1 libaio1 lzop glusterfs-common libcurl4-gnutls-dev liburing-dev libnuma-dev && \
    echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" >> /etc/apt/sources.list && \
    wget http://download.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg && \
    chmod +r /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg && \
    apt-get update && \
    apt-get install -y libproxmox-backup-qemu0-dev && \
    apt-get download pve-qemu-kvm && \
    dpkg --fsys-tarfile ./pve-qemu-kvm*.deb | tar xOf - ./usr/bin/vma > ./vma && \
    chmod u+x ./vma && \
    mv ./vma /usr/local/bin && \
    mkdir -p /backup && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /backup
VOLUME /backup
Build it and run it, mounting the current folder inside the container so it can reach the backup:
docker build . -t pve-vma-extract
docker run -it -v .:/backup pve-vma-extract:latest
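If your docker version complains about the relative path in -v, spelling out the absolute path works too:
docker run -it -v "$(pwd)":/backup pve-vma-extract:latest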
Inside the container you can extract the .raw disk image from the backup:
vma extract ./<file>.vma -v ./extracted
Exit back out of the container, then attach the extracted .raw file as a loop device (-f picks the first free loop device, -P scans it for partitions, --show prints which device was used) so its partitions can be mounted afterwards:
cd extracted
sudo losetup -Pf --show disk-drive-scsi0.raw
sudo mkdir /mnt/pve
Help, my disk is a .raw image
Let's check what that did:
lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
loop0                       7:0    0   64G  0 loop
├─loop0p1                 259:3    0    1M  0 part
├─loop0p2                 259:4    0    2G  0 part
└─loop0p3                 259:5    0   62G  0 part
sda                         8:0    0  3.6T  0 disk
├─sda1                      8:1    0  3.6T  0 part
└─sda9                      8:9    0    8M  0 part
sdb                         8:16   0  3.6T  0 disk
├─sdb1                      8:17   0  3.6T  0 part
└─sdb9                      8:25   0    8M  0 part
zd0                       230:0    0  1.1G  0 disk
├─zd0p1                   230:1    0   16M  0 part
└─zd0p2                   230:2    0  1.1G  0 part
zd16                      230:16   0   64G  0 disk
├─zd16p1                  230:17   0    1M  0 part
├─zd16p2                  230:18   0    2G  0 part
└─zd16p3                  230:19   0   62G  0 part
  └─ubuntu--vg-ubuntu--lv 254:1    0   62G  0 lvm   /mnt/pve
zram0                     253:0    0 31.4G  0 disk  [SWAP]
nvme0n1                   259:0    0  1.8T  0 disk
├─nvme0n1p1               259:1    0  512M  0 part  /boot
└─nvme0n1p2               259:2    0  1.8T  0 part
  └─crypted               254:0    0  1.8T  0 crypt /home
I blindly tried a bunch of mount commands to no avail. Turns out this is not the way to do it for lvm partitions
sudo mount /dev/loop0 /mnt/pve
lsblk
sudo mount /dev/loop0p3 /mnt/pve
sudo mount -oloop /dev/loop0p3 /mnt/pve
lsblk
sudo mount /dev/zd16p3 /mnt/pve
sudo mount /dev/zd16 /mnt/pve
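The partition holds an LVM physical volume, so it cannot be mounted directly; the volume group inside it has to be activated first, which creates device nodes for its logical volumes. A rough sketch of that step, assuming the guest used Ubuntu's default ubuntu-vg naming (if lvs already lists the volume group, this has happened and you can skip it):
# scan block devices for LVM physical volumes and volume groups
sudo pvscan
# activate the guest's volume group so its logical volumes appear under /dev/ubuntu-vg/
sudo vgchange -ay ubuntu-vg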
I think this is what made it work.
sudo lvs
sudo mount /dev/ubuntu-vg/ubuntu-lv /mnt/pve
sudo ls /mnt/pve/root
Files! Fetch what you were looking for:
mkdir setup
sudo ls -la /mnt/pve/root
sudo cp -r /mnt/pve/root/homeassistant setup
sudo cp -r /mnt/pve/root/portainer setup
sudo cp -r /mnt/pve/root/monitoring setup
sudo cp -r /mnt/pve/root/traefik setup
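When you are done, it's worth unwinding things again so the disks can move on cleanly; roughly, and assuming the same names as above:
sudo umount /mnt/pve                 # unmount the guest filesystem
sudo vgchange -an ubuntu-vg          # deactivate the guest's volume group
sudo losetup -d /dev/loop0           # detach the loop device
sudo zpool export fast               # optional, before moving the disks to another machine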
Have fun.
Lessons
Legend has it you don't have a backup until you've tried restoring from it - I guess now I have a backup.
This was painful. But it worked. Next time I will keep stuff I think I'd like to recover someday outside of my VMs and containers: on a network share, in a host folder mounted inside an lxc container (or similarly with docker), or at least backed up outside the VM.
This stuff was inside a VM, which made it an extra challenge - at least the VM was backed up.
Also, zfs pools can be imported with little fuss on a new system by moving the mirrored disks over and passing the -f parameter to zpool import. I'm as satisfied as before this whole endeavour, if not more, with the choice to keep the files I do care about on a zfs pool of two mirrored 4TB ssd drives.
🤞
Resources
- zfs needs hostid: https://discourse.nixos.org/t/how-to-set-the-hostid-when-migrating-to-flakes/25607
- a true hero of the internet, describing someone else's work: https://github.com/ganehag/pve-vma-docker/pull/5/commits/cf02b19987c165c973f525b7aa92598e94a9ad12?short_path=b335630#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5
- another example: https://github.com/AenonDynamics/vma-backup-extractor?tab=readme-ov-file