cross-posted from: https://lemmy.buddyverse.net/post/5454
Hello everyone, I’m fairly new to Proxmox and struggling with my homelab setup. I have two machines running Proxmox VE 9: an HP EliteDesk 800 G5 Mini (Core i7-9700) and a Dell OptiPlex 7070 Micro (9th-gen Core i3). I’m running into several issues and would appreciate your insights.
Networking Issue on EliteDesk: I have two VMs (both Ubuntu Server 24.04 LTS) on the same bridge. If I stop or shut down one VM, the other loses internet connectivity. Local access to applications still works. Any ideas on why the bridge is behaving this way?
Backup Setup on OptiPlex: I’m running a Proxmox Backup Server VM with Backblaze B2 as an S3 datastore. This is working fine so far.
Backup Problems on EliteDesk: I’m using default LVM-thin for VMs. Backups take a very long time and often freeze at 1-2%. Shutting down the VM cleanly afterward is nearly impossible. I’ve tried both Stop and Snapshot modes, but the issue persists. When a VM becomes unresponsive, it triggers the networking issue above. Would switching to ZFS help? If so, how can I migrate without losing any data?
Hardware Acceleration for Jellyfin: On the EliteDesk, I’d like to enable hardware acceleration for a VM running Jellyfin (in Docker) using the i7-9700’s UHD 630 iGPU. Can anyone recommend a clear guide specific to this CPU? The Proxmox documentation isn’t very detailed for Intel GPUs.
The networking issue is the most frustrating. Has anyone encountered similar bridge problems? Any advice on fixes or next steps would be greatly appreciated. Thank you!
I feel like I’ve done this, but it was a VERY long time ago. It certainly wasn’t from a guide specific to this, but from adapting other instructions. That’s the whole idea with a home lab - learn stuff, break stuff, figure stuff out! :-)
Wish I could be more helpful! But iirc, once you understand the gist of passing the hardware through, blacklisting kernel modules on the host, and installing the required drivers in the guest, it’s applicable to basically everything.
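To make that gist concrete, the host-side steps look roughly like this. This is a sketch, not a tested recipe: the PCI address `0000:00:02.0`, the vendor:device ID `8086:3e98` (what UHD 630 usually reports), and the VMID `100` are assumptions - verify them with `lspci` on your own host before copying anything:

```shell
# Find the iGPU's PCI address and vendor:device ID (UHD 630 is typically 8086:3e98)
lspci -nn | grep -i vga

# Tell vfio-pci to claim the iGPU at boot (ID is an assumption - use your lspci output)
echo "options vfio-pci ids=8086:3e98" > /etc/modprobe.d/vfio.conf

# Blacklist the Intel graphics driver so the host doesn't grab the device first
echo "blacklist i915" > /etc/modprobe.d/blacklist-i915.conf

# Rebuild the initramfs and reboot for the changes to take effect
update-initramfs -u
reboot

# After reboot, attach the device to the VM (100 is a placeholder VMID)
qm set 100 --hostpci0 0000:00:02.0
```

Then inside the guest it’s the usual driver install (e.g. `intel-media-va-driver-non-free` on Ubuntu) and mapping `/dev/dri` into the Jellyfin Docker container.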
As for Backblaze for ‘home lab’ backups, that sounds expensive? I run PBS on a container on my NAS for my backups - keeps it all local and effectively ‘free’. Only the things I REALLY care about - like my git server with all the code I’ve written for the lab, and even some of the more complex/outside the box configurations get backed up to the public cloud. Simple ‘cattle’ VMs do not justify additional expenses for me.
It’s fun as hell! I’ve been running Proxmox for many years now and still enjoy it VERY much. I’ve recently added 3x 12GB bus-powered A2000s to my Dell workstations. Having oodles of fun running things like piper, whisper, ollama and frigate models on them in a new k8s cluster I spun up just for ML workloads.
- This sounds like a weird one. It would be helpful to have some more info about your network. Would you share your PVE host’s `/etc/network/interfaces` file and the config files for the VMs (from `/etc/pve/qemu-server`)?
- Excellent
- I think ZFS would likely help since it can make use of block-level snapshots. The way to move things over would be to create a ZFS datastore in Proxmox and then just migrate each VM’s disk onto it.
- Personally I think this is a bit simpler in an LXC container and there are a bunch of tutorials to help. These two are similar to my own setup:
- https://blog.bekh.fr/jellyfin-lxc-with-hardware-transcoding-igpu-passtrough/
- https://www.wundertech.net/installing-jellyfin-on-proxmox/
Hope some of that helps
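For the networking point, it may help to compare against what a typical single-NIC Proxmox bridge looks like in `/etc/network/interfaces`. The interface name `eno1` and the addresses here are examples, not your actual values. One thing worth ruling out when a VM loses internet only while another VM is down: check that the surviving VM’s gateway or DNS isn’t pointing at the other VM rather than at your router.

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```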
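The LXC approach in those two tutorials mostly boils down to binding `/dev/dri` into the container. On a cgroup-v2 Proxmox host, the container config (`/etc/pve/lxc/<ctid>.conf`) gains lines roughly like these - the CTID `101` is a placeholder, and `226` is the usual major number for DRI devices (check with `ls -l /dev/dri` on your host):

```
# /etc/pve/lxc/101.conf - 101 is a placeholder CTID
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

Newer Proxmox releases also let you pass a single device through the GUI (Resources → Add → Device Passthrough), which writes an equivalent `dev0:` entry for you.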
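The ZFS migration in point 3 can be done per-VM from the CLI once the pool exists. A rough sketch - the pool name `tank`, the device `/dev/sdb`, the storage ID `local-zfs`, the VMID `100`, and the disk name `scsi0` are all placeholders for your own values, and `--delete 1` removes the old LVM-thin copy only after the move succeeds:

```shell
# Create a pool on a spare disk and register it as Proxmox storage
zpool create tank /dev/sdb
pvesm add zfspool local-zfs --pool tank --content images,rootdir

# Move one VM disk from LVM-thin to the ZFS storage (repeat per disk/VM)
qm move-disk 100 scsi0 local-zfs --delete 1
```

The move can run while the VM is online, but given your freezing backups I’d test with one small VM first.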