I currently have a server running Arch Linux and Jellyfin, one Raspberry Pi 4 running NextCloudPi, and one Raspberry Pi running Pi-hole. Eventually I want to host all of these services, and more, on one machine.
I thought about using Proxmox and Docker, but I’m not sure what the ideal setup would look like. For now my plan is Proxmox with a single Debian VM running Docker, and Portainer, Pi-hole, Nextcloud, a reverse proxy, and Jellyfin as Docker containers.
Is that a smart setup? It gives me the ease of using Docker and an easy way of creating backups of single applications or the whole VM, leaving me the option to add containers or VMs for various other services, for testing, etc.
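As a rough sketch, that single-VM stack might look something like the following `docker-compose.yml` (a minimal, untested outline — ports, volume names, and the media path are placeholders, and env vars/config for each service are omitted):

```yaml
services:
  portainer:
    image: portainer/portainer-ce
    ports: ["9443:9443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
  pihole:
    image: pihole/pihole
    ports: ["53:53/tcp", "53:53/udp", "8080:80"]
  nextcloud:
    image: nextcloud
    volumes:
      - nextcloud_data:/var/www/html
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /srv/media:/media   # example media path on the VM
  proxy:
    image: nginx
    ports: ["80:80", "443:443"]

volumes:
  portainer_data:
  nextcloud_data:
```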
Or should I just use LXC for said applications?
Any guidance would be appreciated!
EDIT: In case my comment was overlooked. Thanks for all your comments, I’ll see how I implement things when I get the time to reinstall my server.
Proxmox, Nextcloud, and Jellyfin user here. My setup separates groups of services into their own VMs; to me, Docker is just another way to package and deploy applications that simplifies the process.
So Nextcloud and Jellyfin get their own VMs, and I deploy the applications via Docker on the separate VMs. If you want to utilize Portainer, you can deploy an agent to each of these VMs.
Lightweight applications I typically deploy to separate LXC containers: Portainer, Pi-hole, and NGINX would each get their own LXC container. You can connect to the Portainer agents on the other VMs from the Portainer server in its LXC.
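For reference, the Portainer agent on each Docker VM is a single container (this is the standard agent deployment from Portainer's docs; 9001 is the default agent port):

```shell
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent
```

Then in the Portainer UI you add each VM as an environment pointing at `<vm-ip>:9001`.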
Second this - I tend to follow the same scheme.
I run NextCloud and Jellyfin (more specifically, TurnKey Nextcloud and TurnKey MediaServer) on Proxmox as LXC containers. I don’t know if that’s good (I’m a noob too), but it seems reasonable?
This might also be a misunderstanding on my part, but the way I see it, if you’re going to run exclusively Docker containers, do you really need Proxmox (as opposed to Docker just running directly on a physical machine, or a Kubernetes cluster or something)?
Don’t get me wrong: I do have a Docker VM in Proxmox; it’s just that my order of preference for how to run any particular service goes LXC -> VM -> Docker instead of the other way around. LXCs come first because they’re lighter than VMs, and both come before Docker containers because they can be managed directly in the Proxmox UI instead of having to use a different tool. I use Docker only for software like Traefik, where the documentation makes it clear that Docker is the preferred/best-supported deployment method.
After spending a week working through the intricacies of running Jellyfin in a VM and in an LXC, I settled on a privileged LXC container.
It was so much simpler to get Quick Sync hardware transcoding working, and it just seems so much faster in LXC. Also, the host GPU can be shared across multiple LXC containers.
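For anyone trying this, sharing an Intel iGPU (`/dev/dri`) with a privileged LXC container usually comes down to two lines in the container's config on the Proxmox host (container ID 101 is just an example; 226 is the standard DRM device major number):

```
# /etc/pve/lxc/101.conf  (example container ID)
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

The same two lines can be added to several containers' configs, which is how the GPU gets shared across them.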
I just run a weekly backup of the LXC with Proxmox backup to an NFS share on the NAS.
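An on-demand equivalent of that scheduled job, assuming the container is ID 101 and the NFS-backed storage is named `nas-backup` (both hypothetical names):

```shell
vzdump 101 --storage nas-backup --mode snapshot --compress zstd
```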
I ran it this way forever; I also did the same thing but with Kubernetes. Works super well, and you can pass through any hardware you need (a GPU for Plex, for example). It also lets you control resource allocation if you’ve got other things on the server and don’t want Docker running wild for any reason (Plex transcoding, for example).
I use a mix: a Debian VM with Portainer and some LXC containers, to basically do what you’re asking… works great.
I put just about everything I can in Docker containers running on a VM in Proxmox. However, I do run Pi-hole in its own VM; for some reason it just kept stopping in Docker and didn’t run very well. I also don’t use LXC containers, so for me it’s either Docker or VMs.
I’d recommend against Jellyfin in a VM if you need hardware transcoding, although YMMV depending on your GPU.
I don’t think it’s unreasonable to run your Docker containers on the Proxmox host itself, or you can use LXC, since neither of those needs to deal with GPU passthrough. Yeah, you’ll lose the full-VM backups with Docker, but containers are easy to back up if you configure them properly.
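One common pattern for backing up a named Docker volume, for example (the volume name `nextcloud_data` is hypothetical — substitute your own):

```shell
# Archive the contents of a named volume into the current directory
docker run --rm \
  -v nextcloud_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/nextcloud_data.tar.gz -C /data .
```

Bind-mounted directories are even simpler, since you can back them up with whatever tool already handles the host filesystem.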
Don’t install Docker in an LXC when Proxmox runs on ZFS (which it has to if you want to set up an HA cluster with VM migration capabilities). You’ll also run into problems with file access rights when using NFSv4 ACLs (instead of chmod) on the datasets. If you want to store and share a lot of data, maybe look into using TrueNAS SCALE as the hypervisor.
Thanks for all your comments. Seems like either way is fine. :)