this post was submitted on 01 Oct 2025
54 points (95.0% liked)

Selfhosted


How do y'all manage all these Docker compose apps?

First I installed Jellyfin natively on Debian, which was nice because everything just worked with the normal package manager and systemd.

Then, Navidrome wasn't in the repos, but it's a simple Go binary and provides a systemd unit file, so that was not so bad just downloading a new binary every now and then.

Then... Immich came... and forced me to use Docker compose... :|

Now I'm looking at Frigate... and it also requires Docker compose... :|

Looking through the docs, looks like Jellyfin, Navidrome, Immich, and Frigate all require/support Docker compose...

At this point, I'm wondering if I should switch everything to Docker compose so I can keep everything straight.

But, how do folks manage this mess? Is there an analogue to apt update, apt upgrade, systemctl restart, journalctl for all these Docker compose apps? Or do I have to individually manage each app? I guess I could write a bash script... but... is this what other people do?

top 47 comments
[–] possiblylinux127@lemmy.zip 3 points 2 hours ago* (last edited 2 hours ago)

Why wouldn't you just use Docker compose? It has NFS support built in, and there are Ansible playbooks for it.

I personally moved to podman since it integrates with systemd but it is a bit harder.

[–] Passerby6497@lemmy.world 2 points 2 hours ago

docker compose pull; docker compose down; docker compose up -d

Pulls an update for the container, stops the container and then restarts it in the background. I've been told that you don't need to bring it down, but I do it so that even if there isn't an update, it still restarts the container.

You need to do it in each container's folder, but it's pretty easy to set an alias and just walk your running containers, or just script that process for each directory. If you're smarter than I am, you could get the list from running containers (docker ps), but I didn't name my service folders the same as the service name.
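That per-folder walk could look something like this as a rough sketch; the `~/stacks/<name>` layout and the function name are illustrative assumptions, not something from this thread:

```shell
#!/usr/bin/env bash
# Sketch of the "walk each service folder" idea. The ~/stacks/<name> layout
# and the function name update_stacks are illustrative assumptions.
update_stacks() {
  local base="${1:-$HOME/stacks}" dir
  for dir in "$base"/*/; do
    # only treat folders that actually contain a compose file as stacks
    if [ -f "${dir}compose.yaml" ] || [ -f "${dir}docker-compose.yml" ]; then
      echo "updating stack: $dir"
      (cd "$dir" && docker compose pull && docker compose up -d)
    fi
  done
}
```

Dropping that in your `.bashrc` gives you a one-command updater without needing the folder names to match the service names.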

[–] ikidd@lemmy.world 4 points 4 hours ago* (last edited 4 hours ago)

I never, ever use a bare docker run command unless it's for a one-off, never-used-again container. Other than actively working on a project, I can't see why anyone would use that.

Docker compose for every stack, Watchtower for the containers where I'm not too worried about breaking changes on update.

[–] suicidaleggroll@lemmy.world 9 points 11 hours ago* (last edited 11 hours ago)

Docker is far cleaner than native installs once you get used to it. Yes native installs are nice at first, but they aren't portable, and unless the software is built specifically for the distro you're running you will very quickly run into dependency hell trying to set up your system to support multiple services that all want different versions of libraries. Plus what if you want or need to move a service to another system, or restore a single service from a backup? Reinstalling a service from scratch and migrating over the libraries and config files in all of their separate locations can be a PITA.

With native installs, it's pretty much a requirement to start spinning up separate VMs for each service to keep them from interfering with each other and to allow backup and migration to other hosts, and managing 50 different VMs is much more involved and resource-intensive than managing 50 different containers on one machine.

Also you said that native installs just need an apt update && apt upgrade, but that's not true. Services that are built into your package manager sure, but most services do not have pre-built packages for all distros. For the vast majority, you have to git clone the source, then build from scratch and install. Updating those services is not a simple apt update && apt upgrade, you have to cd into the repo, git pull, then recompile and reinstall, and pray to god that the dependencies haven't changed.

docker compose pull/up/down is pretty much all you need, wrap it in a small shell script and you can bring up/down or update every service with a single command. Also if you use bind mounts and place them in the directory for the service along side the compose file, now your entire service is self-contained in one directory. To back it up you just "docker compose down", rsync the directory to the backup location, then "docker compose up". To restore you do the exact same thing, just reverse the direction of the rsync. To move a service to a different host, you do the exact same thing, just the rsync and docker compose up are now being run on another system.
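That down/copy/up flow can be sketched as a small function; the function name and the assumption that everything (compose file plus bind mounts) lives in one directory are mine:

```shell
#!/usr/bin/env bash
# Sketch of the down/rsync/up backup flow described above; backup_stack and
# the single-directory-per-service layout are illustrative assumptions.
backup_stack() {
  local dir="$1" dest="$2"
  (cd "$dir" && docker compose down)   # stop the stack so files are quiescent
  rsync -a --delete "$dir"/ "$dest"/   # copy compose file + bind mounts
  (cd "$dir" && docker compose up -d)  # bring it back up
}
```

Restoring or migrating is the same three steps with the rsync direction reversed (and the final `up` run on the destination host).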

Docker lets you pack an entire service, with all of its dependencies, databases, config files, and data, into a single directory that can be backed up and/or moved to any other system with nothing more than a "down", "copy", and "up", with zero interference with other services running on your system.

I have 158 containers running on my systems at home. With some wrapper scripts, management is trivial. The thought of trying to manage native installs on over a hundred individual VMs is frightening. The thought of trying to manage this setup with native installs on one machine, if that was even possible, is even more frightening.

[–] hperrin@lemmy.ca 20 points 17 hours ago (3 children)

Don’t auto update. Read the release notes before you update things. Sometimes you have to do some things manually to keep from breaking things.

[–] possiblylinux127@lemmy.zip 1 points 2 hours ago

Autoupdate is fine for personal stuff. Just set a specific schedule so that if something breaks, you know when it happened. Rollbacks are easy and very rarely needed.

[–] suicidaleggroll@lemmy.world 5 points 12 hours ago* (last edited 12 hours ago)

Pretty much guaranteed you'll spend an order of magnitude more time (or more) doing that than just auto-updating and fixing things on the rare occasion that they break. If you have a service that likes to throw out breaking changes on a regular basis, it might make sense to read the release notes and manually update that one, but not everything.

[–] zingo@sh.itjust.works 6 points 15 hours ago (1 children)

Politically correct of course.

But from my own experience using Watchtower for over 7 years, I can count on one hand the times it actually broke something. Most of the time it was database related.

But you can put apps on the Watchtower ignore list (looking at you, Immich!), which clears those out fairly quickly.

[–] WhatAmLemmy@lemmy.world 1 points 13 hours ago

And if you keep all your Docker data on ZFS as datasets + sanoid, you can just roll back to the last snapshot if that ever does happen.

[–] hoppolito@mander.xyz 32 points 20 hours ago

But, how do folks manage this mess?

I generally find it less of a mess to have everything encapsulated in docker deployments for my server setups. Each application has its own environment (i.e. I can treat each container as its own ‘Linux machine’ which has only the stuff installed that’s important) and they can all be interfaced with through the same cli.

Is there an analogue to apt update, apt upgrade, systemctl restart, journalctl?

Strictly speaking docker pull <image>, docker compose up, docker restart <container>, and docker logs <container>. But instead of finding direct equivalents to a package manager or system service supervisor, i would suggest reading up on

  1. the docker command line, with its simple docker run command and the (in the beginning) super important docker ps
  2. The concept of Dockerfiles and what exactly they encapsulate - this will really help understand how docker abstracts from single app messiness
  3. docker-compose to find the equivalent of service supervision in the container space

Applications like immich are multi-item setups which can be made much easier while maintaining flexibility with docker-compose. In this scenario you switch from worrying about updating individual packages, and instead manage ‘compose files’, i.e. clumps of programs that work together to provide a specific service.

Once you grok the way compose files make that management easier (they provide the same isolation and management regardless of any outer environment), you have a plethora of tools that make manual maintenance easy (Dockge, Portainer, ...) or, more often, make manual maintenance less necessary through automation (Watchtower, Ansible, Komodo, ...).

I realise this can be daunting in the beginning but it is the exact use case for never having to think about downloading a new Go binary and setting up a manual unit file again.

[–] slazer2au@lemmy.world 8 points 16 hours ago

Each app has a folder and then I have a bash script that runs

docker compose up -d 

In each folder of my containers to update them. It is crude and will break something at some stage, but meh, Jellyseerr or TickDone being offline for a bit is fine while I debug.

[–] tehWrapper@lemmy.world 4 points 14 hours ago (1 children)

I have finally had to switch to using Docker for several things I used to just install manually (ttrss being the main one). It sure feels dirty when I used to just apt update and know everything was updated.

I can see the draw for docker but feel it's way over used right now.

[–] Jakeroxs@sh.itjust.works 4 points 13 hours ago

Just replace apt update with docker pull 🤷‍♂️

[–] GreenKnight23@lemmy.world 11 points 18 hours ago

docker compose CLI.

KISS, never did me wrong.

[–] eager_eagle@lemmy.world 7 points 16 hours ago

Yeah, I have everything as compose.yaml stacks and those stacks + their config files are in a git repo.

[–] frongt@lemmy.zip 20 points 20 hours ago (1 children)

I just use watchtower to update automatically.

Docker has a logs command.

[–] antifa_ceo@lemmy.ml 9 points 20 hours ago

And being able to opt in just with a container label is super convenient

[–] communism@lemmy.ml 7 points 17 hours ago (1 children)

Watchtower for automated updates. For containers that don't have a latest tag to track, editing the version number manually and then docker compose pull && docker compose up -d is simple enough.

[–] kata1yst@sh.itjust.works 6 points 15 hours ago

Adding here: most Docker containers support semver pinning! It's a great balance between automated updates and avoiding breakage.
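For instance, a hypothetical compose.yaml pinning a minor version line instead of latest (the service and tag here are purely illustrative; whether partial tags like this exist depends on the image publisher):

```yaml
services:
  navidrome:
    # Pinning to a minor line: pulls pick up 0.52.x patch releases,
    # but not a potentially breaking 0.53.0. Tag schemes vary per image,
    # so check the registry for what the project actually publishes.
    image: deluan/navidrome:0.52
```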

[–] gravitywell@sh.itjust.works 9 points 18 hours ago (4 children)
[–] village604@adultswim.fan 2 points 4 hours ago

Hmm, I wonder if I can use this on my Synology to manage things until I get around to finishing my proxmox setup.

[–] Kyle@lemmy.ca 2 points 10 hours ago

Wow thank you for this. This looks so much nicer than portainer.

Subscribing to these communities is so helpful because of discovery like this.

[–] lka1988@sh.itjust.works 2 points 11 hours ago* (last edited 11 hours ago)

Same here. Dockge is developed by the Uptime Kuma dev.

It's so much easier to use than Portainer: no weird licensing shit, uses standard Docker locations, and works even with existing stacks. It also helps me keep Docker stacks organized - each compose.yaml lives in its own folder under /opt/stacks/.

I have 4 VMs on my cluster specifically for Docker, each with its own Dockge instance; the instances can be linked together so that any Dockge instance in my cluster can access all Docker stacks across all the VMs.

[–] princessnorah@lemmy.blahaj.zone 3 points 18 hours ago

+1 for Dockge.

[–] non_burglar@lemmy.world 2 points 13 hours ago

I didn't see ansible as a solution here, which I use. I run docker compose only. Each environment is backed up nightly and monitored. If a docker compose pull/up and then image clean breaks a service, I restore from a backup that works and see what went wrong.

[–] HelloRoot@lemy.lol 4 points 16 hours ago* (last edited 16 hours ago)

I manage them with dokploy.com

I update them manually after checking if the update is beneficial to me.

If not then why touch a running system?

[–] JASN_DE@feddit.org 13 points 21 hours ago* (last edited 21 hours ago) (1 children)

Check out Dockge. It provides a simple yet very usable and useful web UI for managing Docker compose stacks.

[–] ook@discuss.tchncs.de 5 points 18 hours ago (1 children)

Was looking if anyone mentioned it!

I started with portainer but it was way too complex for my small setup. Dockge works super well, starting, stopping, updating containers in a simple web interface.

Just updating Dockge itself from inside Dockge doesn't seem to work, but to be fair I haven't looked into it much yet.

[–] mbirth@lemmy.ml 5 points 17 hours ago (1 children)

Can Dockge manage/cleanup unused images and containers by now? That’s the only reason I keep using Portainer - because it can show all the other stuff and lets me free up space.

[–] wreckedcarzz@lemmy.world 7 points 16 hours ago* (last edited 16 hours ago)

No, not through the dockge UI. You can do it manually with standard docker commands (I have a cron task for this) but if you want to visualize things, dockge won't do that (yet?).

[–] Jakeroxs@sh.itjust.works 2 points 13 hours ago

It's really nice once it's going, especially if you link services together in a compose file and farm out all the individual YAMLs for each service, or use something like Dockge to do it.

[–] SnotFlickerman@lemmy.blahaj.zone 10 points 20 hours ago* (last edited 20 hours ago) (1 children)

In the docker folder with the docker-compose.yml of whatever docker container you want to upgrade:

docker compose pull && docker compose up -d

As others have said, for large groups of containers it's helpful to use Watchtower.

Immich in particular warns to backup your database before an upgrade. Also be on the lookout for breaking changes which require you to alter your docker-compose.yml file before an upgrade.
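A sketch of that pre-upgrade dump as a function; the container name `immich_postgres` and the `postgres` user match Immich's default compose file at the time of writing, but verify against your own setup before relying on this:

```shell
#!/usr/bin/env bash
# Sketch: dump Immich's Postgres database before an upgrade. The container
# name "immich_postgres" and user "postgres" are assumptions taken from
# Immich's default compose file; adjust to match your deployment.
backup_immich_db() {
  local out="immich-db-$(date +%F).sql.gz"
  docker exec -t immich_postgres pg_dumpall --clean --if-exists --username=postgres \
    | gzip > "$out" && echo "wrote $out"
}
```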

Oh and after upgrades to remove any dangling images which sometimes take up a lot of space:

docker image prune

Also if you want services to be interoperable, learn about docker networking now not later and remember for static IPs you must create a user defined bridge.
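For example (network name, subnet, and addresses all made up for illustration), a user-defined bridge is what enables both name-based DNS between containers and static `--ip` assignment:

```shell
#!/usr/bin/env bash
# Sketch: a user-defined bridge with an explicit subnet so a container can
# claim a static IP. "media-net", the subnet, and the address are illustrative.
setup_network() {
  docker network create --driver bridge --subnet 172.30.0.0/24 media-net
  # a container on that network can then take a fixed address:
  docker run -d --name jellyfin --network media-net --ip 172.30.0.10 jellyfin/jellyfin
}
```

On the default bridge, `--ip` is ignored and containers can't resolve each other by name, which is why the user-defined network matters.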

[–] sobchak@programming.dev 3 points 19 hours ago (1 children)

I think compose files are usually pinned to a version, or use a .env file that needs to be changed to update to a new version.

I personally don't update very often; usually not until I'm forced to for some reason. I find that just checking the documentation for any upgrade/migration guides, and doing it manually is sufficient. I don't expose this kind of stuff publicly; if I did, I'd probably update regularly.

[–] SnotFlickerman@lemmy.blahaj.zone 3 points 19 hours ago* (last edited 19 hours ago)

Immich is a touchier beast because it includes a mobile app, and the mobile app and the Docker container generally need to be either the same version or within a few versions of one another. There was a stretch where I forgot to update the server while the mobile app kept updating on my phone, and it stopped backing up photos because it could no longer communicate with the server.

I don't expose services to the outside world either, but I still enjoy keeping things up to date. Gives me something to do.

[–] synae@lemmy.sdf.org 2 points 14 hours ago (1 children)

I use k3s and argocd to keep everything synced to the configuration checked into a git repo. But I wouldn't recommend that to someone just getting started with containers; kubernetes is a whole new beast to wrestle with.

[–] possiblylinux127@lemmy.zip 2 points 2 hours ago (1 children)

Kubernetes is the Arch of Containers except way more confusing

[–] synae@lemmy.sdf.org 2 points 1 hour ago

I use it for work so it felt natural to do it at home too. If anyone has time to learn it as a hobby and doesn't mind a challenge, I recommend it. But IMO you need to already be familiar with a lot of containerization concepts

[–] mhzawadi@lemmy.horwood.cloud 5 points 18 hours ago

So I have a git repo with all my compose files in it; some of the stacks are version pinned, some are latest. With the git repo I get versioning of the files and a way to get compose files onto remote servers. In the repo is a readme with all the steps needed to start each stack, and in what order.

I use portainer to keep an eye on things and check logs, but starting is always from the cli

[–] LycaKnight@infosec.pub 5 points 18 hours ago

I use Dockge. 1-2 years ago I started to self-host everything with Docker. I now have 30+ containers, and Dockge is absolutely fantastic. I host all my stuff on a root server from Hetzner, and if they offer a cheaper server I switch. Since all my stuff is hosted in Docker, I can simply copy it to the new server, start the Docker containers, and it runs.

https://github.com/louislam/dockge

[–] tofu@lemmy.nocturnal.garden 6 points 19 hours ago

I'd suggest putting the compose stacks in git and then cloning them either manually or with some tool.

For fully automated gitops with docker compose, check this blogpost

[–] Dagnet@lemmy.world 4 points 18 hours ago (1 children)

Strongly recommend Komodo. I tried Dockge and Portainer, but Komodo was easy to install and has great features, from scheduled updates to using compose files from a git repo. Also you can migrate existing apps to it without too much work.

[–] Bakkoda@sh.itjust.works 3 points 16 hours ago

I just made the switch to Komodo last week: a Komodo LXC managing 4 VMs across two Proxmox hosts. It easily added all the existing servers and just worked. I think any of these systems are probably overkill for my needs, but Komodo had the nicest "fresh out of the box, find the important stuff right away" feel to it. My two cents.

[–] rozodru@piefed.social 1 points 14 hours ago

I run Akkoma, Navidrome, Searx, Vaultwarden, RomM, Forgejo, WireGuard, RDP, and a few other things, all via Docker. Honestly I just keep everything in its own dir and have Yazi on my server to make it easier to manage. I don't auto-update anything; it's all manual updates.

I'm probably going to slap Watchtower in there to make things easier. No need to overthink it, in all honesty.

[–] eksb@programming.dev 3 points 20 hours ago* (last edited 20 hours ago) (2 children)

I have 5 docker-compose-based services. I wrote a shell script:

#!/usr/bin/env bash
for y in $(find /etc/ -name docker-compose.yml); do
  cd $(dirname $y)
  docker compose pull
  docker compose up -d
done

(edit: spelling; thanks Unquote0270)

[–] qqq@lemmy.world 2 points 12 hours ago* (last edited 12 hours ago)

For loops with find are evil for a lot of reasons, one of which is spaces:

$ tree
.
├── arent good with find loops
│   ├── a
│   └── innerdira
│       └── docker-compose.yml
└── dirs with spaces
    ├── b
    └── innerdirb
        └── docker-compose.yml

3 directories, 2 files
$ for y in $(find .); do echo $y; done
.
./are
t good with fi
d loops
./are
t good with fi
d loops/i

erdira
./are
t good with fi
d loops/i

erdira/docker-compose.yml
./are
t good with fi
d loops/a
./dirs with spaces
./dirs with spaces/i

erdirb
./dirs with spaces/i

erdirb/docker-compose.yml
./dirs with spaces/b

You can kinda fix that with IFS (this breaks if newlines are in the filename which would probably only happen in a malicious context):

$ OIFS=$IFS
$ IFS=$'\n'
$ for y in $(find .); do echo "$y"; done
.
./arent good with find loops
./arent good with find loops/innerdira
./arent good with find loops/innerdira/docker-compose.yml
./arent good with find loops/a
./dirs with spaces
./dirs with spaces/innerdirb
./dirs with spaces/innerdirb/docker-compose.yml
./dirs with spaces/b
$ IFS=$OIFS

But you can also use something like:

find . -name 'docker-compose.yml' -printf '%h\0' | while read -r -d $'\0' dir; do
      ....
done

or in your case this could all be done from find alone:

find . -name 'docker-compose.yml' -execdir ...

-execdir in this case is basically replacing your cd $(dirname $y), which is also brittle when it comes to spaces and should be quoted: cd "$(dirname "$y")".
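Putting that together, a space-safe walk over every compose directory; the `echo` stands in for whatever per-stack commands you'd actually run, and the function name is made up:

```shell
#!/usr/bin/env bash
# Space-safe iteration over compose directories: find -printf '%h\0' emits
# each containing directory NUL-terminated, and read -d '' consumes them
# without word-splitting on spaces. (-printf is GNU find.)
each_compose_dir() {
  find "$1" -name 'docker-compose.yml' -printf '%h\0' |
    while IFS= read -r -d '' dir; do
      echo "found stack: $dir"  # e.g. (cd "$dir" && docker compose pull && docker compose up -d)
    done
}
```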

[–] Unquote0270@programming.dev 5 points 20 hours ago

I hope the real version doesn't have the spelling problem!

[–] TakenDistance@sh.itjust.works 1 points 20 hours ago

I use Portainer (portainer.io) - it's a pretty nice WebUI which lets me manage the whole set of services and easily add new ones, as you can edit the compose YAML right in the browser.

There’s no substitute for knowing all the docker commands to help you get around but if you are looking for something to help with management then this might be the way to go.

Watchtower, also recommended, is probably a good shout; just be warned about auto-updating past your config: it's super easy for the next image you pull to break your setup because of a new ENV var or other dependency you're not aware of.