My friend just shared the existence of this software with me, and it looks perfect for me. His setup uses docker though, and mine doesn’t. The non-docker instructions seem simple enough, but I can’t figure out how to install the requirements. The “pip install” complains and says I should use apt, but I can’t find most of those requirements in my sources. For example “python3-verboselogs” isn’t found. Can someone help? I’d love to get this running!
I know it’s not helpful or what you’re asking for, but honestly, just learn docker and all of these kinds of problems go away.
Once you learn to spin up one docker container, you can spin up nearly any of them (there’s a couple of extra steps if you need GPU acceleration, things like that).
You’ll be kicking yourself that you didn’t make the jump earlier. Sounds like you’re already using Linux too, so it’s really not that big a leap.
I’ve tried a few times, and have yet to fundamentally understand it. Been using Linux since 2007!
What is it you’re struggling to understand? Like is there a concept or command or something that just isn’t clicking?
Personally, I’m mostly struggling with what data is stored where and how persistence is achieved. I have no issues getting services up and running with docker, but I’m paranoid about messing up my data.
If the service plainly stores content to a database, I have no problem backing up the database. But the services offered via “simply run this docker command” are hugely varied, and so is what they store. Between stored content (e.g. photos, documents, …), metadata and configuration, I feel lost on how to handle this when it comes to updates.
In comparison, when I set up an LXC container, where I take care of each dependency and step of the setup myself, I’m more aware of what is stored where and can take care of my data.
Okay, so I think I can help with this a little.
The “secret sauce” of Docker / containers is that they’re very good at essentially lying to the contents of the container and making it think it has a whole machine to itself. By that I mean the processes running in a container will write to, say, `/config` and be quite content writing to that directory, but docker is secretly redirecting that write to somewhere else. Where that “somewhere else” is, is known as a “volume” in docker terminology, and you can tell it exactly where you want that volume to be. When you see a command with `-v` in it, that’s a volume map - so if you see something like `-v /mnt/some/directory:/config` in there, that’s telling docker “when this container tries to write to `/config`, redirect it to `/mnt/some/directory` instead”.

That way you can have 10 containers all thinking they’re each writing to their own special `/config` folder, but actually they can all be writing to somewhere unique that you specify. That’s how you get the container to read and write files in specific locations you care about, that you can back up and access. That’s how you get persistence.

There’s other ways of specifying “volumes”, like named volumes and such, but don’t worry too much about those - the good ol’ host path mapping is all you need in 99% of cases.
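To make that concrete, here’s a minimal example - the image name and host path are made up, swap in whatever the project’s docs actually use:

```
# Map the host folder /mnt/appdata/myapp onto the /config
# directory the container thinks it owns
docker run -d \
  --name myapp \
  -v /mnt/appdata/myapp:/config \
  some/image:latest
```

Everything the app writes to /config lands in /mnt/appdata/myapp on the host, and that’s the folder you back up.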
If you don’t specify a volume, docker will create an anonymous one for you so the data can be written somewhere, but do not rely on this - that’s how you lose data, because you’ll invariably run some cleanup command like `docker volume prune` to recover space and it’ll delete an unused anonymous volume that had some important data in it.
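If you want to see what docker has created behind your back, the volume subcommands are worth knowing (the name here is just a placeholder):

```
# List all volumes - anonymous ones show up as long random hashes
docker volume ls

# Show details, including where the data actually lives on the host
docker volume inspect <volume_name_or_hash>

# Careful: removes volumes not currently attached to any container
docker volume prune
```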
It’s exactly the same way docker does networking, around port mapping - you can map any port on your host to the port the container cares about. So a container can be listening on port 80, but with the `-p 8123:80` argument, traffic to port 8123 on your host is silently redirected by the docker engine to port 80 inside the container.

Now, as for updates - once you’ve got your volumes mapped (and the number and location of them will depend on the container itself, but they’re usually very well documented), the application running in the container will be writing whatever persistent data it needs to those folders. To update the application, you just need to pull a newer version of the docker image, then stop and remove the old container and start a fresh one from the updated image - it’ll come up with the same volumes and carry on where it left off. How well updates work really depends on the application itself at this point; it’s not really something docker has any control over, and the same would be true if you were running via LXC or apt-get or whatever - the application will start up, read the files and hopefully handle whatever migrations and updates it needs to do.
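Here’s roughly what that update dance looks like, reusing the made-up names from the earlier example:

```
# Grab the newer image
docker pull some/image:latest

# Stop and remove the old container - the data is safe in the mapped folder
docker stop myapp
docker rm myapp

# Start a fresh container from the updated image with the same mappings
docker run -d \
  --name myapp \
  -p 8123:80 \
  -v /mnt/appdata/myapp:/config \
  some/image:latest
```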
It’s worth knowing that docker images have tags that let you pin a specific version if you don’t want it updating. The default is an implied `:latest` tag, but for something like postgres, which has a slightly more involved update process, you’ll want to use a specific tag like `postgres:14.3` or whatever.

Hope that helps!
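P.S. pinning is just the bit after the colon in the image name - the version here is only an example:

```
# Implied :latest - whatever happens to be newest at pull time
docker pull postgres

# Pinned to a specific version - stays put until you change the tag
docker pull postgres:14.3
```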