My friend just shared the existence of this software with me, and it looks perfect for me. His setup uses docker though, and mine doesn’t. The non-docker instructions seem simple enough, but I can’t figure out how to install the requirements. The “pip install” complains and says I should use apt, but I can’t find most of those requirements in my sources. For example “python3-verboselogs” isn’t found. Can someone help? I’d love to get this running!
This is for Gentoo but very applicable to your case as well:
It would be easier for you to just learn docker. It’s not complicated. Just do it.
Not OP. I’m willing to learn Docker, but I just can’t get it going. My machine is permanently connected through NordVPN, and I use Meshnet to access it from other devices when I’m not on local wifi. Docker + VPN doesn’t work: Docker fails to download anything from Docker Hub, and every request times out. The instructions I found for bypassing it looked very difficult, manually editing subnets or something I feel uncomfortable with. Disabling services, then disabling the VPN, then installing docker stuff, and then restarting services and the VPN also seems silly. I’ve got lots going through dietpi (many nicely embedded services), though it’s holding me back, for example, from running Jellyseerr.
I guess I must be stupid, because I’ve tried a few times and never understood it. I tried projects like DockStarter.
Take it from someone who is a Linux noob and Googles for terminal commands every time, and whose most used keys are ctrl c, ctrl v…
- Go to the official Docker documentation, copy-paste the commands to install docker.
- Go to the Portainer documentation, copy-paste the commands to install Portainer Community Edition.
- Find a service you want to install and copy its ‘docker compose’ text. (A good first service to install is Watchtower, which takes care of updating other containers.)
- Go to Portainer, find the ‘Stacks’ tab, paste, click ‘deploy’. (A concrete sketch of these steps follows below.)
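A hedged sketch of those steps, with commands adapted from the official Docker and Portainer install docs (double-check them there, since ports and versions change):

```
# Step 1: install Docker via the official convenience script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Step 2: install Portainer Community Edition.
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

# Step 3: the 'docker compose' text for Watchtower. Normally you'd paste
# just the YAML below into Portainer's Stacks editor instead of writing a file.
cat > docker-compose.yml <<'EOF'
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
EOF
```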
Don’t do this on your main server. Use some old hardware or a cheap VPS to practise on.
The main skill I need is googling and asking AI. It’s that easy.
You don’t need to build a docker container from scratch, you just need to run one. It’s infinitely less complicated.
I have a system that’s been working well (except for this new thing I’m trying to add) for a couple of years now. I am not looking to replace it with docker (something that I have failed with in the past). Maybe next time my system breaks I’ll take another look at docker.
I think your failings with docker stem from a complete lack of understanding. It takes little to no effort and will replace nothing. You’re causing yourself a lot more work, not just now but also in the future, trying to do things the way you are. It shouldn’t take more than 10 minutes to install docker, docker compose, and research the settings you need to add to a compose file before running it. At that point you’re done: no dependencies, no maintenance. Need to update? Pull the new image and relaunch the container. Takes seconds.
I can’t imagine trying to wrap my head around python dependencies and venv when docker seems a bridge too far.
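For what it’s worth, the update cycle described above really is just two commands once a service lives in a compose file (a sketch, assuming you’re in the directory holding docker-compose.yml):

```
docker compose pull    # fetch newer images for the services in the file
docker compose up -d   # recreate only the containers whose image changed
```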
Yes, complete lack of understanding. That is a problem when it comes to working with something. I don’t understand python and venv either, but I got it working anyway in about 10 minutes. My experience with docker is that it had too many moving parts, particularly when it came to networking. It obviously seems easy to you and lots of other people, but it hasn’t come easily to me. I’ll probably need someone in the room with me to ever understand it.
This just tells me you must be trying to build a container instead of just running a ready made one. That’s what I mean by the complete lack of understanding. And in response to your previous comment, I doubt you’re stupid. You’re just clearly misguided on this subject. Save yourself all the headache and learn how to launch a docker container. It’s even easier than what you’ve just tackled. I believe in you.
I appreciate your confidence (and your docker evangelism), but I don’t have the bandwidth to tackle a docker project right now. I don’t believe I was ever building containers, as I was leveraging projects like DockStarter designed to make things more painless. I’m sure I’ll try again sometime, but that time isn’t right now.
Docker doesn’t replace your current system. It just runs containers (which act like a separate system).
You can also try podman, which won’t silently rewrite your firewall rules… I’ll never forgive docker for doing that.
Especially with anything requiring Python. I also isolate anything using node.
I know it’s not helpful or what you’re asking for but honestly, just learn docker and all of these kinds of problems just go away.
Once you learn to spin up one docker container, you can spin up nearly any of them (there’s a couple of extra steps if you need GPU acceleration, things like that).
You’ll be kicking yourself that you didn’t make the jump earlier. Sounds like you’re already using Linux too, so it’s really not that big a leap.
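And a first container really is one command. The images below are arbitrary public examples, nothing you need to keep:

```
# The classic smoke test: downloads a tiny image and runs it once.
docker run hello-world

# Something slightly more real: a throwaway web service on host port 8080.
docker run -d --name whoami -p 8080:80 traefik/whoami
curl http://localhost:8080    # the container answers with request details
docker rm -f whoami           # clean up
```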
I’ve tried a few times, and have yet to fundamentally understand it. Been using Linux since 2007!
What is it you’re struggling to understand? Like is there a concept or command or something that just isn’t clicking?
Personally, I’m mostly struggling with what data is stored where and how persistence is achieved. I have no issues getting services up and running with docker, but I’m paranoid about messing up my data.
If the service plainly stores content in a database, I have no problem backing up the database. But the services offered via “simply run this docker command” are as diverse as the content they store. Between stored content (e.g. photos, documents, …), metadata and configuration, I feel lost on how to handle all of this when it comes to updates.
In comparison, when I set up an LXC container, where I take care of each dependency and step of the setup myself, I’m more aware of what is stored where and can take care of my data.
Okay, so I think I can help with this a little.
The “secret sauce” of Docker / containers is that they’re very good at essentially lying to the contents of the container and making it think it has a whole machine to itself. By that I mean the processes running in a container will write to, say, /config and be quite content to write to that directory, but docker is secretly redirecting that write to somewhere else. Where that “somewhere else” is, is known as a “volume” in docker terminology, and you can tell it exactly where you want that volume to be. When you see a command with -v in it, that’s a volume map. So if you see something like -v /mnt/some/directory:/config in there, that’s telling docker “when this container tries to write to /config, redirect it to /mnt/some/directory instead.”

That way you can have 10 containers all thinking they’re each writing to their own special /config folder, but actually they can all be writing to somewhere unique that you specify. That’s how you get the container to read and write files in specific locations you care about, that you can back up and access. That’s how you get persistence. There are other ways of specifying “volumes”, like named volumes and such, but don’t worry too much about those; the good ol’ host path mapping is all you need in 99% of cases.
If you don’t specify a volume, docker will create one for you so the data can be written somewhere but do not rely on this - that’s how you lose data, because you’ll invariably run some docker clean command to recover space and delete an unused unnamed volume that had some important data in it.
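In practice that looks something like this (a sketch; the image is one of the linuxserver.io containers, which conventionally write their settings to /config, and the host path is just an example):

```
# Everything this container writes to /config actually lands in
# /srv/nginx-config on the host, which you can back up like any folder.
docker run -d --name nginx \
  -v /srv/nginx-config:/config \
  lscr.io/linuxserver/nginx:latest

# List the volumes docker knows about; unnamed ones are the risky kind.
docker volume ls
```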
Docker does networking in exactly the same way, via port mapping: you can map any port on your host to the port the container cares about. So a container can be listening on port 80, but you reach it via port 8123 on your host, with the docker engine silently forwarding the traffic; that’s the -p 8123:80 argument (host port first, container port second).

Now, as for updates: once you’ve got your volumes mapped (the number and location of them will depend on the container itself, but they’re usually very well documented), the application running in the container will be writing whatever persistence data it needs to those folders. To update the application, you just need to pull a newer version of the docker container, then stop the old one and start it again; it’ll start up using the “new” container. How well updates work really depends on the application itself at this point. That’s not really something docker has any control over; the same would be true if you were running via LXC or apt-get or whatever: the application will start up, read the files, and hopefully handle whatever migrations and updates it needs to do.
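As a sketch of both halves of that, with stock nginx standing in for the application:

```
# Port mapping: host port 8123 forwards to the container's port 80.
docker run -d --name web -p 8123:80 nginx
curl http://localhost:8123    # answered by nginx inside the container

# Manual update cycle: pull the newer image, then recreate the container.
docker pull nginx
docker rm -f web
docker run -d --name web -p 8123:80 nginx
```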
It’s worth knowing that docker containers usually have labels and tags that let you pin a specific version if you don’t want them updating. The default is an implied :latest tag, but for something like postgres, which has a slightly more involved update process, you will want to use a specific tag like postgres:14.3 or whatever. Hope that helps!
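To put the tag difference in command form (pull commands only, purely illustrative):

```
docker pull postgres        # implied :latest, moves to whatever is newest
docker pull postgres:14.3   # pinned, stays on 14.3 until you change the tag
```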
Ahh, so you can’t install packages into the system python unless you use apt. What you need to do is create a virtual environment (venv); then you can source that venv and install packages into it.
Edit: docker is simple, just use docker compose files. The compose file outlines how to run a prebuilt docker image (which behaves a bit like a lightweight virtual machine, though it’s really just an isolated process).
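The venv part is about four commands. A sketch, run from the project’s own directory (the requirements file name is whatever the project ships):

```
python3 -m venv .venv              # create the environment in ./.venv
source .venv/bin/activate          # put the venv's bin/ first on your PATH
pip install -r requirements.txt    # installs into .venv, not the system
deactivate                         # back to the system python
```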
Not being a Python developer myself, I’d almost go the Docker route simply to avoid the hell that is Python package management.
While I can’t suggest anything specifically helpful (I’ve forgotten), I’d say check the project’s Dockerfile. It’ll show you how they handle the dependencies in Docker, which should give you an idea of what to do.
Every couple of years I try docker again. I just fail to wrap my head around it. I have a local friend who got it up and running though, so maybe I’ll have him hold my hand through it.
That’s an Ubuntu thing. They prefer you use apt to install deps, but you don’t absolutely need to. The PROPER way to work around this is starting a localized python virtual environment for your project directory. That is essentially sandboxing a python environment that only installs dependencies into the project directory and doesn’t alter your system globally.
Lots of instructions on the steps to do this out there that should get you going in just a few minutes.
OK thanks. I will look into that. The fact that they are installed in a “sandbox”… will that prevent them from being accessible to the decluttarr script?
This happens inside whatever directory you have Decluttarr in, and then the local venv runs Decluttarr.
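Something like this, as a hedged sketch (the directory and the entry-point script name are illustrative; check the project’s README for the real ones):

```
cd ~/decluttarr
python3 -m venv .venv
.venv/bin/pip install -r requirements.txt
.venv/bin/python main.py    # runs inside the venv, no activation needed
```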
I don’t understand a bit of this, but I got everything installed and running. Seems I have to ‘activate’ the venv and run the script from within it. Not sure how this works with the script auto-running itself periodically, but I guess I will find out! Thanks for pointing me in the right direction.
A virtual environment is just a copy of the python and pip binaries. When you activate the venv, the venv dirs temporarily get added to your path, so your regular python alias points to the binary in the venv (run which python with the venv active to verify). Pip will install modules to a subdir of your venv. It basically works like npm and the node_modules dir.

On second read, maybe you already knew that.
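Either way, you can watch that path swap happen yourself (a quick sanity check, assuming the venv lives in ./.venv):

```
which python3     # e.g. /usr/bin/python3
source .venv/bin/activate
which python3     # now points into .venv/bin
pip -V            # pip now runs out of the venv too
deactivate        # back to normal
```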
https://stackoverflow.com/questions/3287038/cron-and-virtualenv
I haven’t looked at the particulars of this application, but if you use the full path to the venv’s python binary to run it, it should use the environment it lives in without being activated. Activating just prepends that venv’s bin directory to your PATH for every command you give from then on in that shell. And as noted in there, make sure you specify /bin/bash as your shell in cronjobs, since cron uses sh by default and you might run into issues in that context.
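So a crontab entry for something like Decluttarr might look like this (every path and the script name are illustrative):

```
SHELL=/bin/bash
# Every 30 minutes, run the script with the venv's own python; no activation.
*/30 * * * * /home/me/decluttarr/.venv/bin/python /home/me/decluttarr/main.py >> /home/me/decluttarr/cron.log 2>&1
```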