
Docker configuration #261

Open
vorezal wants to merge 4 commits into mag37:main from vorezal:docker

Conversation

@vorezal (Contributor) commented Feb 2, 2026

I've seen a few requests to run Dockcheck within Docker. This could use a lot more testing than I was able to do with only my use cases, but here is what it should provide:

  • Dockerfile for image building. Anyone can use this to build one themselves, and we can also publish it on Docker Hub. I think it would make the most sense to do this under the mag37/dockcheck namespace, but I would be happy to do it under vorezal/dockcheck and maintain it if that's easier.
  • Example compose.yaml file which can be modified for individual use cases.
  • Manages schedules and command line arguments as environment variables passed to the container.
  • Documentation not yet written. I'd like feedback and testing to confirm it is fully functional, then I'll write up documentation.
  • I think the container can even update itself, but it could short circuit an execution and I still need to test that case.
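To make the env-var approach above concrete, a minimal sketch of what such a compose service could look like (the variable names, image tag, and schedule format here are illustrative assumptions, not the PR's actual interface):

```yaml
# Hypothetical sketch - actual variable names and image come from the
# PR's Dockerfile/compose.yaml, not from here.
services:
  dockcheck:
    image: mag37/dockcheck:latest   # or `build: .` for a local build
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - CRON_SCHEDULE=0 6 * * *     # when to run dockcheck.sh
      - DOCKCHECK_ARGS=-y -e stack  # CLI arguments passed through
```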

Limitations I can think of at present:

  • DSM notifications won't work when Dockcheck itself runs as a container on DSM.
  • The files snooze.list and updates_available.txt won't live beyond the container unless added to a volume. This may be fine as is, or may need additional documentation and examples.
  • Apprise is limited to the curl configuration unless we add the package to the container image, which would increase its size.
  • Rootless Docker installations are not supported, and this container runs as root to access docker.sock. I think this is ok since privilege escalation is pretty trivial for anything that can run arbitrary containers.
  • Compose project paths must exactly match the host system paths, and every compose file must be added to a volume to be discovered.

As always, open to thoughts and improvements!

@mag37 (Owner) commented Feb 3, 2026

Impressive work @vorezal - as always!

I'm quite overwhelmed with other things currently and can't find time to properly review this, so it'll most likely be a couple of weeks before this hits the main branch and a proper release. I'll try to squeeze in a read-through and hopefully run some tests so I can give some feedback!

Just a thought about the limitation

Compose projects path must exactly match host system path and every compose file must be added to a volume to be discovered.

Isn't it enough to bind the "root" of the docker projects directory to exactly the same path it has outside the container?
So if I keep all my docker projects in this structure:

/opt/projects/containers/
├── apache
│   ├── data
│   └── docker-compose.yml
├── homer
│   ├── data
│   └── compose.yml
└── whoogle
     └── docker-compose.yml

Wouldn't this suffice as a bind mount? - /opt/projects/containers:/opt/projects/containers:ro
Or are you required to individually add every compose?

And about namespace - I'm fine with either! If you'd like proper credits and host it under your name, or if you think it's more sensible to use mag37/dockcheck. Though might need to name it mag37/dockcheck-docker if so to keep it apart and/or change current label-setup as that might conflict.

@yoyoma2 (Contributor) commented Feb 3, 2026

I've also seen multiple requests for this so some users will be very happy. 👍 I have DSM and some rootless setups so it's not for me personally.

@vorezal (Contributor, Author) commented Feb 3, 2026

Yes, sorry, my statement about adding compose files was a little unclear. If you only have one root, you only need to add that folder as a volume. I was trying to cover the case where someone has compose files under multiple different folders for some reason. The point is that every compose file for containers you want to manage must be available inside the Dockcheck container at the same path as on the host file system, however many volumes that takes. You could mount every compose file individually, one primary root (if you only have one), or each root if you have more than one.

I don't think Docker Hub namespaces will conflict with GitHub project names, so we should be able to use mag37/dockcheck for the image without a problem. As for mag37 vs vorezal, I think it makes the most sense to keep it the same as the GitHub project for discovery and consistency; people will know it's official that way. The catch is that you would have to build and publish a new Docker image with each release, or bake the build/publish into the GitHub project itself using GitHub Actions and GHCR. Adding the automated build would be nice, but it only takes a few seconds to build and publish the image manually.

@mag37 (Owner) commented Feb 3, 2026

Ah then I get what you mean! So let's say I've got this layout (for some reason):

/opt/projects/containers/prod
├── container_one
│   ├── data
│   └── docker-compose.yml
└── container_two
     └── docker-compose.yml


/home/user_bob/containers/test
├── container_t1
│   ├── data
│   └── docker-compose.yml
└── container_t2
     └── docker-compose.yml

I'd need to bind both of them with:

    volumes:
      - /home/user_bob/containers/test:/home/user_bob/containers/test:ro
      - /opt/projects/containers/prod:/opt/projects/containers/prod:ro

Or, as you state, bind more granularly with individual containers etc.


namespace

I'd be fine running it under mag37/dockcheck as long as you think it's fair/sensible. I'll probably stay away from using GitHub Actions for now and rather build+publish manually on new releases.

(I'm considering if a migration to Codeberg is sensible in the future - though that's a later discussion/project when time allows)

@vorezal (Contributor, Author) commented Feb 3, 2026

Yes that volume structure looks correct! And sounds good on the docker image publishing. I think keeping mag37/dockcheck is quite sensible and the level of effort is pretty minimal.

@mag37 (Owner) commented Mar 13, 2026

Sorry I haven't got around to properly testing this. I wanted to merge it since it doesn't affect the rest of the project, so I ran through a quick test. Some points for now:

urls.list errors the build if missing.

Currently the way to "activate" the urls in the notification is simply to have the file present alongside the script. This needs to be taken into consideration: sure, we could always include urls.list, but that would also force it to be active. It's already in the build within the notify_templates/ directory, so maybe that can be dynamically used / toggled on somehow, either with an env var or something else.

Hostname will be the container's hostname

When notifications are sent, they'll report the container's hostname as "$FromHost".
I don't know the best way to solve this, but one way is adding hostname: XYZ to the compose file.

I'll need to do some more testing, but other than this it seemed to work great! I did some notifications to a file. I will need to test some more edge cases and also try letting the container update itself - sounds like that could break.

I'm considering these changes to the docker-compose example:

services:
  dockcheck:
    container_name: dockcheck
    hostname: dockerHostXYZ
    build: .

(I can do the changes btw, just want it to be open for discussion/feedback/brainstorming)

@vorezal (Contributor, Author) commented Mar 14, 2026

Ah! I completely forgot that urls.list would become required when added to the build. We can add that as an (optional) volume in the example docker compose file as well. Adding the container_name and hostname also makes sense. If we are taking on building and publishing the image though, I'm not sure we need the build directive. Unless we expect people to build a local image themselves. Which is also fine really, but I was writing it as if the user would pull from the Docker Hub repository.

I can take care of the urls.list fix and add some comments to the compose-example.yaml file. I'll hold off on the container_name, hostname, and build directives in case you want to talk about those a little more.

@vorezal (Contributor, Author) commented Mar 14, 2026

There we go. Really, anything you could normally override by copying files into the script root (your own copy of notify_v2.sh, for example) can be accomplished by mounting a volume for it from your local repo. We can probably call that out somewhere when we actually write a readme section for this, or even just as a comment in the compose-example.yaml file.
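A minimal sketch of that volume-override pattern (the in-container path /app and the image tag are assumptions for illustration; the real paths come from the PR's Dockerfile):

```yaml
# Hypothetical sketch - the container-side paths depend on where the
# Dockerfile actually copies the script files.
services:
  dockcheck:
    image: mag37/dockcheck:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # Optional overrides from a local clone of the repo:
      - ./urls.list:/app/urls.list:ro        # enables notification URLs
      - ./notify_v2.sh:/app/notify_v2.sh:ro  # custom notify script
```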

@mag37 (Owner) commented Mar 14, 2026

Ah! I completely forgot that urls.list would become required when added to the build. We can add that as a volume (optional) in the example docker compose file as well.

Wth, I was distracted when I wrote the comment 😅 totally forgot I actually tried adding it as a volume. That's clearly the best and most straightforward solution.

Adding the container_name and hostname also makes sense. If we are taking on building and publishing the image though, I'm not sure we need the build directive.

Agreed, we'll build it - but for now (until it's properly released as a built image) I think it's good to have it built locally for people to test.
And I think we should have both hostname and container_name in the compose example, with a comment for the hostname saying something like # the hostname to be used in notifications


Further on I think we should discuss some security concerns. (This is new ground for me).

We mount the docker socket to allow for docker control - maybe we should look into a docker socket proxy of some sort.

We also build the image with everything done as root within the container, which could be considered bad practice. Maybe we could look at adding a service user in the Dockerfile and using uid+gid env vars in the compose file.
I've not built any proper containers for public release myself, so I need to read up on this and do some testing.
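For reference, the socket-proxy idea usually means putting something like tecnativa/docker-socket-proxy between the container and docker.sock, enabling only the API sections the tool needs. A sketch (the dockcheck image name is an assumption; the proxy's env flags are its documented permission switches):

```yaml
# Hypothetical sketch - dockcheck would talk to the proxy via
# DOCKER_HOST instead of mounting docker.sock directly.
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      - CONTAINERS=1  # allow /containers endpoints
      - IMAGES=1      # allow /images endpoints (pulls)
      - POST=1        # updates need more than read-only calls
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  dockcheck:
    image: mag37/dockcheck:latest
    environment:
      - DOCKER_HOST=tcp://socket-proxy:2375
```

Note that because updating containers requires POST access to create/start containers, the proxy narrows the attack surface only somewhat; it cannot remove the fundamental capability the script needs.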

@vorezal (Contributor, Author) commented Mar 15, 2026

All good! Feel free to add those fields then, or I can get to it in a little bit.

On the security side of things, I completely understand your concerns. I did actually go down a little bit of a rabbit hole trying to figure out how to avoid running as root, but everything I could think to do seemed trivial to circumvent just due to the permissions required by the script. Even if we ran as a non-root user and set up a socket proxy, the script/container needs to be able to be able to pull new images and run arbitrary containers with volumes attached. If someone gained access to or compromised the container, all they would have to do to gain effective root access to the host would be to run a command through a container that does run as root and mount the root file system as a volume. I can't think of a way to counter that, at least on our side. We could certainly call out the security risk and recommend that users install Docker in rootless mode, but that would be a host management thing.

On the security side of things, I completely understand your concerns. I did actually go down a bit of a rabbit hole trying to figure out how to avoid running as root, but everything I could think to do seemed trivial to circumvent just due to the permissions required by the script. Even if we ran as a non-root user and set up a socket proxy, the script/container needs to be able to pull new images and run arbitrary containers with volumes attached. If someone compromised the container, all they would have to do to gain effective root access to the host would be to run a command through a container that does run as root and mounts the root file system as a volume. I can't think of a way to counter that, at least on our side. We could certainly call out the security risk and recommend that users install Docker in rootless mode, but that would be a host management thing.
Thinking about it, we could at least mitigate part of the risk by minimizing possible attack vectors. If someone compromised the code repo and a user had auto updates turned on, that would be a problem no matter what. But if we required users to build their own containers and didn't publish it ourselves, we would eliminate the possibility of someone taking over the image on Dockerhub. As I've written things now, that would require users to update their local repo and rebuild the container whenever there was an update, but we might be able to avoid that with a change in approach. Maybe we don't include any of the script files during the container build? We could have the compose file add the entire local project directory as a volume and then updates to the files from within the Docker container or the Host would affect both.

Thinking about it, we could at least mitigate part of the risk by minimizing possible attack vectors. If someone compromised the code repo and a user had auto updates turned on, that would be a problem no matter what. But if we required users to build their own containers and didn't publish the image ourselves, we would eliminate the possibility of someone taking over the image on Docker Hub. As I've written things now, that would require users to update their local repo and rebuild the container whenever there was an update, but we might be able to avoid that with a change in approach. Maybe we don't include any of the script files during the container build? We could have the compose file add the entire local project directory as a volume, so updates to the files from within the Docker container or the host would affect both.
But if we did the above, we would just be creating an image whose only purpose is to carry the libraries necessary to run the script and to run anything named dockcheck.sh on a schedule. I think I still like the idea of building the script files into the container and publishing it ourselves, so it is fully encapsulated and the version of the container matches the version of the script within it. I do think the chance of an image takeover is quite low, or at least about the same as GitHub getting taken over.

@mag37 (Owner) commented Mar 18, 2026

I can't think of a way to counter that, at least on our side. We could certainly call out the security risk and recommend that users install Docker in rootless mode, but that would be a host management thing.

I agree - from what I can think of it's hard to work around, as those levels of access are needed to do the docker handling.
Though I think we should write a warning/disclaimer of some sort - at least in the readme, but maybe also in the compose file.
Maybe if users only want to use notifications they can have stricter socket permissions. Sadly, mounting docker.sock as :ro only makes the user unable to manipulate the device file (which is probably sensible anyway, but might also lead to a false sense of security); it does not affect actual Docker API permissions. To adjust that, it has to run behind a socket proxy of some sort which blocks specific API calls.

I think I still like the idea of building the script files into the container and publishing it ourselves so it is fully encapsulated and the version of the container matches the version of the script within it. I do think the chance of an image takeover is quite low, or at least about the same as Github getting taken over.

I agree - the security risk is probably still the same either way and the alternative is not as clean as including everything needed within the container.

It's a hard thing to swallow, though, the security risk. But at the same time, running it "bare metal" currently is as much of a risk anyway, so it's probably just that we need to be clear that it's more or less host-level access, which people can mitigate by using rootless docker and other measures manually.


Development

Successfully merging this pull request may close these issues.

Idea: package the repo into an container

3 participants