Conversation
Impressive work @vorezal - as always! I'm quite overwhelmed with other things currently and can't find time to properly review and give feedback on this, so it'll most likely be a couple of weeks before this hits the main branch and a proper release. I'll try to squeeze in a read-through and hopefully run some tests so I can give some feedback! Just a thought about the limitation:
Isn't it enough to bind the "root" of the docker project - just exactly to what it is outside of the container?

/opt/projects/containers/
├── apache
│   ├── data
│   └── docker-compose.yml
├── homer
│   ├── data
│   └── compose.yml
└── whoogle
    └── docker-compose.yml

Wouldn't this suffice as a bind mount? And about the namespace - I'm fine with either! If you'd like proper credits and host it under your name, or if you think it's more sensible to use mag37/dockcheck. Though might need to name it
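If everything sits under one root like the tree above, a single read-only bind of that root (plus the Docker socket) should be enough. A minimal compose sketch - the image name and in-container paths are assumptions from the discussion, not a settled design:

```yaml
# Illustrative sketch: bind the compose root at the SAME path inside the
# container as on the host so docker compose commands resolve correctly.
services:
  dockcheck:
    image: mag37/dockcheck   # assumed image name, per the discussion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/projects/containers:/opt/projects/containers:ro
```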
I've also seen multiple requests for this so some users will be very happy. 👍 I have DSM and some rootless setups so it's not for me personally.
Yes, sorry, my statement about adding compose files was a little unclear. If you only have one root, you only need to add that folder as a volume. I was trying to cover the case where someone could have compose files under multiple different folders for some reason. What I was trying to say is that all compose files for containers you want to manage must be available to the Docker container at the same path as on the host file system, however many volumes you have to add to do it. You could include every single compose file individually, one primary root (if you only have one), or each root if you have more than one.

I don't think the Docker namespaces will conflict with Github project names, so we should be able to use mag37/dockcheck for the image without a problem. As for mag37 vs vorezal, I think it makes the most sense to keep it the same as the Github project for discovery and consistency. People will know it is official that way. The issue there is you would have to build and publish a new Docker image with each release, or bake the build/publish into the Github project itself using Github actions and GHCR. Adding the build would be nice, but it only takes a few seconds to build and publish the image manually.
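If the build/publish does get baked into the repo, a GitHub Actions workflow along these lines could do it. This is only a sketch under assumptions - the file name, trigger, and image tag scheme are illustrative, not the project's actual setup:

```yaml
# .github/workflows/publish.yml - illustrative sketch, not the project's
# actual workflow. Builds on each published release and pushes to GHCR.
name: publish-image
on:
  release:
    types: [published]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # needed to push to GHCR
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/mag37/dockcheck:${{ github.ref_name }}
```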
Ah, then I get what you mean! So let's say I've got this layout (for some reason):

/opt/projects/containers/prod
├── container_one
│   ├── data
│   └── docker-compose.yml
└── container_two
    └── docker-compose.yml

/home/user_bob/containers/test
├── container_t1
│   ├── data
│   └── docker-compose.yml
└── container_t2
    └── docker-compose.yml

I'd need to bind both of them with:

volumes:
  - /home/user_bob/containers/test:/home/user_bob/containers/test:ro
  - /opt/projects/containers/prod:/opt/projects/containers/prod:ro

Or, as you state, bind more granularly with individual containers etc.
I'd be fine running it under (I'm considering whether a migration to Codeberg is sensible in the future - though that's a later discussion/project for when time allows)
Yes, that volume structure looks correct! And sounds good on the docker image publishing. I think keeping mag37/dockcheck is quite sensible and the level of effort is pretty minimal.
Sorry I haven't got around to properly testing this… I wanted to merge it as it doesn't affect the rest of the project, so I ran through a quick test. Some points for now:
Ah! I completely forgot that urls.list would become required once added to the build. We can add that as an (optional) volume in the example docker compose file as well. Adding container_name and hostname also makes sense. If we are taking on building and publishing the image, though, I'm not sure we need the build directive - unless we expect people to build a local image themselves. Which is also fine, really, but I was writing it as if the user would pull from the Dockerhub repository. I can take care of the urls.list fix and add some comments to the compose-example.yaml file. I'll hold off on the container_name, hostname, and build directives in case you want to talk about those a little more.
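For the optional urls.list mount, the example compose file could carry a commented volume along these lines - the in-container path here is my assumption for illustration, not the project's actual layout:

```yaml
services:
  dockcheck:
    # ...
    volumes:
      # Optional: override the baked-in urls.list with your own copy.
      # (In-container destination path is illustrative.)
      - ./urls.list:/app/urls.list:ro
```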
There we go. Really, anything you could normally do to override functionality by copying files into the root (like your own copy of notify_v2.sh, for example) you could accomplish by making a volume for it from your local repo. We can probably call that out somewhere when we actually write a readme section for this. Or even just as a comment in the compose-example.yaml file.
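As a sketch of that override pattern - again with the in-container path assumed for illustration:

```yaml
services:
  dockcheck:
    # ...
    volumes:
      # Use your own notification script in place of the bundled one,
      # exactly as you would copy it into the root in a bare install.
      - ./notify_v2.sh:/app/notify_v2.sh:ro
```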
Wth, I was distracted when I wrote the comment 😅 I totally forgot I actually tried adding it as a volume. That's clearly the best and most straightforward solution.
Agreed, we'll build it - but for now (until it's properly released as a built image) I think it's good having it built locally for people to test. Further on I think we should discuss some security concerns (this is new ground for me). We mount the docker socket to allow for docker control - maybe we should look into a docker socket proxy of some sort. We also build the image with everything run as root within the container, which could be considered bad practice. Maybe we could look at adding a service user in the Dockerfile and using uid+gid env vars in the compose.
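To make the idea concrete, a hedged sketch of what a socket proxy plus a service user could look like in compose - the proxy image choice and the env var names are illustrative, not a decided design:

```yaml
# Illustrative sketch only. tecnativa/docker-socket-proxy is one commonly
# used proxy image; the PUID/PGID variables are hypothetical.
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1   # expose container endpoints
      IMAGES: 1       # expose image endpoints (needed for pulls)
      POST: 1         # allow write operations (dockcheck updates things)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  dockcheck:
    build: .
    user: "${PUID:-1000}:${PGID:-1000}"   # run as a non-root service user
    environment:
      DOCKER_HOST: tcp://socket-proxy:2375   # talk to the proxy, not the socket
```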
All good! Feel free to add those fields then, or I can get to it in a little bit.

On the security side of things, I completely understand your concerns. I did actually go down a bit of a rabbit hole trying to figure out how to avoid running as root, but everything I could think to do seemed trivial to circumvent just due to the permissions required by the script. Even if we ran as a non-root user and set up a socket proxy, the script/container needs to be able to pull new images and run arbitrary containers with volumes attached. If someone gained access to or compromised the container, all they would have to do to gain effective root access to the host would be to run a command through a container that does run as root and mount the root file system as a volume. I can't think of a way to counter that, at least on our side. We could certainly call out the security risk and recommend that users install Docker in rootless mode, but that would be a host management thing.

Thinking about it, we could at least mitigate part of the risk by minimizing possible attack vectors. If someone compromised the code repo and a user had auto updates turned on, that would be a problem no matter what. But if we required users to build their own containers and didn't publish the image ourselves, we would eliminate the possibility of someone taking over the image on Dockerhub. As I've written things now, that would require users to update their local repo and rebuild the container whenever there was an update, but we might be able to avoid that with a change in approach. Maybe we don't include any of the script files during the container build? We could have the compose file add the entire local project directory as a volume, and then updates to the files from within the Docker container or the host would affect both.

But if we did the above, we would just be creating an image whose only purpose is to have the libraries necessary to run the script and which is configured to run anything named dockcheck.sh on a schedule. I think I still like the idea of building the script files into the container and publishing it ourselves, so it is fully encapsulated and the version of the container matches the version of the script within it. I do think the chance of an image takeover is quite low, or at least about the same as Github getting taken over.
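The escalation path described above is worth making concrete. Anything that can talk to the Docker daemon can ask it to start a container like the following, which hands over the host's root filesystem - shown purely as an illustration of the risk, not something to run:

```yaml
# Illustrative only: why socket access amounts to host root.
# A compromised container with daemon access could launch this, getting a
# root shell chrooted into the host's filesystem.
services:
  breakout:
    image: alpine
    volumes:
      - /:/host   # mount the host's entire root filesystem
    command: chroot /host /bin/sh
```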
I agree - from what I can think of, it's hard to work around, as those levels of access are needed to do the docker handling.
I agree - the security risk is probably the same either way, and the alternative is not as clean as including everything needed within the container. It's a hard thing to swallow, though, the security risk. But at the same time - running it "bare metal" currently is just as much of a risk anyway, so it's probably just that we need to be clear that it's more or less host-level access, which people can mitigate by using rootless docker and other measures manually.
I've seen a few requests to run Dockcheck within Docker. This could use a lot more testing than I was able to do with only my use cases, but here is what it should provide:
Limitations I can think of at present:
As always, open to thoughts and improvements!