This repository uses both neurodocker and `tcy` to create a standardized Docker image for the Complex Systems in Psychiatry Lab. It includes most of the software that CSP members need (a conda environment with Python & R and a variety of useful libraries, SPM, FreeSurfer, etc.).
If you just want to use the Docker image, you can pull the latest version from Docker Hub by running:

```
docker pull johanneswiesner/csp:x.x.x
```

(where you replace `x.x.x` with the latest currently available version).
Neurodocker is able to create `.sif` files. However, you can also convert the Docker image to a `.sif` file on the fly by running:

```
singularity pull csp.sif docker://johanneswiesner/csp:x.x.x
```
1. Clone this repository to your machine using `git clone --recurse-submodules https://github.com/JohannesWiesner/csp_docker.git`. This will automatically include the `tcy` repository as a submodule.
2. Run `bash generate_dockerfile.sh` to create a Dockerfile using neurodocker. By default this will first run the `tcy` submodule to create an `environment.yml` file. This file is then used to create a conda environment within the Docker image containing the standard packages for the CSP members.
3. Build the image with `docker build -t xxx:xxx .`
4. Run the image as a container using `docker run -t -i --rm -p 8888:8888 xxx:xxx`
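The four steps above can be sketched as a single script. This is only a dry-run illustration: the `run` wrapper echoes each command instead of executing it (swap its body for `"$@"` to actually run them), and the image name and tag are placeholders, not the repository's conventions.

```shell
# Dry-run sketch of steps 1-4; IMAGE and TAG are placeholders.
set -eu

IMAGE="csp"   # placeholder image name
TAG="dev"     # placeholder tag

# Echo-only wrapper for illustration; replace `echo "+ $*"` with `"$@"` to execute.
run() { echo "+ $*"; }

run git clone --recurse-submodules https://github.com/JohannesWiesner/csp_docker.git
run cd csp_docker
run bash generate_dockerfile.sh
run docker build -t "$IMAGE:$TAG" .
run docker run -t -i --rm -p 8888:8888 "$IMAGE:$TAG"
```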
Because it can be tedious to execute steps 2-4 over and over while developing, and because creating conda environments can take quite a long time, we included two more options:
- You can provide a `.yml` file of your choice using the `-y` option (e.g. `bash generate_dockerfile.sh -y path/to/your/file.yml`). We included a `test.yml` file within this repository with a couple of packages that are mostly needed to run nipype analyses, serving as an MVP.
- You can run steps 2-4 in one go using the `-t` option (e.g. `bash generate_dockerfile.sh -t`). This will generate the Dockerfile, build the image, and run it as a container while also mounting the subfolders of the included `/testing` directory to it.
- This repository contains a bash script `download_test_data.sh` that you can use to download a functional and an anatomical image from openneuro.org using `openneuro-py`. Note that you must install `openneuro-py` beforehand by following its installation instructions.
- Make sure you run `generate_dockerfile.sh` and `docker build` on a regular basis (preferably after every single edit). This is tedious, but in our experience too many edits at once make it hard to debug what went wrong. The neurodocker image is still under heavy development, which means it is not guaranteed that every combination of arguments you pass to `docker run -i --rm repronim/neurodocker:x.x.x generate docker` will lead to a bug-free Dockerfile.
- The currently used base image `neurodebian:stretch-non-free` is quite old, and we would like to switch to a newer version of neurodebian. However, with newer base images many bugs occur, and software like SPM12 could not be installed using the neurodocker flags. (This is also tightly related to the first point, so make sure the image can be built and the container runs error-free when using a different base image.)
- Generally, there are two options to include neuroimaging software in the Docker image: you can either use neurodebian as a base image and install software with its APT package manager, or you can use the flags that neurodocker provides (e.g. `--spm12`, which in theory should let you use any base image you want). We are currently using a mixture of both options, as we were unable to install everything with neurodocker alone. The long-term goal is to switch to a newer (and slimmer) base image and install everything we need using only the neurodocker flags.
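As an illustration of the neurodocker route, the generate call can be assembled like this. Only `--spm12` and the `repronim/neurodocker` invocation are named in this README; `--base-image`, `--pkg-manager`, and the SPM revision are assumptions that may differ between neurodocker releases, so check the neurodocker documentation before using them.

```shell
# Hedged sketch: only --spm12 and the neurodocker invocation come from this
# README; --base-image, --pkg-manager and the SPM revision are assumptions.
GEN_CMD="docker run -i --rm repronim/neurodocker:x.x.x generate docker \
  --base-image neurodebian:stretch-non-free \
  --pkg-manager apt \
  --spm12 version=r7771"

# Redirect the generated Dockerfile into the build context:
echo "$GEN_CMD > Dockerfile"
```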
The `manually_created` directory contains (as the name suggests) Dockerfiles that were not created with neurodocker but were written by hand to bypass current issues with neurodocker.
- In case you are working at the CIMH and you get SSL errors, reach out to us via e-mail.
- In case you run into file-permission errors (e.g. you can't create files in your mounted directories), it makes sense to pass your host user and group IDs to the Docker container. This can be done by adding the `-u` option to `docker run`, e.g. `docker run ... -u $(id -u):$(id -g)`. Using this option makes sure that the user inside the container has the same user and group ID as the host user, so whatever directories or files you created outside the container can now be manipulated by the container user.
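A fuller invocation might look like the following sketch. The mount path (`/path/on/host:/data`) and the `x.x.x` tag are placeholders, and the command is only echoed here so you can inspect it before running it:

```shell
# Match the container user to the host user; /path/on/host and x.x.x are placeholders.
HOST_UID=$(id -u)
HOST_GID=$(id -g)
echo "docker run -t -i --rm -u ${HOST_UID}:${HOST_GID} \
  -v /path/on/host:/data johanneswiesner/csp:x.x.x"
```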