In the previous article, I discussed the preprocessing of diffusion data. In this article, I’ll demonstrate how to use bedpostx with Docker and Singularity!
Tools and files used in this article:
The version of FSL used in the Docker image is 5.0.10, and it includes the CUDA 8 version of bedpostx_gpu from here. PREPROCESSED.zip contains diffusion data preprocessed with my dtiQA Singularity/Docker image.
The input to the Docker/Singularity container is a folder containing the diffusion data, a binary mask, and a corresponding config file named bedpostx.conf. The options for the config file are explained below (a sketch of the expected folder layout follows the list):
- bedpostx_name – must be either “bedpostx” or “bedpostx_gpu”
- bedpostx_params – input parameters passed to bedpostx; typically the defaults are used and this is left blank
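For reference, here is roughly what the INPUTS folder should look like before running the container. The specific NIfTI filenames below are just placeholders (the container's exact expected names aren't spelled out here); the important part is that the diffusion data, the binary mask, and bedpostx.conf all live in the same folder:

INPUTS/
├── dwi.nii.gz       # preprocessed diffusion data (placeholder name)
├── dwi.bval         # b-values (placeholder name)
├── dwi.bvec         # b-vectors (placeholder name)
├── mask.nii.gz      # binary brain mask (placeholder name)
└── bedpostx.conf    # config file with the options above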
Getting bedpostx to run is relatively simple, but having a prepackaged container in Docker and Singularity (plus a bonus QA PDF) just makes life so much easier. To run the example, do the following:
wget http://justinblaber.org/downloads/articles/bedpostx/PREPROCESSED.zip
unzip PREPROCESSED.zip
rm PREPROCESSED.zip
mkdir OUTPUTS
mv PREPROCESSED INPUTS
vim INPUTS/bedpostx.conf
Put this inside bedpostx.conf:
bedpostx_name = bedpostx_gpu
bedpostx_params =
and then save it. Next, run the Docker container with:
sudo docker run --rm \
    --runtime=nvidia \
    -v $(pwd)/INPUTS/:/INPUTS/ \
    -v $(pwd)/OUTPUTS:/OUTPUTS/ \
    --user $(id -u):$(id -g) \
    justinblaber/bedpostx:1.0.0
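If you're not sure the NVIDIA Docker runtime is working, a quick sanity check is to run nvidia-smi inside a CUDA container first (the image tag below is just an example; any CUDA image you have will do):

# Should print your GPU(s); if this fails, sort out nvidia-docker before running bedpostx_gpu
sudo docker run --rm --runtime=nvidia nvidia/cuda:8.0-runtime nvidia-smi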
Or, if you don’t have sudo, run the singularity container with:
singularity run -e \
    --nv \
    -B INPUTS/:/INPUTS \
    -B OUTPUTS/:/OUTPUTS \
    shub://justinblaber/bedpostx_app:1.0.0
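If you'd rather download the image once (handy on cluster nodes without internet access), you can pull it first and then run the local image file. The pull syntax and the resulting filename depend on your Singularity version, so treat this as a sketch:

# Pull the image from Singularity Hub to a local file, then run it
singularity pull --name bedpostx_app.simg shub://justinblaber/bedpostx_app:1.0.0
singularity run -e --nv -B INPUTS/:/INPUTS -B OUTPUTS/:/OUTPUTS bedpostx_app.simg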
If you don’t have a GPU (although one is highly, highly recommended for bedpostx), you can set:
bedpostx_name = bedpostx
and then remove the --runtime=nvidia or --nv flag. If all goes well, bedpostx will finish and a QA PDF will be generated along with the outputs.
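Once it's done, you can poke around the OUTPUTS folder to confirm everything is there. The exact layout inside OUTPUTS depends on the container, but the standard FSL bedpostx outputs (merged samples, mean samples, and dyads files) plus the QA PDF should show up; something like the following is a quick way to check:

# List everything the container wrote
ls -R OUTPUTS/

# Look for the standard bedpostx result files (these names are FSL defaults):
#   mean_dsamples.nii.gz, merged_th1samples.nii.gz, merged_ph1samples.nii.gz,
#   merged_f1samples.nii.gz, dyads1.nii.gz, dyads2.nii.gz, ...
find OUTPUTS -name "dyads*.nii.gz" -o -name "merged_*samples.nii.gz"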
There you go! Sooooo simple.