In the previous article, I discussed the general processing flow for DWMRI data. In this article, I’ll go into more detail on the preprocessing (and Quality Assurance) of diffusion data, and we’ll do it with Docker and Singularity!
Tools and files used in this article:
A few notes before we start: the documentation for topup and eddy is very good, and anyone can invest the time to get them both working on their own. Instead, I’ve opted to simply dockerize/singularize the entire process into a pipeline so that you can get it up and running quickly. The pipeline is called “dtiQA” and uses topup/eddy to preprocess the data, then runs a QA using DTI-related statistics. The version of FSL used in the container is 5.0.10 with the 5.0.11 eddy patch.
scans.zip contains four DWMRI scans acquired in this order:
- 1000 b-value x 32 gradient directions (1000_32_1)
- 1000 b-value x 6 gradient directions (1000_6_rev)
- 2000 b-value x 60 gradient directions (2000_60)
- 1000 b-value x 32 gradient directions (1000_32_2)
All scans are hemispherical acquisitions. Scans 1, 3, and 4 have the same phase encoding (they are “blip up”). Scan 2 is basically just an additional b0, except the phase encoding has been reversed (it is “blip down”). The additional 6 gradient directions in scan 2 were only acquired because it made it easier to acquire the b0 on our scanner. This scan was acquired specifically so topup could be used for susceptibility distortion correction.
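For context, topup describes the phase-encoding setup in an acquisition parameters (--datain) text file with one row per b0 volume: the first three numbers give the phase-encoding direction and the fourth is the readout time. The dtiQA pipeline builds this file for you from the config, but a rough illustration for one “blip up” (A) b0 and one “blip down” (P) b0 might look like the following (the 0.05 readout time and the sign convention are placeholders, not values from this data):

0 1 0 0.05
0 -1 0 0.05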
The DWMRI preprocessing pipeline I use in the docker/singularity image is shown below:
The input to the docker/singularity container is a folder containing the diffusion data and a corresponding config file named “dtiQA.conf”. The options to the config file are explained below:
- dwmri_info_base_path – must be of the form “/INPUTS/<dwmri_basename>”
- dwmri_info_pe_dir – can either be “A”, “P”, “L”, “R”, “S”, or “I”
- dwmri_info_scan_descrip – (optional) can either be “scan” or “b0”
- dwmri_info_readout_time – (optional) readout time for the scan
- bet_params – parameters used for brain extraction tool from FSL
- ADC_fix – if set to “true”, this will remove the ADC volume in a Philips scan if it is present
- zero_bval_thresh – will threshold bvals lower than this number to zero
- prenormalize – performs “b0 intensity normalization” which will divide each scan by its mean b0 value before calling topup/eddy (see the sketch after this list)
- use_all_b0s_topup – if set to “false”, only the first b0 for each unique phase encoding and readout time is used; otherwise all b0s will be used in topup
- topup_params – parameters used in topup. Generally “--subsamp=1,1,1,1,1,1,1,1,1” is used, as this guarantees scans with odd dimensions can be processed
- eddy_name – name of eddy executable. Will either be “eddy_openmp” or “eddy_cuda8.0”
- use_b0s_eddy – if set to “false”, scans marked as “b0” will not be included in the call to eddy
- eddy_params – parameters used in eddy. Generally “--repol” is used, which enables outlier replacement
- normalize – if set to “true”, scans will get divided by their own b0 and then multiplied through by an average b0 after topup/eddy; this ensures a single b0 remains after normalization is done (in the case of multiple scans)
- sort_scans – if set to “true”, scans will get sorted by b-value
- OMP_NUM_THREADS – if “eddy_openmp” is used, then this will determine the number of threads to use
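To make the prenormalize (and the post-topup/eddy normalize) option more concrete, here is a rough sketch of what “b0 intensity normalization” amounts to if done by hand with standard FSL tools. The pipeline does this internally; the file names below are just placeholders:

fslroi 1000_32_1.nii.gz b0.nii.gz 0 1                            # assumes the first volume of the scan is a b0
mean_b0=$(fslstats b0.nii.gz -M)                                 # -M = mean of the nonzero voxels in the b0
fslmaths 1000_32_1.nii.gz -div $mean_b0 1000_32_1_norm.nii.gz    # divide the whole scan by that mean b0 value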
If one of the options doesn’t make sense right now, just follow the example below and things should become clearer.
Processing with Docker
First, install docker (and nvidia-docker if you have an nvidia GPU). Next, get the data, set up the inputs and outputs directories, and create the config file:
wget http://justinblaber.org/downloads/articles/dwmri_preprocessing/scans.zip
unzip scans.zip
rm scans.zip
mkdir OUTPUTS
mv scans INPUTS
vim INPUTS/dtiQA.conf
Put this inside dtiQA.conf:
dwmri_info_base_path = /INPUTS/1000_32_1
dwmri_info_pe_dir = A
dwmri_info_scan_descrip = scan
dwmri_info_base_path = /INPUTS/1000_6_rev
dwmri_info_pe_dir = P
dwmri_info_scan_descrip = b0
dwmri_info_base_path = /INPUTS/2000_60
dwmri_info_pe_dir = A
dwmri_info_scan_descrip = scan
dwmri_info_base_path = /INPUTS/1000_32_2
dwmri_info_pe_dir = A
dwmri_info_scan_descrip = scan
bet_params = -f 0.3 -R
ADC_fix = true
zero_bval_thresh = 50
prenormalize = true
use_all_b0s_topup = false
topup_params = --subsamp=1,1,1,1,1,1,1,1,1 --miter=10,10,10,10,10,20,20,30,30 --lambda=0.00033,0.000067,0.0000067,0.000001,0.00000033,0.000000033,0.0000000033,0.000000000033,0.00000000000067
eddy_name = eddy_openmp
use_b0s_eddy = false
eddy_params = --repol
normalize = true
sort_scans = true
OMP_NUM_THREADS = 1
and then save it. Next, run the docker with:
sudo docker run --rm \
    -v $(pwd)/INPUTS/:/INPUTS/ \
    -v $(pwd)/OUTPUTS:/OUTPUTS/ \
    --user $(id -u):$(id -g) \
    justinblaber/dtiqa:1.0.0
If this is the first time you’ve run this docker, it will pull it auto-magically from my docker hub. If things go right, you should see an output similar to:
Unable to find image 'justinblaber/dtiqa:1.0.0' locally
latest: Pulling from justinblaber/dtiqa
1be7f2b886e8: Pull complete
6fbc4a21b806: Pull complete
c71a6f8e1378: Pull complete
4be3072e5a37: Pull complete
06c6d2f59700: Pull complete
9b6c44f1e172: Pull complete
4497e01cef02: Pull complete
36381ea8469e: Pull complete
1230d9961bb1: Pull complete
a4e3f9ad1375: Pull complete
b018435570f0: Pull complete
Digest: sha256:90249c3d0cce6cb0a77633905f04ad9b63dce5fa2468f90fba8efc9f375952fe
Status: Downloaded newer image for justinblaber/dtiqa:1.0.0
INPUTS directory is correct!
Warning: /OUTPUTS already exists. Files in this directory may get modified in-place.
[12-Mar-2018 05:56:51] /extra/fsl_5_0_10_eddy_5_0_11/bin/fslorient -getorient /OUTPUTS/SCANS/1000_32_1.nii.gz
RADIOLOGICAL
[12-Mar-2018 05:56:51] /extra/fsl_5_0_10_eddy_5_0_11/bin/fslorient -getorient /OUTPUTS/SCANS/1000_6_rev.nii.gz
RADIOLOGICAL
[12-Mar-2018 05:56:51] /extra/fsl_5_0_10_eddy_5_0_11/bin/fslorient -getorient /OUTPUTS/SCANS/2000_60.nii.gz
RADIOLOGICAL
[12-Mar-2018 05:56:52] /extra/fsl_5_0_10_eddy_5_0_11/bin/fslorient -getorient /OUTPUTS/SCANS/1000_32_2.nii.gz
RADIOLOGICAL
...
If you want to run the CUDA version (highly recommended), simply change:
eddy_name = eddy_openmp
to
eddy_name = eddy_cuda8.0
And then run docker with nvidia-docker like so:
sudo docker run --rm \
    --runtime=nvidia \
    -v $(pwd)/INPUTS/:/INPUTS/ \
    -v $(pwd)/OUTPUTS:/OUTPUTS/ \
    --user $(id -u):$(id -g) \
    justinblaber/dtiqa:1.0.0
The only difference is the --runtime=nvidia flag. But, wait! What if you don’t have sudo access?
Processing with Singularity
Not a problem! Hopefully Singularity is already installed (if not, ask your administrator). Using the same inputs and outputs directory setup as above, simply use:
singularity run -e \
    -B INPUTS/:/INPUTS \
    -B OUTPUTS/:/OUTPUTS \
    shub://justinblaber/dtiQA_app:1.0.0
Or, if you have access to an NVIDIA GPU:
singularity run -e \
    --nv \
    -B INPUTS/:/INPUTS \
    -B OUTPUTS/:/OUTPUTS \
    shub://justinblaber/dtiQA_app:1.0.0
The Singularity image will pull automagically from Singularity Hub and then the processing will run. I suggest moving the pulled image to a shared location so it can be reused, as the image is ~2 GB and pulling it fresh every time can be time consuming.
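For example, one way to pull the image once and then run it from a shared location (Singularity 2.x syntax; the shared path below is hypothetical):

singularity pull --name dtiQA_app_1.0.0.simg shub://justinblaber/dtiQA_app:1.0.0
mv dtiQA_app_1.0.0.simg /shared/containers/
singularity run -e \
    -B INPUTS/:/INPUTS \
    -B OUTPUTS/:/OUTPUTS \
    /shared/containers/dtiQA_app_1.0.0.simg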
Anyway, if things go right you should get a quality assurance PDF located at OUTPUTS/PDF/dtiQA.pdf which looks like:
The PDF contains:
- Page 1 – The first three rows of images show the susceptibility correction that topup performed. The “preprocessed” rows (3rd onwards) contain preprocessed data and show the relative alignment of the b0 volume with the diffusion shells.
- Page 2 – shows a chi-squared plot which can be indicative of bad slices (i.e. high chi-squared values). A mostly red chi-squared plot is a “red flag” for bad data. The bval vs. signal intensity graph should be inversely correlated and is useful to determine if the b-values are matched with the proper scans.
- Page 3 – shows diffusion tensor glyphs, which make it easy to discern if there are any flips in the input b-vectors (a common problem, especially with FSL).
The preprocessed data is in the OUTPUTS/PREPROCESSED folder, which contains dwmri.bval, dwmri.bvec, and dwmri.nii.gz.
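As a quick sanity check on these outputs (assuming FSL is on your path), you can confirm that the number of volumes matches the number of b-values:

fslinfo OUTPUTS/PREPROCESSED/dwmri.nii.gz    # dim4 should equal the number of entries in dwmri.bval
cat OUTPUTS/PREPROCESSED/dwmri.bval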
Some questions:
How is this magically running on my computer!?
This is the magic of docker/singularity. It allows you to package configurations, scripts, and data and *should* run consistently on any system.
Why not just use Singularity all the time!?
In my honest opinion the technology is not as “mature” as Docker, so I feel more comfortable creating the Docker image first and then bootstrapping later to Singularity.
Why didn’t you use full sphere acquisitions?
Truthfully, we couldn’t get this to work on our scanner… But eddy actually works quite well with just hemispherical acquisitions.
Why didn’t you do an entire reverse phase-encoded acquisition and use the least squares reconstruction option?
Truthfully, again, we could not get this to work on our scanner. On top of that, it would double your scan time, so a business decision has to be made as to whether it’s worth it. Our choice of acquiring just a single additional reverse phase-encoded b0 only added about a minute of scan time and still allowed us to use both topup and eddy.
What’s next?
Do some processing! I recommend looking at mrtrix, camino, and FSL.
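As a rough example of a first step with MRtrix3 (assuming it’s installed; these commands are MRtrix3’s, not part of the dtiQA pipeline), fitting a tensor and computing FA from the preprocessed outputs might look like this:

mrconvert OUTPUTS/PREPROCESSED/dwmri.nii.gz \
    -fslgrad OUTPUTS/PREPROCESSED/dwmri.bvec OUTPUTS/PREPROCESSED/dwmri.bval \
    dwmri.mif                                    # convert to MRtrix format with the gradient table attached
dwi2tensor dwmri.mif tensor.mif                  # fit the diffusion tensor
tensor2metric tensor.mif -fa fa.mif              # compute fractional anisotropy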