DEPRECATED: CIVET 2.1.0 on Niagara
Please see https://github.com/CobraLab/documentation/wiki/Cortical-Thickness-&-Surface-Analysis-with-CIVET
CIVET is software for computing cortical surfaces and thickness maps from T1w images.
For the tutorial go to: Tutorial.
module load gcc/7.3.0 boost/1.66.0 hdf5/1.8.20 gnu-parallel gnuplot/5.2.2 CIVET/2.1.0 qbatch
CIVET has a required folder structure and naming convention for your input files. Make a folder in which you will run CIVET: cd into your project folder and make your input folder.
mkdir mycivet
cd mycivet
mkdir input
Your input folder is where you will place all your images. Populate your input folder with the images you wish to process.
CIVET requires the image files to follow a strict naming convention. The native MRI files must have the following filename structure:
$source_t1 = <prefix>_<id>_t1.mnc
$source_t2 = <prefix>_<id>_t2.mnc
$source_pd = <prefix>_<id>_pd.mnc
$source_mask = <prefix>_<id>_mask.mnc
For example, my prefix is ed, so my files will be ed_sub-001_t1.mnc, ed_sub-002_t1.mnc, etc.
If you ran the in-house minc-bpipe-library, your T1w images are the ones named *.cutneckapplyautocrop.mnc, as CIVET prefers images with the skull.
You can also add the brain mask if you already have one (not a necessity). In bpipe-outputs they are named *.cutneckapplyautocrop.beastmask.mnc. Place them in the same input folder.
In the end you must have your input/ folder populated with something like this: ed_sub-001_t1.mnc, ed_sub-001_mask.mnc, etc.
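Before launching anything, it can be worth sanity-checking the naming convention. A minimal sketch (using the example prefix ed from above; adjust to yours):

```shell
# List each T1 matching <prefix>_<id>_t1.mnc and note any missing companion
# mask (masks are optional, so this is informational only).
prefix=ed
for t1 in input/${prefix}_*_t1.mnc; do
  [ -e "$t1" ] || continue                 # glob matched nothing
  mask=${t1%_t1.mnc}_mask.mnc              # expected mask filename
  if [ ! -e "$mask" ]; then
    echo "note: no mask for $(basename "$t1")"
  fi
done
```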
- MAKE SURE YOU ARE RUNNING NATIVE SPACE IMAGES AND NOT THE NORMALISED ONES (or Mallar's wrath will descend upon you and your generation).
- Do not use long names; shorten them where you can.
- If you have to mass rename files, learn loops in bash, for example: for file in *.cutneckapplyautocrop.mnc; do mv ${file} ed_${file%%.*}_t1.mnc; done
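Before running a mass rename for real, it is safer to preview it. The same loop with echo in front of mv prints what would happen without touching anything (ed is the example prefix; remove the echo once the preview looks right):

```shell
# Dry run: print the mv commands instead of executing them.
# ${file%%.*} keeps everything before the first dot,
# e.g. sub-001.cutneckapplyautocrop.mnc -> sub-001
for file in *.cutneckapplyautocrop.mnc; do
  [ -e "$file" ] || continue   # glob matched nothing
  echo mv "$file" "ed_${file%%.*}_t1.mnc"
done
```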
First, choose the configuration you want for your subjects; you can find full details at http://www.bic.mni.mcgill.ca/ServicesSoftware/CIVET-2-1-0-Basic-Usage-of-CIVET. Once you have it, you can run CIVET on the command line. Running each subject by hand is possible, but would take very long.
Gabriel recommends the following configuration:
CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -model icbm152nl_09s -surface-atlas AAL -interp sinc -template 0.50 -id-file ids.txt -thickness tlaplace:tfs:tlink 0:30:20 -granular -run
New in CIVET 2.1 is the ability to run multiple thickness blurs and methods, as done above using -thickness tlaplace:tfs:tlink 0:30:20. This is useful for checking how your results hold up across different blur settings. The default recommendation is to use the tlaplace method at 30mm blur (https://www.ncbi.nlm.nih.gov/pubmed/15588607). Another new feature is the ability to compute high-resolution surfaces (by adding the -hi-res-surfaces option), although in the current version it is still recommended to use the lower-resolution surfaces.
To process a single subject (here, the subject with ID sub-101), you would run a line like this:
CIVET_Processing_Pipeline -sourcedir $MYPWD/mycivet/input -targetdir $MYPWD/mycivet/output -prefix ed -lsq12 -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -model icbm152nl_09s -surface-atlas AAL -interp sinc -template 0.50 -thickness tlaplace:tfs:tlink 0:30:20 -granular -run sub-101
However, running subjects one by one is a waste of time, and you really want to use the processing nodes.
Niagara nodes have 40 cores, so you can run 40 CIVET processes in each job by setting the qbatch chunksize to 40; setting -j to 40 then allows all 40 processes in a chunk to run in parallel.
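With that chunking, the number of jobs qbatch submits is just the ceiling of subjects divided by chunksize. A quick back-of-the-envelope check (the subject count below is hypothetical):

```shell
# Estimate how many Niagara jobs qbatch will submit:
# with --chunksize 40, each job runs up to 40 CIVET commands.
n_subjects=130   # hypothetical: replace with your own count, e.g. $(wc -l < ids)
chunksize=40
n_jobs=$(( (n_subjects + chunksize - 1) / chunksize ))   # ceiling division
echo "$n_jobs jobs for $n_subjects subjects"
```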
If you want to make a job file of all your subjects, you can use the script below, which loops through the contents of your input folder and creates a command for each input. Save it as civet_run or whatever name you want (e.g. nano civet_run) and copy the contents in. Make sure to modify the script as needed, particularly the prefix:
#!/bin/bash
CIVET_FOLDER=$PWD
PREFIX=ed #MODIFY: your input file name prefix
ID_LIST=${CIVET_FOLDER}/ids
# start with a fresh ID list so reruns do not append duplicates
> ${ID_LIST}
# generate a CIVET command (and record the ID) for each T1 in the input/ folder
for i in input/*t1.mnc; do
bn=$(basename ${i})
pf=${bn##${PREFIX}_}
id=${pf%%_t1.mnc}
cat <<EOF
CIVET_Processing_Pipeline \
-sourcedir ${CIVET_FOLDER}/input \
-targetdir ${CIVET_FOLDER}/output \
-prefix ${PREFIX} \
-lsq12 \
-mask-hippocampus \
-resample-surfaces \
-N3-distance 0 \
-correct-pve \
-model icbm152nl_09s \
-surface-atlas AAL \
-interp sinc \
-template 0.50 \
-thickness tlaplace:tfs:tlink 0:30:20 \
-granular \
-run \
$@ \
${id}
EOF
echo ${id} >> ${ID_LIST}
done > job_list
echo
echo "File job_list contains CIVET jobs. Run the following command to run it:"
echo "qbatch -N civet --chunksize 40 --walltime=24:00:00 job_list"
echo
Then, in your mycivet folder, make the script executable and run it:
chmod +x civet_run
./civet_run
This should create an ids file and a job_list file.
The ids file contains the ID of one subject per line.
The job_list file contains the full CIVET command for each subject.
Make sure to check both before submitting.
After that just run:
qbatch -N civet -c 40 --walltime=24:00:00 job_list
Check your progress using:
squeue -u $USER
- You can use the civet_run options as defaults, but I suggest you dig into the details that may improve your particular study.
- Remember to modify the prefix in the civet_run file.
- 24 hours is the minimum time to run it on Niagara.
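For a rough sense of progress beyond squeue, you can count subjects whose thickness tables have appeared. This is only a sketch: it assumes the per-subject output layout used by the extraction script further down (output/<id>/surfaces/*lobe_thickness*dat); adjust the glob if your layout differs.

```shell
# Count subjects whose surfaces/ folder already contains lobe thickness tables.
done_count=0
for d in output/*/surfaces; do
  [ -d "$d" ] || continue
  ls "$d"/*lobe_thickness*dat >/dev/null 2>&1 && done_count=$((done_count + 1))
done
echo "$done_count subjects have thickness outputs"
```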
The AAL atlas does not include areas 61 and 62 now (IPL.L and IPL.R), even though the README says it does.
To get the average cortex measures with CIVET 2.1.0, use this script instead:
#!/bin/bash
output=$1
if [[ -s $output ]]
then
echo "Output file $output exists"
exit 1
fi
echo "subject,PreCG.L_SA,SFGdor.L_SA,ORBsup.L_SA,MFG.L_SA,ORBmid.L_SA,IFGoperc.L_SA,IFGtriang.L_SA,ORBinf.L_SA,ROL.L_SA,SMA.L_SA,OLF.L_SA,SFGmed.L_SA,ORBsupmed.L_SA,REC.L_SA,INS.L_SA,ACG.L_SA,DCG.L_SA,PCG.L_SA,PHG.L_SA,CAL.L_SA,CUN.L_SA,LING.L_SA,SOG.L_SA,MOG.L_SA,IOG.L_SA,FFG.L_SA,PoCG.L_SA,SPG.L_SA,SMG.L_SA,ANG.L_SA,PCUN.L_SA,PCL.L_SA,HES.L_SA,STG.L_SA,TPOsup.L_SA,MTG.L_SA,TPOmid.L_SA,ITG.L_SA,Total.L_SA,PreCG.R_SA,SFGdor.R_SA,ORBsup.R_SA,MFG.R_SA,ORBmid.R_SA,IFGoperc.R_SA,IFGtriang.R_SA,ORBinf.R_SA,ROL.R_SA,SMA.R_SA,OLF.R_SA,SFGmed.R_SA,ORBsupmed.R_SA,REC.R_SA,INS.R_SA,ACG.R_SA,DCG.R_SA,PCG.R_SA,PHG.R_SA,CAL.R_SA,CUN.R_SA,LING.R_SA,SOG.R_SA,MOG.R_SA,IOG.R_SA,FFG.R_SA,PoCG.R_SA,SPG.R_SA,SMG.R_SA,ANG.R_SA,PCUN.R_SA,PCL.R_SA,HES.R_SA,STG.R_SA,TPOsup.R_SA,MTG.R_SA,TPOmid.R_SA,ITG.R_SA,Total.R_SA,PreCG.L_CT,SFGdor.L_CT,ORBsup.L_CT,MFG.L_CT,ORBmid.L_CT,IFGoperc.L_CT,IFGtriang.L_CT,ORBinf.L_CT,ROL.L_CT,SMA.L_CT,OLF.L_CT,SFGmed.L_CT,ORBsupmed.L_CT,REC.L_CT,INS.L_CT,ACG.L_CT,DCG.L_CT,PCG.L_CT,PHG.L_CT,CAL.L_CT,CUN.L_CT,LING.L_CT,SOG.L_CT,MOG.L_CT,IOG.L_CT,FFG.L_CT,PoCG.L_CT,SPG.L_CT,SMG.L_CT,ANG.L_CT,PCUN.L_CT,PCL.L_CT,HES.L_CT,STG.L_CT,TPOsup.L_CT,MTG.L_CT,TPOmid.L_CT,ITG.L_CT,Mean.L_CT,PreCG.R_CT,SFGdor.R_CT,ORBsup.R_CT,MFG.R_CT,ORBmid.R_CT,IFGoperc.R_CT,IFGtriang.R_CT,ORBinf.R_CT,ROL.R_CT,SMA.R_CT,OLF.R_CT,SFGmed.R_CT,ORBsupmed.R_CT,REC.R_CT,INS.R_CT,ACG.R_CT,DCG.R_CT,PCG.R_CT,PHG.R_CT,CAL.R_CT,CUN.R_CT,LING.R_CT,SOG.R_CT,MOG.R_CT,IOG.R_CT,FFG.R_CT,PoCG.R_CT,SPG.R_CT,SMG.R_CT,ANG.R_CT,PCUN.R_CT,PCL.R_CT,HES.R_CT,STG.R_CT,TPOsup.R_CT,MTG.R_CT,TPOmid.R_CT,ITG.R_CT,Mean.R_CT,PreCG.L_CV,SFGdor.L_CV,ORBsup.L_CV,MFG.L_CV,ORBmid.L_CV,IFGoperc.L_CV,IFGtriang.L_CV,ORBinf.L_CV,ROL.L_CV,SMA.L_CV,OLF.L_CV,SFGmed.L_CV,ORBsupmed.L_CV,REC.L_CV,INS.L_CV,ACG.L_CV,DCG.L_CV,PCG.L_CV,PHG.L_CV,CAL.L_CV,CUN.L_CV,LING.L_CV,SOG.L_CV,MOG.L_CV,IOG.L_CV,FFG.L_CV,PoCG.L_CV,SPG.L_CV,SMG.L_CV,ANG.L_CV,PCUN.L_CV,PCL.L_CV,HES.L_CV,STG.L_CV,TPOsup.L_CV,MTG.L_CV,TPOmid.L_CV,ITG.L_CV,Total.L_CV,PreCG.R_CV,SFGdor.R_CV,ORBsup.R_CV,MFG.R_CV,ORBmid.R_CV,IFGoperc.R_CV,IFGtriang.R_CV,ORBinf.R_CV,ROL.R_CV,SMA.R_CV,OLF.R_CV,SFGmed.R_CV,ORBsupmed.R_CV,REC.R_CV,INS.R_CV,ACG.R_CV,DCG.R_CV,PCG.R_CV,PHG.R_CV,CAL.R_CV,CUN.R_CV,LING.R_CV,SOG.R_CV,MOG.R_CV,IOG.R_CV,FFG.R_CV,PoCG.R_CV,SPG.R_CV,SMG.R_CV,ANG.R_CV,PCUN.R_CV,PCL.R_CV,HES.R_CV,STG.R_CV,TPOsup.R_CV,MTG.R_CV,TPOmid.R_CV,ITG.R_CV,Total.R_CV" > $output
# loop over subject folders in output/, skipping CIVET's References.txt and QC
for subject in $(ls -d output/* | grep -v References.txt | grep -v QC)
do
echo $(basename $subject)
echo -n $(basename $subject), >> $output
# pull the third column (the regional value) from each lobe_* .dat table
for datatype in $subject/surfaces/*lobe_areas*dat $subject/surfaces/*lobe_thickness*dat $subject/surfaces/*lobe_volumes*dat
do
tr -s ' ' < $datatype | tail -n +4 | cut -d " " -f3 | tr '\n' ','
done | sed 's/,$//g' >> $output
echo "" >> $output
done
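Once the CSV is written, a cheap consistency check is to confirm that every row has the same number of comma-separated fields as the header; a subject with a missing .dat file shows up as a short row. A sketch (measures.csv stands in for whatever output name you passed to the script above):

```shell
# Flag any CSV row whose field count differs from the header's.
csv=measures.csv   # hypothetical name: use the output file you generated
if [ -f "$csv" ]; then
  awk -F, 'NR==1 {n=NF; next}
           NF != n {printf "row %d: %d fields, expected %d\n", NR, NF, n; bad=1}
           END {exit bad}' "$csv" || echo "field counts differ -- check for missing .dat files"
fi
```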