Connectome Mapper 3

Latest released version: v3.2.0

This neuroimaging processing pipeline software is developed by the Connectomics Lab at the University Hospital of Lausanne (CHUV) for use within the SNF Sinergia Project 170873, as well as for open-source software distribution. Source code is hosted on GitHub.


Warning

THIS SOFTWARE IS FOR RESEARCH PURPOSES ONLY AND SHALL NOT BE USED FOR ANY CLINICAL USE. THIS SOFTWARE HAS NOT BEEN REVIEWED OR APPROVED BY THE FOOD AND DRUG ADMINISTRATION OR EQUIVALENT AUTHORITY, AND IS FOR NON-CLINICAL, IRB-APPROVED RESEARCH USE ONLY. IN NO EVENT SHALL DATA OR IMAGES GENERATED THROUGH THE USE OF THE SOFTWARE BE USED IN THE PROVISION OF PATIENT CARE.

About

_images/flowchart_bidsapp.png

Connectome Mapper 3 is an open-source Python3 image processing pipeline software, with a Graphical User Interface (GUI), that implements full anatomical, diffusion and resting-state MRI processing pipelines, from raw T1 / Diffusion / BOLD / preprocessed EEG data to multi-resolution connection matrices based on a new version of the Lausanne parcellation atlas, aka Lausanne2018.

Connectome Mapper 3 pipelines use a combination of tools from well-known software packages, including FSL, FreeSurfer, ANTs, MRtrix3, Dipy, AFNI, MNE, MNE-Connectivity, and PyCartool, empowered by the Nipype dataflow library. These pipelines are designed to provide the best software implementation for each stage of processing at the time of conception, and can be easily updated as newer and better neuroimaging software becomes available.

To enhance reproducibility and replicability, the processing pipelines with all dependencies are encapsulated in a Docker image container, which handles datasets organized following the BIDS standard and is distributed as a BIDS App @ Docker Hub. For execution on high-performance computing clusters, a Singularity image is also made freely available @ Sylabs Cloud.

To enhance accessibility and reduce the risk of misconfiguration, Connectome Mapper 3 comes with an interactive GUI, aka cmpbidsappmanager, which supports the user in all the steps involved in the configuration of the pipelines, the configuration and execution of the BIDS App, and the control of the output quality. In addition, to facilitate use by users not familiar with Docker and Singularity containers, Connectome Mapper 3 provides two Python commandline wrappers (connectomemapper3_docker and connectomemapper3_singularity) that will generate and run the appropriate command.

Since v3.1.0, CMP3 provides full support for EEG. Please check this notebook for a demonstration using the public VEPCON dataset.

Carbon footprint estimation of BIDS App run 🌍🌳✨

In support of the Organisation for Human Brain Mapping (OHBM) Sustainability and Environmental Action (OHBM-SEA) group, CMP3 enables you since v3.0.3 to be more aware of the adverse impact of your processing on the environment!

With the new --track_carbon_footprint option of the connectomemapper3_docker and connectomemapper3_singularity BIDS App Python wrappers, and the new "Track carbon footprint" option of the BIDS Interface Window of cmpbidsappmanager, you can estimate the carbon footprint incurred by the execution of the BIDS App. Estimations are conducted using codecarbon, which estimates the amount of carbon dioxide (CO2) produced by the computing resources used to execute the code and saves the results in <bids_dir>/code/emissions.csv.

Then, to visualize, interpret and track the evolution of the CO2 emissions, you can use the visualization tool of codecarbon, aka carbonboard, which takes the generated CSV as input:

$ carbonboard --filepath="<bids_dir>/code/emissions.csv" --port=xxxx
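Besides carbonboard, the emissions file is a plain CSV that can be inspected directly; a minimal sketch in Python, assuming codecarbon's default column names (`emissions` in kg CO2eq, `duration` in seconds), which may differ between codecarbon versions:

```python
import csv

def summarize_emissions(csv_path):
    """Sum the CO2 emitted (kg) and the runtime (s) recorded by codecarbon.

    The column names 'emissions' and 'duration' are assumed from
    codecarbon's default CSV output and may differ between versions.
    """
    total_kg = total_s = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            total_kg += float(row["emissions"])
            total_s += float(row["duration"])
    return total_kg, total_s
```

Calling `summarize_emissions("<bids_dir>/code/emissions.csv")` would return the accumulated emissions and runtime across all recorded runs.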

Please check https://ohbm-environment.org to learn more about OHBM-SEA!

License information

This software is distributed under the open-source license Modified BSD. See license for more details.

All trademarks referenced herein are property of their respective holders.

Acknowledgment

If you are using the Connectome Mapper 3 in your work, please acknowledge this software. See Citing for more details.

Help/Questions

If you run into any problems or have any questions, you can post to the CMTK-users group. Code bugs can be reported by creating a “New Issue” on the source code repository.

Eager to contribute?

Connectome Mapper 3 is open-source and all kind of contributions (bug reporting, documentation, code,…) are welcome! See Contributing to Connectome Mapper for more details.

Contents

Installation Instructions for Users

Warning

This software is for research purposes only and shall not be used for any clinical use. This software has not been reviewed or approved by the Food and Drug Administration or equivalent authority, and is for non-clinical, IRB-approved Research Use Only. In no event shall data or images generated through the use of the Software be used in the provision of patient care.

The Connectome Mapper 3 is composed of a Docker image, namely the Connectome Mapper 3 BIDS App, and a Python Graphical User Interface, namely the Connectome Mapper BIDS App Manager.

  • Installation instructions for the Connectome Mapper 3 BIDS App are found in Installation.

  • Installation instructions for the Connectome Mapper 3 BIDS App Manager are found in Installation.

Make sure that you have installed the following prerequisites.

Important

On Mac and Windows, if you want to track the carbon emissions incurred by the processing with the --track_carbon_footprint option flag, you will additionally need to install the Intel Power Gadget tool available here.

The Connectome Mapper 3 BIDSApp

Prerequisites

Note

Connectome Mapper 3 BIDSApp has been tested only on Ubuntu and MacOSX. In principle, it should also run on Windows, but it might require a few patches to make it work.

  • Manage Docker as a non-root user

    • Open a terminal

    • Create the docker group:

      $ sudo groupadd docker
      
    • Add the current user to the docker group:

      $ sudo usermod -aG docker $USER
      
    • Reboot

    • After reboot, test if docker is managed as non-root:

      $ docker run hello-world
      
Installation

Installation of the Connectome Mapper 3 has been facilitated through the distribution of a BIDSApp relying on the Docker software container technology.

  • Open a terminal

  • Download and extract the latest release (v3.2.0) of the BIDS App:

$ docker pull sebastientourbier/connectomemapper-bidsapp:v3.2.0

Note

This can take some time depending on your connection speed and your machine. The docker image of the BIDSApp has a compressed size of 6.28 GB on DockerHub and should take 17.6 GB of space on your machine after download and extraction.

  • To display all docker images available:

    $ docker images
    

    You should see that the docker image “connectomemapper-bidsapp” with tag “v3.2.0” is now available.

  • You are ready to use the Connectome Mapper 3 BIDS App from the terminal. See its commandline usage.

The Connectome Mapper 3 BIDSApp Manager (GUI)

Prerequisites
  • Download the Python 3 installer of miniconda3 corresponding to your 32/64bits MacOSX/Linux/Win system and install it following the instructions at https://conda.io/miniconda.html.

Installation

The installation of the Connectome Mapper 3, including cmpbidsappmanager, consists of the creation of a conda environment with all Python dependencies installed, and the installation of connectomemapper from the Python Package Index (PyPI), as follows:

  • Download the appropriate environment.yml / environment_macosx.yml.

    Important

    It seems there is no conda package for git-annex available on Mac. For your convenience, we created an additional conda/environment_macosx.yml miniconda3 environment where the line - git-annex=XXXXXXX has been removed. Git-annex should be installed on MacOSX using brew i.e. brew install git-annex. See https://git-annex.branchable.com/install/ for more details.

    Note that git-annex is only necessary if you wish to use BIDS datasets managed by Datalad (https://www.datalad.org/).

  • Open a terminal.

  • Create a miniconda3 environment where all python dependencies will be installed:

    $ conda env create -f /path/to/downloaded/conda/environment[_macosx].yml
    

    Note

    This can take some time depending on your connection speed and your machine. It should take around 2.8GB of space on your machine.

  • Activate the conda environment:

    $ source activate py39cmp-gui
    

or:

$ conda activate py39cmp-gui
  • Finally, install the latest released version of Connectome Mapper 3 from the Python Package Index (PyPI) using pip:

    (py39cmp-gui)$ pip install connectomemapper
    
  • You are ready to use the Connectome Mapper 3 (1) via its Graphical User Interface (GUI) aka CMP BIDS App Manager (See Graphical User Interface for the user guide), (2) via its python connectomemapper3_docker and connectomemapper3_singularity wrappers (See With the wrappers for commandline usage), or (3) by interacting directly with the Docker / Singularity Engine (See With the Docker / Singularity Engine for commandline usage).

In the future

If you wish to update Connectome Mapper 3 and the Connectome Mapper 3 BIDS App Manager, this can easily be done by running pip install connectomemapper==v3.X.Y.

Help/Questions

If you run into any problems or have any questions, you can post to the CMTK-users group. Code bugs can be reported by creating a “New Issue” on the source code repository.

Connectome Mapper 3 and the BIDS standard

Connectome Mapper 3 (CMP3) adopts the BIDS standard for data organization and is developed following the BIDS App standard with a Graphical User Interface (GUI).

This means CMP3 can be executed in two different ways:
  1. By running the BIDS App container image directly from the terminal or a script (See Commandline Usage section for more details).

  2. By using its Graphical User Interface, designed to facilitate the configuration of all pipeline stages, the configuration of the BIDS App run and its execution, and the inspection of the different stage outputs with appropriate viewers (See Graphical User Interface section for more details).

For more information about BIDS and BIDS-Apps, please consult the BIDS Website, the Online BIDS Specifications, and the BIDSApps Website. HeuDiConv can assist you in converting DICOM brain imaging data to BIDS. A nice tutorial can be found in the BIDS Tutorial Series: HeuDiConv Walkthrough.

Example BIDS dataset

For instance, a BIDS dataset with T1w, DWI and rs-fMRI images should adopt the following organization, naming, and file formats:

ds-example/

    README
    CHANGES
    participants.tsv
    dataset_description.json

    sub-01/
        anat/
            sub-01_T1w.nii.gz
            sub-01_T1w.json
        dwi/
            sub-01_dwi.nii.gz
            sub-01_dwi.json
            sub-01_dwi.bvec
            sub-01_dwi.bval
        func/
            sub-01_task-rest_bold.nii.gz
            sub-01_task-rest_bold.json

    ...

    sub-<subject_label>/
        anat/
            sub-<subject_label>_T1w.nii.gz
            sub-<subject_label>_T1w.json
        ...
    ...

For an example of a dataset containing T1w, DWI and preprocessed EEG data, please check the public VEPCON dataset.

Important

Before using any BIDS App, we highly recommend you to validate your BIDS structured dataset with the free, online BIDS Validator.
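The online validator remains the authoritative check; as a quick local sanity test, the minimal layout from the example dataset above can be verified with a short Python sketch (file names taken from that example; this is not a substitute for the BIDS Validator):

```python
from pathlib import Path

def quick_bids_check(root):
    """Rudimentary check of the minimal layout shown in the example dataset.

    Only verifies the top-level files and that each subject has at least
    one T1w image; use the official BIDS Validator for a full check.
    """
    root = Path(root)
    problems = []
    for fname in ("dataset_description.json", "participants.tsv"):
        if not (root / fname).is_file():
            problems.append(f"missing {fname}")
    subjects = sorted(root.glob("sub-*"))
    if not subjects:
        problems.append("no sub-<label> directories found")
    for sub in subjects:
        # '**' also matches zero directories, so both sub-01/anat and
        # sub-01/ses-01/anat layouts are covered
        if not list(sub.glob("**/anat/*_T1w.nii.gz")):
            problems.append(f"{sub.name}: no anat/*_T1w.nii.gz")
    return problems  # empty list means the minimal layout is present
```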

Commandline Usage

Connectome Mapper 3 (CMP3) is distributed as a BIDS App which adopts the BIDS standard for data organization and takes as principal input the path of the dataset that is to be processed. The input dataset is required to be in valid BIDS format, and it must include at least a T1w or MPRAGE structural image and a DWI and/or resting-state fMRI image and/or preprocessed EEG data. See Connectome Mapper 3 and the BIDS standard page that provides links for more information about BIDS and BIDS-Apps as well as an example for dataset organization and naming.

Warning

As of CMP3 v3.0.0-RC2, the BIDS App includes a tracking system that anonymously reports the run of the BIDS App. This feature has been introduced to support us in the task of fund finding for the development of CMP3 in the future. However, users are still free to opt-out using the --notrack commandline argument.

Important

Since v3.0.0-RC4, configuration files adopt the JSON format. If you have your configuration files still in the old INI format, do not worry, the CMP3 BIDS App will convert them to the new JSON format automatically for you.
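The automatic conversion can be pictured with a short sketch based on Python's standard configparser (illustrative only; the section and value mapping used by CMP3's own converter may differ):

```python
import configparser
import json

def ini_to_json(ini_text):
    """Convert an INI-style pipeline configuration to JSON.

    Illustrative sketch: values are kept as strings, whereas CMP3's
    built-in converter may apply proper typing.
    """
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    sections = {name: dict(parser.items(name)) for name in parser.sections()}
    return json.dumps(sections, indent=2)
```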

Commandline Arguments

The command to run CMP3 follows the BIDS-Apps definition standard with additional options for loading pipeline configuration files.

Entrypoint script of the BIDS-App Connectome Mapper version v3.2.0

usage: connectomemapper3 [-h]
                         [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
                         [--session_label SESSION_LABEL [SESSION_LABEL ...]]
                         [--anat_pipeline_config ANAT_PIPELINE_CONFIG]
                         [--dwi_pipeline_config DWI_PIPELINE_CONFIG]
                         [--func_pipeline_config FUNC_PIPELINE_CONFIG]
                         [--eeg_pipeline_config EEG_PIPELINE_CONFIG]
                         [--number_of_threads NUMBER_OF_THREADS]
                         [--number_of_participants_processed_in_parallel NUMBER_OF_PARTICIPANTS_PROCESSED_IN_PARALLEL]
                         [--mrtrix_random_seed MRTRIX_RANDOM_SEED]
                         [--ants_random_seed ANTS_RANDOM_SEED]
                         [--ants_number_of_threads ANTS_NUMBER_OF_THREADS]
                         [--fs_license FS_LICENSE] [--coverage] [--notrack]
                         [-v]
                         bids_dir output_dir {participant,group}
Positional Arguments
bids_dir

The directory with the input dataset formatted according to the BIDS standard.

output_dir

The directory where the output files should be stored. If you are running group level analysis this folder should be prepopulated with the results of the participant level analysis.

analysis_level

Possible choices: participant, group

Level of the analysis that will be performed. Multiple participant level analyses can be run independently (in parallel) using the same output_dir.

Named Arguments
--participant_label

The label(s) of the participant(s) that should be analyzed. The label corresponds to sub-<participant_label> from the BIDS spec (so it does not include “sub-“). If this parameter is not provided all subjects should be analyzed. Multiple participants can be specified with a space separated list.

--session_label
The label(s) of the session that should be analyzed.

The label corresponds to ses-<session_label> from the BIDS spec (so it does not include “ses-“). If this parameter is not provided all sessions should be analyzed. Multiple sessions can be specified with a space separated list.

--anat_pipeline_config

Configuration .json file for processing stages of the anatomical MRI processing pipeline

--dwi_pipeline_config

Configuration .json file for processing stages of the diffusion MRI processing pipeline

--func_pipeline_config

Configuration .json file for processing stages of the fMRI processing pipeline

--eeg_pipeline_config

Configuration .json file for processing stages of the eeg processing pipeline

--number_of_threads

The number of OpenMP threads used for multi-threading by Freesurfer (Set to [Number of available CPUs -1] by default).

--number_of_participants_processed_in_parallel

The number of subjects to be processed in parallel (One by default).

Default: 1

--mrtrix_random_seed

Fix MRtrix3 random number generator seed to the specified value

--ants_random_seed

Fix ANTS random number generator seed to the specified value

--ants_number_of_threads

Fix number of threads in ANTs. If not specified ANTs will use the same number as the number of OpenMP threads (see --number_of_threads option flag)

--fs_license

Freesurfer license.txt

--coverage

Run connectomemapper3 with coverage

Default: False

--notrack

Do not send event to Google analytics to report BIDS App execution, which is enabled by default.

Default: False

-v, --version

show program’s version number and exit
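As a side note, the default of --number_of_threads described above ([Number of available CPUs -1]) can be sketched as follows (clamping to a minimum of one thread is an assumption of this sketch, not stated in the option description):

```python
import os

def default_number_of_threads():
    """Number of available CPUs minus one, as described for
    --number_of_threads; clamped here to a minimum of one thread."""
    cpus = os.cpu_count() or 1
    return max(1, cpus - 1)
```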

Important

Before using any BIDS App, we highly recommend you to validate your BIDS structured dataset with the free, online BIDS Validator.

Participant Level Analysis

You can run CMP3 using the lightweight Docker or Singularity wrappers we created for convenience or you can interact directly with the Docker / Singularity Engine via the docker or singularity run command.

New in v3.0.2 ✨

You can now be aware of the adverse impact of your processing on the environment 🌍🌳!

With the new --track_carbon_footprint option of the connectomemapper3_docker and connectomemapper3_singularity BIDS App Python wrappers, you can use codecarbon to estimate the amount of carbon dioxide (CO2) produced by the computing resources used to execute the code, and save the results in <bids_dir>/code/emissions.csv.

Then, to visualize, interpret and track the evolution of the CO2 emissions incurred, you can use the visualization tool of codecarbon, aka carbonboard, which takes the generated .csv as input:

$ carbonboard --filepath="<bids_dir>/code/emissions.csv" --port=xxxx
With the wrappers

When you run connectomemapper3_docker, it will generate a Docker command line for you, print it out for reporting purposes, and then execute it without further action needed, e.g.:

$ connectomemapper3_docker \
     "/home/user/data/ds001" "/home/user/data/ds001/derivatives" \
     participant --participant_label 01 --session_label 01 \
     --fs_license "/usr/local/freesurfer/license.txt" \
     --config_dir "/home/user/data/ds001/code" \
     --track_carbon_footprint \
     --anat_pipeline_config "ref_anatomical_config.json" \
     (--dwi_pipeline_config "ref_diffusion_config.json" \)
     (--func_pipeline_config "ref_fMRI_config.json" \)
     (--eeg_pipeline_config "ref_EEG_config.json" \)
     (--number_of_participants_processed_in_parallel 1)

When you run connectomemapper3_singularity, it will generate a Singularity command line for you, print it out for reporting purposes, and then execute it without further action needed, e.g.:

$ connectomemapper3_singularity \
     "/home/user/data/ds001" "/home/user/data/ds001/derivatives" \
     participant --participant_label 01 --session_label 01 \
     --fs_license "/usr/local/freesurfer/license.txt" \
     --config_dir "/home/user/data/ds001/code" \
     --track_carbon_footprint \
     --anat_pipeline_config "ref_anatomical_config.json" \
     (--dwi_pipeline_config "ref_diffusion_config.json" \)
     (--func_pipeline_config "ref_fMRI_config.json" \)
     (--eeg_pipeline_config "ref_EEG_config.json" \)
     (--number_of_participants_processed_in_parallel 1)
With the Docker / Singularity Engine

If you need a finer control over the container execution, or you feel comfortable with the Docker or Singularity Engine, avoiding the extra software layer of the wrapper might be a good decision.

Docker

For instance, the previous call to the connectomemapper3_docker wrapper corresponds to:

$ docker run -t --rm -u $(id -u):$(id -g) \
        -v /home/user/data/ds001:/bids_dir \
        -v /home/user/data/ds001/derivatives:/output_dir \
        (-v /usr/local/freesurfer/license.txt:/bids_dir/code/license.txt) \
        sebastientourbier/connectomemapper-bidsapp:v3.2.0 \
        /bids_dir /output_dir participant --participant_label 01 (--session_label 01) \
        --anat_pipeline_config /bids_dir/code/ref_anatomical_config.json \
        (--dwi_pipeline_config /bids_dir/code/ref_diffusion_config.json \)
        (--func_pipeline_config /bids_dir/code/ref_fMRI_config.json \)
        (--eeg_pipeline_config /bids_dir/code/ref_EEG_config.json \)
        (--number_of_participants_processed_in_parallel 1)
Singularity

For instance, the previous call to the connectomemapper3_singularity wrapper corresponds to:

$ singularity run  --containall \
        --bind /home/user/data/ds001:/bids_dir \
        --bind /home/user/data/ds001/derivatives:/output_dir \
        --bind /usr/local/freesurfer/license.txt:/bids_dir/code/license.txt \
        library://connectomicslab/default/connectomemapper-bidsapp:v3.2.0 \
        /bids_dir /output_dir participant --participant_label 01 (--session_label 01) \
        --anat_pipeline_config /bids_dir/code/ref_anatomical_config.json \
        (--dwi_pipeline_config /bids_dir/code/ref_diffusion_config.json \)
        (--func_pipeline_config /bids_dir/code/ref_fMRI_config.json \)
        (--eeg_pipeline_config /bids_dir/code/ref_EEG_config.json \)
        (--number_of_participants_processed_in_parallel 1)

Note

The local directory of the input BIDS dataset (here: /home/user/data/ds001) and the output directory (here: /home/user/data/ds001/derivatives) used to process have to be mapped to the folders /bids_dir and /output_dir respectively using the docker -v / singularity --bind run option.

Important

The user is requested to use their own FreeSurfer license (available here). CMP3 expects by default to find a copy of the FreeSurfer license.txt in the code/ folder of the BIDS directory. However, one can also mount a FreeSurfer license.txt located anywhere on the computer (as in the example above, i.e. /usr/local/freesurfer/license.txt) to the code/ folder of the BIDS directory inside the container (i.e. /bids_dir/code/license.txt), using the docker -v / singularity --bind run option.

Note

At least a configuration file describing the processing stages of the anatomical pipeline should be provided. The diffusion and/or functional MRI pipelines are performed only if a corresponding configuration file is set. The generation of such configuration files, the execution of the BIDS App docker image, and output inspection are facilitated through the use of the Connectome Mapper GUI, i.e. cmpbidsappmanager (see dedicated documentation page).

Debugging

Logs are saved into <output dir>/cmp/sub-<participant_label>/sub-<participant_label>_log.txt.
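The log location follows a fixed pattern, so it can be located and tailed programmatically; a small Python sketch (the path template is taken from above; the helper names are ours):

```python
from pathlib import Path

def cmp_log_path(output_dir, participant_label):
    """Build the log path <output_dir>/cmp/sub-<label>/sub-<label>_log.txt."""
    sub = f"sub-{participant_label}"
    return Path(output_dir) / "cmp" / sub / f"{sub}_log.txt"

def tail_log(output_dir, participant_label, n=20):
    """Return the last n lines of the participant's log (empty if absent)."""
    path = cmp_log_path(output_dir, participant_label)
    if not path.is_file():
        return []
    return path.read_text().splitlines()[-n:]
```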

Already have Freesurfer outputs?

If you already have FreeSurfer v5 / v6 output data available, CMP3 can use it if it is properly placed in your output / derivatives directory. Since v3.0.3, CMP3 expects to find a freesurfer-7.1.1 directory, so make sure that your derivatives are organized as follows:

your_bids_dataset
  |______ derivatives/
  |         |______ freesurfer-7.1.1/
  |                   |______ sub-01[_ses-01]/
  |                   |           |______ label/
  |                   |           |______ mri/
  |                   |           |______ surf/
  |                   |           |______ ...
  |                   |______ ...
  |______ sub-01/
  |______ ...
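Whether existing derivatives match this layout can be checked with a few lines of Python before launching CMP3 (directory names taken from the tree above; the helper name is ours):

```python
from pathlib import Path

def check_freesurfer_layout(bids_root, subject_dir):
    """Return the FreeSurfer subdirectories (label/, mri/, surf/) that are
    missing under <bids_root>/derivatives/freesurfer-7.1.1/<subject_dir>."""
    fs_dir = Path(bids_root) / "derivatives" / "freesurfer-7.1.1" / subject_dir
    return [d for d in ("label", "mri", "surf") if not (fs_dir / d).is_dir()]
```

An empty list means the expected layout is in place for that subject.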

Support, bugs and new feature requests

If you need any support or have any questions, you can post to the CMTK-users group.

All bugs, concerns and enhancement requests for this software are managed on GitHub and can be submitted at https://github.com/connectomicslab/connectomemapper3/issues.

Not running on a local machine?

If you intend to run CMP3 on a remote system such as a high-performance computing cluster where Docker is not available due to root privileges, a Singularity image is also built for your convenience and available on Sylabs.io. Please see instructions at Running on a cluster (HPC).

Also, you will need to make your data available within that system first. Comprehensive solutions such as Datalad will handle data transfers with the appropriate settings and commands. Datalad also performs version control over your data. A tutorial is provided in Adopting Datalad for collaboration.

Graphical User Interface

Introduction

Connectome Mapper 3 comes with a Graphical User Interface, the Connectome Mapper BIDS App manager, designed to facilitate the configuration of all pipeline stages, the configuration of the BIDS App run and its execution, and the inspection of the different stage outputs with appropriate viewers.

_images/mainWindow.png

Main window of the Connectome Mapper BIDS App Manager

Start the Graphical User Interface

In a terminal, enter the following:

$ source activate py39cmp-gui

or:

$ conda activate py39cmp-gui

Please see Section Installation for more details about installation.

After activation of the conda environment, start the graphical user interface called Connectome Mapper 3 BIDS App Manager

$ cmpbidsappmanager

Note

The main window will be blank until you select the BIDS dataset.

Load a BIDS dataset

  • Click on File -> Load BIDS dataset... in the menu bar of the main window. Note that on Mac, Qt turns this menu bar into the native menu bar (top of the screen).

    The Connectome Mapper 3 BIDS App Manager gives you two different options:

    • Load BIDS dataset: load a BIDS dataset stored locally. You only have to select the root directory of your valid BIDS dataset (see note below)

    • Install Datalad BIDS dataset: create a new datalad/BIDS dataset locally from an existing local or remote datalad/BIDS dataset (this feature is under development). If an SSH connection is used, make sure to enable “install via ssh” and to provide all connection details (IP address / remote host name, remote user, remote password).

Note

The input dataset MUST be a valid BIDS structured dataset and must include at least one T1w or MPRAGE structural image. We highly recommend that you validate your dataset with the free, online BIDS Validator.

Pipeline stage configuration

Start the Configurator Window
  • From the main window, click on the left button to start the Configurator Window.

_images/mainWindow_configurator.png
  • The window of the Connectome Mapper BIDS App Configurator will appear, which will assist you not only in configuring the pipeline stages (each pipeline has a tab panel), but also in creating appropriate configuration files that can be used outside the Graphical User Interface.

_images/configurator_window.png

Configurator Window of the Connectome Mapper

The outputs depend on the chosen parameters.

Anatomical pipeline stages
_images/configurator_pipeline_anat.png

Panel for configuration of anatomical pipeline stages

Segmentation

Prior to Lausanne parcellation, CMP3 relies on Freesurfer for the segmentation of the different brain tissues and the reconstruction of the cortical surfaces. If you plan to use a custom parcellation, you will be required here to specify the pattern of the different existing segmentation files that follow the BIDS derivatives specification (See Custom segmentation).

Freesurfer

_images/segmentation_fs.png
  • Number of threads: used to specify how many threads are used for parallelization

  • Brain extraction tools: alternative brain extraction methods injected in Freesurfer

  • Freesurfer args: used to specify extra Freesurfer processing options

Note

If you already have FreeSurfer v5 / v6 / v7 output data available, CMP3 can use it if it is placed in your output / derivatives directory. Note, however, that since v3.0.3, CMP3 expects to find a freesurfer-7.1.1 directory, so make sure that your derivatives are organized as follows:

your_bids_dataset
  derivatives/
    freesurfer-7.1.1/
      sub-01[_ses-01]/
        label/
        mri/
        surf/
        ...
      ...
  sub-01/
  ...

Custom segmentation

_images/custom_segmentation.png

You can use any parcellation scheme of your choice as long as you provide a list of segmentation files organized following the BIDS derivatives specifications for segmentation files, provide appropriate .tsv sidecar files that describe the index/label/color mapping of the parcellation, and adopt the atlas-<label> entity to encode the name of the atlas, i.e.:

<derivatives_directory>/
  sub-<participant_label>/
    anat/
      <source_entities>_desc-brain_mask.nii.gz
      <source_entities>_label-GM[_desc-<label>]_dseg.nii.gz
      <source_entities>_label-WM[_desc-<label>]_dseg.nii.gz
      <source_entities>_label-CSF[_desc-<label>]_dseg.nii.gz
      <source_entities>_desc-aparcaseg_dseg.nii.gz

The desc BIDS entity can be used to target specific mask and segmentation files.

For instance, the configuration above allows us to reuse the outputs of the anatomical pipeline obtained with the previous v3.0.2 version of CMP3:

your_bids_dataset
  derivatives/
    cmp-v3.0.2/
      sub-01/
        anat/
          sub-01_desc-brain_mask.nii.gz
          sub-01_label-GM_dseg.nii.gz
          sub-01_label-WM_dseg.nii.gz
          sub-01_label-CSF_dseg.nii.gz
          sub-01_desc-aparcaseg_dseg.nii.gz
          ...
      ...
  sub-01/
  ...

Important

If you plan to use either Anatomically Constrained or Particle Filtering tractography, you will still need to have Freesurfer 7 output data available in your output / derivatives directory, as described in the note above in *Freesurfer*.

Parcellation

Generates the Native Freesurfer or Lausanne2018 parcellation from Freesurfer data. Alternatively, since v3.0.3 you can use your own custom parcellation files.

Parcellation scheme

  • NativeFreesurfer:

    _images/parcellation_fs.png

    Atlas composed of 83 regions from the Freesurfer aparc+aseg file

  • Lausanne2018:

    _images/parcellation_lausanne2018.png

    New version of the Lausanne parcellation atlas, corrected, and extended with 7 thalamic nuclei, 12 hippocampal subfields, and 4 brainstem sub-structures per hemisphere

    Since v3.0.0, Lausanne2018 parcellation has completely replaced the old Lausanne2008 parcellation.

    As it provides improvements in the way Lausanne parcellation labels are generated, any code and data related to Lausanne2008 have been removed. If you still wish to use this old parcellation scheme, please use v3.0.0-RC4, which is the last version that supports it.

  • Custom:

    _images/custom_parcellation.png

    You can use any parcellation scheme of your choice as long as it follows the BIDS derivatives specifications for segmentation files, provides appropriate .tsv sidecar files that describe the index/label/color mapping of the parcellation, and adopts the atlas-<label> entity to encode the name of the atlas, i.e.:

    <derivatives_directory>/
      sub-<participant_label>/
        anat/
          <source_entities>[_space-<space>]_atlas-<label>[_res-<label>]_dseg.nii.gz
          <source_entities>[_space-<space>]_atlas-<label>[_res-<label>]_dseg.tsv
    

    The res BIDS entity allows the differentiation between multiple scales of the same atlas.

    For instance, the above configuration allows us to reuse scale 1 of the Lausanne parcellation generated by the anatomical pipeline of the previous v3.0.2 version of CMP3:

    your_bids_dataset
      derivatives/
        cmp-v3.0.2/
          sub-01/
            anat/
              sub-01_atlas-L2018_res-scale1_dseg.nii.gz
              sub-01_atlas-L2018_res-scale1_dseg.tsv
              ...
          ...
      sub-01/
      ...
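The atlas-<label> and res-<label> entities in such filenames can be parsed with straightforward string splitting; a sketch (illustrative only; real BIDS tooling such as pybids does this more robustly):

```python
def parse_bids_entities(filename):
    """Split a BIDS filename like 'sub-01_atlas-L2018_res-scale1_dseg.nii.gz'
    into its key-value entities and its suffix."""
    stem = filename.split(".", 1)[0]          # drop .nii.gz / .tsv extension
    *pairs, suffix = stem.split("_")
    entities = dict(pair.split("-", 1) for pair in pairs)
    return entities, suffix
```

For the scale-1 file above this yields the entities sub=01, atlas=L2018, res=scale1 with suffix dseg.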
    
Diffusion pipeline stages
_images/configurator_pipeline_dwi.png

Panel for configuration of diffusion pipeline stages

Preprocessing

Preprocessing includes denoising, bias field correction, motion and eddy current correction for diffusion data.

_images/preprocessing.png

Denoising

Remove noise from diffusion images using (1) MRtrix3 MP-PCA method or (2) Dipy Non-Local Mean (NLM) denoising with Gaussian or Rician noise models

Bias field correction

Remove intensity inhomogeneities due to the magnetic resonance bias field using (1) MRtrix3 N4 bias field correction or (2) the bias field correction provided by FSL FAST

Motion correction

Aligns diffusion volumes to the b0 volume using FSL’s MCFLIRT

Eddy current correction

Corrects for eddy current distortions using FSL’s eddy_correct tool

Resampling

Resample morphological and diffusion data to F0 x F1 x F2 mm^3
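
As a concrete illustration of what resampling to a new voxel size implies for the image grid, here is a minimal sketch (not CMP3 code) that computes the new grid dimensions while preserving the physical field of view:

```python
import math

def resampled_shape(shape, voxel_size, new_voxel_size):
    """New grid dimensions when resampling to a different voxel size,
    keeping the same physical field of view."""
    return tuple(
        math.ceil(dim * old / new)
        for dim, old, new in zip(shape, voxel_size, new_voxel_size)
    )

# Resampling a 2 mm isotropic 96x96x60 volume to 1 mm isotropic doubles the grid:
resampled_shape((96, 96, 60), (2.0, 2.0, 2.0), (1.0, 1.0, 1.0))  # → (192, 192, 120)
```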

Registration

Registration mode

  • FSL (Linear):

    _images/registration_flirt.png

    Perform linear registration from T1 to diffusion b0 using FSL’s flirt

  • Non-linear (ANTS):

    _images/registration_ants.png

    Perform symmetric diffeomorphic SyN registration from T1 to diffusion b=0

Diffusion reconstruction and tractography

Perform diffusion reconstruction and local deterministic or probabilistic tractography based on several tools. ROI dilation is required to map brain connections when the tracking only operates in the white matter.

_images/diffusion_config_window.png

Diffusion stage configuration window

Reconstruction tool

Dipy: perform SHORE, tensor, CSD and MAP-MRI reconstruction

  • SHORE:

    _images/diffusion_dipy_shore.png

    SHORE performed only on DSI data

  • Tensor:

    _images/diffusion_dipy_tensor.png

    Tensor performed only on DTI data

  • CSD:

    _images/diffusion_dipy_csd.png

    CSD performed on DTI and multi-shell data

  • MAP_MRI:

    _images/diffusion_dipy_mapmri.png

    MAP-MRI performed only on multi-shell data

MRtrix: perform CSD reconstruction.

  • CSD:

    _images/diffusion_mrtrix_csd.png

    CSD performed on DTI and multi-shell data

Tractography tool

Dipy: perform deterministic and probabilistic fiber tracking as well as particle filtering tractography.

  • Deterministic tractography:

    _images/diffusion_dipy_deterministic.png

    Deterministic tractography (SD_STREAM) performed on single tensor or CSD reconstruction

  • Probabilistic tractography:

    _images/diffusion_dipy_probabilistic.png

    Probabilistic tractography (iFOD2) performed on SHORE or CSD reconstruction

  • Probabilistic particle filtering tractography (PFT):

    _images/diffusion_dipy_probabilistic_PFT.png

    Probabilistic PFT tracking performed on SHORE or CSD reconstruction. Seeding from the gray matter / white matter interface is possible.

Note

We noticed a shift of the center of tractograms obtained with Dipy. As a result, tractograms visualized in TrackVis are not correctly centered, despite the fact that the tractogram and the ROIs are properly aligned.

MRtrix: perform deterministic and probabilistic fiber tracking as well as anatomically-constrained tractography. ROI dilation is required to map brain connections when the tracking only operates in the white matter.

  • Deterministic tractography:

    _images/diffusion_mrtrix_deterministic.png

    Deterministic tractography (SD_STREAM) performed on single tensor or CSD reconstruction

  • Deterministic anatomically-constrained tractography (ACT):

    _images/diffusion_mrtrix_deterministic_ACT.png

    Deterministic ACT tracking performed on single tensor or CSD reconstruction. Seeding from the gray matter / white matter interface is possible. Backtrack option is not available in deterministic tracking.

  • Probabilistic tractography:

    _images/diffusion_mrtrix_probabilistic.png

    Probabilistic tractography (iFOD2) performed on SHORE or CSD reconstruction

  • Probabilistic anatomically-constrained tractography (ACT):

    _images/diffusion_mrtrix_probabilistic_ACT.png

    Probabilistic ACT tracking performed on SHORE or CSD reconstruction. Seeding from the gray matter / white matter interface is possible.

Connectome

Compute fiber length connectivity matrices. If DTI data is processed, an additional FA map is computed. In case of DSI, additional maps include GFA and RTOP. In case of MAP-MRI, additional maps are RTPP, RTOP, …

_images/connectome.png
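
Conceptually, a fiber-length connectivity matrix is accumulated from the ROI labels assigned to each streamline's endpoints. The toy sketch below (pure Python, not the CMP3 implementation) illustrates the idea:

```python
# Toy sketch: accumulate structural connectivity from streamlines whose
# endpoints have been assigned ROI labels.
def connectivity_from_streamlines(endpoint_labels, lengths, n_rois):
    """Return (streamline count matrix, mean fiber length matrix)."""
    counts = [[0] * n_rois for _ in range(n_rois)]
    total_len = [[0.0] * n_rois for _ in range(n_rois)]
    for (a, b), length in zip(endpoint_labels, lengths):
        i, j = min(a, b), max(a, b)  # undirected: store in the upper triangle
        counts[i][j] += 1
        total_len[i][j] += length
    mean_len = [
        [total_len[i][j] / counts[i][j] if counts[i][j] else 0.0
         for j in range(n_rois)]
        for i in range(n_rois)
    ]
    return counts, mean_len

counts, mean_len = connectivity_from_streamlines(
    endpoint_labels=[(0, 1), (1, 0), (1, 2)],
    lengths=[40.0, 60.0, 25.0],
    n_rois=3,
)
# counts[0][1] == 2, mean_len[0][1] == 50.0
```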

Output types

Select in which formats the connectivity matrices should be saved.

FMRI pipeline stages
_images/configurator_pipeline_fmri.png

Panel for configuration of fMRI pipeline stages

Preprocessing

Preprocessing refers to processing steps prior to registration. It includes discarding volumes, despiking, slice timing correction and motion correction for fMRI (BOLD) data.

_images/preprocessing_fmri.png

Discard n volumes

Discard n volumes from further analysis

Despiking

Perform despiking of the BOLD signal using AFNI.

Slice timing and Repetition time

Perform slice timing correction using FSL’s slicetimer.

Motion correction

Align BOLD volumes to the mean BOLD volume using FSL’s MCFLIRT.

Registration

Registration mode

  • FSL (Linear):

    _images/registration_flirt_fmri.png

    Perform linear registration from T1 to mean BOLD using FSL’s flirt.

  • BBregister (FS)

    _images/registration_fs_fmri.png

    Perform linear registration using Freesurfer BBregister tool from T1 to mean BOLD via T2.

    Warning

    development in progress

fMRI processing

Performs detrending, nuisance regression, and bandpass filtering of the BOLD time-series.

Detrending

_images/detrending.png

Detrending of BOLD signal using:

  1. linear trend removal algorithm provided by the scipy library

  2. quadratic trend removal algorithm provided by the obspy library
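
The linear trend removal in option 1 boils down to an ordinary least-squares line fit subtracted from the signal. A standard-library-only sketch (CMP3 itself relies on scipy for this step):

```python
# Minimal illustration of linear trend removal: fit a line to the signal by
# least squares and subtract it.
def detrend_linear(x):
    n = len(x)
    t_mean = (n - 1) / 2.0
    x_mean = sum(x) / n
    num = sum((ti - t_mean) * (xi - x_mean) for ti, xi in enumerate(x))
    den = sum((ti - t_mean) ** 2 for ti in range(n))
    slope = num / den
    intercept = x_mean - slope * t_mean
    return [xi - (intercept + slope * ti) for ti, xi in enumerate(x)]

# A pure linear ramp detrends to (numerically) zero:
residual = detrend_linear([1.0, 2.0, 3.0, 4.0, 5.0])
```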

Nuisance regression

_images/nuisance.png

A number of options for removing nuisance signals is provided. They consist of:

  1. Global signal regression

  2. CSF regression

  3. WM regression

  4. Motion parameters regression
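
Each of these options amounts to regressing the nuisance signal out of every voxel time-series and keeping the residual. A minimal single-regressor sketch (CMP3 itself handles several regressors at once):

```python
# Sketch of nuisance regression: remove the component of a time-series
# explained by a single nuisance regressor (e.g. the global signal) via
# ordinary least squares, returning the demeaned residual.
def regress_out(signal, regressor):
    n = len(signal)
    r_mean = sum(regressor) / n
    s_mean = sum(signal) / n
    cov = sum((r - r_mean) * (s - s_mean) for r, s in zip(regressor, signal))
    var = sum((r - r_mean) ** 2 for r in regressor)
    beta = cov / var
    return [s - s_mean - beta * (r - r_mean) for r, s in zip(regressor, signal)]

clean = regress_out(signal=[2.0, 4.0, 6.0, 8.0], regressor=[1.0, 2.0, 3.0, 4.0])
# the signal is exactly 2x the regressor, so the residual is all zeros
```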

Bandpass filtering

_images/bandpass.png

Perform bandpass filtering of the time-series

Connectome

Computes ROI-averaged time-series and the correlation connectivity matrices.

_images/connectome_fmri.png
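
The functional connectivity matrix is the matrix of Pearson correlations between all pairs of ROI-averaged time-series. A toy sketch (not the CMP3 implementation):

```python
import math

# Toy sketch: Pearson correlation FC matrix from ROI-averaged time-series
# (one list of samples per ROI).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fc_matrix(timeseries):
    n = len(timeseries)
    return [[pearson(timeseries[i], timeseries[j]) for j in range(n)]
            for i in range(n)]

fc = fc_matrix([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 2.0, 1.0]])
# fc[0][1] == 1.0 (perfectly correlated), fc[0][2] == -1.0 (anti-correlated)
```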

Output types

Select in which formats the connectivity matrices should be saved.

EEG pipeline stages
_images/configurator_pipeline_eeg.png

Panel for configuration of EEG pipeline stages

EEG Preprocessing

EEG Preprocessing refers to steps that load, crop, and save preprocessed EEG epochs data of a given task in fif format, the harmonized format used further down the pipeline.

EEG data can be provided as:

  1. A mne.Epochs object already saved in fif format:

    _images/eeg_preproc_fif.png
  2. A set of the following files and parameters:

    _images/eeg_preproc_set.png
    • Preprocessed EEG recording: store the Epochs * Electrodes dipole time-series in EEGLAB .set format

    • Recording events file in BIDS *_events.tsv format: describe timing and other properties of events recorded during the task

    • Electrodes file in BIDS *_electrodes.tsv format or in Cartool *.xyz format: store the electrode coordinates

    • Epochs time window: relative start and end time to crop the epochs
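
Cropping an epoch to a relative time window amounts to selecting a sample range from the epoch array. A small sketch assuming a known sampling frequency (the actual cropping is performed by mne.Epochs.crop on the loaded data):

```python
# Sketch: map an epochs time window (in seconds, relative to the epoch
# start time) to a [start, stop) sample index range.
def crop_indices(epoch_tmin, tmin, tmax, sfreq):
    """Sample index range covering [tmin, tmax] seconds."""
    start = int(round((tmin - epoch_tmin) * sfreq))
    stop = int(round((tmax - epoch_tmin) * sfreq)) + 1
    return start, stop

# Epoch recorded from -0.2 s at 250 Hz, cropped to [0, 0.5] s:
crop_indices(epoch_tmin=-0.2, tmin=0.0, tmax=0.5, sfreq=250.0)  # → (50, 176)
```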

EEG Source Imaging

EEG Source Imaging refers to all the steps necessary to obtain the inverse solutions and extract ROI time-series for a given parcellation scheme.

  • Structural parcellation: specify the cmp derivatives directory, the parcellation scheme, and the scale (for Lausanne 2018) to retrieve the parcellation files

    _images/eeg_esi_parcellation.png
  • Tool: CMP3 can either leverage MNE to compute the inverse solutions or take inverse solutions already pre-computed with Cartool as input.

    _images/eeg_esi_tool.png
    • MNE

      If MNE is selected, all steps necessary to reconstruct the inverse solutions are performed by leveraging MNE. In this case, the following files and parameters need to be provided:

      _images/eeg_esi_mne.png
      • MNE ESI method: Method to compute the inverse solutions

      • MNE ESI method SNR: SNR level used to regularize the inverse solutions

      • MNE electrode transform: Additional transform in MNE trans.fif format to be applied to electrode coordinates when Apply electrode transform is enabled

    • Cartool

      If Cartool is selected, the following files (generated by this tool) and parameters need to be provided:

      _images/eeg_esi_cartool.png
      • Source space file: *.spi text file with 3D-coordinates (x, y and z-dimension) with possible solution points necessary to obtain the sources or generators of ERPs

      • Inverse solution file: *.is binary file that includes number of electrodes and solution points

      • Cartool esi method: Method used to compute the inverse solutions (Cartool esi method)

      • Cartool esi lamb: Regularization level of inverse solutions

      • SVD for ROI time-courses extraction: Start and end TOI parameters for the SVD algorithm that extract single ROI time-series from dipoles.

EEG Connectome

Computes frequency- and time-frequency-domain connectivity matrices with MNE Spectral Connectivity.

_images/connectome_eeg.png

Output types

Select in which formats the connectivity matrices should be saved.

Save the configuration files

You can save the pipeline stage configuration files in two different ways:

  1. You can save all configuration files at once by clicking on Save All Pipeline Configuration Files. This automatically saves the configuration files of the anatomical / diffusion / fMRI / EEG pipelines to <bids_dataset>/code/ref_anatomical_config.ini / <bids_dataset>/code/ref_diffusion_config.ini / <bids_dataset>/code/ref_fMRI_config.ini / <bids_dataset>/code/ref_EEG_config.ini, respectively.

  2. You can save individually each of the pipeline configuration files and edit its filename in the File menu (File -> Save anatomical/diffusion/fMRI/EEG configuration file as…)
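
Since the saved configuration files are plain INI files, they can also be inspected or tweaked programmatically, e.g. with Python's configparser. The section and option names below are illustrative only, not the exact keys CMP3 writes:

```python
import configparser

# The section and option names here are illustrative, NOT the exact keys
# written by CMP3; adapt them after inspecting a generated file.
example = """
[Global]
process_name = anatomical_pipeline
subjects = ['sub-01']

[segmentation_stage]
seg_tool = Freesurfer
"""

config = configparser.ConfigParser()
config.read_string(example)
seg_tool = config["segmentation_stage"]["seg_tool"]  # → "Freesurfer"
```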

Nipype

Connectome Mapper relies on Nipype. All intermediate steps for the processing are saved in the corresponding <bids_dataset/derivatives>/nipype/sub-<participant_label>/<pipeline_name>/<stage_name> stage folder (See Nipype workflow outputs for more details).

Run the BIDS App

Start the Connectome Mapper BIDS App GUI
  • From the main window, click on the middle button to start the Connectome Mapper BIDS App GUI.

_images/mainWindow_bidsapp.png
  • The window of the Connectome Mapper BIDS App GUI will appear, which will help you in setting up and launching the BIDS App run.

_images/bidsapp_window.png

Window of the Connectome Mapper BIDS App GUI

Run configuration
  • Select the output directory for data derivatives

    _images/bidsapp_select_output.png
  • Select the subject labels to be processed

    _images/bidsapp_select_subject.png
  • Tune the number of subjects to be processed in parallel

    _images/bidsapp_subject_parallelization.png
  • Tune the advanced execution settings for each subject process. This includes finer control over the number of threads used by each process as well as over the seed value of the ANTs and MRtrix random number generators.

    _images/bidsapp_execution_settings.png

    Important

    Make sure the number of threads multiplied by the number of subjects processed in parallel does not exceed the number of CPUs available on your system.

  • Check/Uncheck the pipelines to be performed

    _images/bidsapp_pipeline_check.png

    Note

    The list of pipelines might vary as it is automatically updated based on the availability of raw diffusion MRI, resting-state fMRI, and preprocessed EEG data.

  • Specify your Freesurfer license

    _images/bidsapp_fslicense.png

    Note

    Your Freesurfer license will be copied to your dataset directory as <bids_dataset>/code/license.txt which will be mounted inside the BIDS App container image.

  • When the run is set up, you can click on the Check settings button.

    _images/bidsapp_checksettings.png
  • If the setup is complete and valid, this will enable the Run BIDS App button.

    _images/bidsapp_checksettings2.png

You are ready to launch the BIDS App run!
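
Before launching, the CPU budget rule from the execution settings above can be checked with a couple of lines (a sketch, not CMP3 code):

```python
import os

# Quick sanity check for the rule stated above: threads per subject times
# subjects processed in parallel should not exceed the available CPUs.
def parallelism_fits(n_threads, n_parallel_subjects, n_cpus=None):
    n_cpus = n_cpus or os.cpu_count() or 1
    return n_threads * n_parallel_subjects <= n_cpus

parallelism_fits(n_threads=3, n_parallel_subjects=2, n_cpus=8)  # → True
parallelism_fits(n_threads=4, n_parallel_subjects=3, n_cpus=8)  # → False
```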

Execute the BIDS App
  • Click on the Run BIDS App button to execute the BIDS App

    _images/bidsapp_run.png
  • You can see the complete docker run command generated by the Connectome Mapper BIDS App GUI from the terminal output such as in this example

    Start BIDS App
    > FreeSurfer license copy skipped as it already exists (BIDS App Manager)
    > Datalad available: True
    ... BIDS App execution command: ['docker', 'run', '-it', '--rm', '-v', '/home/localadmin/Data/ds-demo:/bids_dir', '-v', '/home/localadmin/Data/ds-demo/derivatives:/output_dir', '-v', '/usr/local/freesurfer/license.txt:/bids_dir/code/license.txt', '-v', '/home/localadmin/Data/ds-demo/code/ref_anatomical_config.ini:/code/ref_anatomical_config.ini', '-v', '/home/localadmin/Data/ds-demo/code/ref_diffusion_config.ini:/code/ref_diffusion_config.ini', '-v', '/home/localadmin/Data/ds-demo/code/ref_fMRI_config.ini:/code/ref_fMRI_config.ini', '-u', '1000:1000', 'sebastientourbier/connectomemapper-bidsapp:v3.0.3', '/bids_dir', '/output_dir', 'participant', '--participant_label', '01', '--anat_pipeline_config', '/code/ref_anatomical_config.ini', '--dwi_pipeline_config', '/code/ref_diffusion_config.ini', '--func_pipeline_config', '/code/ref_fMRI_config.ini', '--fs_license', '/bids_dir/code/license.txt', '--number_of_participants_processed_in_parallel', '1', '--number_of_threads', '3', '--ants_number_of_threads', '3']
    > BIDS dataset: /bids_dir
    > Subjects to analyze : ['01']
    > Set $FS_LICENSE which points to FreeSurfer license location (BIDS App)
      ... $FS_LICENSE : /bids_dir/code/license.txt
      * Number of subjects to be processed in parallel set to 1 (Total of cores available: 11)
      * Number of parallel threads set to 10 (total of cores: 11)
      * OMP_NUM_THREADS set to 3 (total of cores: 11)
      * ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS set to 3
    Report execution to Google Analytics.
    Thanks to support us in the task of finding new funds for CMP3 development!
    > Sessions to analyze : ['ses-01']
    > Process subject sub-103818 session ses-01
    WARNING: rewriting config file /output_dir/cmp-v3.0.3/sub-01/ses-01/sub-01_ses-01_anatomical_config.ini
    ... Anatomical config created : /output_dir/cmp-v3.0.3/sub-01/ses-01/sub-01_ses-01_anatomical_config.ini
    WARNING: rewriting config file /output_dir/cmp-v3.0.3/sub-01/ses-01/sub-01_ses-01_diffusion_config.ini
    ... Diffusion config created : /output_dir/cmp-v3.0.3/sub-01/ses-01/sub-01_ses-01_diffusion_config.ini
    WARNING: rewriting config file /output_dir/cmp-v3.0.3/sub-01/ses-01/sub-01_ses-01_fMRI_config.ini
    ... Running pipelines :
            - Anatomical MRI (segmentation and parcellation)
            - Diffusion MRI (structural connectivity matrices)
    ... cmd : connectomemapper3 --bids_dir /bids_dir --output_dir /output_dir --participant_label sub-01 --session_label ses-01 --anat_pipeline_config /output_dir/cmp-v3.0.3/sub-01/ses-01/sub-01_ses-01_anatomical_config.ini --dwi_pipeline_config /output_dir/cmp-v3.0.3/sub-01/ses-01/sub-01_ses-01_diffusion_config.ini --number_of_threads 3
    

    Note

    Also, this can be helpful if you wish to design your own batch scripts that call the BIDS App with the correct syntax.
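
If you design such a batch script in Python, the command can be assembled as an argument list mirroring the GUI output above. Paths, the participant label, and the image tag below are placeholders to adapt to your setup:

```python
import subprocess

# Paths, the participant label, and the image tag are placeholders; the
# flags mirror those shown in the terminal output above.
bids_dir = "/home/user/Data/ds-example"
cmd = [
    "docker", "run", "-it", "--rm",
    "-v", f"{bids_dir}:/bids_dir",
    "-v", f"{bids_dir}/derivatives:/output_dir",
    "sebastientourbier/connectomemapper-bidsapp:v3.0.3",
    "/bids_dir", "/output_dir", "participant",
    "--participant_label", "01",
    "--anat_pipeline_config", "/code/ref_anatomical_config.ini",
    "--number_of_participants_processed_in_parallel", "1",
]
# subprocess.run(cmd, check=True)  # uncomment to actually launch the run
```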

Check progress

For each subject, the execution output of the pipelines is redirected to a log file, written as <bids_dataset/derivatives>/cmp-v3.X.Y/sub-<subject_label>_log.txt. Execution progress can be checked by means of these log files.

Check stages outputs

Start the Inspector Window
  • From the main window, click on the right button to start the Inspector Window.

_images/mainWindow_outputs.png
  • The Connectome Mapper 3 Inspector Window will appear, which will assist you in inspecting outputs of the different pipeline stages (each pipeline has a tab panel).

Anatomical pipeline stages
  • Click on the stage you wish to check the output(s):

    _images/outputcheck_pipeline_anat.png

    Panel for output inspection of anatomical pipeline stages

Segmentation
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_seg.png

Segmentation results

Surfaces extracted using Freesurfer.

_images/ex_segmentation1.png

T1 segmented using Freesurfer.

_images/ex_segmentation2.png
Parcellation
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_parc.png

Parcellation results

Cortical and subcortical parcellation are shown with Freeview.

_images/ex_parcellation2.png
Diffusion pipeline stages
  • Click on the stage you wish to check the output(s):

    _images/outputcheck_pipeline_dwi.png

    Panel for output inspection of diffusion pipeline stages

Preprocessing
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_prep.png
Registration
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_reg.png

Registration results

Registration of T1 to Diffusion space (b0). T1 in copper overlaid on the b0 image.

_images/ex_registration.png
Diffusion reconstruction and tractography
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_dwi.png

Tractography results

DSI Tractography results are displayed with TrackVis.

_images/ex_tractography1.png _images/ex_tractography2.png
Connectome
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_conn.png

Generated connection matrix

Displayed using a:

  1. matrix layout with pyplot

_images/ex_connectionmatrix.png
  2. circular layout with pyplot and MNE

_images/ex_connectioncircular.png
FMRI pipeline stages
  • Click on the stage you wish to check the output(s):

    _images/outputcheck_pipeline_fmri.png

    Panel for output inspection of fMRI pipeline stages

Preprocessing
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_prep_fmri.png
Registration
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_reg_fmri.png
fMRI processing
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_func.png

ROI averaged time-series

_images/ex_rsfMRI.png
Connectome
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_conn_fmri.png

Generated connection matrix

Displayed using a:

  1. matrix layout with pyplot

_images/ex_connectionmatrix_fmri.png
  2. circular layout with pyplot and MNE

_images/ex_connectioncircular_fmri.png
EEG pipeline stages
  • Click on the stage you wish to check the output(s):

    _images/outputcheck_pipeline_eeg.png

    Panel for output inspection of EEG pipeline stages

EEG Preprocessing
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_eeg_prep.png

Epochs * Electrodes time-series

Plot saved mne.Epochs object.

_images/ex_eeg_epo.png
EEG Source Imaging
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_eeg_esi.png

BEM surfaces

Surfaces of the boundary-element model used in the MNE ESI workflow.

_images/ex_bem.png

BEM surfaces with sources

Surfaces of the boundary-element model and sources used in the MNE ESI workflow.

_images/ex_bem_sources.png

Noise covariance

Noise covariance matrix and spectrum estimated by the MNE ESI workflow.

_images/ex_eeg_cov.png

ROI time-series

Carpet plot of extracted ROI time-series.

_images/ex_eeg_rtc.png
EEG Connectome
  • Select the desired output from the list and click on view:

    _images/outputcheck_stage_conn_eeg.png

Generated connection matrix

Displayed using a:

  1. matrix layout with pyplot

_images/ex_connectionmatrix_eeg.png
  2. circular layout with pyplot and MNE

_images/ex_connectioncircular_eeg.png

Outputs of Connectome Mapper 3

Processed, or derivative, data are written to <bids_dataset/derivatives>/.

BIDS derivatives entities

Entity           Description
---------------  -------------------------------------------------------------
sub-<label>      Distinguish different subjects
ses-<label>      Distinguish different acquisition sessions
task-<label>     Distinguish different experiment tasks
label-<label>    Describe the type of brain tissue segmented (for _probseg/_dseg)
atlas-<label>    Distinguish data derived from different types of parcellation atlases
res-<label>      Distinguish data derived from the different scales of the Lausanne2008 and Lausanne2018 parcellation atlases
space-DWI        Distinguish anatomical MRI derivatives in the target diffusion MRI space
model-<label>    Distinguish different diffusion signal models (DTI, CSD, SHORE, MAPMRI)

See Original BIDS Entities Appendix for more description.

Note

Connectome Mapper 3 introduced a new BIDS entity atlas-<atlas_label> (where <atlas_label>: Desikan/ L2018), that is used in combination with the res-<atlas_scale> (where <atlas_scale>: scale1 / scale2 / scale3 / scale4 / scale5) entity to distinguish data derived from different parcellation atlases and different scales.
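
For illustration, the expected parcellation filenames for a given atlas and its scales can be generated as follows (a sketch, not CMP3 code):

```python
# Illustration of the atlas-/res- naming convention described above,
# generating the expected parcellation filenames for every scale.
def parcellation_filenames(subject, atlas, scales=None):
    base = f"sub-{subject}_atlas-{atlas}"
    if not scales:
        return [f"{base}_dseg.nii.gz"]
    return [f"{base}_res-{scale}_dseg.nii.gz" for scale in scales]

parcellation_filenames("01", "Desikan")
# → ['sub-01_atlas-Desikan_dseg.nii.gz']
parcellation_filenames("01", "L2018", scales=["scale1", "scale2"])
# → ['sub-01_atlas-L2018_res-scale1_dseg.nii.gz',
#    'sub-01_atlas-L2018_res-scale2_dseg.nii.gz']
```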

Main Connectome Mapper Derivatives

Main outputs produced by Connectome Mapper 3 are written to cmp/sub-<subject_label>/. In this folder, the configuration file generated for each modality pipeline (i.e. anatomical/diffusion/fMRI/EEG) and used for processing each participant is saved as sub-<subject_label>_anatomical/diffusion/fMRI/EEG_config.json; it summarizes the pipeline workflow options and parameters used for processing. An execution log of the full workflow is saved as sub-<subject_label>_log.txt.

Anatomical derivatives
  • Anatomical derivatives in the individual T1w space are placed in each subject’s anat/ subfolder, including:

    • The original T1w image:

      • anat/sub-<subject_label>_desc-head_T1w.nii.gz

    • The masked T1w image with its corresponding brain mask:

      • anat/sub-<subject_label>_desc-brain_T1w.nii.gz

      • anat/sub-<subject_label>_desc-brain_mask.nii.gz

    • The segmentations of the white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) tissues:

      • anat/sub-<subject_label>_label-WM_dseg.nii.gz

      • anat/sub-<subject_label>_label-GM_dseg.nii.gz

      • anat/sub-<subject_label>_label-CSF_dseg.nii.gz

    • The five different brain parcellations:

      • anat/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_dseg.nii.gz

        where:

        • <atlas_label>: Desikan / L2018 is the parcellation scheme used

        • <scale_label>: scale1, scale2, scale3, scale4, scale5 corresponds to the parcellation scale if applicable

      with two tsv side-car files that follow the BIDS derivatives specifications, one describing the parcel label/index mapping (_dseg.tsv) and one reporting the volumetry of the different parcels (_stats.tsv), and two files used internally by CMP3, one describing the parcel labels in graphml format (_dseg.graphml) and one providing the color lookup table of the parcel labels in FreeSurfer format, which can be used directly in freeview (_FreeSurferColorLUT.txt):

      • anat/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_dseg.tsv

      • anat/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_stats.tsv

      • anat/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_dseg.graphml

      • anat/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_FreeSurferColorLUT.txt

  • Anatomical derivatives in the DWI space produced by the diffusion pipeline are placed in each subject’s anat/ subfolder, including:

    • The unmasked T1w image:

      • anat/sub-<subject_label>_space-DWI_desc-head_T1w.nii.gz

    • The masked T1w image with its corresponding brain mask:

      • anat/sub-<subject_label>_space-DWI_desc-brain_T1w.nii.gz

      • anat/sub-<subject_label>_space-DWI_desc-brain_mask.nii.gz

    • The segmentation of WM tissue used for tractography seeding:

      • anat/sub-<subject_label>_space-DWI_label-WM_dseg.nii.gz

    • The five different brain parcellation are saved as:

      • anat/sub-<subject_label>_space-DWI_atlas-<atlas_label>[_res-<scale_label>]_dseg.nii.gz

        where:

        • <atlas_label>: Desikan / L2018 is the parcellation scheme used

        • <scale_label>: scale1, scale2, scale3, scale4, scale5 corresponds to the parcellation scale if applicable

    • The 5TT image used for Anatomically Constrained Tractography (ACT):

      • anat/sub-<subject_label>_space-DWI_label-5TT_probseg.nii.gz

    • The partial volume maps for white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) used for Particle Filtering Tractography (PFT), generated from the 5TT image:

      • anat/sub-<subject_label>_space-DWI_label-WM_probseg.nii.gz

      • anat/sub-<subject_label>_space-DWI_label-GM_probseg.nii.gz

      • anat/sub-<subject_label>_space-DWI_label-CSF_probseg.nii.gz

    • The GM/WM interface used for ACT and PFT seeding:

      • anat/sub-<subject_label>_space-DWI_label-GMWMI_probseg.nii.gz

Diffusion derivatives

Diffusion derivatives in the individual DWI space are placed in each subject’s dwi/ subfolder, including:

  • The final preprocessed DWI image used to fit the diffusion model for tensor or fiber orientation distribution estimation:

    • dwi/sub-<subject_label>_desc-preproc_dwi.nii.gz

  • The brain mask used to mask the DWI image:

    • dwi/sub-<subject_label>_desc-brain_mask_resampled.nii.gz

  • The diffusion tensor (DTI) fit (if used for tractography):

    • dwi/sub-<subject_label>_desc-WLS_model-DTI_diffmodel.nii.gz

      with derived Fractional Anisotropy (FA) and Mean Diffusivity (MD) maps:

    • dwi/sub-<subject_label>_model-DTI_FA.nii.gz

    • dwi/sub-<subject_label>_model-DTI_MD.nii.gz

  • The Fiber Orientation Distribution (FOD) image from Constrained Spherical Deconvolution (CSD) fit (if performed):

    • dwi/sub-<subject_label>_model-CSD_diffmodel.nii.gz

  • The MAP-MRI fit for DSI and multi-shell DWI data (if performed):

    • dwi/sub-<subject_label>_model-MAPMRI_diffmodel.nii.gz

    with derived Generalized Fractional Anisotropy (GFA), Mean Squared Displacement (MSD), Return-to-Origin Probability (RTOP) and Return-to-Plane Probability (RTPP) maps:

    • dwi/sub-<subject_label>_model-MAPMRI_GFA.nii.gz

    • dwi/sub-<subject_label>_model-MAPMRI_MSD.nii.gz

    • dwi/sub-<subject_label>_model-MAPMRI_RTOP.nii.gz

    • dwi/sub-<subject_label>_model-MAPMRI_RTPP.nii.gz

  • The SHORE fit for DSI data:

    • dwi/sub-<subject_label>_model-SHORE_diffmodel.nii.gz

    with derived Generalized Fractional Anisotropy (GFA), Mean Squared Displacement (MSD), and Return-to-Origin Probability (RTOP) maps:

    • dwi/sub-<subject_label>_model-SHORE_GFA.nii.gz

    • dwi/sub-<subject_label>_model-SHORE_MSD.nii.gz

    • dwi/sub-<subject_label>_model-SHORE_RTOP.nii.gz

  • The tractogram:

    • dwi/sub-<subject_label>_model-<model_label>_desc-<label>_tractogram.trk

      where:

      • <model_label> is the diffusion model used to drive tractography (DTI, CSD, SHORE)

      • <label> is the type of tractography algorithm employed (DET for deterministic, PROB for probabilistic)

  • The structural connectivity (SC) graphs:

    • dwi/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_conndata-network_connectivity.<fmt>

      where:

      • <atlas_label>: Desikan / L2018 is the parcellation scheme used

      • <scale_label>: scale1, scale2, scale3, scale4, scale5 corresponds to the parcellation scale if applicable

      • <fmt>: mat / gpickle / tsv / graphml is the format used to store the graph

Functional derivatives

Functional derivatives in the ‘meanBOLD’ (individual) space are placed in each subject’s func/ subfolder including:

  • The original BOLD image:

    • func/sub-<subject_label>_task-rest_desc-cmp_bold.nii.gz

  • The mean BOLD image:

    • func/sub-<subject_label>_meanBOLD.nii.gz

    • The fully preprocessed, band-pass-filtered BOLD image used to compute the ROI time-series:

    • func/sub-<subject_label>_desc-bandpass_task-rest_bold.nii.gz

  • For scrubbing (if enabled):

    • The change of variance (DVARS):

      • func/sub-<subject_label>_desc-scrubbing_DVARS.npy

    • The framewise displacement (FD):

      • func/sub-<subject_label>_desc-scrubbing_FD.npy

  • Motion-related time-series:

    • func/sub-<subject_label>_motion.tsv

  • The ROI time-series for each parcellation scale:

    • func/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_timeseries.npy

    • func/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_timeseries.mat

      where:

      • <atlas_label>: Desikan / L2018 is the parcellation scheme used

      • <scale_label>: scale1, scale2, scale3, scale4, scale5 corresponds to the parcellation scale if applicable

  • The functional connectivity (FC) graphs:

    • func/sub-<subject_label>_atlas-<atlas_label>[_res-<scale_label>]_conndata-network_connectivity.<fmt>

      where:

      • <atlas_label>: Desikan / L2018 is the parcellation scheme used

      • <scale_label>: scale1, scale2, scale3, scale4, scale5 corresponds to the parcellation scale if applicable

      • <fmt>: mat / gpickle / tsv / graphml is the format used to store the graph

EEG derivatives

EEG derivatives are placed in each subject’s eeg/ subfolder including:

  • The preprocessed EEG epochs data in fif format:

    • eeg/sub-<subject_label>_task-<task_label>_epo.fif

  • The BEM surfaces in fif format:

    • eeg/sub-<subject_label>_task-<task_label>_bem.fif

  • The source space in fif format:

    • eeg/sub-<subject_label>_task-<task_label>_src.fif

  • The forward solution in fif format:

    • eeg/sub-<subject_label>_task-<task_label>_fwd.fif

  • The inverse operator in fif format:

    • eeg/sub-<subject_label>_task-<task_label>_inv.fif

  • The computed noise covariance in fif format:

    • eeg/sub-<subject_label>_task-<task_label>_noisecov.fif

  • The transform of electrode positions that might be used for ESI in fif format:

    • eeg/sub-<subject_label>_trans.fif

  • The ROI time-series for each parcellation atlas (and scale):

    • eeg/sub-<subject_label>_task-<task_label>_atlas-<atlas_label>[_res-<scale_label>]_timeseries.npy

    • eeg/sub-<subject_label>_task-<task_label>_atlas-<atlas_label>[_res-<scale_label>]_timeseries.mat

      where:

      • <atlas_label>: Desikan / L2018 is the parcellation scheme used

      • <scale_label>: scale1, scale2, scale3, scale4, scale5 corresponds to the parcellation scale if applicable

  • The functional frequency- and time-frequency-domain based connectivity graphs:

    • eeg/sub-<subject_label>_task-<task_label>_atlas-<atlas_label>[_res-<scale_label>]_conndata-network_connectivity.<fmt>

      where:

      • <atlas_label>: Desikan / L2018 is the parcellation scheme used

      • <scale_label>: scale1, scale2, scale3, scale4, scale5 corresponds to the parcellation scale if applicable

      • <fmt>: mat / gpickle / tsv / graphml is the format used to store the graph

FreeSurfer Derivatives

A FreeSurfer subjects directory is created in <bids_dataset/derivatives>/freesurfer-7.2.0.

freesurfer-7.2.0/
    fsaverage/
        mri/
        surf/
        ...
    sub-<subject_label>/
        mri/
        surf/
        ...
    ...

The fsaverage subject distributed with the running version of FreeSurfer is copied into this directory.

Nipype Workflow Derivatives

The execution of each Nipype workflow (pipeline) dedicated to the processing of one modality (i.e. anatomical/diffusion/fMRI/EEG) involves the creation of a number of intermediate outputs which are written to <bids_dataset/derivatives>/nipype/sub-<subject_label>/<anatomical/diffusion/fMRI/eeg>_pipeline respectively:

_images/nipype_wf_derivatives.png

To enhance transparency on how data is processed, outputs include a pipeline execution graph saved as <anatomical/diffusion/fMRI/eeg>_pipeline/graph.svg which summarizes all processing nodes involved in the given processing pipeline:

_images/nipype_wf_graph.png

Execution details (data provenance) of each interface (node) of a given pipeline are reported in <anatomical/diffusion/fMRI/eeg>_pipeline/<stage_name>/<interface_name>/_report/report.rst

_images/nipype_node_report.png
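
For example, the per-node report.rst files of a subject's pipeline can be collected with a few lines of pathlib (a sketch assuming the stage/interface layout described above; the pipeline folder name is passed explicitly):

```python
from pathlib import Path

# Sketch: collect every node report written by Nipype under a subject's
# pipeline folder, following the <stage_name>/<interface_name>/_report layout.
def collect_reports(nipype_dir, subject, pipeline="anatomical_pipeline"):
    root = Path(nipype_dir) / f"sub-{subject}" / pipeline
    return sorted(root.glob("*/*/_report/report.rst"))

# e.g. collect_reports("derivatives/nipype", "01") would list one
# report.rst per interface that was executed.
```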

Note

Connectome Mapper 3 outputs are currently being updated to conform to BIDS v1.4.0.

Packages and modules

cmp package

Submodules
cmp.parser module

Connectome Mapper 3 Commandline Parser.

cmp.parser.get() argparse.ArgumentParser[source]

Return the argparse parser of the BIDS App.

Returns

p – Instance of argparse.ArgumentParser

Return type

argparse.ArgumentParser
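For intuition, a minimal BIDS-App-style parser can be sketched with argparse. The arguments shown follow the general BIDS Apps convention and are assumptions, not the exact CMP3 argument list:

```python
import argparse

def get():
    """Sketch of a BIDS-App-style parser (hypothetical arguments)."""
    p = argparse.ArgumentParser(description="Connectome Mapper 3 BIDS App (sketch)")
    p.add_argument("bids_dir", help="Root directory of the BIDS dataset")
    p.add_argument("output_dir", help="Directory where derivatives are written")
    p.add_argument("analysis_level", choices=["participant", "group"])
    p.add_argument("--participant_label", nargs="+",
                   help="Subject labels without the sub- prefix")
    return p

args = get().parse_args(
    ["/data/ds", "/data/out", "participant", "--participant_label", "01", "02"]
)
```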

cmp.parser.get_docker_wrapper_parser() argparse.ArgumentParser[source]

Return the argparse parser of the Docker BIDS App.

Returns

p – Instance of argparse.ArgumentParser

Return type

argparse.ArgumentParser

cmp.parser.get_singularity_wrapper_parser() argparse.ArgumentParser[source]

Return the argparse parser of the Singularity BIDS App.

Returns

p – Instance of argparse.ArgumentParser

Return type

argparse.ArgumentParser

cmp.parser.get_wrapper_parser() argparse.ArgumentParser[source]

Create and return the parser object of the python wrappers of the BIDS App.

cmp.project module
Pipelines and stages modules
cmp.pipelines.common module

Definition of common parent classes for pipelines.

class cmp.pipelines.common.Pipeline(project_info)[source]

Bases: traits.has_traits.HasTraits

Parent class that extends HasTraits and represents a processing pipeline.

It is extended by the various pipeline classes.

See also

cmp.pipelines.anatomical.anatomical.AnatomicalPipeline, cmp.pipelines.diffusion.diffusion.DiffusionPipeline, cmp.pipelines.functional.fMRI.fMRIPipeline

anat_flow = None
clear_stages_outputs()[source]

Clear processing stage outputs.

create_stage_flow(stage_name)[source]

Create the sub-workflow of a processing stage.

Parameters

stage_name (str) – Stage name

Returns

flow – Created stage sub-workflow

Return type

nipype.pipeline.engine.Workflow

fill_stages_outputs()[source]

Update processing stage output list for visual inspection.

number_of_cores = 1
subject = 'sub-01'
cmp.pipelines.anatomical package
Submodules
cmp.pipelines.anatomical.anatomical module

Anatomical pipeline Class definition.

class cmp.pipelines.anatomical.anatomical.AnatomicalPipeline(project_info)[source]

Bases: cmp.pipelines.common.Pipeline

Class that extends a Pipeline and represents the processing pipeline for structural MRI.

It is composed of the segmentation stage that performs FreeSurfer recon-all and the parcellation stage that creates the Lausanne brain parcellations.

check_config()[source]

Check if custom white matter mask and custom atlas files specified in the configuration exist.

Returns

message – String that is empty if all checks pass; otherwise it contains the error message

Return type

string

check_input(layout, gui=True)[source]

Check if inputs of the anatomical pipeline are available.

Parameters
  • layout (bids.BIDSLayout) – Instance of BIDSLayout

  • gui (traits.Bool) – Boolean used to display different messages; no longer meaningful since the GUI components were migrated to cmp.bidsappmanager

Returns

valid_inputs – True if inputs are available

Return type

traits.Bool

check_output()[source]

Check if outputs of an AnatomicalPipeline are available.

Returns

  • valid_output <Bool> – True if all outputs are found

  • error_message <string> – Error message if an output is not found.

create_datagrabber_node(base_directory)[source]

Create the appropriate Nipype DataGrabber node.

Parameters

base_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

Returns

datasource – Output Nipype Node with DataGrabber interface

Return type

Output Nipype DataGrabber Node

create_datasinker_node(base_directory)[source]

Create the appropriate Nipype DataSink node depending on the parcellation_scheme

Parameters

base_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

Returns

sinker – Output Nipype Node with DataSink interface

Return type

Output Nipype DataSink Node

create_pipeline_flow(cmp_deriv_subject_directory, nipype_deriv_subject_directory)[source]

Create the pipeline workflow.

Parameters
  • cmp_deriv_subject_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

  • nipype_deriv_subject_directory (Directory) – Intermediate Nipype output directory of a subject e.g. /output_dir/nipype/sub-XX/(ses-YY)

Returns

anat_flow – An instance of nipype.pipeline.engine.Workflow

Return type

nipype.pipeline.engine.Workflow

define_custom_mapping(custom_last_stage)[source]

Define the pipeline to be executed until a specific stage.

Not used yet by CMP3.

Parameters

custom_last_stage (string) – Last stage to execute. Valid values are “Segmentation” and “Parcellation”

global_conf = <cmp.pipelines.anatomical.anatomical.GlobalConfig object>
init_subject_derivatives_dirs()[source]

Return the paths to Nipype and CMP derivatives folders of a given subject / session.

Notes

self.subject is updated to “sub-<participant_label>_ses-<session_label>” when subject has multiple sessions.

input_folders = ['anat']
now = '20240607_0909'
ordered_stage_list = ['Segmentation', 'Parcellation']
process()[source]

Executes the anatomical pipeline workflow and returns True if successful.
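The subject/session directory logic described by init_subject_derivatives_dirs() can be sketched as follows (hypothetical helper mirroring the documented /output_dir/cmp/sub-XX/(ses-YY) layout; not the CMP3 API):

```python
import os

def subject_derivatives_dirs(output_dir, participant, session=None):
    """Sketch: build the CMP and Nipype derivatives paths of a subject."""
    parts = [participant] + ([session] if session else [])
    cmp_dir = os.path.join(output_dir, "cmp", *parts)        # /output_dir/cmp/sub-XX/(ses-YY)
    nipype_dir = os.path.join(output_dir, "nipype", *parts)  # /output_dir/nipype/sub-XX/(ses-YY)
    # With multiple sessions, the effective subject label becomes sub-XX_ses-YY
    label = "_".join(parts)
    return cmp_dir, nipype_dir, label

cmp_dir, nipype_dir, label = subject_derivatives_dirs("/output_dir", "sub-01", "ses-01")
```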

class cmp.pipelines.anatomical.anatomical.GlobalConfig[source]

Bases: traits.has_traits.HasTraits

Global pipeline configurations.

process_type

Processing pipeline type

Type

‘anatomical’

subjects

List of subject IDs (in the form sub-XX)

Type

traits.List

subject

Subject to be processed (in the form sub-XX)

Type

traits.Str

subject_session

Subject session to be processed (in the form ses-YY)

Type

traits.Str

cmp.pipelines.diffusion package
Submodules
cmp.pipelines.diffusion.diffusion module
cmp.pipelines.functional package
Submodules
cmp.pipelines.functional.eeg module

EEG pipeline Class definition.

class cmp.pipelines.functional.eeg.EEGPipeline(project_info)[source]

Bases: cmp.pipelines.common.Pipeline

Class that extends a Pipeline and represents the processing pipeline for EEG.

It is composed of:
  • the EEG preprocessing stage that loads the input preprocessed EEG Epochs files and converts them to the MNE fif format.

  • the EEG source imaging stage that takes care of all the steps necessary to extract the ROI time courses.

  • the EEG connectome stage that computes different frequency- and time-frequency-domain connectivity measures from the extracted ROI time courses.

check_config()[source]
check_input()[source]

Check if the inputs of the EEG pipeline are available (not available yet).

Returns

valid_inputs – True if inputs are available

Return type

bool

create_datagrabber_node(name='eeg_datasource', base_directory=None, debug=False)[source]

Create the appropriate Nipype BIDSDataGrabber node depending on the configuration of the different EEG pipeline stages.

Parameters
  • name (str) – Name of the datagrabber node

  • base_directory (str) – Path to the directory that stores the check_input node output

  • debug (bool) – Print extra debugging messages if True

Returns

datasource – Output Nipype Node with BIDSDataGrabber interface

Return type

Output Nipype BIDSDataGrabber Node

create_datasinker_node(output_directory)[source]

Create the appropriate Nipype DataSink node depending on EEG task_label and parcellation_scheme

Parameters

output_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

Returns

sinker – Output Nipype Node with DataSink interface

Return type

Output Nipype DataSink Node

create_pipeline_flow(cmp_deriv_subject_directory, nipype_deriv_subject_directory)[source]

Create the workflow of the EEG pipeline.

Parameters
  • cmp_deriv_subject_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

  • nipype_deriv_subject_directory (Directory) – Intermediate Nipype output directory of a subject e.g. /output_dir/nipype/sub-XX/(ses-YY)

Returns

eeg_flow – An instance of nipype.pipeline.engine.Workflow

Return type

nipype.pipeline.engine.Workflow

get_nipype_eeg_pipeline_subject_dir()[source]

Return the path to Nipype eeg_pipeline folder of a given subject / session.

global_conf = <cmp.pipelines.functional.eeg.GlobalConfig object>
init_subject_derivatives_dirs()[source]

Return the paths to Nipype and CMP derivatives folders of a given subject / session.

Notes

self.subject is updated to “sub-<participant_label>_ses-<session_label>” when subject has multiple sessions.

input_folders = ['anat', 'eeg']
now = '20240607_0909'
ordered_stage_list = ['EEGPreparer', 'EEGLoader', 'InverseSolution']
process()[source]

Executes the EEG pipeline workflow and returns True if successful.

class cmp.pipelines.functional.eeg.GlobalConfig[source]

Bases: traits.has_traits.HasTraits

Global EEG pipeline configurations.

process_type

Processing pipeline type

Type

‘EEG’

subjects

List of subject IDs (in the form sub-XX)

Type

traits.List

subject

Subject to be processed (in the form sub-XX)

Type

traits.Str

subject_session

Subject session to be processed (in the form ses-YY)

Type

traits.Str

cmp.pipelines.functional.fMRI module

Functional pipeline Class definition.

class cmp.pipelines.functional.fMRI.GlobalConfig[source]

Bases: traits.has_traits.HasTraits

Global pipeline configurations.

process_type

Processing pipeline type

Type

‘fMRI’

imaging_model

Imaging model used by RegistrationStage

Type

‘fMRI’

class cmp.pipelines.functional.fMRI.fMRIPipeline(project_info)[source]

Bases: cmp.pipelines.common.Pipeline

Class that extends a Pipeline and represents the processing pipeline for functional MRI.

It is composed of:
  • the preprocessing stage that can perform slice timing correction, despiking, and motion correction

  • the registration stage that co-registers the anatomical T1w scan to the mean BOLD image and projects the parcellations to the native fMRI space

  • the extra-preprocessing stage (FunctionalMRIStage) that can perform nuisance regression and bandpass filtering

  • the connectome stage that extracts the time-series of each parcellation ROI and computes the Pearson’s correlation coefficient between ROI time-series to create the functional connectome.
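The last step, computing Pearson's correlation between ROI time-series, can be sketched in plain Python. The toy time-series below are illustrative; in CMP3 the inputs come from applying the parcellation to the preprocessed BOLD data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two ROI time-series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy ROI time-series (hypothetical labels and values)
ts = {"roiA": [1, 2, 3, 4], "roiB": [2, 4, 6, 8], "roiC": [4, 3, 2, 1]}
# One correlation per unordered ROI pair: the functional connectome
conn = {(i, j): pearson(ts[i], ts[j]) for i in ts for j in ts if i < j}
```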

check_config()[source]

Check if the fMRI pipeline parameters are properly configured.

Returns

message – String that is empty if all checks pass; otherwise it contains the error message

Return type

string

check_input(layout, gui=True)[source]

Check if the inputs of the fMRI pipeline are available.

Parameters
  • layout (bids.BIDSLayout) – Instance of BIDSLayout

  • gui (traits.Bool) – Boolean used to display different messages; no longer meaningful since the GUI components were migrated to cmp.bidsappmanager

Returns

valid_inputs – True if inputs are available

Return type

traits.Bool

create_datagrabber_node(base_directory, bids_atlas_label)[source]

Create the appropriate Nipype DataGrabber node depending on the parcellation_scheme

Parameters
  • base_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

  • bids_atlas_label (string) – Parcellation atlas label

Returns

datasource – Output Nipype Node with DataGrabber interface

Return type

Output Nipype DataGrabber Node

create_datasinker_node(base_directory, bids_atlas_label)[source]

Create the appropriate Nipype DataSink node depending on the parcellation_scheme

Parameters
  • base_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

  • bids_atlas_label (string) – Parcellation atlas label

Returns

sinker – Output Nipype Node with DataSink interface

Return type

Output Nipype DataSink Node

create_pipeline_flow(cmp_deriv_subject_directory, nipype_deriv_subject_directory)[source]

Create the pipeline workflow.

Parameters
  • cmp_deriv_subject_directory (Directory) – Main CMP output directory of a subject e.g. /output_dir/cmp/sub-XX/(ses-YY)

  • nipype_deriv_subject_directory (Directory) – Intermediate Nipype output directory of a subject e.g. /output_dir/nipype/sub-XX/(ses-YY)

Returns

fMRI_flow – An instance of nipype.pipeline.engine.Workflow

Return type

nipype.pipeline.engine.Workflow

define_custom_mapping(custom_last_stage)[source]

Define the pipeline to be executed until a specific stage.

Not used yet by CMP3.

Parameters

custom_last_stage (string) – Last stage to execute. Valid values are: “Preprocessing”, “Registration”, “FunctionalMRI” and “Connectome”.

global_conf = <cmp.pipelines.functional.fMRI.GlobalConfig object>
init_subject_derivatives_dirs()[source]

Return the paths to Nipype and CMP derivatives folders of a given subject / session.

Notes

self.subject is updated to “sub-<participant_label>_ses-<session_label>” when subject has multiple sessions.

input_folders = ['anat', 'func']
now = '20240607_0909'
ordered_stage_list = ['Preprocessing', 'Registration', 'FunctionalMRI', 'Connectome']
process()[source]

Executes the fMRI pipeline workflow and returns True if successful.

update_nuisance_requirements()[source]

Update nuisance requirements.

Configure the registration to apply the estimated transformation to multiple segmentation masks depending on the Nuisance correction steps performed.

update_registration()[source]

Configure the list of registration tools.

update_scrubbing()[source]

Update whether the inputs for scrubbing are precomputed during the FunctionalMRI stage.

cmp.stages package
Subpackages
cmp.stages.connectome package
Submodules
cmp.stages.connectome.connectome module

Definition of config and stage classes for building structural connectivity matrices.

class cmp.stages.connectome.connectome.ConnectomeConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a ConnectomeStage instance.

compute_curvature

Compute fiber curvature (Default: False)

Type

traits.Bool

output_types

Output connectome format

Type

[‘gpickle’, ‘mat’, ‘graphml’]

connectivity_metrics

Set of connectome maps to compute

Type

[‘Fiber number’, ‘Fiber length’, ‘Fiber density’, ‘Fiber proportion’, ‘Normalized fiber density’, ‘ADC’, ‘gFA’]

log_visualization

Log-scale visualization of the connectivity matrix; may be obsolete, as visualization has been detached since the creation of the bidsappmanager (Default: True)

Type

traits.Bool

circular_layout

Visualization of the connectivity matrix using a circular layout; may be obsolete, as visualization has been detached since the creation of the bidsappmanager (Default: False)

Type

traits.Bool

subject

BIDS subject ID (in the form sub-XX)

Type

traits.Str

class cmp.stages.connectome.connectome.ConnectomeStage(bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the connectome building stage of a DiffusionPipeline.

create_workflow()[source]

Create the workflow of the diffusion ConnectomeStage

See also

cmp.pipelines.diffusion.diffusion.DiffusionPipeline, cmp.stages.connectome.connectome.ConnectomeConfig

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the Diffusion pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.connectome.eeg_connectome module

Definition of config and stage classes for building functional connectivity matrices from preprocessed EEG.

class cmp.stages.connectome.eeg_connectome.EEGConnectomeConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a EEGConnectomeStage instance.

task_label

Task label (e.g. task-<label>)

Type

Str

parcellation_scheme

Parcellation used to create the ROI source time-series

Type

Enum([“NativeFreesurfer”, “Lausanne2018”])

lausanne2018_parcellation_res

Resolution of the parcellation if Lausanne2018 parcellation scheme is used

Type

Enum([“scale1”, “scale2”, “scale3”, “scale4”, “scale5”])

connectivity_metrics

Set of frequency- and time-frequency-domain connectivity metrics to compute

Type

[‘coh’, ‘cohy’, ‘imcoh’, ‘plv’, ‘ciplv’, ‘ppc’, ‘pli’, ‘wpli’, ‘wpli2_debiased’]

output_types

Output connectome file format

Type

[‘tsv’, ‘gpickle’, ‘mat’, ‘graphml’]

class cmp.stages.connectome.eeg_connectome.EEGConnectomeStage(bids_dir, output_dir, subject, session='')[source]

Bases: cmp.stages.common.Stage

Class that represents the connectome building stage of a EEGPipeline.

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the EEG pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs(log_visualization=True, circular_layout=False)[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.connectome.fmri_connectome module

Definition of config and stage classes for building functional connectivity matrices.

class cmp.stages.connectome.fmri_connectome.ConnectomeConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a ConnectomeStage instance.

apply_scrubbing

Apply scrubbing before mapping the functional connectome if True (Default: False)

Type

traits.Bool

FD_thr

Framewise displacement threshold (Default: 0.2)

Type

traits.Float

DVARS_thr

DVARS (RMS of variance over voxels) threshold (Default: 4.0)

Type

traits.Float

output_types

Output connectome format

Type

[‘gpickle’, ‘mat’, ‘cff’, ‘graphml’]

log_visualization

Log-scale visualization of the connectivity matrix; may be obsolete, as visualization has been detached since the creation of the bidsappmanager (Default: True)

Type

traits.Bool

circular_layout

Visualization of the connectivity matrix using a circular layout; may be obsolete, as visualization has been detached since the creation of the bidsappmanager (Default: False)

Type

traits.Bool

subject

BIDS subject ID (in the form sub-XX)

Type

traits.Str
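Conceptually, scrubbing censors the frames whose FD or DVARS exceed the thresholds above. A minimal sketch of that decision (hypothetical helper, not the CMP3 implementation):

```python
def scrubbing_mask(fd, dvars, fd_thr=0.2, dvars_thr=4.0):
    """Keep a frame only if both FD and DVARS are below their thresholds.

    fd and dvars are per-frame motion/intensity metrics (toy values below).
    """
    return [f < fd_thr and d < dvars_thr for f, d in zip(fd, dvars)]

# Frame 1 fails on FD, frame 2 fails on DVARS
keep = scrubbing_mask([0.10, 0.35, 0.05], [3.0, 2.0, 5.0])
```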

class cmp.stages.connectome.fmri_connectome.ConnectomeStage(bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the connectome building stage of a fMRIPipeline.

create_workflow()[source]

Create the workflow of the fMRI ConnectomeStage

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the fMRI pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.diffusion package
Submodules
cmp.stages.diffusion.diffusion module
cmp.stages.diffusion.reconstruction module
cmp.stages.diffusion.tracking module
cmp.stages.eeg package
Submodules
cmp.stages.eeg.esi module

Definition of config and stage classes for computing brain parcellation.

class cmp.stages.eeg.esi.EEGSourceImagingConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a EEGSourceImagingStage instance.

task_label

Task label (e.g. task-<label>)

Type

Str

esi_tool

Select the tool used for EEG source imaging (inverse solution)

Type

Enum

mne_apply_electrode_transform

If True, apply the transform specified below to electrode positions

Type

Bool

mne_electrode_transform_file

Instance of CustomEEGCartoolMNETransformBIDSFile that describes the input BIDS-formatted MNE transform file in fif format

Type

CustomEEGMNETransformBIDSFile

cartool_spi_file

Instance of CustomEEGCartoolSpiBIDSFile that describes the input BIDS-formatted EEG Solution Points Irregularly spaced file created by Cartool

Type

CustomEEGCartoolSpiBIDSFile

cartool_invsol_file

Instance of CustomEEGCartoolInvSolBIDSFile that describes the input BIDS-formatted EEG Inverse Solution file created by Cartool

Type

CustomEEGCartoolInvSolBIDSFile

cartool_esi_method

Cartool Source Imaging method

Type

Enum([‘LAURA’, ‘LORETA’])

parcellation_scheme

Parcellation used to create the ROI source time-series

Type

Enum([“NativeFreesurfer”, “Lausanne2018”])

lausanne2018_parcellation_res

Resolution of the parcellation if Lausanne2018 parcellation scheme is used

Type

Enum([“scale1”, “scale2”, “scale3”, “scale4”, “scale5”])

cartool_esi_lamb

Regularization weight of inverse solutions computed with Cartool (Default: 6)

Type

Float

cartool_svd_toi_begin

Start TOI for SVD projection (Default: 0.0)

Type

Float

cartool_svd_toi_end

End TOI for SVD projection (Default: 0.25)

Type

Float

mne_esi_method

MNE Source Imaging method

Type

Enum([“sLORETA”, “eLORETA”, “MNE”, “dSPM”])

mne_esi_method_snr

SNR value such that the regularization weight lambda2 of the MNE ESI method is set to 1.0 / mne_esi_method_snr ** 2 (Default: 3.0)

Type

Float
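The documented relation between the SNR value and the regularization weight is a one-liner; with the default SNR of 3.0, lambda2 is 1/9:

```python
def lambda2_from_snr(snr=3.0):
    # Documented relation: lambda2 = 1.0 / snr ** 2
    return 1.0 / snr ** 2

lam = lambda2_from_snr(3.0)  # 1/9, the default regularization weight
```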

class cmp.stages.eeg.esi.EEGSourceImagingStage(subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the reconstruction of the inverse solutions stage of a EEGPipeline.

If MNE is selected for ESI reconstruction, this stage consists of five processing interfaces:

  • CreateBem: Create the Boundary Element Model that consists of surfaces obtained with Freesurfer.

  • CreateSrc: Create a bilateral hemisphere surface-based source space file with subsampling.

  • CreateFwd: Create the forward solution (leadfield) from the BEM and the source space.

  • CreateCov: Create the noise covariance matrix from the data.

  • MNEInverseSolutionROI: Create and apply the actual inverse operator to generate the ROI time courses.

If you decide to use ESI reconstruction outputs precomputed with Cartool, then this stage consists of two processing interfaces:

  • CreateSpiRoisMapping: Create Cartool-reconstructed sources / parcellation ROI mapping file.

  • CartoolInverseSolutionROIExtraction: Use Pycartool to load inverse solutions estimated by Cartool and generate the ROI time courses.

create_cartool_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow using Cartool-precomputed inverse solutions.

This method is called by create_workflow() main function if Cartool is selected for ESI.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the EEG pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

create_mne_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow using MNE.

This method is called by create_workflow() main function if MNE is selected for ESI.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the EEG pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

create_workflow(flow, inputnode, outputnode)[source]

Main method to create the stage workflow.

Based on the tool used for ESI, this method calls either the create_cartool_workflow() or the create_mne_workflow() method.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the EEG pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.eeg.preprocessing module

Definition of config and stage classes for computing brain parcellation.

class cmp.stages.eeg.preprocessing.EEGPreprocessingConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a EEGPreprocessingStage instance.

task_label

Task label (e.g. task-<label>)

Type

Str

eeg_ts_file

Instance of CustomEEGPreprocBIDSFile that describes the input BIDS-formatted preprocessed EEG file

Type

CustomEEGPreprocBIDSFile

events_file

Instance of CustomEEGEventsBIDSFile that describes the input BIDS-formatted EEG events file

Type

CustomEEGEventsBIDSFile

electrodes_file_fmt

Select the type of tabular file describing electrode positions

Type

Enum([“BIDS”, “Cartool”])

bids_electrodes_file

Instance of CustomEEGElectrodesBIDSFile that describes the input BIDS-compliant EEG electrode file

Type

CustomEEGElectrodesBIDSFile

cartool_electrodes_file

Instance of CustomEEGCartoolElectrodesBIDSFile that describes the input BIDS-formatted EEG electrode file created by Cartool

Type

CustomEEGCartoolElectrodesBIDSFile

t_min

Start time of the epochs in seconds, relative to the time-locked event (Default: -0.2)

Type

Float

t_max

End time of the epochs in seconds, relative to the time-locked event (Default: 0.5)

Type

Float

See also

cmp.stages.eeg.preparer.EEGPreprocessingStage
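For intuition, t_min and t_max translate into sample offsets around each time-locked event. A sketch assuming a hypothetical 250 Hz sampling rate (the helper is illustrative, not the CMP3 or MNE API):

```python
def epoch_sample_window(t_min=-0.2, t_max=0.5, sfreq=250.0):
    """Convert epoch bounds in seconds to sample offsets around the event onset."""
    return round(t_min * sfreq), round(t_max * sfreq)

# Defaults from the configuration above: 50 samples before, 125 after each event
window = epoch_sample_window()
```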

class cmp.stages.eeg.preprocessing.EEGPreprocessingStage(subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the preprocessing stage of a EEGPipeline.

This stage consists of converting EEGLAB .set EEG files to MNE Epochs in fif format, the format used in the rest of the pipeline, by calling, if necessary, the following interface:

  • EEGLAB2fif: Reads EEGLAB data and converts it to the MNE format (fif file extension).

See also

cmp.pipelines.functional.eeg.EEGPipeline, cmp.stages.eeg.preparer.EEGPreprocessingConfig

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the EEG pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.functional package
Submodules
cmp.stages.functional.functionalMRI module

Definition of config and stage classes for the extra functional preprocessing stage.

class cmp.stages.functional.functionalMRI.FunctionalMRIConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a FunctionalMRIStage object.

global_nuisance

Perform global nuisance regression (Default: False)

Type

traits.Bool

csf

Perform CSF nuisance regression (Default: True)

Type

traits.Bool

wm

Perform White-Matter nuisance regression (Default: True)

Type

traits.Bool

motion

Perform motion nuisance regression (Default: True)

Type

traits.Bool

detrending = Bool

Perform detrending (Default: True)

detrending_mode = Enum("linear", "quadratic")

Detrending mode (Default: "linear")

bandpass_filtering = Bool

Perform bandpass filtering (Default: True)

lowpass_filter = Float

Lowpass filter frequency (Default: 0.01)

highpass_filter = Float

Highpass filter frequency (Default: 0.1)

scrubbing = Bool

Perform scrubbing (Default: True)

class cmp.stages.functional.functionalMRI.FunctionalMRIStage(bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the post-registration preprocessing stage of the fMRIPipeline.

create_workflow()[source]

Create the workflow of the FunctionalMRIStage

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the fMRI pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.parcellation package
Submodules
cmp.stages.parcellation.parcellation module

Definition of config and stage classes for computing brain parcellation.

class cmp.stages.parcellation.parcellation.ParcellationConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a ParcellationStage object.

pipeline_mode

Distinguish if a parcellation is run in a "Diffusion" or an "fMRI" pipeline

Type

traits.Enum([“Diffusion”, “fMRI”])

parcellation_scheme

Parcellation scheme used (Default: ‘Lausanne2018’)

Type

traits.Str

parcellation_scheme_editor

Choice of parcellation schemes

Type

traits.List([‘NativeFreesurfer’, ‘Lausanne2018’, ‘Custom’])

include_thalamic_nuclei_parcellation

Perform and include thalamic nuclei segmentation in ‘Lausanne2018’ parcellation (Default: True)

Type

traits.Bool

ants_precision_type

Specify ANTs used by thalamic nuclei segmentation to adopt single / double precision float representation to reduce memory usage. (Default: ‘double’)

Type

traits.Enum([‘double’, ‘float’])

segment_hippocampal_subfields

Perform and include FreeSurfer hippocampal subfields segmentation in ‘Lausanne2018’ parcellation (Default: True)

Type

traits.Bool

segment_brainstem

Perform and include FreeSurfer brainstem segmentation in ‘Lausanne2018’ parcellation (Default: True)

Type

traits.Bool

atlas_info

Dictionary storing atlas information, in the form:

    atlas_info = {
        "atlas_name": {
            'number_of_regions': 83,
            'node_information_graphml': "/path/to/file.graphml"
        }
    }

Type

traits.Dict

custom_parcellation

Instance of CustomParcellationBIDSFile that describes the custom BIDS-formatted brain parcellation file

Type

traits.Instance(CustomParcellationBIDSFile)

class cmp.stages.parcellation.parcellation.ParcellationStage(pipeline_mode, subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the parcellation stage of a AnatomicalPipeline.

create_workflow()[source]

Create the workflow of the ParcellationStage

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the anatomical pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the parcellation stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the parcellation stage

create_workflow_custom(flow, outputnode)[source]

Create the stage workflow when custom inputs are specified.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the anatomical pipeline

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the parcellation stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.preprocessing package
Submodules
cmp.stages.preprocessing.fmri_preprocessing module

Definition of config and stage classes for pre-registration fMRI preprocessing.

class cmp.stages.preprocessing.fmri_preprocessing.PreprocessingConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a PreprocessingStage object.

discard_n_volumes

Number of initial volumes to discard (Default: 5)

Type

traits.Int

despiking

(Default: True)

Type

traits.Bool

slice_timing

Slice acquisition order for slice timing correction that can be: “bottom-top interleaved”, “top-bottom interleaved”, “bottom-top”, and “top-bottom” (Default: “none”)

Type

traits.Enum

repetition_time

Repetition time (Default: 1.92)

Type

traits.Float

motion_correction

Perform motion correction (Default: True)

Type

traits.Bool
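The traits.Enum attributes above constrain a parameter to a fixed set of values. A minimal plain-Python stand-in (not the actual cmp/traits API) mimicking how the documented slice_timing choices would be enforced:

```python
# Plain-Python stand-in (not the cmp API) illustrating how a traits.Enum
# attribute such as `slice_timing` restricts its value to the listed choices.
SLICE_TIMING_CHOICES = (
    "none",
    "bottom-top interleaved",
    "top-bottom interleaved",
    "bottom-top",
    "top-bottom",
)

def set_slice_timing(config: dict, value: str) -> dict:
    """Set slice_timing, rejecting values outside the documented choices."""
    if value not in SLICE_TIMING_CHOICES:
        raise ValueError(f"invalid slice_timing: {value!r}")
    config["slice_timing"] = value
    return config
```

In the real configuration class, assigning an out-of-range value to a traits.Enum attribute raises a traits validation error in the same spirit.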

class cmp.stages.preprocessing.fmri_preprocessing.PreprocessingStage(bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the pre-registration preprocessing stage of a fMRIPipeline instance.

create_workflow()[source]

Create the workflow of the PreprocessingStage

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the fMRI pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.preprocessing.preprocessing module

Definition of config and stage classes for diffusion MRI preprocessing.

class cmp.stages.preprocessing.preprocessing.PreprocessingConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a PreprocessingStage instance.

total_readout

Acquisition total readout time used by FSL Eddy (Default: 0.0)

Type

traits.Float

description

Description (Default: ‘description’)

Type

traits.Str

denoising

Perform diffusion MRI denoising (Default: False)

Type

traits.Bool

denoising_algo

Type of denoising algorithm (Default: ‘MRtrix (MP-PCA)’)

Type

traits.Enum([‘MRtrix (MP-PCA)’, ‘Dipy (NLM)’])

dipy_noise_model

Type of noise model when Dipy denoising is performed that can be: ‘Rician’ or ‘Gaussian’ (Default: ‘Rician’)

Type

traits.Enum

bias_field_correction

Perform diffusion MRI bias field correction (Default: False)

Type

traits.Bool

bias_field_algo

Type of bias field correction algorithm that can be: ‘ANTS N4’ or ‘FSL FAST’ (Default: ‘ANTS N4’)

Type

traits.Enum([‘ANTS N4’, ‘FSL FAST’])

eddy_current_and_motion_correction

Perform eddy current and motion correction (Default: True)

Type

traits.Bool

eddy_correction_algo

Algorithm used for eddy current correction that can be: ‘FSL eddy_correct’ or ‘FSL eddy’ (Default: ‘FSL eddy_correct’)

Type

traits.Enum

eddy_correct_motion_correction

Perform eddy current and motion correction (might be obsolete) (Default: True)

Type

traits.Bool

partial_volume_estimation

Estimate partial volume maps from brain tissues segmentation (Default: True)

Type

traits.Bool

fast_use_priors

Use priors when FAST is used for partial volume estimation (Default: True)

Type

traits.Bool

resampling

Tuple describing the target resolution (Default: (1, 1, 1))

Type

traits.Tuple

interpolation

Type of interpolation used when resampling that can be: ‘interpolate’, ‘weighted’, ‘nearest’, ‘sinc’, or ‘cubic’ (Default: ‘interpolate’)

Type

traits.Enum

tracking_tool

Tool used for tractography

Type

Enum([‘Dipy’, ‘MRtrix’])

act_tracking

True if Anatomically-Constrained or Particle Filtering Tractography is enabled (Default: False)

Type

Bool

gmwmi_seeding

True if tractography seeding is performed from the gray-matter / white-matter interface (Default: False)

Type

Bool
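The total_readout attribute above is the acquisition total readout time passed to FSL eddy. A hedged sketch of the standard EPI formula commonly used to derive it; the echo-spacing and EPI-factor parameter names are illustrative assumptions, not cmp parameters:

```python
def total_readout_time(effective_echo_spacing_s: float, epi_factor: int) -> float:
    """Total readout time = (number of phase-encode lines - 1) * effective echo spacing.

    Standard EPI formula used with FSL eddy/topup; this helper is
    illustrative and not part of cmp. Values come from the sequence
    parameters of the diffusion acquisition.
    """
    return (epi_factor - 1) * effective_echo_spacing_s
```

For example, an effective echo spacing of 0.58 ms with an EPI factor of 128 gives a total readout time of about 0.074 s.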

class cmp.stages.preprocessing.preprocessing.PreprocessingStage(bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the pre-registration preprocessing stage of a DiffusionPipeline instance.

create_workflow()[source]

Create the workflow of the PreprocessingStage

See also

cmp.pipelines.diffusion.diffusion.DiffusionPipeline, cmp.stages.preprocessing.preprocessing.PreprocessingConfig

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the Diffusion pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.registration package
Submodules
cmp.stages.registration.registration module

Definition of config and stage classes for MRI co-registration.

class cmp.stages.registration.registration.RegistrationConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a RegistrationStage instance.

pipeline

Pipeline type (Default: “Diffusion”)

Type

traits.Enum([“Diffusion”, “fMRI”])

registration_mode_trait

Choices of registration tools updated depending on the pipeline type. (Default: [‘FSL’, ‘ANTs’] if “Diffusion”, [‘FSL’, ‘BBregister (FS)’] if “fMRI”)

Type

traits.List([‘FSL’, ‘ANTs’, ‘BBregister (FS)’])

registration_mode

Registration tool used from the registration_mode_trait list (Default: ‘ANTs’)

Type

traits.Str

diffusion_imaging_model

Diffusion imaging model (‘DTI’ for instance)

Type

traits.Str

use_float_precision

Use ‘single’ instead of ‘double’ float representation to reduce memory usage of ANTs (Default: False)

Type

traits.Bool

ants_interpolation

Interpolation type used by ANTs that can be: ‘Linear’, ‘NearestNeighbor’, ‘CosineWindowedSinc’, ‘WelchWindowedSinc’, ‘HammingWindowedSinc’, ‘LanczosWindowedSinc’, ‘BSpline’, ‘MultiLabel’, or ‘Gaussian’ (Default: ‘Linear’)

Type

traits.Enum

ants_bspline_interpolation_parameters

ANTs BSpline interpolation parameters (Default: traits.Tuple(Int(3)))

Type

traits.Tuple

ants_gauss_interpolation_parameters

ANTs Gaussian interpolation parameters (Default: traits.Tuple(Float(5), Float(5)))

Type

traits.Tuple

ants_multilab_interpolation_parameters

ANTs Multi-label interpolation parameters (Default: traits.Tuple(Float(5), Float(5)))

Type

traits.Tuple

ants_lower_quantile

ANTs lower quantile (Default: 0.005)

Type

traits.Float

ants_upper_quantile

ANTs upper quantile (Default: 0.995)

Type

traits.Float

ants_convergence_thresh

ANTs convergence threshold (Default: 1e-06)

Type

traits.Float

ants_convergence_winsize

ANTs convergence window size (Default: 10)

Type

traits.Int

ants_linear_gradient_step

ANTS linear gradient step size (Default: 0.1)

Type

traits.Float

ants_linear_cost

Metric used by ANTs linear registration phase that can be ‘CC’, ‘MeanSquares’, ‘Demons’, ‘GC’, ‘MI’, or ‘Mattes’ (Default: ‘MI’)

Type

traits.Enum

ants_linear_sampling_strategy

ANTS sampling strategy for the linear registration phase that can be ‘None’, ‘Regular’, or ‘Random’ (Default: ‘Regular’)

Type

traits.Enum

ants_linear_sampling_perc

Percentage used if random sampling strategy is employed in the linear registration phase (Default: 0.25)

Type

traits.Float

ants_perform_syn

(Default: True)

Type

traits.Bool

ants_nonlinear_gradient_step

(Default: 0.1)

Type

traits.Float

ants_nonlinear_cost

Metric used by ANTs nonlinear (SyN) registration phase that can be ‘CC’, ‘MeanSquares’, ‘Demons’, ‘GC’, ‘MI’, or ‘Mattes’ (Default: ‘CC’)

Type

traits.Enum

ants_nonlinear_update_field_variance

Weight to update field variance in ANTs nonlinear (SyN) registration phase (Default: 3.0)

Type

traits.Float

ants_nonlinear_total_field_variance

Weight to give to total field variance in ANTs nonlinear (SyN) registration phase (Default: 0.0)

Type

traits.Float

flirt_args

FLIRT extra arguments that will be appended to the FSL FLIRT command (Default: None)

Type

traits.Str

uses_qform

FSL FLIRT uses qform (Default: True)

Type

traits.Bool

dof

Specify number of degree-of-freedom to FSL FLIRT (Default: 6)

Type

traits.Int

fsl_cost

Metric used by FSL registration that can be ‘mutualinfo’, ‘corratio’, ‘normcorr’, ‘normmi’, ‘leastsq’, or ‘labeldiff’ (Default: ‘normmi’)

Type

traits.Enum

no_search

Enable FSL FLIRT “no search” option (Default: True)

Type

traits.Bool

init

Initialization type of FSL registration: ‘spm’, ‘fsl’, or ‘header’ (Default: ‘header’)

Type

traits.Enum(‘header’, [‘spm’, ‘fsl’, ‘header’])

contrast_type

Contrast type specified to BBRegister: ‘t1’, ‘t2’, or ‘dti’ (Default: ‘dti’)

Type

traits.Enum(‘dti’, [‘t1’, ‘t2’, ‘dti’])

apply_to_eroded_wm

Apply estimated transform to eroded white-matter mask (Default: True)

Type

traits.Bool

apply_to_eroded_csf

Apply estimated transform to eroded cerebrospinal fluid mask (Default: True)

Type

traits.Bool

apply_to_eroded_brain

Apply estimated transform to eroded brain mask (Default: False)

Type

traits.Bool

tracking_tool

Tool used for tractography

Type

Enum([‘Dipy’, ‘MRtrix’])

act_tracking

True if Anatomically-Constrained or Particle Filtering Tractography is enabled (Default: False)

Type

traits.Bool

gmwmi_seeding

True if tractography seeding is performed from the gray-matter / white-matter interface (Default: False)

Type

traits.Bool
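As described for registration_mode_trait, the available registration tools depend on the pipeline type. A minimal plain-Python sketch of that selection logic (illustrative, not the cmp implementation):

```python
def registration_choices(pipeline: str) -> list:
    """Return the registration tools available for a pipeline type,
    mirroring the registration_mode_trait description above.
    Illustrative helper; not part of the cmp API."""
    if pipeline == "Diffusion":
        return ["FSL", "ANTs"]
    if pipeline == "fMRI":
        return ["FSL", "BBregister (FS)"]
    raise ValueError(f"unknown pipeline type: {pipeline!r}")
```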

class cmp.stages.registration.registration.RegistrationStage(pipeline_mode, fs_subjects_dir=None, fs_subject_id=None, bids_dir='', output_dir='')[source]

Bases: cmp.stages.common.Stage

Class that represents the registration stage of both DiffusionPipeline and fMRIPipeline.

fs_subjects_dir

Freesurfer subjects directory (needed by BBRegister)

Type

traits.Directory

fs_subject_id

Freesurfer subject (being processed) directory (needed by BBRegister)

Type

traits.Str

create_workflow()[source]

Create the workflow of the RegistrationStage

See also

cmp.pipelines.diffusion.diffusion.DiffusionPipeline, cmp.pipelines.functional.fMRI.fMRIPipeline, cmp.stages.registration.registration.RegistrationConfig

create_ants_workflow(flow, inputnode, outputnode)[source]

Create the registration workflow using ANTs.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the Diffusion pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

create_bbregister_workflow(flow, inputnode, outputnode)[source]

Create the workflow of the registration stage using FreeSurfer BBRegister.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the fMRI pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

create_flirt_workflow(flow, inputnode, outputnode)[source]

Create the workflow of the registration stage using FSL FLIRT.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of either the Diffusion pipeline or the fMRI pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of either the Diffusion pipeline or the fMRI pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the stage

define_inspect_outputs()[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

cmp.stages.segmentation package
Submodules
cmp.stages.segmentation.segmentation module

Definition of config and stage classes for segmentation.

class cmp.stages.segmentation.segmentation.SegmentationConfig[source]

Bases: traits.has_traits.HasTraits

Class used to store configuration parameters of a SegmentationStage object.

seg_tool

Choice of segmentation tool that can be “Freesurfer” or “Custom segmentation”

Type

traits.Enum([“Freesurfer”, “Custom segmentation”])

make_isotropic

Resample to isotropic resolution (Default: False)

Type

traits.Bool

isotropic_vox_size

Isotropic voxel size used for resampling (Default: 1.2)

Type

traits.Float

isotropic_interpolation

Interpolation type used for resampling that can be: ‘cubic’, ‘weighted’, ‘nearest’, ‘sinc’, or ‘interpolate’, (Default: ‘cubic’)

Type

traits.Enum

brain_mask_extraction_tool

Choice of brain extraction tool: “Freesurfer”, “BET”, or “ANTs” (Default: Freesurfer)

Type

traits.Enum

ants_templatefile

Anatomical template used by ANTS brain extraction

Type

traits.File

ants_probmaskfile

Brain probability mask used by ANTS brain extraction

Type

traits.File

ants_regmaskfile

Mask (defined in the template space) used during registration in ANTs brain extraction, to limit the metric computation to a specific region.

Type

traits.File

use_fsl_brain_mask

Use FSL BET for brain extraction (Default: False)

Type

traits.Bool

use_existing_freesurfer_data

(Default: False)

Type

traits.Bool

freesurfer_subjects_dir

Freesurfer subjects directory path usually /output_dir/freesurfer

Type

traits.Str

freesurfer_subject_id

Freesurfer subject (being processed) ID in the form sub-XX(_ses-YY)

Type

traits.Str

freesurfer_args

Extra Freesurfer recon-all arguments

Type

traits.Str

custom_brainmask

Instance of CustomBrainMaskBIDSFile that describes the custom BIDS formatted brain mask

Type

traits.Instance(CustomBrainMaskBIDSFile)

custom_wm_mask

Instance of CustomWMMaskBIDSFile that describes the custom BIDS formatted white-matter mask

Type

traits.Instance(CustomWMMaskBIDSFile)

custom_gm_mask

Instance of CustomGMMaskBIDSFile that describes the custom BIDS formatted gray-matter mask

Type

traits.Instance(CustomGMMaskBIDSFile)

custom_csf_mask

Instance of CustomCSFMaskBIDSFile that describes the custom BIDS formatted CSF mask

Type

traits.Instance(CustomCSFMaskBIDSFile)

custom_aparcaseg

Instance of CustomAparcAsegBIDSFile that describes the custom BIDS formatted Freesurfer aparc-aseg file

Type

traits.Instance(CustomAparcAsegBIDSFile)

number_of_threads

Number of threads leveraged by OpenMP and used in the stage by Freesurfer and ANTs (Default: 1)

Type

traits.Int
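The freesurfer_subject_id attribute above follows the sub-XX(_ses-YY) convention. A small illustrative helper (not part of cmp) that builds an identifier of that form:

```python
def build_freesurfer_subject_id(subject_label: str, session_label: str = "") -> str:
    """Build a FreeSurfer subject ID in the documented sub-XX(_ses-YY) form.

    Illustrative helper, not the cmp API: the session part is appended
    only when a session label is provided.
    """
    if session_label:
        return f"{subject_label}_{session_label}"
    return subject_label
```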

class cmp.stages.segmentation.segmentation.SegmentationStage(subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.common.Stage

Class that represents the segmentation stage of an AnatomicalPipeline.

create_workflow()[source]

Create the workflow of the SegmentationStage

create_workflow(flow, inputnode, outputnode)[source]

Create the stage workflow.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the anatomical pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the segmentation stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the segmentation stage

create_workflow_custom(flow, inputnode, outputnode)[source]

Create the stage workflow when custom inputs are specified.

Parameters
  • flow (nipype.pipeline.engine.Workflow) – The nipype.pipeline.engine.Workflow instance of the anatomical pipeline

  • inputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the inputs of the segmentation stage

  • outputnode (nipype.interfaces.utility.IdentityInterface) – Identity interface describing the outputs of the segmentation stage

define_inspect_outputs(debug=False)[source]

Update the inspect_outputs class attribute.

It contains a dictionary of stage outputs with corresponding commands for visual inspection.

Parameters

debug (bool) – If True, show printed output

Submodules
cmp.stages.common module

Definition of common parent classes for stages.

class cmp.stages.common.Stage[source]

Bases: traits.has_traits.HasTraits

Parent class that extends HasTraits and represents a processing pipeline stage.

It is extended by the various pipeline stage subclasses.

bids_subject_label

BIDS subject (participant) label

Type

traits.Str

bids_session_label

BIDS session label

Type

traits.Str

bids_dir

BIDS dataset root directory

Type

traits.Str

output_dir

Output directory

Type

traits.Str

inspect_outputs

Dictionary of stage outputs with corresponding commands for visual inspection (Initialization: ‘Outputs not available’)

Type

traits.Dict

inspect_outputs_enum

Choice of output to be visually inspected (values=’inspect_outputs’)

Type

traits.Enum

enabled

Stage enabled in the pipeline (Default: True)

Type

traits.Bool

config

Instance of stage configuration

Type

Instance(HasTraits)

enabled = True
inspect_outputs = ['Outputs not available']
is_running()[source]

Return the number of unfinished files in the stage.

Returns

nb_of_unfinished_files – Number of unfinished files in the stage

Return type

int
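The is_running() contract above returns the number of unfinished files in the stage. A hedged sketch of one way such a count could be obtained; the ‘*_unfinished.json’ marker pattern is an assumption based on nipype's behavior of writing such markers while a node runs, not the exact cmp implementation:

```python
import glob
import os

def count_unfinished_files(stage_dir: str) -> int:
    """Count files still being produced in a stage's working directory.

    Illustrative sketch of the is_running() contract; the marker file
    pattern is an assumption, not the actual cmp implementation.
    """
    return len(glob.glob(os.path.join(stage_dir, "*_unfinished.json")))
```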

GUI modules
cmp.bidsappmanager.gui package

Module that provides the definition of all classes, functions, and variables dedicated to the GUI of Connectome Mapper 3.

Submodules
cmp.bidsappmanager.gui.bidsapp module
cmp.bidsappmanager.gui.config module
cmp.bidsappmanager.gui.globals module

Modules that defines multiple variables and functions used by the different windows of the GUI.

cmp.bidsappmanager.gui.globals.get_icon(icon_fname)[source]

Return an instance of ImageResource, or None if there is no graphical backend.

Parameters

icon_fname (string) – Filename to an icon image

Returns

icon – Full path to the icon file, or None if there is no graphical backend.

Return type

string

cmp.bidsappmanager.gui.handlers module
cmp.bidsappmanager.gui.principal module
cmp.bidsappmanager.gui.qc module
cmp.bidsappmanager.gui.traits module

Module that defines traits-based classes for Connectome Mapper 3 BIDS App Interface TraitsUI View.

class cmp.bidsappmanager.gui.traits.MultiSelectAdapter[source]

Bases: traitsui.tabular_adapter.TabularAdapter

This adapter is used by the left and right tables for selection of subjects to be processed.

cmp.bidsappmanager.project module
cmp.bidsappmanager.pipelines.anatomical package
Submodules
cmp.bidsappmanager.pipelines.anatomical.anatomical module

Anatomical pipeline UI Class definition.

class cmp.bidsappmanager.pipelines.anatomical.anatomical.AnatomicalPipelineUI(project_info)[source]

Bases: cmp.pipelines.anatomical.anatomical.AnatomicalPipeline

Class that extends the AnatomicalPipeline with graphical components.

segmentation

Button to open the window for configuration or quality inspection of the segmentation stage depending on the view_mode

Type

traits.ui.Button

parcellation

Button to open the window for configuration or quality inspection of the parcellation stage depending on the view_mode

Type

traits.ui.Button

view_mode

Variable used to control the display of either (1) the configuration or (2) the quality inspection of the stages of the pipeline

Type

[‘config_view’, ‘inspect_outputs_view’]

pipeline_group

Panel defining the layout of the buttons of the stages with corresponding images

Type

traitsUI panel

traits_view

QtView that includes the pipeline_group panel

Type

QtView

check_input(layout)[source]

Method that checks if inputs of the anatomical pipeline are available in the dataset.

Parameters

layout (bids.BIDSLayout) – BIDSLayout object used to query

Returns

valid_inputs – True if all inputs of the anatomical pipeline are available

Return type

bool

check_output()[source]

Method that checks if outputs of the anatomical pipeline are available.

Returns

  • valid_output (bool) – True if all outputs are found

  • error_message (string) – Message in case there is an error

cmp.bidsappmanager.pipelines.diffusion package
Submodules
cmp.bidsappmanager.pipelines.diffusion.diffusion module
cmp.bidsappmanager.pipelines.functional package
Submodules
cmp.bidsappmanager.pipelines.functional.eeg module

EEG pipeline UI Class definition.

class cmp.bidsappmanager.pipelines.functional.eeg.EEGPipelineUI(project_info)[source]

Bases: cmp.pipelines.functional.eeg.EEGPipeline

Class that extends the EEGPipeline with graphical components.

preprocessing

Button to open the window for configuration or quality inspection of the preprocessing stage depending on the view_mode

Type

traits.ui.Button

sourceimaging

Button to open the window for configuration or quality inspection of the source imaging stage depending on the view_mode

Type

traits.ui.Button

connectome

Button to open the window for configuration or quality inspection of the connectome stage depending on the view_mode

Type

traits.ui.Button

view_mode

Variable used to control the display of either (1) the configuration or (2) the quality inspection of the stages of the pipeline

Type

[‘config_view’, ‘inspect_outputs_view’]

pipeline_group

Panel defining the layout of the buttons of the stages with corresponding images

Type

traitsUI panel

traits_view

QtView that includes the pipeline_group panel

Type

QtView

cmp.bidsappmanager.pipelines.functional.fMRI module

Functional pipeline UI Class definition.

class cmp.bidsappmanager.pipelines.functional.fMRI.fMRIPipelineUI(project_info)[source]

Bases: cmp.pipelines.functional.fMRI.fMRIPipeline

Class that extends the fMRIPipeline with graphical components.

preprocessing

Button to open the window for configuration or quality inspection of the preprocessing stage depending on the view_mode

Type

traits.ui.Button

registration

Button to open the window for configuration or quality inspection of the registration stage depending on the view_mode

Type

traits.ui.Button

functionalMRI

Button to open the window for configuration or quality inspection of the extra preprocessing stage depending on the view_mode

Type

traits.ui.Button

connectome

Button to open the window for configuration or quality inspection of the connectome stage depending on the view_mode

Type

traits.ui.Button

view_mode

Variable used to control the display of either (1) the configuration or (2) the quality inspection of the stages of the pipeline

Type

[‘config_view’, ‘inspect_outputs_view’]

pipeline_group

Panel defining the layout of the buttons of the stages with corresponding images

Type

traitsUI panel

traits_view

QtView that includes the pipeline_group panel

Type

QtView

check_input(layout, gui=True)[source]

Method that checks if inputs of the fMRI pipeline are available in the dataset.

Parameters
  • layout (bids.BIDSLayout) – BIDSLayout object used to query

  • gui (bool) – If True, display message in GUI

Returns

valid_inputs – True if all inputs of the fMRI pipeline are available

Return type

bool

cmp.bidsappmanager.stages package
Subpackages
cmp.bidsappmanager.stages.connectome package
Submodules
cmp.bidsappmanager.stages.connectome.connectome module

Definition of structural connectome config and stage UI classes.

class cmp.bidsappmanager.stages.connectome.connectome.ConnectomeConfigUI[source]

Bases: cmp.stages.connectome.connectome.ConnectomeConfig

Class that extends the ConnectomeConfig with graphical components.

output_types

A list of output_types. Valid output_types are ‘gpickle’, ‘mat’, ‘cff’, ‘graphml’

Type

list of string

connectivity_metrics

A list of connectivity metrics to be stored. Valid connectivity_metrics are ‘Fiber number’, ‘Fiber length’, ‘Fiber density’, ‘Fiber proportion’, ‘Normalized fiber density’, ‘ADC’, ‘gFA’

Type

list of string

traits_view

TraitsUI view that displays the Attributes of this class

Type

traits.ui.View

class cmp.bidsappmanager.stages.connectome.connectome.ConnectomeStageUI(bids_dir, output_dir)[source]

Bases: cmp.stages.connectome.connectome.ConnectomeStage

Class that extends the ConnectomeStage with graphical components.

log_visualization

If True, display with a log transformation

Type

traits.Bool

circular_layout

If True, display the connectivity matrix using a circular layout

Type

traits.Bool

inspect_output_button

Button that displays the selected connectivity matrix in the graphical component for quality inspection

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.connectome.eeg_connectome module

Definition of EEG connectome config and stage UI classes.

class cmp.bidsappmanager.stages.connectome.eeg_connectome.EEGConnectomeConfigUI[source]

Bases: cmp.stages.connectome.eeg_connectome.EEGConnectomeConfig

Class that extends the cmp.stages.connectome.eeg_connectome.EEGConnectomeConfig with graphical components.

output_types

A list of output_types. Valid output_types are ‘gpickle’, ‘mat’, ‘cff’, ‘graphml’

Type

list of string

connectivity_metrics

A list of time/frequency connectivity metrics to be stored. Valid connectivity_metrics are ‘coh’, ‘cohy’, ‘imcoh’, ‘plv’, ‘ciplv’, ‘ppc’, ‘pli’, ‘wpli’, and ‘wpli2_debiased’

Type

list of string

traits_view

TraitsUI view that displays the Attributes of this class

Type

traits.ui.View

class cmp.bidsappmanager.stages.connectome.eeg_connectome.EEGConnectomeStageUI(subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.connectome.eeg_connectome.EEGConnectomeStage

Class that extends the cmp.stages.connectome.eeg_connectome.EEGConnectomeStage with graphical components.

log_visualization

Display with a log transformation; may be obsolete, as this has been detached since the creation of the bidsappmanager (Default: True)

Type

traits.Bool

circular_layout

Display the connectivity matrix using a circular layout; may be obsolete, as this has been detached since the creation of the bidsappmanager (Default: False)

Type

traits.Bool

inspect_output_button

Button that displays the selected connectivity matrix in the graphical component for quality inspection

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.connectome.fmri_connectome module

Definition of functional connectome config and stage UI classes.

class cmp.bidsappmanager.stages.connectome.fmri_connectome.ConnectomeConfigUI[source]

Bases: cmp.stages.connectome.fmri_connectome.ConnectomeConfig

Class that extends the ConnectomeConfig with graphical components.

output_types

A list of output_types. Valid output_types are ‘gpickle’, ‘mat’, ‘cff’, ‘graphml’

Type

list of string

traits_view

TraitsUI view that displays the Attributes of this class

Type

traits.ui.View

class cmp.bidsappmanager.stages.connectome.fmri_connectome.ConnectomeStageUI(bids_dir, output_dir)[source]

Bases: cmp.stages.connectome.fmri_connectome.ConnectomeStage

Class that extends the ConnectomeStage with graphical components.

log_visualization

If True, display with a log transformation

Type

traits.Bool

circular_layout

If True, display the connectivity matrix using a circular layout

Type

traits.Bool

inspect_output_button

Button that displays the selected connectivity matrix in the graphical component for quality inspection

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.diffusion package
Submodules
cmp.bidsappmanager.stages.diffusion.diffusion module
cmp.bidsappmanager.stages.diffusion.reconstruction module
cmp.bidsappmanager.stages.diffusion.tracking module
cmp.bidsappmanager.stages.eeg package
Submodules
cmp.bidsappmanager.stages.eeg.esi module

Definition of EEG Source Imaging config and stage UI classes.

class cmp.bidsappmanager.stages.eeg.esi.EEGSourceImagingConfigUI[source]

Bases: cmp.stages.eeg.esi.EEGSourceImagingConfig

Class that extends the cmp.stages.eeg.esi.EEGSourceImagingConfig with graphical components.

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.eeg.esi.EEGSourceImagingStageUI(subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.eeg.esi.EEGSourceImagingStage

Class that extends the cmp.stages.eeg.esi.EEGSourceImagingStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.eeg.preprocessing module

Definition of EEG preprocessing config and stage UI classes.

class cmp.bidsappmanager.stages.eeg.preprocessing.EEGPreprocessingConfigUI[source]

Bases: cmp.stages.eeg.preprocessing.EEGPreprocessingConfig

Class that extends the cmp.stages.eeg.preprocessing.EEGPreprocessingConfig with graphical components.

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.eeg.preprocessing.EEGPreprocessingStageUI(subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.eeg.preprocessing.EEGPreprocessingStage

Class that extends the cmp.stages.eeg.preprocessing.EEGPreprocessingStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.functional package
Submodules
cmp.bidsappmanager.stages.functional.functionalMRI module

Definition of extra preprocessing of functional MRI (post-registration) config and stage UI classes.

class cmp.bidsappmanager.stages.functional.functionalMRI.FunctionalMRIConfigUI[source]

Bases: cmp.stages.functional.functionalMRI.FunctionalMRIConfig

Class that extends the FunctionalMRIConfig with graphical components.

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.functional.functionalMRI.FunctionalMRIStageUI(bids_dir, output_dir)[source]

Bases: cmp.stages.functional.functionalMRI.FunctionalMRIStage

Class that extends the FunctionalMRIStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.parcellation package
Submodules
cmp.bidsappmanager.stages.parcellation.parcellation module

Definition of parcellation config and stage UI classes.

class cmp.bidsappmanager.stages.parcellation.parcellation.ParcellationConfigUI[source]

Bases: cmp.stages.parcellation.parcellation.ParcellationConfig

Class that extends the ParcellationConfig with graphical components.

custom_parcellation_view

VGroup that displays the different parts of a custom BIDS parcellation file

Type

traits.ui.View

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.parcellation.parcellation.ParcellationStageUI(pipeline_mode, subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.parcellation.parcellation.ParcellationStage

Class that extends the ParcellationStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.preprocessing package
Submodules
cmp.bidsappmanager.stages.preprocessing.fmri_preprocessing module

Definition of fMRI preprocessing config and stage UI classes.

class cmp.bidsappmanager.stages.preprocessing.fmri_preprocessing.PreprocessingConfigUI[source]

Bases: cmp.stages.preprocessing.fmri_preprocessing.PreprocessingConfig

Class that extends the (functional) PreprocessingConfig with graphical components.

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.preprocessing.fmri_preprocessing.PreprocessingStageUI(bids_dir, output_dir)[source]

Bases: cmp.stages.preprocessing.fmri_preprocessing.PreprocessingStage

Class that extends the (functional) PreprocessingStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.preprocessing.preprocessing module

Definition of diffusion preprocessing config and stage UI classes.

class cmp.bidsappmanager.stages.preprocessing.preprocessing.PreprocessingConfigUI[source]

Bases: cmp.stages.preprocessing.preprocessing.PreprocessingConfig

Class that extends the (diffusion) PreprocessingConfig with graphical components.

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.preprocessing.preprocessing.PreprocessingStageUI(bids_dir, output_dir)[source]

Bases: cmp.stages.preprocessing.preprocessing.PreprocessingStage

Class that extends the (diffusion) PreprocessingStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.registration package
Submodules
cmp.bidsappmanager.stages.registration.registration module

Definition of registration config and stage UI classes.

class cmp.bidsappmanager.stages.registration.registration.RegistrationConfigUI[source]

Bases: cmp.stages.registration.registration.RegistrationConfig

Class that extends the RegistrationConfig with graphical components.

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.registration.registration.RegistrationStageUI(pipeline_mode, fs_subjects_dir=None, fs_subject_id=None, bids_dir='', output_dir='')[source]

Bases: cmp.stages.registration.registration.RegistrationStage

Class that extends the RegistrationStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmp.bidsappmanager.stages.segmentation package
Submodules
cmp.bidsappmanager.stages.segmentation.segmentation module

Definition of segmentation config and stage UI classes.

class cmp.bidsappmanager.stages.segmentation.segmentation.SegmentationConfigUI[source]

Bases: cmp.stages.segmentation.segmentation.SegmentationConfig

Class that extends the SegmentationConfig with graphical components.

custom_brainmask_group

VGroup that displays the different parts of a custom BIDS brain mask file

Type

traits.ui.VGroup

custom_gm_mask_group

VGroup that displays the different parts of a custom BIDS gray matter mask file

Type

traits.ui.VGroup

custom_wm_mask_group

VGroup that displays the different parts of a custom BIDS white matter mask file

Type

traits.ui.VGroup

custom_csf_mask_group

VGroup that displays the different parts of a custom BIDS CSF mask file

Type

traits.ui.VGroup

custom_aparcaseg_group

VGroup that displays the different parts of a custom BIDS-formatted Freesurfer’s aparc+aseg file

Type

traits.ui.VGroup

traits_view

TraitsUI view that displays the attributes of this class, e.g. the parameters for the stage

Type

traits.ui.View

class cmp.bidsappmanager.stages.segmentation.segmentation.SegmentationStageUI(subject, session, bids_dir, output_dir)[source]

Bases: cmp.stages.segmentation.segmentation.SegmentationStage

Class that extends the SegmentationStage with graphical components.

inspect_output_button

Button that displays the selected output in an appropriate viewer (present only in the window for quality inspection)

Type

traits.ui.Button

inspect_outputs_view

TraitsUI view that displays the quality inspection window of this stage

Type

traits.ui.View

config_view

TraitsUI view that displays the configuration window of this stage

Type

traits.ui.View

cmtklib package

Subpackages
cmtklib.bids package
Submodules
cmtklib.bids.io module

This module provides classes to handle custom BIDS derivatives file input.

class cmtklib.bids.io.CustomAparcAsegBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted Freesurfer aparc+aseg file in the form sub-<label>_desc-aparcaseg_dseg.nii.gz.

class cmtklib.bids.io.CustomBIDSFile(p_toolbox_derivatives_dir='', p_datatype='', p_suffix='', p_extension='', p_acquisition='', p_rec='', p_atlas='', p_res='', p_label='', p_desc='', p_task='')[source]

Bases: traits.has_traits.HasTraits

Base class used to represent a BIDS-formatted file inside a custom BIDS derivatives directory.

toolbox_derivatives_dir

Toolbox folder name in the derivatives/ of the BIDS dataset

Type

Str

datatype

BIDS data type

Type

Enum([“anat”, “dwi”, “func”, “eeg”])

suffix

Filename suffix e.g. sub-01_T1w.nii.gz has suffix T1w

Type

Str

acquisition

Label used in _acq-<label>_

Type

Str

task

Label used in _task-<label>_

Type

Str

rec

Label used in _rec-<label>_

Type

Str

res

Label used in _res-<label>_

Type

Str

extension

File extension

Type

Str

atlas

Label used in _atlas-<label>_

Type

Str

label

Label used in _label-<label>_

Type

Str

desc

Label used in _desc-<label>_

Type

Str

get_filename(subject, session=None, debug=False)[source]

Return the filename path with extension of the represented BIDS file.

Parameters
  • subject (str) – Subject filename entity e.g. “sub-01”

  • session (str) – Session filename entity e.g. “ses-01” if applicable (Default: None)

  • debug (bool) – Debug mode (Extra output messages) if True
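As a rough illustration of how these filename entities combine, here is a minimal, self-contained sketch; the helper name, entity order, and defaults are assumptions for the example and do not reflect the actual cmtklib implementation:

```python
def build_bids_filename(subject, session=None, suffix="dseg",
                        extension="nii.gz", **entities):
    # Illustrative only: assemble a BIDS-style filename from entity/label
    # pairs, in a common sub/ses/task/acq/rec/atlas/res/label/desc order.
    parts = [subject]
    if session:
        parts.append(session)
    for key in ("task", "acq", "rec", "atlas", "res", "label", "desc"):
        if entities.get(key):
            parts.append(f"{key}-{entities[key]}")
    parts.append(suffix)
    return "_".join(parts) + "." + extension

# e.g. build_bids_filename("sub-01", "ses-01", atlas="L2018", res="scale1")
# → "sub-01_ses-01_atlas-L2018_res-scale1_dseg.nii.gz"
```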

get_filename_path(base_dir, subject, session=None, debug=False)[source]

Return the filename path without extension of the represented BIDS file.

Parameters
  • base_dir (str) – BIDS root directory or derivatives/ directory in BIDS root directory

  • subject (str) – Subject filename entity e.g. “sub-01”

  • session (str) – Session filename entity e.g. “ses-01” if applicable (Default: None)

  • debug (bool) – Debug mode (Extra output messages) if True

get_query_dict()[source]

Return the dictionary to be passed to BIDSDataGrabber to query a list of files.
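The shape of such a query dictionary can be sketched as follows; the keys shown (BIDS entity names plus datatype/suffix/extension) are typical of pybids-style queries, though the exact keys CMP3 uses may differ:

```python
def build_query_dict(datatype, suffix, extension, **entities):
    # Hypothetical sketch of a BIDSDataGrabber-style query: fixed keys for
    # datatype/suffix/extension plus any non-empty BIDS entities.
    query = {"datatype": datatype, "suffix": suffix, "extension": extension}
    query.update({k: v for k, v in entities.items() if v})
    return query
```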

get_toolbox_derivatives_dir()[source]

Return the value of the toolbox_derivatives_dir attribute.

class cmtklib.bids.io.CustomBrainMaskBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom brain mask in the form sub-<label>_desc-brain_mask.nii.gz.

class cmtklib.bids.io.CustomCSFMaskBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom CSF mask in the form sub-<label>_label-CSF_dseg.nii.gz.

class cmtklib.bids.io.CustomEEGCartoolElectrodesBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted electrode file produced by Cartool, in the form sub-<label>_eeg.xyz.

class cmtklib.bids.io.CustomEEGCartoolInvSolBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted inverse solution file produced by Cartool in the form sub-<label>_eeg.[LAURA|LORETA].is.

class cmtklib.bids.io.CustomEEGCartoolMapSpiRoisBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted spi / rois mapping file in the form sub-<label>_eeg.pickle.rois.

class cmtklib.bids.io.CustomEEGCartoolSpiBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted Source Point Irregularly spaced file produced by Cartool, in the form sub-<label>_eeg.spi.

class cmtklib.bids.io.CustomEEGElectrodesBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted EEG electrodes file in the form sub-<label>_task-<label>_electrodes.tsv.

class cmtklib.bids.io.CustomEEGEpochsBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted EEG Epochs file in .set or .fif format.

class cmtklib.bids.io.CustomEEGEventsBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted EEG task events file in the form sub-<label>_task-<label>_events.tsv.

extract_event_ids_from_json_sidecar(base_dir, subject, session=None, debug=False)[source]

Extract the event IDs from the JSON sidecar associated with the task events file.

class cmtklib.bids.io.CustomEEGMNETransformBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted electrode transform file in the form sub-<label>_trans.fif.

class cmtklib.bids.io.CustomEEGPreprocBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom BIDS-formatted preprocessed EEG file in the form sub-<label>_task-<label>_desc-preproc_eeg.[set|fif].

class cmtklib.bids.io.CustomGMMaskBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom gray-matter mask in the form sub-<label>_label-GM_dseg.nii.gz.

class cmtklib.bids.io.CustomParcellationBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom parcellation file in the form sub-<label>_atlas-<label>[_res-<label>]_dseg.nii.gz.

get_nb_of_regions(bids_dir, subject, session=None, debug=False)[source]

Return the number of regions by reading the associated TSV sidecar file that describes the parcellation nodes.

Parameters
  • bids_dir (str) – BIDS root directory

  • subject (str) – Subject filename entity e.g. “sub-01”

  • session (str) – Session filename entity e.g. “ses-01” if applicable (Default: None)

  • debug (bool) – Debug mode (Extra output messages) if True
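Counting regions from a label-index TSV sidecar can be sketched in plain Python as below; this assumes one header row plus one row per region, and is not the library's actual implementation:

```python
import csv
import io

def count_regions_from_tsv(tsv_text):
    # Count parcellation regions: one DictReader row per region
    # (the header row is consumed by DictReader itself).
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return sum(1 for _ in reader)
```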

class cmtklib.bids.io.CustomWMMaskBIDSFile[source]

Bases: cmtklib.bids.io.CustomBIDSFile

Represent a custom white-matter mask in the form sub-<label>_label-WM_dseg.nii.gz.

cmtklib.bids.network module

This module provides functions to handle connectome networks / graphs generated by CMP3.

cmtklib.bids.network.load_graphs(output_dir, subjects, parcellation_scheme, weight)[source]

Return a dictionary of connectivity matrices (graph adjacency matrices).

Still in development

Parameters
  • output_dir (string) – Output/derivatives directory

  • subjects (list) – List of subjects

  • parcellation_scheme (['NativeFreesurfer', 'Lausanne2018', 'Custom']) – Parcellation scheme

  • weight (['number_of_fibers','fiber_density',...]) – Edge metric to extract from the graph

Returns

connmats – Dictionary of connectivity matrices

Return type

dict
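The shape of the returned dictionary can be illustrated with an in-memory stand-in; the nesting (subject, then edge metric) is an assumption made for the example, not the function's documented layout:

```python
def collect_connectivity_matrices(matrices_by_subject, weight):
    # Hypothetical stand-in for load_graphs: keep, for each subject,
    # only the adjacency matrix of the requested edge metric.
    connmats = {}
    for subject, metrics in matrices_by_subject.items():
        connmats[subject] = {weight: metrics[weight]}
    return connmats
```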

cmtklib.bids.utils module

This module provides CMTK Utility functions to handle BIDS datasets.

CreateBIDSStandardParcellationLabelIndexMappingFile

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Creates the BIDS standard generic label-index mapping file that describes parcellation nodes.

roi_colorlut : a pathlike object or string representing an existing file

Path to FreesurferColorLUT.txt file that describes the RGB color of the graph nodes for a given parcellation.

roi_graphml : a pathlike object or string representing an existing file

Path to graphml file that describes graph nodes for a given parcellation.

verbose : a boolean

Verbose mode.

roi_bids_tsv : a pathlike object or string representing a file

Output BIDS standard generic label-index mapping file that describes parcellation nodes.

CreateCMPParcellationNodeDescriptionFilesFromBIDSFile

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Creates CMP graphml and FreeSurfer colorLUT files that describe parcellation nodes from the BIDS TSV file.

roi_bids_tsv : a pathlike object or string representing an existing file

BIDS standard generic label-index mapping file that describes parcellation nodes.

roi_colorlut : a pathlike object or string representing a file

Path to FreesurferColorLUT.txt file that describes the RGB color of the graph nodes for a given parcellation.

roi_graphml : a pathlike object or string representing a file

Path to graphml file that describes graph nodes for a given parcellation.

CreateMultipleCMPParcellationNodeDescriptionFilesFromBIDSFile

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Creates CMP graphml and FreeSurfer colorLUT files describing parcellation nodes from a list of BIDS TSV files.

roi_bids_tsvs : a list of items which are a pathlike object or string representing an existing file

roi_colorluts : a list of items which are a pathlike object or string representing a file

roi_graphmls : a list of items which are a pathlike object or string representing a file

cmtklib.bids.utils.get_native_space_files(filepathlist)[source]

Return a list of files without _space-<label>_ in the filename.

cmtklib.bids.utils.get_native_space_no_desc_files(filepathlist)[source]

Return a list of files without _space-<label>_ and _desc-<label>_ in the filename.

cmtklib.bids.utils.get_native_space_tsv_sidecar_files(filepathlist)[source]

Return the paths to the TSV sidecar files of a list of NIfTI files (.nii.gz) without _space-<label>_ in their filenames.
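The filtering performed by these helpers can be sketched as a simple substring test on the _<entity>-<label>_ pattern; this is an illustrative re-implementation, not the library code:

```python
def drop_files_with_entity(filepaths, entity):
    # Keep only the files whose names do not contain the given
    # BIDS entity, e.g. entity="space" drops "..._space-MNI152_...".
    tag = f"_{entity}-"
    return [p for p in filepaths if tag not in p]
```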

cmtklib.bids.utils.write_derivative_description(bids_dir, deriv_dir, pipeline_name)[source]

Write a dataset_description.json in each type of CMP derivatives.

Parameters
  • bids_dir (string) – BIDS root directory

  • deriv_dir (string) – Output/derivatives directory

  • pipeline_name (string) – Type of derivatives (['cmp-<version>', 'freesurfer-<version>', 'nipype-<version>'])
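A minimal sketch of writing such a file is shown below; the JSON fields follow the BIDS derivatives convention, but the exact content CMP3 writes may differ, and the helper name is invented for the example:

```python
import json
import os

def write_minimal_dataset_description(deriv_dir, pipeline_name):
    # Write a minimal BIDS derivatives dataset_description.json into
    # deriv_dir (e.g. <bids_dir>/derivatives/cmp-<version>).
    description = {
        "Name": pipeline_name,
        "BIDSVersion": "1.4.0",
        "DatasetType": "derivative",
        "GeneratedBy": [{"Name": pipeline_name}],
    }
    os.makedirs(deriv_dir, exist_ok=True)
    path = os.path.join(deriv_dir, "dataset_description.json")
    with open(path, "w") as f:
        json.dump(description, f, indent=4)
    return path
```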

cmtklib.interfaces package
Submodules
cmtklib.interfaces.afni module

The AFNI module provides Nipype interfaces for AFNI tools that are missing from Nipype or have been modified.

Bandpass

Link to code

Bases: nipype.interfaces.afni.base.AFNICommand

Wrapped executable: 3dBandpass.

Program to lowpass and/or highpass each voxel time series in a dataset.

Calls the 3dBandpass tool from AFNI, offering more/different options than Fourier.

For complete details, see the 3dBandpass Documentation.

Examples

>>> from nipype.interfaces import afni as afni
>>> from nipype.testing import example_data
>>> bandpass = afni.Bandpass()
>>> bandpass.inputs.in_file = example_data('functional.nii')
>>> bandpass.inputs.highpass = 0.005
>>> bandpass.inputs.lowpass = 0.1
>>> res = bandpass.run()  
highpass : a float

Highpass. Maps to a command-line argument: %f (position: -3).

in_file : a pathlike object or string representing an existing file

Input file to 3dBandpass. Maps to a command-line argument: %s (position: -1).

lowpass : a float

Lowpass. Maps to a command-line argument: %f (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

automask : a boolean

Create a mask from the input dataset. Maps to a command-line argument: -automask.

blur : a float

Blur (inside the mask only) with a filter width (FWHM) of ‘fff’ millimeters. Maps to a command-line argument: -blur %f.

despike : a boolean

Despike each time series before other processing. Hopefully, you don’t actually need to do this, which is why it is optional. Maps to a command-line argument: -despike.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

localPV : a float

Replace each vector by the local Principal Vector (AKA first singular vector) from a neighborhood of radius ‘rrr’ millimeters. Note that the PV time series is L2 normalized. This option is mostly for Bob Cox to have fun with. Maps to a command-line argument: -localPV %f.

mask : a pathlike object or string representing an existing file

Mask file. Maps to a command-line argument: -mask %s (position: 2).

nfft : an integer

Set the FFT length [must be a legal value]. Maps to a command-line argument: -nfft %d.

no_detrend : a boolean

Skip the quadratic detrending of the input that occurs before the FFT-based bandpassing. ++ You would only want to do this if the dataset had been detrended already in some other program. Maps to a command-line argument: -nodetrend.

normalize : a boolean

Make all output time series have L2 norm = 1 ++ i.e., sum of squares = 1. Maps to a command-line argument: -norm.

notrans : a boolean

Don’t check for initial positive transients in the data: The test is a little slow, so skipping it is OK, if you KNOW the data time series are transient-free. Maps to a command-line argument: -notrans.

num_threads : an integer

Set number of threads. (Nipype default value: 1)

orthogonalize_dset : a pathlike object or string representing an existing file

Orthogonalize each voxel to the corresponding voxel time series in dataset ‘fset’, which must have the same spatial and temporal grid structure as the main input dataset. At present, only one ‘-dsort’ option is allowed. Maps to a command-line argument: -dsort %s.

orthogonalize_file : a list of items which are a pathlike object or string representing an existing file

Also orthogonalize input to columns in f.1D Multiple ‘-ort’ options are allowed. Maps to a command-line argument: -ort %s.

out_file : a pathlike object or string representing a file

Output file from 3dBandpass. Maps to a command-line argument: -prefix %s (position: 1).

outputtype : ‘NIFTI’ or ‘AFNI’ or ‘NIFTI_GZ’

AFNI output filetype.

tr : a float

Set time step (TR) in sec [default=from dataset header]. Maps to a command-line argument: -dt %f.

out_file : a pathlike object or string representing an existing file

Output file.

Despike

Link to code

Bases: nipype.interfaces.afni.base.AFNICommand

Wrapped executable: 3dDespike.

Removes ‘spikes’ from the 3D+time input dataset.

It calls the 3dDespike tool from AFNI.

For complete details, see the 3dDespike Documentation.

Examples

>>> from nipype.interfaces import afni
>>> despike = afni.Despike()
>>> despike.inputs.in_file = 'functional.nii'
>>> res = despike.run()  
in_file : a pathlike object or string representing an existing file

Input file to 3dDespike. Maps to a command-line argument: %s (position: -1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

num_threads : an integer

Set number of threads. (Nipype default value: 1)

out_file : a pathlike object or string representing a file

Output image file name. Maps to a command-line argument: -prefix %s.

outputtype : ‘NIFTI’ or ‘AFNI’ or ‘NIFTI_GZ’

AFNI output filetype.

out_file : a pathlike object or string representing an existing file

Output file.

cmtklib.interfaces.ants module

The ANTs module provides Nipype interfaces for ANTs registration tools that are missing from Nipype or have been modified.

MultipleANTsApplyTransforms

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Apply linear and deformable transforms estimated by ANTS to a list of images.

It calls the antsApplyTransform on a series of images.

Examples

>>> apply_tf = MultipleANTsApplyTransforms()
>>> apply_tf.inputs.input_images = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.nii.gz',
...                                 '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.nii.gz',
...                                 '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.nii.gz',
...                                 '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.nii.gz',
...                                 '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> apply_tf.inputs.transforms = ['/path/to/final1Warp.nii.gz',
...                               '/path/to/final0GenericAffine.mat']
>>> apply_tf.inputs.reference_image = '/path/to/sub-01_meanBOLD.nii.gz'
>>> apply_tf.inputs.interpolation = 'NearestNeighbor'
>>> apply_tf.inputs.default_value = 0.0
>>> apply_tf.inputs.out_postfix = "_transformed"
>>> apply_tf.run()

reference_image : a string or os.PathLike object referring to an existing file

transforms : a list of items which are a string or os.PathLike object referring to an existing file

Transform files: will be applied in reverse order. For example, the last specified transform will be applied first.

default_value : a float

input_images : a list of items which are a string or os.PathLike object referring to an existing file

interpolation : ‘Linear’ or ‘NearestNeighbor’ or ‘CosineWindowedSinc’ or ‘WelchWindowedSinc’ or ‘HammingWindowedSinc’ or ‘LanczosWindowedSinc’ or ‘MultiLabel’ or ‘Gaussian’ or ‘BSpline’

(Nipype default value: Linear)

out_postfix : a string

(Nipype default value: _transformed)

output_images : a list of items which are a string or os.PathLike object

cmtklib.interfaces.dipy module
cmtklib.interfaces.freesurfer module

The FreeSurfer module provides Nipype interfaces for FreeSurfer tools that are missing from Nipype or have been modified.

BBRegister

Link to code

Bases: nipype.interfaces.freesurfer.base.FSCommand

Wrapped executable: bbregister.

Use FreeSurfer bbregister to register a volume to the Freesurfer anatomical.

This program performs within-subject, cross-modal registration using a boundary-based cost function. The registration is constrained to be 6 DOF (rigid). It requires an anatomical scan of the subject that has already been processed with FreeSurfer recon-all.

Examples

>>> from cmtklib.interfaces.freesurfer import BBRegister
>>> bbreg = BBRegister(subject_id='me',
...                    source_file='structural.nii',
...                    init='header',
...                    contrast_type='t2')
>>> bbreg.run()  
contrast_type : ‘t1’ or ‘t2’ or ‘dti’

Contrast type of image. Maps to a command-line argument: --%s.

init : ‘spm’ or ‘fsl’ or ‘header’

Initialize registration spm, fsl, header. Maps to a command-line argument: --init-%s. Mutually exclusive with inputs: init_reg_file.

init_reg_file : a pathlike object or string representing an existing file

Existing registration file. Mutually exclusive with inputs: init.

source_file : a pathlike object or string representing a file

Source file to be registered. Maps to a command-line argument: --mov %s.

subject_id : a string

Freesurfer subject id. Maps to a command-line argument: --s %s.

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

epi_mask : a boolean

Mask out B0 regions in stages 1 and 2. Maps to a command-line argument: --epi-mask.

intermediate_file : a pathlike object or string representing an existing file

Intermediate image, e.g. in case of partial FOV. Maps to a command-line argument: --int %s.

out_fsl_file : a boolean or a pathlike object or string representing a file

Write the transformation matrix in FSL FLIRT format. Maps to a command-line argument: --fslmat %s.

out_reg_file : a pathlike object or string representing a file

Output registration file. Maps to a command-line argument: --reg %s.

reg_frame : an integer

0-based frame index for 4D source file. Maps to a command-line argument: --frame %d. Mutually exclusive with inputs: reg_middle_frame.

reg_middle_frame : a boolean

Register middle frame of 4D source file. Maps to a command-line argument: --mid-frame. Mutually exclusive with inputs: reg_frame.

registered_file : a boolean or a pathlike object or string representing a file

Output warped sourcefile either True or filename. Maps to a command-line argument: --o %s.

spm_nifti : a boolean

Force use of nifti rather than analyze with SPM. Maps to a command-line argument: --spm-nii.

subjects_dir : a pathlike object or string representing an existing directory

Subjects directory.

min_cost_file : a pathlike object or string representing an existing file

Output registration minimum cost file.

out_fsl_file : a pathlike object or string representing a file

Output FLIRT-style registration file.

out_reg_file : a pathlike object or string representing an existing file

Output registration file.

registered_file : a pathlike object or string representing a file

Registered and resampled source file.

Tkregister2

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: tkregister2.

Performs image co-registration using Freesurfer tkregister2.

Examples

>>> from cmtklib.interfaces.freesurfer import Tkregister2
>>> tkreg = Tkregister2()
>>> tkreg.inputs.subject_dir = '/path/to/output_dir/freesurfer/sub-01'
>>> tkreg.inputs.subjects_dir = '/path/to/output_dir/freesurfer'
>>> tkreg.inputs.subject_id = 'sub-01'
>>> tkreg.inputs.regheader = True
>>> tkreg.inputs.in_file = '/path/to/moving_image.nii.gz'
>>> tkreg.inputs.target_file = '/path/to/fixed_image.nii.gz'
>>> tkreg.inputs.fslreg_out = 'motions.par'
>>> tkreg.inputs.noedit = True
>>> tkreg.run()  
fslreg_out : a string

FSL-Style registration output matrix. Maps to a command-line argument: --fslregout %s.

in_file : a pathlike object or string representing a file

Movable volume. Maps to a command-line argument: --mov %s.

reg_out : a string

Input/output registration file. Maps to a command-line argument: --reg %s.

target_file : a pathlike object or string representing a file

Target volume. Maps to a command-line argument: --targ %s.

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

noedit : a boolean

Do not open edit window (exit) - for conversions. Maps to a command-line argument: --noedit.

regheader : a boolean

Compute registration from headers. Maps to a command-line argument: --regheader.

subject_id : a string

Set subject id. Maps to a command-line argument: --s %s.

subjects_dir : a pathlike object or string representing an existing directory

Use dir as SUBJECTS_DIR. Maps to a command-line argument: --sd %s.

fslregout_file : a pathlike object or string representing a file

Resulting FSL-Style registration matrix.

regout_file : a pathlike object or string representing a file

Resulting registration file.

copyBrainMaskToFreesurfer

Link to code

Bases: nipype.interfaces.io.IOBase

Copy a custom brain mask into the FreeSurfer subject mri/ directory. It replaces the brainmask files generated by FreeSurfer recon-all so that recon-all can be re-run with the custom brain mask.

Examples

>>> from cmtklib.interfaces.freesurfer import copyBrainMaskToFreesurfer
>>> copy_mask_fs = copyBrainMaskToFreesurfer()
>>> copy_mask_fs.inputs.in_file = 'sub-01_desc-brain_mask.nii.gz'
>>> copy_mask_fs.inputs.subject_dir = '/path/to/output_dir/freesurfer/sub-01'
>>> copy_mask_fs.run()  

in_file : a pathlike object or string representing an existing file

subject_dir : a pathlike object or string representing an existing directory

out_brainmask_file : a pathlike object or string representing an existing file

out_brainmaskauto_file : a pathlike object or string representing an existing file

copyFileToFreesurfer

Link to code

Bases: nipype.interfaces.io.IOBase

Copy a file to the specified output location.

Note

Not used.

in_file : a pathlike object or string representing an existing file

out_file : a pathlike object or string representing a file

out_file : a pathlike object or string representing an existing file

cmtklib.interfaces.fsl module

The FSL module provides Nipype interfaces for FSL tools that are missing from Nipype or have been modified.

ApplymultipleWarp

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Apply a deformation field estimated by FSL fnirt to a list of images.

Example

>>> from cmtklib.interfaces import fsl
>>> apply_warp = fsl.ApplymultipleWarp()
>>> apply_warp.inputs.in_files = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.nii.gz',
...                               '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.nii.gz',
...                               '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.nii.gz',
...                               '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.nii.gz',
...                               '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> apply_warp.inputs.field_file = '/path/to/fnirt_deformation.nii.gz'
>>> apply_warp.inputs.ref_file = '/path/to/sub-01_meanBOLD.nii.gz'
>>> apply_warp.run()  
field_file : a pathlike object or string representing an existing file

Deformation field.

ref_file : a pathlike object or string representing an existing file

Reference image used for target space.

in_files : a list of items which are a pathlike object or string representing an existing file

Files to be registered.

interp : ‘nn’ or ‘trilinear’ or ‘sinc’ or ‘spline’

Interpolation method. Maps to a command-line argument: --interp=%s (position: -2).

out_files : a list of items which are a pathlike object or string representing a file

Warped files.

ApplymultipleXfm

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Apply an XFM transform estimated by FSL flirt to a list of images.

Example

>>> from cmtklib.interfaces import fsl
>>> apply_xfm = fsl.ApplymultipleXfm()
>>> apply_xfm.inputs.in_files = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                              '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                              '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                              '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                              '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> apply_xfm.inputs.xfm_file = '/path/to/flirt_transform.xfm'
>>> apply_xfm.inputs.reference = '/path/to/sub-01_meanBOLD.nii.gz'
>>> apply_xfm.run()  
reference : a pathlike object or string representing an existing file

Reference image used for target space.

xfm_file : a pathlike object or string representing an existing file

Transform file.

in_files : a list of items which are a pathlike object or string representing an existing file

Files to be registered.

interp : ‘nearestneighbour’ or ‘spline’

Interpolation used.

out_files : a list of items which are a pathlike object or string representing a file

Transformed files.

BinaryThreshold

Link to code

Bases: nipype.interfaces.fsl.base.FSLCommand

Wrapped executable: fslmaths.

Use fslmaths to apply a threshold to an image in a variety of ways.

Examples

>>> from cmtklib.interfaces.fsl import BinaryThreshold
>>> thresh = BinaryThreshold()
>>> thresh.inputs.in_file = '/path/to/probseg.nii.gz'
>>> thresh.inputs.thresh = 0.5
>>> thresh.inputs.out_file = '/path/to/output_binseg.nii.gz'
>>> thresh.run()  
in_file : a pathlike object or string representing an existing file

Image to operate on. Maps to a command-line argument: %s (position: 2).

out_file : a pathlike object or string representing a file

Image to write. Maps to a command-line argument: %s (position: 5).

thresh : a float

Threshold value. Maps to a command-line argument: -thr %s (position: 3).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

binarize : a boolean

Maps to a command-line argument: -bin (position: 4).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

output_type : ‘NIFTI’ or ‘NIFTI_PAIR’ or ‘NIFTI_GZ’ or ‘NIFTI_PAIR_GZ’

FSL output type.

out_file : a pathlike object or string representing an existing file

Image written after calculations.

CreateAcqpFile

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Create an acquisition Acqp file for FSL eddy.

Note

The total readout time can be extracted from dMRI data acquired on a Siemens scanner.

Examples

>>> from cmtklib.interfaces.fsl import CreateAcqpFile
>>> create_acqp = CreateAcqpFile()
>>> create_acqp.inputs.total_readout  = 0.28
>>> create_acqp.run()  

total_readout : a float

acqp : a pathlike object or string representing an existing file
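The acqp file that FSL eddy consumes is a small text file in which each row gives a phase-encoding direction vector followed by the total readout time in seconds. A minimal sketch of building such a row from the total_readout input (the anterior-posterior phase-encoding vector 0 1 0 is an illustrative assumption, not necessarily the convention CreateAcqpFile uses):

```python
# Sketch: build a one-row FSL acqp file from a total readout time.
# The phase-encoding vector (0 1 0) is an illustrative assumption;
# CreateAcqpFile may use a different convention internally.
def make_acqp_row(total_readout, pe_dir=(0, 1, 0)):
    """Return one acqp line: 'x y z total_readout'."""
    return "{} {} {} {}".format(*pe_dir, total_readout)

row = make_acqp_row(0.28)
print(row)  # 0 1 0 0.28
```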

CreateIndexFile

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Create an index file for FSL eddy from a mrtrix diffusion gradient table.

Examples

>>> from cmtklib.interfaces.fsl import CreateIndexFile
>>> create_index = CreateIndexFile()
>>> create_index.inputs.in_grad_mrtrix  = 'grad.txt'
>>> create_index.run()  
in_grad_mrtrix : a pathlike object or string representing an existing file

Input DWI gradient table in MRTrix format.

index : a pathlike object or string representing an existing file
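An eddy index file simply assigns each DWI volume to a row of the acqp file. Because an MRtrix gradient table has one row per volume (x y z b), the derivation can be sketched as below (assuming a single acquisition block, so every volume maps to acqp row 1; the actual CreateIndexFile implementation may differ):

```python
# Sketch: derive an FSL eddy index file from an MRtrix gradient table.
# Assumes all volumes share the same acquisition parameters (index 1).
def make_index(grad_mrtrix_text):
    """One '1' per non-empty gradient-table row (one row per volume)."""
    n_volumes = sum(1 for line in grad_mrtrix_text.splitlines() if line.strip())
    return " ".join(["1"] * n_volumes)

grad = "0 0 0 0\n0.99 0.1 0 1000\n-0.5 0.86 0 1000\n"
print(make_index(grad))  # 1 1 1
```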

Eddy

Link to code

Bases: nipype.interfaces.fsl.base.FSLCommand

Wrapped executable: eddy.

Performs eddy current distortion correction using FSL eddy.

Example

>>> from cmtklib.interfaces import fsl
>>> eddyc = fsl.Eddy(in_file='diffusion.nii',
>>>                  bvecs='diffusion.bvecs',
>>>                  bvals='diffusion.bvals',
>>>                  out_file="diffusion_eddyc.nii")
>>> eddyc.run()  
acqp : a pathlike object or string representing an existing file

File containing acquisition parameters. Maps to a command-line argument: --acqp=%s (position: 3).

bvals : a pathlike object or string representing an existing file

File containing the b-values for all volumes in --imain. Maps to a command-line argument: --bvals=%s (position: 5).

bvecs : a pathlike object or string representing an existing file

File containing the b-vectors for all volumes in --imain. Maps to a command-line argument: --bvecs=%s (position: 4).

in_file : a pathlike object or string representing an existing file

File containing all the images to estimate distortions for. Maps to a command-line argument: --imain=%s (position: 0).

index : a pathlike object or string representing an existing file

File containing indices for all volumes in --imain into --acqp and --topup. Maps to a command-line argument: --index=%s (position: 2).

mask : a pathlike object or string representing an existing file

Mask to indicate brain. Maps to a command-line argument: --mask=%s (position: 1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing a file

Basename for output. Maps to a command-line argument: --out=%s (position: 6).

output_type : ‘NIFTI’ or ‘NIFTI_PAIR’ or ‘NIFTI_GZ’ or ‘NIFTI_PAIR_GZ’

FSL output type.

verbose : a boolean

Display debugging messages. Maps to a command-line argument: --verbose (position: 7).

bvecs_rotated : a pathlike object or string representing an existing file

Path/name of rotated DWI gradient bvecs file.

eddy_corrected : a pathlike object or string representing an existing file

Path/name of 4D eddy corrected DWI file.

EddyOpenMP

Link to code

Bases: nipype.interfaces.fsl.base.FSLCommand

Wrapped executable: eddy_openmp.

Performs eddy current distortion correction using FSL eddy_openmp.

Example

>>> from cmtklib.interfaces import fsl
>>> eddyc = fsl.EddyOpenMP(in_file='diffusion.nii',
>>>                        bvecs='diffusion.bvecs',
>>>                        bvals='diffusion.bvals',
>>>                        out_file="diffusion_eddyc.nii")
>>> eddyc.run()  
acqp : a pathlike object or string representing an existing file

File containing acquisition parameters. Maps to a command-line argument: --acqp=%s (position: 3).

bvals : a pathlike object or string representing an existing file

File containing the b-values for all volumes in --imain. Maps to a command-line argument: --bvals=%s (position: 5).

bvecs : a pathlike object or string representing an existing file

File containing the b-vectors for all volumes in --imain. Maps to a command-line argument: --bvecs=%s (position: 4).

in_file : a pathlike object or string representing an existing file

File containing all the images to estimate distortions for. Maps to a command-line argument: --imain=%s (position: 0).

index : a pathlike object or string representing an existing file

File containing indices for all volumes in --imain into --acqp and --topup. Maps to a command-line argument: --index=%s (position: 2).

mask : a pathlike object or string representing an existing file

Mask to indicate brain. Maps to a command-line argument: --mask=%s (position: 1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing a file

Basename for output. Maps to a command-line argument: --out=%s (position: 6).

output_type : ‘NIFTI’ or ‘NIFTI_PAIR’ or ‘NIFTI_GZ’ or ‘NIFTI_PAIR_GZ’

FSL output type.

verbose : a boolean

Display debugging messages. Maps to a command-line argument: --verbose (position: 7).

bvecs_rotated : a pathlike object or string representing an existing file

Path/name of rotated DWI gradient bvecs file.

eddy_corrected : a pathlike object or string representing an existing file

Path/name of 4D eddy corrected DWI file.

FSLCreateHD

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: fslcreatehd.

Calls the fslcreatehd command to create an image for space / dimension reference.

Examples

>>> from cmtklib.interfaces.fsl import FSLCreateHD
>>> fsl_create = FSLCreateHD()
>>> fsl_create.inputs.im_size = [256, 256, 256, 1]
>>> fsl_create.inputs.vox_size = [1, 1, 1]
>>> fsl_create.inputs.tr = 0
>>> fsl_create.inputs.origin = [0, 0, 0]
>>> fsl_create.inputs.datatype = '16' # 16: float
>>> fsl_create.inputs.out_filename = '/path/to/generated_image.nii.gz'
>>> fsl_create.run()  
datatype : ‘2’ or ‘4’ or ‘8’ or ‘16’ or ‘32’ or ‘64’

Datatype values: 2=char, 4=short, 8=int, 16=float, 64=double. Maps to a command-line argument: %s (position: 5).

im_size : a list of from 4 to 4 items which are an integer

Image size : xsize, ysize, zsize, tsize. Maps to a command-line argument: %s (position: 1).

origin : a list of from 3 to 3 items which are an integer

Origin coordinates : xorig, yorig, zorig. Maps to a command-line argument: %s (position: 4).

out_filename : a pathlike object or string representing a file

The output temp reference image created. Maps to a command-line argument: %s (position: 6).

tr : an integer

<tr>. Maps to a command-line argument: %s (position: 3).

vox_size : a list of from 3 to 3 items which are an integer

Voxel size : xvoxsize, yvoxsize, zvoxsize. Maps to a command-line argument: %s (position: 2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing an existing file

Path/name of the output reference image created.

MathsCommand

Link to code

Bases: nipype.interfaces.fsl.base.FSLCommand

Wrapped executable: fslmaths.

Calls the fslmaths command in a variety of ways.

Examples

>>> from cmtklib.interfaces.fsl import MathsCommand
>>> fsl_maths = MathsCommand()
>>> fsl_maths.inputs.in_file = '/path/to/image_with_nans.nii.gz'
>>> fsl_maths.inputs.nan2zeros = True
>>> fsl_maths.inputs.out_file = '/path/to/image_with_no_nans.nii.gz'
>>> fsl_maths.run()  
in_file : a pathlike object or string representing an existing file

Image to operate on. Maps to a command-line argument: %s (position: 2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

internal_datatype : ‘float’ or ‘char’ or ‘int’ or ‘short’ or ‘double’ or ‘input’

Datatype to use for calculations (default is float). Maps to a command-line argument: -dt %s (position: 1).

nan2zeros : a boolean

Change NaNs to zeros before doing anything. Maps to a command-line argument: -nan (position: 3).

out_file : a pathlike object or string representing a file

Image to write. Maps to a command-line argument: %s (position: -2).

output_datatype : ‘float’ or ‘char’ or ‘int’ or ‘short’ or ‘double’ or ‘input’

Datatype to use for output (default uses input type). Maps to a command-line argument: -odt %s (position: -1).

output_type : ‘NIFTI’ or ‘NIFTI_PAIR’ or ‘NIFTI_GZ’ or ‘NIFTI_PAIR_GZ’

FSL output type.

out_file : a pathlike object or string representing an existing file

Image written after calculations.

Orient

Link to code

Bases: nipype.interfaces.fsl.base.FSLCommand

Wrapped executable: fslorient.

Use fslorient to get/set orientation information from an image’s header.

Advanced tool that reports or sets the orientation information in a file. Note that only in NIfTI files can the orientation be changed - Analyze files are always treated as “radiological” (meaning that they could be simply rotated into the same alignment as the MNI152 standard images - equivalent to the appropriate sform or qform in a NIfTI file having a negative determinant).
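As the paragraph notes, the radiological/neurological distinction maps onto the sign of the determinant of the sform/qform: a negative determinant means radiological storage order. A pure-Python sketch of that check on the 3x3 rotation part of an affine (illustrative only, not how fslorient computes it):

```python
# Sketch: classify an affine as radiological (negative determinant)
# or neurological (positive determinant), per the fslorient convention.
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def storage_orientation(affine3x3):
    return "radiological" if det3(affine3x3) < 0 else "neurological"

flipped_x = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]  # x axis mirrored
print(storage_orientation(flipped_x))  # radiological
```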

Examples

>>> from cmtklib.interfaces.fsl import Orient
>>> fsl_orient = Orient()
>>> fsl_orient.inputs.in_file = 'input_image.nii.gz'
>>> fsl_orient.inputs.force_radiological = True
>>> fsl_orient.inputs.out_file = 'output_image.nii.gz'
>>> fsl_orient.run()  
in_file : a pathlike object or string representing an existing file

Input image. Maps to a command-line argument: %s (position: 2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

copy_qform2sform : a boolean

Sets the sform equal to the qform - code and matrix. Maps to a command-line argument: -copyqform2sform (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

copy_sform2qform : a boolean

Sets the qform equal to the sform - code and matrix. Maps to a command-line argument: -copysform2qform (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

delete_orient : a boolean

Removes orient info from header. Maps to a command-line argument: -deleteorient (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

force_neurological : a boolean

Makes FSL neurological header - not Analyze. Maps to a command-line argument: -forceneurological (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

force_radiological : a boolean

Makes FSL radiological header. Maps to a command-line argument: -forceradiological (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

get_orient : a boolean

Gets FSL left-right orientation. Maps to a command-line argument: -getorient (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

get_qform : a boolean

Gets the 16 elements of the qform matrix. Maps to a command-line argument: -getqform (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

get_qformcode : a boolean

Gets the qform integer code. Maps to a command-line argument: -getqformcode (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

get_sform : a boolean

Gets the 16 elements of the sform matrix. Maps to a command-line argument: -getsform (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

get_sformcode : a boolean

Gets the sform integer code. Maps to a command-line argument: -getsformcode (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

output_type : ‘NIFTI’ or ‘NIFTI_PAIR’ or ‘NIFTI_GZ’ or ‘NIFTI_PAIR_GZ’

FSL output type.

set_qform : a list of from 16 to 16 items which are a float

<m11 m12 … m44> sets the 16 elements of the qform matrix. Maps to a command-line argument: -setqform %f (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

set_qformcode : an integer

<code> sets qform integer code. Maps to a command-line argument: -setqormcode %d (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

set_sform : a list of from 16 to 16 items which are a float

<m11 m12 … m44> sets the 16 elements of the sform matrix. Maps to a command-line argument: -setsform %f (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

set_sformcode : an integer

<code> sets sform integer code. Maps to a command-line argument: -setformcode %d (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

swap_orient : a boolean

Swaps FSL radiological and FSL neurological. Maps to a command-line argument: -swaporient (position: 1). Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.

orient : a string

FSL left-right orientation.

out_file : a pathlike object or string representing an existing file

Image with modified orientation.

qform : a list of from 16 to 16 items which are a float

The 16 elements of the qform matrix.

qformcode : an integer

Qform integer code.

sform : a list of from 16 to 16 items which are a float

The 16 elements of the sform matrix.

sformcode : an integer

Sform integer code.

Orient.aggregate_outputs(runtime=None, needed_outputs=None)[source]

Collate expected outputs and apply output traits validation.

cmtklib.interfaces.misc module

ConcatOutputsAsTuple

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Concatenate two different output files as a tuple of two files.

Examples

>>> from cmtklib.interfaces.misc import ConcatOutputsAsTuple
>>> concat_outputs = ConcatOutputsAsTuple()
>>> concat_outputs.inputs.input1  = 'output_interface1.nii.gz'
>>> concat_outputs.inputs.input2  = 'output_interface2.nii.gz'
>>> concat_outputs.run()  

input1 : a pathlike object or string representing an existing file

input2 : a pathlike object or string representing an existing file

out_tuple : a tuple of the form: (a pathlike object or string representing an existing file, a pathlike object or string representing an existing file)

ExtractHeaderVoxel2WorldMatrix

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Write to a text file the voxel-to-world transform matrix from the header of a Nifti image.

Examples

>>> from cmtklib.interfaces.misc import ExtractHeaderVoxel2WorldMatrix
>>> extract_mat = ExtractHeaderVoxel2WorldMatrix()
>>> extract_mat.inputs.in_file = 'sub-01_T1w.nii.gz'
>>> extract_mat.run()  
in_file : a pathlike object or string representing an existing file

Input image file.

out_matrix : a pathlike object or string representing an existing file

Output voxel to world affine transform file.

ExtractImageVoxelSizes

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Returns a list of voxel sizes from an image.

Examples

>>> from cmtklib.interfaces.misc import ExtractImageVoxelSizes
>>> extract_voxel_sizes = ExtractImageVoxelSizes()
>>> extract_voxel_sizes.inputs.in_file = 'sub-01_T1w.nii.gz'
>>> extract_voxel_sizes.run()  

in_file : a pathlike object or string representing an existing file

voxel_sizes : a list of items which are any value

Rename001

Link to code

Bases: nipype.interfaces.utility.base.Rename

Change the name of a file based on a mapped format string.

To use additional inputs that will be defined at run-time, the class constructor must be called with the format template, and the fields identified will become inputs to the interface. Additionally, you may set the parse_string input, which will be run over the input filename with a regular expressions search, and will fill in additional input fields from matched groups. Fields set with inputs have precedence over fields filled in with the regexp match.

It corresponds to the nipype.interfaces.utility.base.Rename interface, modified to force a hard link during copy.

Examples

>>> from nipype.interfaces.utility import Rename
>>> rename1 = Rename()
>>> rename1.inputs.in_file = os.path.join(datadir, "zstat1.nii.gz") # datadir is a directory with exemplary files, defined in conftest.py
>>> rename1.inputs.format_string = "Faces-Scenes.nii.gz"
>>> res = rename1.run()          
>>> res.outputs.out_file         
'Faces-Scenes.nii.gz'            
>>> rename2 = Rename(format_string="%(subject_id)s_func_run%(run)02d")
>>> rename2.inputs.in_file = os.path.join(datadir, "functional.nii")
>>> rename2.inputs.keep_ext = True
>>> rename2.inputs.subject_id = "subj_201"
>>> rename2.inputs.run = 2
>>> res = rename2.run()          
>>> res.outputs.out_file         
'subj_201_func_run02.nii'        
>>> rename3 = Rename(format_string="%(subject_id)s_%(seq)s_run%(run)02d.nii")
>>> rename3.inputs.in_file = os.path.join(datadir, "func_epi_1_1.nii")
>>> rename3.inputs.parse_string = r"func_(?P<seq>\w*)_.*"
>>> rename3.inputs.subject_id = "subj_201"
>>> rename3.inputs.run = 2
>>> res = rename3.run()          
>>> res.outputs.out_file         
'subj_201_epi_run02.nii'         

References

Adapted from https://github.com/nipy/nipype/blob/cd4c34d935a43812d1756482fdc4034844e485b8/nipype/interfaces/utility/base.py#L232-L272

format_string : a string

Python formatting string for output template.

in_file : a pathlike object or string representing an existing file

File to rename.

keep_ext : a boolean

Keep in_file extension, replace non-extension component of name.

parse_string : a string

Python regexp parse string to define replacement inputs.

use_fullpath : a boolean

Use full path as input to regex parser. (Nipype default value: False)

out_file : a pathlike object or string representing an existing file

Softlink to original file with new name.

cmtklib.interfaces.mne module

The MNE module provides Nipype interfaces for MNE tools missing in Nipype or modified.

CreateBEM

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use MNE to create the BEM surfaces.

Examples

>>> from cmtklib.interfaces.mne import CreateBEM
>>> create_bem = CreateBEM()
>>> create_bem.inputs.fs_subject = 'sub-01'
>>> create_bem.inputs.fs_subjects_dir = '/path/to/bids_dataset/derivatives/freesurfer-7.1.1'
>>> create_bem.inputs.out_bem_fname = 'bem.fif'
>>> create_bem.run()  

References

fs_subject : a string

FreeSurfer subject ID.

fs_subjects_dir : a string or os.PathLike object referring to an existing directory

Freesurfer subjects (derivatives) directory.

out_bem_fname : a string

Name of output BEM file in fif format.

bem_file : a string or os.PathLike object

Path to output BEM file in fif format.

CreateCov

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use MNE to create the noise covariance matrix.

Examples

>>> from cmtklib.interfaces.mne import CreateCov
>>> create_cov = CreateCov()
>>> create_cov.inputs.epochs_file = '/path/to/sub-01_epo.fif'
>>> create_cov.inputs.out_noise_cov_fname = 'sub-01_noisecov.fif'
>>> create_cov.run()  

References

epochs_file : a string or os.PathLike object referring to an existing file

EEG epochs in .set format.

out_noise_cov_fname : a string

Name of output file to save noise covariance matrix in fif format.

noise_cov_file : a string or os.PathLike object

Location and name to store noise covariance matrix in fif format.

CreateFwd

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use MNE to calculate the forward solution.

Examples

>>> from cmtklib.interfaces.mne import CreateFwd
>>> create_fwd = CreateFwd()
>>> create_fwd.inputs.epochs_file = '/path/to/sub-01_epo.fif'
>>> create_fwd.inputs.out_fwd_fname = 'sub-01_fwd.fif'
>>> create_fwd.inputs.src_file = '/path/to/sub-01_src.fif'
>>> create_fwd.inputs.bem_file = '/path/to/sub-01_bem.fif'
>>> create_fwd.inputs.trans_file = '/path/to/sub-01_trans.fif'
>>> create_fwd.run()  

References

bem_file : a string or os.PathLike object referring to an existing file

Boundary surfaces for MNE head model in fif format.

epochs_file : a string or os.PathLike object referring to an existing file

EEG epochs in .fif format, containing information about electrode montage.

src_file : a string or os.PathLike object referring to an existing file

Source space file in fif format.

out_fwd_fname : a string

Name of output forward solution file created with MNE.

trans_file : a string or os.PathLike object referring to an existing file

Trans.fif file containing co-registration information (electrodes x MRI).

fwd_file : a string or os.PathLike object

Path to generated forward solution file in fif format.

CreateSrc

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use MNE to set up bilateral hemisphere surface-based source space with subsampling and write source spaces to a file.

Examples

>>> from cmtklib.interfaces.mne import CreateSrc
>>> create_src = CreateSrc()
>>> create_src.inputs.fs_subject = 'sub-01'
>>> create_src.inputs.fs_subjects_dir = '/path/to/bids_dataset/derivatives/freesurfer-7.1.1'
>>> create_src.inputs.out_src_fname = 'sub-01_src.fif'
>>> create_src.run()  

References

fs_subject : a string

FreeSurfer subject ID.

fs_subjects_dir : a string or os.PathLike object referring to an existing directory

Freesurfer subjects (derivatives) directory.

out_src_fname : a string

Name of output source space file created with MNE.

overwrite : a boolean

Overwrite source space file if already existing.

src_file : a string or os.PathLike object

Path to output source space files in fif format.

EEGLAB2fif

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use MNE to convert EEG data from EEGlab to MNE format.

Examples

>>> from cmtklib.interfaces.mne import EEGLAB2fif
>>> eeglab2fif = EEGLAB2fif()
>>> eeglab2fif.inputs.eeg_ts_file = ['sub-01_task-faces_desc-preproc_eeg.set']
>>> eeglab2fif.inputs.events_file = ['sub-01_task-faces_events.tsv']
>>> eeglab2fif.inputs.out_epochs_fif_fname = 'sub-01_epo.fif'
>>> eeglab2fif.inputs.electrodes_file = 'sub-01_eeg.xyz'
>>> eeglab2fif.inputs.event_ids = {"SCRAMBLED":0, "FACES":1}
>>> eeglab2fif.inputs.t_min = -0.2
>>> eeglab2fif.inputs.t_max = 0.6
>>> eeglab2fif.run()  

References

eeg_ts_file : a string or os.PathLike object referring to an existing file

EEG epochs in .set format.

events_file : a string or os.PathLike object referring to an existing file

Epochs metadata in _behav.txt.

out_epochs_fif_fname : a string

Output filename for EEG epochs in .fif format, e.g. sub-01_epo.fif.

electrodes_file : a string or os.PathLike object referring to an existing file

Positions of EEG electrodes in a txt file.

event_ids : a dictionary with keys which are any value and with values which are any value

The id of the events to consider in dict form. The keys of the dict can later be used to access associated events. If None, all events will be used and a dict is created with string integer names corresponding to the event id integers.
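The default behaviour described above (when event_ids is None) can be sketched as building a dict of string names from the distinct integer event ids found in the events file. This is an illustrative reconstruction of MNE's convention, not the CMP3 source:

```python
# Sketch: the dict built when event_ids is None -- string integer names
# keyed to the distinct event id integers (illustrative of MNE's behaviour).
def default_event_ids(event_codes):
    return {str(code): code for code in sorted(set(event_codes))}

print(default_event_ids([1, 0, 1, 1, 0]))  # {'0': 0, '1': 1}
```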

t_max : a float

End time of the epochs in seconds, relative to the time-locked event.

t_min : a float

Start time of the epochs in seconds, relative to the time-locked event.

epochs_file : a string or os.PathLike object referring to an existing file

EEG epochs in .fif format.

MNEInverseSolutionROI

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use MNE to compute the inverse solution and extract ROI time series.

Examples

>>> from cmtklib.interfaces.mne import MNEInverseSolutionROI
>>> inv_sol = MNEInverseSolutionROI()
>>> inv_sol.inputs.esi_method_snr = 3.0
>>> inv_sol.inputs.fs_subject = 'sub-01'
>>> inv_sol.inputs.fs_subjects_dir = '/path/to/bids_dataset/derivatives/freesurfer-7.1.1'
>>> inv_sol.inputs.epochs_file = '/path/to/sub-01_epo.fif'
>>> inv_sol.inputs.src_file = '/path/to/sub-01_src.fif'
>>> inv_sol.inputs.bem_file = '/path/to/sub-01_bem.fif'
>>> inv_sol.inputs.noise_cov_file = '/path/to/sub-01_noisecov.fif'
>>> inv_sol.inputs.fwd_file = '/path/to/sub-01_fwd.fif'
>>> inv_sol.inputs.atlas_annot = 'lausanne2018.scale1'
>>> inv_sol.inputs.out_roi_ts_fname_prefix = 'sub-01_atlas-L2018_res-scale1_desc-epo_timeseries'
>>> inv_sol.inputs.out_inv_fname = 'sub-01_inv.fif'
>>> inv_sol.run()  

References

bem_file : a string or os.PathLike object referring to an existing file

Surfaces for head model in fif format.

epochs_file : a string or os.PathLike object referring to an existing file

EEG epochs in .fif format.

fs_subject : a string

FreeSurfer subject ID.

fs_subjects_dir : a string or os.PathLike object referring to an existing directory

Freesurfer subjects (derivatives) directory.

fwd_file : a string or os.PathLike object

Forward solution in fif format.

noise_cov_file : a string or os.PathLike object referring to an existing file

Noise covariance matrix in fif format.

out_inv_fname : a string

Output filename for inverse operator in fif format.

src_file : a string or os.PathLike object referring to an existing file

Source space created with MNE in fif format.

atlas_annot : ‘aparc’ or ‘lausanne2018.scale1’ or ‘lausanne2018.scale2’ or ‘lausanne2018.scale3’ or ‘lausanne2018.scale4’ or ‘lausanne2018.scale5’

The parcellation to use, e.g., ‘aparc’, ‘lausanne2018.scale1’, ‘lausanne2018.scale2’, ‘lausanne2018.scale3’, ‘lausanne2018.scale4’ or ‘lausanne2018.scale5’.

esi_method : ‘sLORETA’ or ‘eLORETA’ or ‘MNE’ or ‘dSPM’

Inverse method to use: MNE (minimum norm), dSPM, sLORETA (default), or eLORETA.

esi_method_snr : a float

SNR value such that the ESI method regularization weight lambda2 is set to 1.0 / esi_method_snr ** 2.
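The SNR-to-regularization mapping stated above is a one-liner; for the esi_method_snr of 3.0 used in the example, lambda2 comes out to 1/9:

```python
# lambda2 regularization weight from the assumed SNR, as stated above.
def lambda2_from_snr(esi_method_snr):
    return 1.0 / esi_method_snr ** 2

print(lambda2_from_snr(3.0))  # 0.111...
```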

out_roi_ts_fname_prefix : a string

Output filename prefix (no extension) for ROI time series in .npy and .mat formats.

inv_file : a string or os.PathLike object

Path to output inverse operator file in fif format.

roi_ts_mat_file : a string or os.PathLike object

Path to output ROI time series file in .mat format.

roi_ts_npy_file : a string or os.PathLike object

Path to output ROI time series file in .npy format.

MNESpectralConnectivity

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use MNE to compute frequency- and time-frequency-domain connectivity measures.

Examples

>>> from cmtklib.interfaces.mne import MNESpectralConnectivity
>>> eeg_cmat = MNESpectralConnectivity()
>>> eeg_cmat.inputs.fs_subject = 'sub-01'
>>> eeg_cmat.inputs.fs_subjects_dir = '/path/to/bids_dataset/derivatives/freesurfer-7.1.1'
>>> eeg_cmat.inputs.atlas_annot = 'lausanne2018.scale1'
>>> eeg_cmat.inputs.connectivity_metrics = ['imcoh', 'pli', 'wpli']
>>> eeg_cmat.inputs.output_types = ['tsv', 'gpickle', 'mat', 'graphml']
>>> eeg_cmat.inputs.epochs_file = '/path/to/sub-01_epo.fif'
>>> eeg_cmat.inputs.roi_ts_file = '/path/to/sub-01_timeseries.npy'
>>> eeg_cmat.run()  

References

fs_subject : a string

FreeSurfer subject ID.

fs_subjects_dir : a string or os.PathLike object referring to an existing directory

FreeSurfer subjects (derivatives) directory.

atlas_annot : ‘aparc’ or ‘lausanne2018.scale1’ or ‘lausanne2018.scale2’ or ‘lausanne2018.scale3’ or ‘lausanne2018.scale4’ or ‘lausanne2018.scale5’

The parcellation to use, e.g., ‘aparc’, ‘lausanne2018.scale1’, ‘lausanne2018.scale2’, ‘lausanne2018.scale3’, ‘lausanne2018.scale4’ or ‘lausanne2018.scale5’.

connectivity_metrics : a list of items which are any value

Set of frequency- and time-frequency-domain connectivity metrics to compute.

epochs_file : a pathlike object or string representing an existing file

Epochs file in .fif format.

out_cmat_fname : a string

Basename of the output connectome file (without any extension).

output_types : a list of items which are any value

Set of formats in which to save the output connectome files.

roi_ts_file : a pathlike object or string representing an existing file

Extracted ROI time courses from ESI in .npy format.

roi_volume_tsv_file : a pathlike object or string representing an existing file

Index / label atlas mapping file in .tsv format, according to BIDS.

connectivity_matrices : a list of items which are a pathlike object or string representing a file

Connectivity matrices.
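
As an illustration of the ‘tsv’ output type, a connectivity matrix can be serialized as a labeled tab-separated table with the standard library alone; the ROI labels, values, and filename below are hypothetical:

```python
import csv

# Hypothetical 3-ROI matrix standing in for a computed connectivity metric
labels = ["roi1", "roi2", "roi3"]
cmat = [[0.0, 0.5, 0.2],
        [0.5, 0.0, 0.7],
        [0.2, 0.7, 0.0]]

path = "sub-01_conndata-network_connectivity.tsv"  # illustrative BIDS-style name
with open(path, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow([""] + labels)            # header row of ROI labels
    for label, row in zip(labels, cmat):
        writer.writerow([label] + row)        # one labeled row per ROI
```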

cmtklib.interfaces.mrtrix3 module

This module provides Nipype interfaces for MRtrix3 tools that are either missing from Nipype or have been modified.

ApplymultipleMRConvert

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Apply mrconvert tool to multiple images.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> mrconvert = mrt.ApplymultipleMRConvert()
>>> mrconvert.inputs.in_files = ['dwi_FA.mif','dwi_MD.mif']
>>> mrconvert.inputs.extension = 'nii'
>>> mrconvert.run()  
extension : ‘mif’ or ‘nii’ or ‘float’ or ‘char’ or ‘short’ or ‘int’ or ‘long’ or ‘double’

Extension (and datatype) of the output file, e.g. “float” (i.e. Bfloat). Can be “char”, “short”, “int”, “long”, “float” or “double”. (Nipype default value: mif)

in_files : a list of items which are a pathlike object or string representing an existing file

Files to be converted.

output_datatype : ‘float32’ or ‘float32le’ or ‘float32be’ or ‘float64’ or ‘float64le’ or ‘float64be’ or ‘int64’ or ‘uint64’ or ‘int64le’ or ‘uint64le’ or ‘int64be’ or ‘uint64be’ or ‘int32’ or ‘uint32’ or ‘int32le’ or ‘uint32le’ or ‘int32be’ or ‘uint32be’ or ‘int16’ or ‘uint16’ or ‘int16le’ or ‘uint16le’ or ‘int16be’ or ‘uint16be’ or ‘cfloat32’ or ‘cfloat32le’ or ‘cfloat32be’ or ‘cfloat64’ or ‘cfloat64le’ or ‘cfloat64be’ or ‘int8’ or ‘uint8’ or ‘bit’

Specify output image data type. Valid choices are: float32, float32le, float32be, float64, float64le, float64be, int64, uint64, int64le, uint64le, int64be, uint64be, int32, uint32, int32le, uint32le, int32be, uint32be, int16, uint16, int16le, uint16le, int16be, uint16be, cfloat32, cfloat32le, cfloat32be, cfloat64, cfloat64le, cfloat64be, int8, uint8, bit. Maps to a command-line argument: -datatype %s (position: 2).

stride : a list of from 3 to 4 items which are an integer

Three to four comma-separated numbers specifying the strides of the output data in memory. The actual strides produced will depend on whether the output image format can support it. Maps to a command-line argument: -stride %s (position: 3).

converted_files : a list of items which are a pathlike object or string representing a file

Output files.

ApplymultipleMRCrop

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Apply MRCrop to a list of images.

Example

>>> from cmtklib.interfaces.mrtrix3 import ApplymultipleMRCrop
>>> multi_crop = ApplymultipleMRCrop()
>>> multi_crop.inputs.in_files = ['sub-01_atlas-L2018_desc-scale1_dseg.nii.gz',
...                               'sub-01_atlas-L2018_desc-scale2_dseg.nii.gz',
...                               'sub-01_atlas-L2018_desc-scale3_dseg.nii.gz',
...                               'sub-01_atlas-L2018_desc-scale4_dseg.nii.gz',
...                               'sub-01_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> multi_crop.inputs.template_image = 'sub-01_T1w.nii.gz'
>>> multi_crop.run()  

See also

cmtklib.interfaces.mrtrix3.MRCrop

template_image : a pathlike object or string representing an existing file

Template image.

in_files : a list of items which are a pathlike object or string representing an existing file

Files to be cropped.

out_files : a list of items which are a pathlike object or string representing a file

Cropped files.

ApplymultipleMRTransforms

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Apply MRTransform to a list of images.

Example

>>> from cmtklib.interfaces.mrtrix3 import ApplymultipleMRTransforms
>>> multi_transform = ApplymultipleMRTransforms()
>>> multi_transform.inputs.in_files = ['sub-01_atlas-L2018_desc-scale1_dseg.nii.gz',
...                                    'sub-01_atlas-L2018_desc-scale2_dseg.nii.gz',
...                                    'sub-01_atlas-L2018_desc-scale3_dseg.nii.gz',
...                                    'sub-01_atlas-L2018_desc-scale4_dseg.nii.gz',
...                                    'sub-01_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> multi_transform.inputs.template_image = 'sub-01_T1w.nii.gz'
>>> multi_transform.run()  

See also

cmtklib.interfaces.mrtrix3.MRTransform

template_image : a pathlike object or string representing an existing file

Template image.

in_files : a list of items which are a pathlike object or string representing an existing file

Files to be transformed.

out_files : a list of items which are a pathlike object or string representing a file

Transformed files.

ConstrainedSphericalDeconvolutionMultiTissue

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: dwi2fod.

Perform non-negativity constrained spherical deconvolution using dwi2fod.

Note that this program makes use of implied symmetries in the diffusion profile. First, the fact the signal attenuation profile is real implies that it has conjugate symmetry, i.e. Y(l,-m) = Y(l,m)* (where * denotes the complex conjugate). Second, the diffusion profile should be antipodally symmetric (i.e. S(x) = S(-x)), implying that all odd l components should be zero. Therefore, this program only computes the even elements. Note that the spherical harmonics equations used here differ slightly from those conventionally used, in that the (-1)^m factor has been omitted. This should be taken into account in all subsequent calculations. Each volume in the output image corresponds to a different spherical harmonic component, according to the following convention:

  • [0] Y(0,0)

  • [1] Im {Y(2,2)}

  • [2] Im {Y(2,1)}

  • [3] Y(2,0)

  • [4] Re {Y(2,1)}

  • [5] Re {Y(2,2)}

  • [6] Im {Y(4,4)}

  • [7] Im {Y(4,3)}
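
Since only even-degree harmonics are stored, the number of volumes in the output image follows directly from the chosen lmax; a small sketch of that count (plain Python, illustrative only):

```python
def num_even_sh_coeffs(lmax: int) -> int:
    # Antipodal symmetry removes the odd degrees, leaving
    # (lmax + 1) * (lmax + 2) / 2 even-degree coefficients.
    return (lmax + 1) * (lmax + 2) // 2

# lmax = 2 covers volumes [0]..[5] of the convention above
print(num_even_sh_coeffs(2))
```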

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> csdeconv = mrt.ConstrainedSphericalDeconvolutionMultiTissue()
>>> csdeconv.inputs.in_file = 'dwi.mif'
>>> csdeconv.inputs.encoding_file = 'encoding.txt'
>>> csdeconv.run()                                          
algorithm : ‘csd’ or ‘msmt_csd’

Select the CSD algorithm to be used for FOD estimation. Options are: csd (single-shell single-tissue) or msmt_csd (multi-shell multi-tissue). Maps to a command-line argument: %s (position: 0).

in_file : a pathlike object or string representing an existing file

Diffusion-weighted image. Maps to a command-line argument: %s (position: 1).

response_csf_file : a pathlike object or string representing an existing file

The diffusion-weighted signal response function for CSF (see EstimateResponse). Maps to a command-line argument: %s (position: 6).

response_file : a pathlike object or string representing an existing file

The diffusion-weighted signal response function for a single fibre population in WM (see EstimateResponse). Maps to a command-line argument: %s (position: 2).

response_gm_file : a pathlike object or string representing an existing file

The diffusion-weighted signal response function for GM (see EstimateResponse). Maps to a command-line argument: %s (position: 4).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

directions_file : a pathlike object or string representing an existing file

A text file containing the [ el az ] pairs for the directions over which to apply the non-negativity constraint (by default, the built-in 300-direction set is used). Maps to a command-line argument: -directions %s (position: -2).

encoding_file : a pathlike object or string representing an existing file

Gradient encoding, supplied as a 4xN text file in which each line has the format [ X Y Z b ], where [ X Y Z ] describes the direction of the applied gradient and b gives the b-value in units of 1000 s/mm^2. See FSL2MRTrix. Maps to a command-line argument: -grad %s (position: 8).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

filter_file : a pathlike object or string representing an existing file

A text file containing the filtering coefficients for each even harmonic order, i.e. the linear frequency filtering parameters used for the initial linear spherical deconvolution step (default = [ 1 1 1 0 0 ]). Maps to a command-line argument: -filter %s (position: -2).

iterations : an integer

The maximum number of iterations to perform for each voxel (default = 50). Maps to a command-line argument: -niter %s.

lambda_value : a float

The regularisation parameter lambda that controls the strength of the constraint (default = 1.0). Maps to a command-line argument: -norm_lambda %s.

mask_image : a pathlike object or string representing an existing file

Only perform computation within the specified binary brain mask image. Maps to a command-line argument: -mask %s (position: 2).

maximum_harmonic_order : an integer

Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images. Maps to a command-line argument: -lmax %s.

out_csf_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: 7).

out_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: 3).

out_gm_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: 5).

threshold_value : a float

The threshold below which the amplitude of the FOD is assumed to be zero, expressed as a fraction of the mean value of the initial FOD (default = 0.1). Maps to a command-line argument: -threshold %s.

csf_spherical_harmonics_image : a pathlike object or string representing an existing file

CSF Spherical harmonics image.

gm_spherical_harmonics_image : a pathlike object or string representing an existing file

GM Spherical harmonics image.

spherical_harmonics_image : a pathlike object or string representing an existing file

WM Spherical harmonics image.

ConstrainedSphericalDeconvolutionSingleTissue

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: dwi2fod.

Perform non-negativity constrained spherical deconvolution using dwi2fod.

Note that this program makes use of implied symmetries in the diffusion profile. First, the fact the signal attenuation profile is real implies that it has conjugate symmetry, i.e. Y(l,-m) = Y(l,m)* (where * denotes the complex conjugate). Second, the diffusion profile should be antipodally symmetric (i.e. S(x) = S(-x)), implying that all odd l components should be zero. Therefore, this program only computes the even elements. Note that the spherical harmonics equations used here differ slightly from those conventionally used, in that the (-1)^m factor has been omitted. This should be taken into account in all subsequent calculations. Each volume in the output image corresponds to a different spherical harmonic component, according to the following convention:

  • [0] Y(0,0)

  • [1] Im {Y(2,2)}

  • [2] Im {Y(2,1)}

  • [3] Y(2,0)

  • [4] Re {Y(2,1)}

  • [5] Re {Y(2,2)}

  • [6] Im {Y(4,4)}

  • [7] Im {Y(4,3)}

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> csdeconv = mrt.ConstrainedSphericalDeconvolutionSingleTissue()
>>> csdeconv.inputs.in_file = 'dwi.mif'
>>> csdeconv.inputs.encoding_file = 'encoding.txt'
>>> csdeconv.run()                                          
algorithm : ‘csd’ or ‘msmt_csd’

Select the CSD algorithm to be used for FOD estimation. Options are: csd (single-shell single-tissue) or msmt_csd (multi-shell multi-tissue). Maps to a command-line argument: %s (position: -4).

in_file : a pathlike object or string representing an existing file

Diffusion-weighted image. Maps to a command-line argument: %s (position: -3).

response_file : a pathlike object or string representing an existing file

The diffusion-weighted signal response function for a single fibre population (see EstimateResponse). Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

directions_file : a pathlike object or string representing an existing file

A text file containing the [ el az ] pairs for the directions over which to apply the non-negativity constraint (by default, the built-in 300-direction set is used). Maps to a command-line argument: -directions %s (position: -2).

encoding_file : a pathlike object or string representing an existing file

Gradient encoding, supplied as a 4xN text file in which each line has the format [ X Y Z b ], where [ X Y Z ] describes the direction of the applied gradient and b gives the b-value in units of 1000 s/mm^2. See FSL2MRTrix. Maps to a command-line argument: -grad %s (position: 1).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

filter_file : a pathlike object or string representing an existing file

A text file containing the filtering coefficients for each even harmonic order, i.e. the linear frequency filtering parameters used for the initial linear spherical deconvolution step (default = [ 1 1 1 0 0 ]). Maps to a command-line argument: -filter %s (position: -2).

iterations : an integer

The maximum number of iterations to perform for each voxel (default = 50). Maps to a command-line argument: -niter %s.

lambda_value : a float

The regularisation parameter lambda that controls the strength of the constraint (default = 1.0). Maps to a command-line argument: -norm_lambda %s.

mask_image : a pathlike object or string representing an existing file

Only perform computation within the specified binary brain mask image. Maps to a command-line argument: -mask %s (position: 2).

maximum_harmonic_order : an integer

Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images. Maps to a command-line argument: -lmax %s.

out_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: -1).

threshold_value : a float

The threshold below which the amplitude of the FOD is assumed to be zero, expressed as a fraction of the mean value of the initial FOD (default = 0.1). Maps to a command-line argument: -threshold %s.

spherical_harmonics_image : a pathlike object or string representing an existing file

Spherical harmonics image.

DWI2Tensor

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: dwi2tensor.

Converts diffusion-weighted images to tensor images using dwi2tensor.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> dwi2tensor = mrt.DWI2Tensor()
>>> dwi2tensor.inputs.in_file = 'dwi.mif'
>>> dwi2tensor.inputs.encoding_file = 'encoding.txt'
>>> dwi2tensor.run()  
in_file : a list of items which are any value

Diffusion-weighted images. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug (position: 1).

encoding_file : a pathlike object or string representing a file

Encoding file, supplied as a 4xN text file in which each line has the format [ X Y Z b ], where [ X Y Z ] describes the direction of the applied gradient and b gives the b-value in units of 1000 s/mm^2. See FSL2MRTrix(). Maps to a command-line argument: -grad %s (position: 2).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

ignore_slice_by_volume : a list of from 2 to 2 items which are an integer

Requires two values (i.e. [34 1] for [Slice Volume]). Ignores the image slices specified when computing the tensor. Slice here means the z coordinate of the slice to be ignored. Maps to a command-line argument: -ignoreslices %s (position: 2).

ignore_volumes : a list of at least 1 items which are an integer

Requires at least one value (i.e. [2 5 6] for [Volumes]). Ignores the image volumes specified when computing the tensor. Maps to a command-line argument: -ignorevolumes %s (position: 2).

in_mask_file : a pathlike object or string representing an existing file

Input DWI mask. Maps to a command-line argument: -mask %s (position: -3).

out_filename : a pathlike object or string representing a file

Output tensor filename. Maps to a command-line argument: %s (position: -1).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet (position: 1).

tensor : a pathlike object or string representing an existing file

Path/name of output diffusion tensor image.

DWIBiasCorrect

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: dwibiascorrect.

Correct for bias field in diffusion MRI data using the dwibiascorrect tool.

Example

>>> from cmtklib.interfaces.mrtrix3 import DWIBiasCorrect
>>> dwi_biascorr = DWIBiasCorrect()
>>> dwi_biascorr.inputs.in_file = 'sub-01_dwi.nii.gz'
>>> dwi_biascorr.inputs.use_ants = True
>>> dwi_biascorr.run() 
in_file : a pathlike object or string representing an existing file

The input image series to be corrected. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug (position: 5).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

force_writing : a boolean

Force file overwriting. Maps to a command-line argument: -force (position: 4).

mask : a pathlike object or string representing a file

Manually provide a mask image for bias field estimation (optional). Maps to a command-line argument: -mask %s (position: 2).

out_bias : a pathlike object or string representing a file

Output the estimated bias field. Maps to a command-line argument: -bias %s (position: 3).

out_file : a pathlike object or string representing a file

The output corrected image series. Maps to a command-line argument: %s (position: -1).

use_ants : a boolean

Use ANTS N4 to estimate the inhomogeneity field. Maps to a command-line argument: ants (position: 1). Mutually exclusive with inputs: use_ants, use_fsl.

use_fsl : a boolean

Use FSL FAST to estimate the inhomogeneity field. Maps to a command-line argument: fsl (position: 1). Mutually exclusive with inputs: use_ants, use_fsl.

out_bias : a pathlike object or string representing an existing file

Output estimated bias field.

out_file : a pathlike object or string representing an existing file

Output corrected DWI image.

DWIDenoise

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: dwidenoise.

Denoise diffusion MRI data using the dwidenoise tool.

Example

>>> from cmtklib.interfaces.mrtrix3 import DWIDenoise
>>> dwi_denoise = DWIDenoise()
>>> dwi_denoise.inputs.in_file = 'sub-01_dwi.nii.gz'
>>> dwi_denoise.inputs.out_file = 'sub-01_desc-denoised_dwi.nii.gz'
>>> dwi_denoise.inputs.out_noisemap = 'sub-01_mod-dwi_noisemap.nii.gz'
>>> dwi_denoise.run()  
in_file : a pathlike object or string representing an existing file

Input diffusion-weighted image filename. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug (position: 5).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

extent_window : a list of from 3 to 3 items which are a float

Three comma-separated numbers giving the window size of the denoising filter. Maps to a command-line argument: -extent %s (position: 2).

force_writing : a boolean

Force file overwriting. Maps to a command-line argument: -force (position: 4).

mask : a pathlike object or string representing a file

Only perform computation within the specified binary brain mask image (optional). Maps to a command-line argument: -mask %s (position: 1).

out_file : a pathlike object or string representing a file

Output denoised DWI image filename. Maps to a command-line argument: %s (position: -1).

out_noisemap : a pathlike object or string representing a file

Output noise map filename. Maps to a command-line argument: -noise %s (position: 3).

out_file : a pathlike object or string representing an existing file

Output denoised DWI image.

out_noisemap : a pathlike object or string representing an existing file

Output noise map (if generated).

Erode

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: maskfilter.

Erodes (or dilates) a mask (i.e. binary) image using the maskfilter tool.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> erode = mrt.Erode()
>>> erode.inputs.in_file = 'mask.mif'
>>> erode.run()  
in_file : a pathlike object or string representing an existing file

Input mask image to be eroded. Maps to a command-line argument: %s (position: -3).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug (position: 1).

dilate : a boolean

Perform dilation rather than erosion. Maps to a command-line argument: -dilate (position: 1).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

filtertype : ‘clean’ or ‘connect’ or ‘dilate’ or ‘erode’ or ‘median’

The type of filter to be applied (clean, connect, dilate, erode, median). Maps to a command-line argument: %s (position: -2).

number_of_passes : an integer

The number of passes (default: 1). Maps to a command-line argument: -npass %s.

out_filename : a pathlike object or string representing a file

Output image filename. Maps to a command-line argument: %s (position: -1).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet (position: 1).

out_file : a pathlike object or string representing an existing file

The output image.

EstimateResponseForSHMultiTissue

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: dwi2response.

Estimates the fibre response function for use in spherical deconvolution using dwi2response.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> estresp = mrt.EstimateResponseForSHMultiTissue()
>>> estresp.inputs.in_file = 'dwi.mif'
>>> estresp.inputs.mask_image = 'dwi_WMProb.mif'
>>> estresp.inputs.encoding_file = 'encoding.txt'
>>> estresp.run()  
encoding_file : a pathlike object or string representing an existing file

Gradient encoding, supplied as a 4xN text file in which each line has the format [ X Y Z b ], where [ X Y Z ] describes the direction of the applied gradient and b gives the b-value in units of 1000 s/mm^2. See FSL2MRTrix. Maps to a command-line argument: -grad %s (position: -2).

in_5tt_file : a pathlike object or string representing an existing file

Input 5TT (five-tissue-type) segmentation image. Maps to a command-line argument: %s (position: 3).

in_file : a pathlike object or string representing an existing file

Diffusion-weighted images. Maps to a command-line argument: %s (position: 2).

algorithm : ‘dhollander’ or ‘fa’ or ‘manual’ or ‘msmt_5tt’ or ‘tax’ or ‘tournier’

Select the algorithm to be used to derive the response function; additional details and options become available once an algorithm is nominated. Options are: dhollander, fa, manual, msmt_5tt, tax, tournier. Maps to a command-line argument: %s (position: 1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

maximum_harmonic_order : an integer

Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images. Maps to a command-line argument: -lmax %s (position: -3).

out_csf_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: 6).

out_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: 4).

out_gm_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: 5).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet.

response : a pathlike object or string representing an existing file

WM Spherical harmonics image.

response_csf : a pathlike object or string representing an existing file

CSF Spherical harmonics image.

response_gm : a pathlike object or string representing an existing file

GM Spherical harmonics image.

EstimateResponseForSHSingleTissue

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: dwi2response.

Estimates the fibre response function for use in spherical deconvolution using dwi2response.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> estresp = mrt.EstimateResponseForSHSingleTissue()
>>> estresp.inputs.in_file = 'dwi.mif'
>>> estresp.inputs.mask_image = 'dwi_WMProb.mif'
>>> estresp.inputs.encoding_file = 'encoding.txt'
>>> estresp.run()  
encoding_file : a pathlike object or string representing an existing file

Gradient encoding, supplied as a 4xN text file in which each line has the format [ X Y Z b ], where [ X Y Z ] describes the direction of the applied gradient and b gives the b-value in units of 1000 s/mm^2. See FSL2MRTrix. Maps to a command-line argument: -grad %s (position: -2).

in_file : a pathlike object or string representing an existing file

Diffusion-weighted images. Maps to a command-line argument: %s (position: 2).

mask_image : a pathlike object or string representing an existing file

Only perform computation within the specified binary brain mask image. Maps to a command-line argument: -mask %s (position: -1).

algorithm : ‘dhollander’ or ‘fa’ or ‘manual’ or ‘msmt_5tt’ or ‘tax’ or ‘tournier’

Select the algorithm to be used to derive the response function; additional details and options become available once an algorithm is nominated. Options are: dhollander, fa, manual, msmt_5tt, tax, tournier. Maps to a command-line argument: %s (position: 1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

maximum_harmonic_order : an integer

Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images. Maps to a command-line argument: -lmax %s (position: -3).

out_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: 3).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet.

response : a pathlike object or string representing an existing file

Spherical harmonics image.

ExtractFSLGrad

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: mrinfo.

Use mrinfo to extract FSL gradient.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> fsl_grad = mrt.ExtractFSLGrad()
>>> fsl_grad.inputs.in_file = 'sub-01_dwi.mif'
>>> fsl_grad.inputs.out_grad_fsl = ['sub-01_dwi.bvecs', 'sub-01_dwi.bvals']
>>> fsl_grad.run()  
in_file : a pathlike object or string representing an existing file

Input images to be read. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_grad_fsl : a tuple of the form: (a pathlike object or string representing a file, a pathlike object or string representing a file)

Export the DWI gradient table to files in FSL (bvecs / bvals) format. Maps to a command-line argument: -export_grad_fsl %s %s.

out_grad_fsl : a tuple of the form: (a pathlike object or string representing an existing file, a pathlike object or string representing an existing file)

Outputs [bvecs, bvals] DW gradient scheme (FSL format) if set.
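
The exported FSL scheme consists of two plain-text files: bvecs holds three rows (the x, y and z gradient components, one column per DWI volume) and bvals a single row of b-values. A minimal standard-library reader, with hypothetical filenames and values:

```python
import os
import tempfile

def read_fsl_grad(bvecs_path, bvals_path):
    """Parse FSL-format gradient files into per-volume ((x, y, z), b) pairs."""
    with open(bvecs_path) as f:
        # bvecs: three rows (x, y and z components), one column per volume
        rows = [[float(v) for v in line.split()] for line in f if line.strip()]
    with open(bvals_path) as f:
        # bvals: a single row of b-values
        bvals = [float(v) for v in f.read().split()]
    # Transpose the rows to get one (x, y, z) direction tuple per volume
    return list(zip(zip(*rows), bvals))

# Demo with a hypothetical two-volume scheme (b=0 and b=1000)
d = tempfile.mkdtemp()
with open(os.path.join(d, "dwi.bvecs"), "w") as f:
    f.write("1 0\n0 1\n0 0\n")
with open(os.path.join(d, "dwi.bvals"), "w") as f:
    f.write("0 1000\n")
grads = read_fsl_grad(os.path.join(d, "dwi.bvecs"), os.path.join(d, "dwi.bvals"))
print(grads)
```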

ExtractMRTrixGrad

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: mrinfo.

Use mrinfo to extract mrtrix gradient text file.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> mrtrix_grad = mrt.ExtractMRTrixGrad()
>>> mrtrix_grad.inputs.in_file = 'sub-01_dwi.mif'
>>> mrtrix_grad.inputs.out_grad_mrtrix = 'sub-01_gradient.txt'
>>> mrtrix_grad.run()  
in_file : a pathlike object or string representing an existing file

Input images to be read. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_grad_mrtrix : a pathlike object or string representing a file

Export the DWI gradient table to file in MRtrix format. Maps to a command-line argument: -export_grad_mrtrix %s.

out_grad_mrtrix : a pathlike object or string representing a file

Output MRtrix gradient text file if set.
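
An MRtrix gradient table stores one row per volume in the form [ X Y Z b ], rather than FSL's separate bvecs/bvals files. A sketch of assembling such a table from FSL-style inputs (pure Python; the sample values are illustrative):

```python
def fsl_to_mrtrix(bvecs_rows, bvals):
    """bvecs_rows: three lists (x, y and z components); bvals: one b-value per volume.

    Returns MRtrix gradient table text, one 'x y z b' row per volume.
    """
    lines = []
    # Pair each transposed (x, y, z) direction with its b-value
    for (x, y, z), b in zip(zip(*bvecs_rows), bvals):
        lines.append(f"{x} {y} {z} {b}")
    return "\n".join(lines)

# Hypothetical two-volume scheme: b=0 along x, b=1000 along y
table = fsl_to_mrtrix([[1, 0], [0, 1], [0, 0]], [0, 1000])
print(table)
```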

FilterTractogram

Link to code

Bases: MRTrix3Base

Wrapped executable: tcksift.

Spherical-deconvolution informed filtering of tractograms using tcksift [Smith2013SIFT].

References

Smith2013SIFT

R.E. Smith et al., NeuroImage 67 (2013), pp. 298–312, <https://www.ncbi.nlm.nih.gov/pubmed/23238430>.

Example

>>> import cmtklib.interfaces.mrtrix3 as cmp_mrt
>>> mrtrix_sift = cmp_mrt.FilterTractogram()
>>> mrtrix_sift.inputs.in_tracks = 'tractogram.tck'
>>> mrtrix_sift.inputs.in_fod = 'spherical_harmonics_image.nii.gz'
>>> mrtrix_sift.inputs.out_file = 'sift_tractogram.tck'
>>> mrtrix_sift.run()   
in_fod : a pathlike object or string representing an existing file

Input image containing the spherical harmonics of the fibre orientation distributions. Maps to a command-line argument: %s (position: -2).

in_tracks : a pathlike object or string representing an existing file

Input track file in TCK format. Maps to a command-line argument: %s (position: -3).

act_file : a pathlike object or string representing an existing file

ACT 5TT image file. Maps to a command-line argument: -act %s (position: -4).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing a file

Output filtered tractogram. Maps to a command-line argument: %s (position: -1).

out_tracks : a pathlike object or string representing an existing file

Output filtered tractogram.

Generate5tt

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: 5ttgen.

Generate a 5TT image suitable for ACT using the selected algorithm using 5ttgen.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> gen5tt = mrt.Generate5tt()
>>> gen5tt.inputs.in_file = 'T1.nii.gz'
>>> gen5tt.inputs.algorithm = 'fsl'
>>> gen5tt.inputs.out_file = '5tt.mif'
>>> gen5tt.cmdline                             
'5ttgen fsl T1.nii.gz 5tt.mif'
>>> gen5tt.run()                               
algorithm : ‘fsl’ or ‘gif’ or ‘freesurfer’ or ‘hsvs’

Tissue segmentation algorithm. Maps to a command-line argument: %s (position: -3).

in_file : a pathlike object or string representing an existing file

Input image. Maps to a command-line argument: -nocrop -sgm_amyg_hipp %s (position: -2).

out_file : a pathlike object or string representing a file

Output image. Maps to a command-line argument: %s (position: -1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing an existing file

Output image.

GenerateGMWMInterface

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: 5tt2gmwmi.

Generate a grey matter-white matter interface mask from the 5TT image using 5tt2gmwmi.

Example

>>> import cmtklib.interfaces.mrtrix3 as cmp_mrt
>>> genGMWMI = cmp_mrt.GenerateGMWMInterface()
>>> genGMWMI.inputs.in_file = '5tt.mif'
>>> genGMWMI.inputs.out_file = 'gmwmi.mif'
>>> genGMWMI.run()  
in_file : a pathlike object or string representing an existing file

Input 5TT image. Maps to a command-line argument: %s (position: -2).

out_file : a pathlike object or string representing a file

Output GM/WM interface image. Maps to a command-line argument: %s (position: -1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing an existing file

Output image.

MRConvert

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: mrconvert.

Perform conversion with mrconvert between different file types and optionally extract a subset of the input image.

If used correctly, this program can be a very useful workhorse. In addition to converting images between different formats, it can be used to extract specific studies from a data set, extract a specific region of interest, flip the images, or to scale the intensity of the images.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> mrconvert = mrt.MRConvert()
>>> mrconvert.inputs.in_file = 'dwi_FA.mif'
>>> mrconvert.inputs.out_filename = 'dwi_FA.nii'
>>> mrconvert.run()  
in_dir : a pathlike object or string representing an existing directory

Directory containing DICOM files. Maps to a command-line argument: %s (position: -2). Mutually exclusive with inputs: in_file, in_dir.

in_file : a pathlike object or string representing an existing file

Voxel-order data filename. Maps to a command-line argument: %s (position: -2). Mutually exclusive with inputs: in_file, in_dir.

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

extension : ‘mif’ or ‘nii’ or ‘float’ or ‘char’ or ‘short’ or ‘int’ or ‘long’ or ‘double’

Output file extension, i.e. Bfloat. Can be ‘char’, ‘short’, ‘int’, ‘long’, ‘float’ or ‘double’. (Nipype default value: mif)

extract_at_axis : 1 or 2 or 3

Extract data only at the coordinates specified. This option specifies the axis and must be used in conjunction with extract_at_coordinate. Maps to a command-line argument: -coord %s (position: 1).

extract_at_coordinate : a list of from 1 to 3 items which are an integer

Extract data only at the coordinates specified. This option specifies the coordinates and must be used in conjunction with extract_at_axis. Maps to a command-line argument: %s (position: 2).

force_writing : a boolean

Force file overwriting. Maps to a command-line argument: -force.

grad : a pathlike object or string representing an existing file

Gradient encoding, supplied as a 4xN text file where each line is in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix. Maps to a command-line argument: -grad %s (position: 9).

grad_fsl : a tuple of the form: (a pathlike object or string representing an existing file, a pathlike object or string representing an existing file)

[bvecs, bvals] DW gradient scheme (FSL format). Maps to a command-line argument: -fslgrad %s %s.

layout : ‘nii’ or ‘float’ or ‘char’ or ‘short’ or ‘int’ or ‘long’ or ‘double’

Specify the layout of the data in memory. The actual layout produced will depend on whether the output image format can support it. Maps to a command-line argument: -output %s (position: 5).

offset_bias : a float

Apply offset to the intensity values. Maps to a command-line argument: -scale %d (position: 7).

out_filename : a pathlike object or string representing a file

Output filename. Maps to a command-line argument: %s (position: -1).

output_datatype : ‘float32’ or ‘float32le’ or ‘float32be’ or ‘float64’ or ‘float64le’ or ‘float64be’ or ‘int64’ or ‘uint64’ or ‘int64le’ or ‘uint64le’ or ‘int64be’ or ‘uint64be’ or ‘int32’ or ‘uint32’ or ‘int32le’ or ‘uint32le’ or ‘int32be’ or ‘uint32be’ or ‘int16’ or ‘uint16’ or ‘int16le’ or ‘uint16le’ or ‘int16be’ or ‘uint16be’ or ‘cfloat32’ or ‘cfloat32le’ or ‘cfloat32be’ or ‘cfloat64’ or ‘cfloat64le’ or ‘cfloat64be’ or ‘int8’ or ‘uint8’ or ‘bit’

Specify output image data type. Valid choices are: float32, float32le, float32be, float64, float64le, float64be, int64, uint64, int64le, uint64le, int64be, uint64be, int32, uint32, int32le, uint32le, int32be, uint32be, int16, uint16, int16le, uint16le, int16be, uint16be, cfloat32, cfloat32le, cfloat32be, cfloat64, cfloat64le, cfloat64be, int8, uint8, bit. Maps to a command-line argument: -datatype %s (position: 2).

prs : a boolean

Assume that the DW gradients are specified in the PRS frame (Siemens DICOM only). Maps to a command-line argument: -prs (position: 3).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet.

replace_nan_with_zero : a boolean

Replace all NaN values with zero. Maps to a command-line argument: -zero (position: 8).

resample : a float

Apply scaling to the intensity values. Maps to a command-line argument: -scale %d (position: 6).

stride : a list of from 3 to 4 items which are an integer

Three to four comma-separated numbers specifying the strides of the output data in memory. The actual strides produced will depend on whether the output image format can support it. Maps to a command-line argument: -stride %s (position: 3).

voxel_dims : a list of from 3 to 3 items which are a float

Three comma-separated numbers giving the size of each voxel in mm. Maps to a command-line argument: -vox %s (position: 3).

converted : a pathlike object or string representing an existing file

Path/name of 4D volume in voxel order.

MRCrop

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: mrcrop.

Crops a NIFTI image using the mrcrop tool.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> mrcrop = mrt.MRCrop()
>>> mrcrop.inputs.in_file = 'sub-01_dwi.nii.gz'
>>> mrcrop.inputs.in_mask_file = 'sub-01_mod-dwi_desc-brain_mask.nii.gz'
>>> mrcrop.inputs.out_filename = 'sub-01_desc-cropped_dwi.nii.gz'
>>> mrcrop.run()  
in_file : a pathlike object or string representing an existing file

Input image. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug (position: 1).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

in_mask_file : a pathlike object or string representing an existing file

Input mask. Maps to a command-line argument: -mask %s (position: -3).

out_filename : a pathlike object or string representing a file

Output cropped image. Maps to a command-line argument: %s (position: -1).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet (position: 1).

cropped : a pathlike object or string representing an existing file

The output cropped image.

MRThreshold

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: mrthreshold.

Threshold an image using the mrthreshold tool.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> mrthresh = mrt.MRThreshold()
>>> mrthresh.inputs.in_file = 'sub-01_dwi.nii.gz'
>>> mrthresh.inputs.out_file = 'sub-01_desc-thresholded_dwi.nii.gz'
>>> mrthresh.run()  
in_file : a pathlike object or string representing an existing file

The input image to be thresholded. Maps to a command-line argument: %s (position: -3).

out_file : a pathlike object or string representing a file

The output binary image mask. Maps to a command-line argument: %s (position: -2).

abs_value : a float

Specify threshold value as absolute intensity. Maps to a command-line argument: -abs %s (position: -1).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

force_writing : a boolean

Force file overwriting. Maps to a command-line argument: -force.

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet.

thresholded : a pathlike object or string representing an existing file

Path/name of the output binary image mask.

MRTransform

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: mrtransform.

Apply spatial transformations or reslice images using the mrtransform tool.

Example

>>> from cmtklib.interfaces.mrtrix3 import MRTransform
>>> MRxform = MRTransform()
>>> MRxform.inputs.in_files = 'anat_coreg.mif'
>>> MRxform.inputs.interp = 'cubic'
>>> MRxform.run()  
in_files : a list of items which are any value

Input images to be transformed. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug (position: 1).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

flip_x : a boolean

Assume the transform is supplied assuming a coordinate system with the x-axis reversed relative to the MRtrix convention (i.e. x increases from right to left). This is required to handle transform matrices produced by FSL’s FLIRT command. This is only used in conjunction with the -reference option. Maps to a command-line argument: -flipx (position: 1).

interp : ‘nearest’ or ‘linear’ or ‘cubic’ or ‘sinc’

Set the interpolation method to use when reslicing (choices: nearest, linear, cubic, sinc. Default: cubic). Maps to a command-line argument: -interp %s.

invert : a boolean

Invert the specified transform before using it. Maps to a command-line argument: -inverse (position: 1).

out_filename : a pathlike object or string representing a file

Output image. Maps to a command-line argument: %s (position: -1).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet (position: 1).

reference_image : a pathlike object or string representing an existing file

In case the transform supplied maps from the input image onto a reference image, use this option to specify the reference. Note that this implicitly sets the -replace option. Maps to a command-line argument: -reference %s (position: 1).

replace_transform : a boolean

Replace the current transform by that specified, rather than applying it to the current transform. Maps to a command-line argument: -replace (position: 1).

template_image : a pathlike object or string representing an existing file

Reslice the input image to match the specified template image. Maps to a command-line argument: -template %s (position: 1).

transformation_file : a pathlike object or string representing an existing file

The transform to apply, in the form of a 4x4 ascii file. Maps to a command-line argument: -transform %s (position: 1).

out_file : a pathlike object or string representing an existing file

The output image of the transformation.

MRTrix3Base

Link to code

Bases: nipype.interfaces.base.core.CommandLine

MRtrix3Base base class inherited by the FilterTractogram class.

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

MRtrix_mul

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: mrcalc.

Multiply two images together using the mrcalc tool.

Examples

>>> from cmtklib.interfaces.mrtrix3 import MRtrix_mul
>>> multiply = MRtrix_mul()
>>> multiply.inputs.input1  = 'image1.nii.gz'
>>> multiply.inputs.input2  = 'image2.nii.gz'
>>> multiply.inputs.out_filename = 'result.nii.gz'
>>> multiply.run()  
input1 : a pathlike object or string representing an existing file

First input file. Maps to a command-line argument: %s (position: 1).

input2 : a pathlike object or string representing an existing file

Second input file. Maps to a command-line argument: %s (position: 2).

out_filename : a string

Output filename. Maps to a command-line argument: -mult %s (position: 3).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing a file

Multiplication result file.

SIFT2

Link to code

Bases: MRTrix3Base

Wrapped executable: tcksift2.

Determine an appropriate cross-sectional area multiplier for each streamline using tcksift2 [Smith2015SIFT2].

References

Smith2015SIFT2

Smith RE et al., Neuroimage, 2015, 119:338-51. <https://doi.org/10.1016/j.neuroimage.2015.06.092>.

Example

>>> import cmtklib.interfaces.mrtrix3 as cmp_mrt
>>> mrtrix_sift2 = cmp_mrt.SIFT2()
>>> mrtrix_sift2.inputs.in_tracks = 'tractogram.tck'
>>> mrtrix_sift2.inputs.in_fod = 'spherical_harmonics_image.nii.gz'
>>> mrtrix_sift2.inputs.out_file = 'sift2_fiber_weights.txt'
>>> mrtrix_sift2.run()  
in_fod : a pathlike object or string representing an existing file

Input image containing the spherical harmonics of the fibre orientation distributions. Maps to a command-line argument: %s (position: -2).

in_tracks : a pathlike object or string representing an existing file

Input track file in TCK format. Maps to a command-line argument: %s (position: -3).

act_file : a pathlike object or string representing an existing file

ACT 5TT image file. Maps to a command-line argument: -act %s (position: -4).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_file : a pathlike object or string representing a file

Output filtered tractogram. Maps to a command-line argument: %s (position: -1).

out_tracks : a pathlike object or string representing an existing file

Output filtered tractogram.

StreamlineTrack

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: tckgen.

Performs tractography using tckgen.

It can use one of the following models:

'dt_prob', 'dt_stream', 'sd_prob', 'sd_stream'

where ‘dt’ stands for diffusion tensor, ‘sd’ stands for spherical deconvolution, and ‘prob’ stands for probabilistic.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> strack = mrt.StreamlineTrack()
>>> strack.inputs.inputmodel = 'SD_PROB'
>>> strack.inputs.in_file = 'data.Bfloat'
>>> strack.inputs.seed_file = 'seed_mask.nii'
>>> strack.run()  
in_file : a pathlike object or string representing an existing file

The image containing the source data. The type of data required depends on the type of tracking as set in the preceding argument. For DT methods, the base DWI are needed. For SD methods, the SH harmonic coefficients of the FOD are needed. Maps to a command-line argument: %s (position: 2).

act_file : a pathlike object or string representing an existing file

Use the Anatomically-Constrained Tractography framework during tracking; provided image must be in the 5TT (five-tissue-type) format. Maps to a command-line argument: -act %s.

angle : a float

Set the maximum angle between successive steps (default is 90deg x stepsize / voxelsize). Maps to a command-line argument: -angle %s.

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

backtrack : a boolean

Allow tracks to be truncated. Maps to a command-line argument: -backtrack.

crop_at_gmwmi : a boolean

Crop streamline endpoints more precisely as they cross the GM-WM interface. Maps to a command-line argument: -crop_at_gmwmi.

cutoff_value : a float

Set the FA or FOD amplitude cutoff for terminating tracks (default is 0.5). Maps to a command-line argument: -cutoff %s.

desired_number_of_tracks : an integer

Sets the desired number of tracks. The program will continue to generate tracks until this number of tracks have been selected and written to the output file (default is 100 for *_STREAM methods, 1000 for *_PROB methods). Maps to a command-line argument: -select %d.

do_not_precompute : a boolean

Turns off precomputation of the legendre polynomial values. Warning: this will slow down the algorithm by a factor of approximately 4. Maps to a command-line argument: -noprecomputed.

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

gradient_encoding_file : a pathlike object or string representing an existing file

Gradient encoding, supplied as a 4xN text file where each line is in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix. Maps to a command-line argument: -grad %s.

initial_cutoff_value : a float

Sets the minimum FA or FOD amplitude for initiating tracks (default is twice the normal cutoff). Maps to a command-line argument: -seed_cutoff %s.

initial_direction : a list of from 2 to 2 items which are an integer

Specify the initial tracking direction as a vector. Maps to a command-line argument: -seed_direction %s.

inputmodel : ‘FACT’ or ‘iFOD1’ or ‘iFOD2’ or ‘Nulldist1’ or ‘Nulldist2’ or ‘SD_Stream’ or ‘Seedtest’ or ‘Tensor_Det’ or ‘Tensor_Prob’

Specify the tractography algorithm to use. Valid choices are: FACT, iFOD1, iFOD2, Nulldist1, Nulldist2, SD_Stream, Seedtest, Tensor_Det, Tensor_Prob (default: iFOD2). Maps to a command-line argument: -algorithm %s (position: -3). (Nipype default value: FACT)

mask_file : a pathlike object or string representing an existing file

Mask file. Only tracks within mask. Maps to a command-line argument: -mask %s.

maximum_number_of_seeds : an integer

Sets the maximum number of tracks to generate. The program will not generate more tracks than this number, even if the desired number of tracks hasn’t yet been reached (default is 1000 x number of streamlines). Maps to a command-line argument: -seeds %d.

maximum_tract_length : a float

Sets the maximum length of any track in millimeters (default is 500 mm). Maps to a command-line argument: -maxlength %s.

minimum_tract_length : a float

Sets the minimum length of any track in millimeters (default is 5 mm). Maps to a command-line argument: -minlength %s.

out_file : a pathlike object or string representing a file

Output data file. Maps to a command-line argument: %s (position: -1).

rk4 : a boolean

Use 4th-order Runge-Kutta integration (slower, but eliminates curvature overshoot in 1st-order deterministic methods). Maps to a command-line argument: -rk4.

seed_file : a pathlike object or string representing an existing file

Seed file. Maps to a command-line argument: -seed_image %s.

seed_gmwmi : a pathlike object or string representing an existing file

Seed from the grey matter - white matter interface (only valid if using ACT framework). Maps to a command-line argument: -seed_gmwmi %s. Requires inputs: act_file.

seed_spec : a list of from 4 to 4 items which are an integer

Seed specification in voxels and radius (x y z r). Maps to a command-line argument: -seed_sphere %s.

step_size : a float

Set the step size of the algorithm in mm (default is 0.5). Maps to a command-line argument: -step %s.

stop : a boolean

Stop track as soon as it enters any of the include regions. Maps to a command-line argument: -stop.

unidirectional : a boolean

Track from the seed point in one direction only (default is to track in both directions). Maps to a command-line argument: -seed_unidirectional.

tracked : a pathlike object or string representing an existing file

Output file containing reconstructed tracts.

Tensor2Vector

Link to code

Bases: nipype.interfaces.base.core.CommandLine

Wrapped executable: tensor2metric.

Generates a map of the major eigenvectors of the tensors in each voxel using tensor2metric.

Example

>>> import cmtklib.interfaces.mrtrix3 as mrt
>>> tensor2vector = mrt.Tensor2Vector()
>>> tensor2vector.inputs.in_file = 'dwi_tensor.mif'
>>> tensor2vector.run()  
in_file : a pathlike object or string representing an existing file

Diffusion tensor image. Maps to a command-line argument: %s (position: -2).

args : a string

Additional parameters to the command. Maps to a command-line argument: %s.

debug : a boolean

Display debugging messages. Maps to a command-line argument: -debug (position: 1).

environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’

Environment variables. (Nipype default value: {})

out_filename : a pathlike object or string representing a file

Output vector filename. Maps to a command-line argument: -vector %s (position: -1).

quiet : a boolean

Do not display information messages or progress status. Maps to a command-line argument: -quiet (position: 1).

vector : a pathlike object or string representing an existing file

The output image of the major eigenvectors of the diffusion tensor image.

cmtklib.interfaces.pycartool module

The PyCartool module provides Nipype interfaces with Cartool using pycartool.

CartoolInverseSolutionROIExtraction

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Use Pycartool to load inverse solutions estimated by Cartool.

Examples

>>> from cmtklib.interfaces.pycartool import CartoolInverseSolutionROIExtraction
>>> cartool_inv_sol = CartoolInverseSolutionROIExtraction()
>>> cartool_inv_sol.inputs.epochs_file = 'sub-01_task-faces_desc-preproc_eeg.set'
>>> cartool_inv_sol.inputs.invsol_file = 'sub-01_eeg.LORETA.is'
>>> cartool_inv_sol.inputs.mapping_spi_rois_file = 'sub-01_atlas-L2018_res-scale1.pickle.rois'
>>> cartool_inv_sol.inputs.lamd = 6
>>> cartool_inv_sol.inputs.svd_toi_begin = 0
>>> cartool_inv_sol.inputs.svd_toi_end = 0.25
>>> cartool_inv_sol.inputs.out_roi_ts_fname_prefix = 'sub-01_task-faces_atlas-L2008_res-scale1_rec-LORETA_timeseries'
>>> cartool_inv_sol.run()  

epochs_file : a string or os.PathLike object referring to an existing file

EEG epochs in .set format.

invsol_file : a string or os.PathLike object referring to an existing file

Inverse solution (.is file loaded with pycartool).

mapping_spi_rois_file : a string or os.PathLike object referring to an existing file

Cartool-reconstructed sources / parcellation ROI mapping file, loaded with pickle.

out_roi_ts_fname_prefix : a string

Output name prefix (no extension) for ROI time series files.

lamb : an integer

Regularization weight.

svd_toi_begin : a float

Start TOI for SVD projection.

svd_toi_end : a float

End TOI for SVD projection.

roi_ts_mat_file : a string or os.PathLike object

Path to output ROI time series file in .mat format.

roi_ts_npy_file : a string or os.PathLike object

Path to output ROI time series file in .npy format.

static CartoolInverseSolutionROIExtraction.apply_inverse_epochs_cartool(epochs_file, invsol_file, lamda, rois_file, svd_params)[source]
CreateSpiRoisMapping

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Create Cartool-reconstructed sources / parcellation ROI mapping file.

Examples

>>> from cmtklib.interfaces.pycartool import CreateSpiRoisMapping
>>> createrois = CreateSpiRoisMapping()
>>> createrois.inputs.roi_volume_file = '/path/to/sub-01_atlas-L2018_res-scale1_dseg.nii.gz'
>>> createrois.inputs.spi_file = '/path/to/sub-01_eeg.spi'
>>> createrois.inputs.out_mapping_spi_rois_fname = 'sub-01_atlas-L2018_res-scale1_eeg.pickle.rois'
>>> createrois.run()  
out_mapping_spi_rois_fname : a string

Name of output sources / parcellation ROI mapping file in .pickle.rois format.

roi_volume_file : a string or os.PathLike object

Parcellation file in NIfTI format.

spi_file : a string or os.PathLike object

Cartool reconstructed sources file in .spi format.

mapping_spi_rois_file : a string or os.PathLike object

Path to output Cartool-reconstructed sources / parcellation ROI mapping file in .pickle.rois format.

Submodules
cmtklib.carbonfootprint module
cmtklib.config module

Module that defines methods for handling CMP3 configuration files.

cmtklib.config.anat_load_config_json(pipeline, config_path)[source]

Load the JSON configuration file of an anatomical pipeline.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of anatomical pipeline

  • config_path (string) – Path of the JSON configuration file
cmtklib.config.anat_save_config(pipeline, config_path)[source]

Save the configuration file of an anatomical pipeline.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of anatomical pipeline

  • config_path (string) – Path of the configuration file
cmtklib.config.check_configuration_format(config_path)[source]

Check format of the configuration file.

Parameters

config_path (string) – Path to pipeline configuration file

Returns

ext – Format extension of the pipeline configuration file

Return type

‘.ini’ or ‘.json’
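In essence, the check reduces to inspecting the file extension. A minimal, self-contained sketch of this kind of check (the helper name and error handling here are illustrative, not the actual implementation):

```python
import os


def get_config_extension(config_path):
    # Return '.ini' or '.json' depending on the configuration file suffix;
    # reject anything else, mirroring the two formats CMP3 accepts.
    ext = os.path.splitext(config_path)[1]
    if ext not in ('.ini', '.json'):
        raise ValueError("Unsupported configuration format: %s" % ext)
    return ext
```

For example, `get_config_extension('ref_anatomical_config.json')` returns `'.json'`.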

cmtklib.config.check_configuration_version(config)[source]

Check the version of CMP3 used to generate a configuration.

Parameters

config (Dict) – Dictionary of configuration parameters loaded from JSON file

Returns

is_same – True if the version used to generate the configuration matches the version currently used (cmp.info.__version__).

Return type

bool
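Conceptually, the check compares a version string recorded in the configuration with cmp.info.__version__. A hedged sketch, assuming the generating version is stored under a Global section of the loaded dictionary (the exact key layout is an assumption):

```python
def is_same_version(config, current_version):
    # Look up the version recorded when the configuration was generated;
    # the "Global"/"version" key layout here is assumed for illustration.
    saved_version = config.get("Global", {}).get("version")
    return saved_version == current_version
```

For example, `is_same_version({"Global": {"version": "v3.2.0"}}, "v3.2.0")` returns `True`.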

cmtklib.config.convert_config_ini_2_json(config_ini_path)[source]

Convert a configuration file in old INI format to new JSON format.

Parameters

config_ini_path (string) – Path to configuration file in old INI format

Returns

config_json_path – Path to converted configuration file in new JSON format

Return type

string
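The conversion amounts to reading the INI sections into nested dictionaries and serializing them as JSON next to the original file. A simplified stdlib-only sketch, not the actual implementation (which may also normalize value types):

```python
import configparser
import json
import os


def ini_to_json(config_ini_path):
    # Read INI sections into a {section: {option: value}} mapping and
    # write it as JSON alongside the input file.
    parser = configparser.ConfigParser()
    parser.read(config_ini_path)
    data = {section: dict(parser.items(section)) for section in parser.sections()}
    config_json_path = os.path.splitext(config_ini_path)[0] + '.json'
    with open(config_json_path, 'w') as f:
        json.dump(data, f, indent=4)
    return config_json_path
```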

cmtklib.config.create_configparser_from_pipeline(pipeline, debug=False)[source]

Create a ConfigParser object from a Pipeline instance.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of pipeline

  • debug (bool) – If True, show additional prints

Returns

config – Instance of ConfigParser

Return type

Instance(configparser.ConfigParser)
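A ConfigParser can be assembled from a nested {section: {option: value}} mapping; the real function walks the pipeline's stage objects instead, so this is only an illustrative sketch:

```python
import configparser


def configparser_from_sections(sections):
    # Build a ConfigParser from a nested mapping, stringifying values
    # because ConfigParser stores all options as strings.
    parser = configparser.ConfigParser()
    for section, options in sections.items():
        parser[section] = {key: str(value) for key, value in options.items()}
    return parser
```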

cmtklib.config.create_subject_configuration_from_ref(project, ref_conf_file, pipeline_type, multiproc_number_of_cores=1)[source]

Create the pipeline configuration file for an individual subject from a reference given as input.

Parameters
  • project (cmp.project.ProjectInfo) – Instance of cmp.project.CMP_Project_Info

  • ref_conf_file (string) – Reference configuration file

  • pipeline_type ('anatomical', 'diffusion', 'fMRI', 'EEG') – Type of pipeline

  • multiproc_number_of_cores (int) – Number of threads used by Nipype

Returns

subject_conf_file – Configuration file of the individual subject

Return type

string

cmtklib.config.dmri_load_config_json(pipeline, config_path)[source]

Load the JSON configuration file of a diffusion pipeline.

Parameters
  • pipeline (Instance(cmp.pipelines.diffusion.diffusion.DiffusionPipeline)) – Instance of DiffusionPipeline

  • config_path (string) – Path of the JSON configuration file

cmtklib.config.dmri_save_config(pipeline, config_path)[source]

Save the JSON configuration file of a diffusion pipeline.

Parameters
  • pipeline (Instance(cmp.pipelines.diffusion.diffusion.DiffusionPipeline)) – Instance of DiffusionPipeline

  • config_path (string) – Path of the JSON configuration file

cmtklib.config.eeg_load_config_json(pipeline, config_path)[source]

Load the JSON configuration file of an EEG pipeline.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of EEG pipeline

  • config_path (string) – Path of the JSON configuration file

cmtklib.config.eeg_save_config(pipeline, config_path)[source]

Save the JSON configuration file of an EEG pipeline.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of EEG pipeline

  • config_path (string) – Path of the JSON configuration file
cmtklib.config.fmri_load_config_json(pipeline, config_path)[source]

Load the JSON configuration file of an fMRI pipeline.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of fMRI pipeline

  • config_path (string) – Path of the JSON configuration file

cmtklib.config.fmri_save_config(pipeline, config_path)[source]

Save the JSON configuration file of an fMRI pipeline.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of fMRI pipeline

  • config_path (string) – Path of the JSON configuration file
cmtklib.config.get_anat_process_detail_json(project_info, section, detail)[source]

Get the value for a parameter key (detail) in the stage section of the anatomical JSON config file.

Parameters
  • project_info (Instance(cmp.project.ProjectInfo)) – Instance of cmp.project.ProjectInfo class

  • section (string) – Stage section name

  • detail (string) – Parameter key

Returns

Return type

The parameter value

cmtklib.config.get_dmri_process_detail_json(project_info, section, detail)[source]

Get the value for a parameter key (detail) in the stage section of the diffusion JSON config file.

Parameters
  • project_info (Instance(cmp.project.ProjectInfo)) – Instance of cmp.project.ProjectInfo class

  • section (string) – Stage section name

  • detail (string) – Parameter key

Returns

Return type

The parameter value

cmtklib.config.get_eeg_process_detail_json(project_info, section, detail)[source]

Get the value for a parameter key (detail) in the stage section of the EEG JSON config file.

Parameters
  • project_info (Instance(cmp.project.CMP_Project_Info)) – Instance of cmp.project.CMP_Project_Info class

  • section (string) – Stage section name

  • detail (string) – Parameter key

Returns

Return type

The parameter value

cmtklib.config.get_fmri_process_detail_json(project_info, section, detail)[source]

Get the value for a parameter key (detail) in the stage section of the fMRI JSON config file.

Parameters
  • project_info (Instance(cmp.project.ProjectInfo)) – Instance of cmp.project.ProjectInfo class

  • section (string) – Stage section name

  • detail (string) – Parameter key

Returns

Return type

The parameter value

cmtklib.config.get_process_detail_json(project_info, section, detail)[source]

Get the value for a parameter key (detail) in the global section of the JSON config file.

Parameters
  • project_info (Instance(cmp.project.ProjectInfo)) – Instance of cmp.project.ProjectInfo class

  • section (string) – Stage section name

  • detail (string) – Parameter key

Returns

Return type

The parameter value

cmtklib.config.save_configparser_as_json(config, config_json_path, ini_mode=False, debug=False)[source]

Save a ConfigParser to JSON file.

Parameters
  • config (Instance(configparser.ConfigParser)) – Instance of ConfigParser

  • config_json_path (string) – Output path of JSON configuration file

  • ini_mode (bool) – If True, handles all content stored in strings

  • debug (bool) – If True, show additional prints
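The conversion this function performs can be illustrated with the standard library alone. The snippet below is a minimal sketch, not CMP3's actual implementation: it flattens a ConfigParser into a section-keyed dictionary suitable for json.dump (section and key names are hypothetical):

```python
import configparser
import json

def configparser_to_json_dict(config):
    """Flatten a ConfigParser into a {section: {key: value}} dictionary."""
    return {section: dict(config.items(section)) for section in config.sections()}

config = configparser.ConfigParser()
config.read_dict({"Global": {"process_center": "CHUV"},
                  "segmentation_stage": {"seg_tool": "Freesurfer"}})

as_dict = configparser_to_json_dict(config)
print(json.dumps(as_dict, indent=2))
```

Note that, unlike this sketch, the real function also post-processes values when ini_mode is enabled, since INI files store every value as a string.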

cmtklib.config.set_pipeline_attributes_from_config(pipeline, config, debug=False)[source]

Set the pipeline stage attributes given a configuration.

Parameters
  • pipeline (Instance(Pipeline)) – Instance of pipeline

  • config (Dict) – Dictionary of configuration parameter loaded from the JSON configuration file

  • debug (bool) – If True, show additional prints
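The mechanism at work here is a dictionary-to-attribute copy. A minimal illustrative sketch (the Stage class and its attributes are hypothetical, not CMP3's real stage classes):

```python
class Stage:
    """Hypothetical stage holding configurable attributes."""
    def __init__(self):
        self.seg_tool = None
        self.number_of_threads = 1

def set_attributes_from_config(stage, config_section):
    # Copy each recognized key of the configuration section onto the stage.
    for key, value in config_section.items():
        if hasattr(stage, key):
            setattr(stage, key, value)

stage = Stage()
set_attributes_from_config(stage, {"seg_tool": "Freesurfer", "number_of_threads": 4})
```

The hasattr guard means unknown keys in a configuration file are silently ignored rather than creating spurious attributes.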

cmtklib.connectome module

Module that defines CMTK functions and Nipype interfaces for connectome mapping.

DmriCmat

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Creates the structural connectivity matrices for a given parcellation scheme.

Examples

>>> from cmtklib.connectome import DmriCmat
>>> cmat = DmriCmat()
>>> cmat.inputs.base_dir = '/my_directory'
>>> cmat.inputs.track_file = '/path/to/sub-01_tractogram.trk'
>>> cmat.inputs.roi_volumes = ['/path/to/sub-01_space-DWI_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-DWI_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-DWI_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-DWI_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-DWI_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> cmat.inputs.roi_graphmls = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.graphml']
>>> cmat.inputs.parcellation_scheme = 'Lausanne2018'
>>> cmat.inputs.output_types = ['gpickle','mat','graphml']
>>> cmat.run()  
track_file : a list of items which are a pathlike object or string representing an existing file

Tractography result.

additional_maps : a list of items which are a pathlike object or string representing a file

Additional calculated maps (ADC, gFA, …).

atlas_info : a dictionary with keys which are any value and with values which are any value

Custom atlas information.

compute_curvature : a boolean

Compute curvature. (Nipype default value: True)

output_types : a list of items which are a string

Output types of the connectivity matrices.

parcellation_scheme : 'Lausanne2018' or 'NativeFreesurfer' or 'Custom'

Parcellation scheme. (Nipype default value: Lausanne2018)

roi_graphmls : a list of items which are a pathlike object or string representing an existing file

GraphML description of ROI volumes (Lausanne2018).

roi_volumes : a list of items which are a pathlike object or string representing an existing file

ROI volumes registered to diffusion space.

voxel_connectivity : a list of items which are a pathlike object or string representing an existing file

ProbtrackX connectivity matrices (# seed voxels x # target ROIs).

connectivity_matrices : a list of items which are a pathlike object or string representing a file

Connectivity matrices.

endpoints_file : a pathlike object or string representing a file

Numpy file storing the list of fiber endpoints.

endpoints_mm_file : a pathlike object or string representing a file

Numpy file storing the list of fiber endpoints in mm.

filtered_fiberslabel_files : a list of items which are a pathlike object or string representing a file

List of fiber start/end ROI parcellation labels after filtering.

final_fiberlabels_files : a list of items which are a pathlike object or string representing a file

List of fiber start/end ROI parcellation labels.

final_fiberslength_files : a list of items which are a pathlike object or string representing a file

List of fiber lengths.

streamline_final_file : a pathlike object or string representing a file

Final tractogram of fibers considered in the creation of connectivity matrices.

RsfmriCmat

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Creates the functional connectivity matrices for a given parcellation scheme.

It applies scrubbing (if enabled), computes the average GM ROI time-series, and computes the Pearson's correlation coefficient between each pair of GM ROI time-series.

Examples

>>> from cmtklib.connectome import RsfmriCmat
>>> cmat = RsfmriCmat()
>>> cmat.inputs.base_dir = '/my_directory'
>>> cmat.inputs.func_file = '/path/to/sub-01_task-rest_desc-preproc_bold.nii.gz'
>>> cmat.inputs.roi_volumes = ['/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> cmat.inputs.roi_graphmls = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.graphml']
>>> cmat.inputs.parcellation_scheme = 'Lausanne2018'
>>> cmat.inputs.apply_scrubbing = False
>>> cmat.inputs.output_types = ['gpickle','mat','graphml']
>>> cmat.run() 
func_file : a pathlike object or string representing an existing file

FMRI volume.

DVARS : a pathlike object or string representing an existing file

DVARS file if scrubbing is performed.

DVARS_th : a float

DVARS threshold.

FD : a pathlike object or string representing an existing file

FD file if scrubbing is performed.

FD_th : a float

FD threshold.

apply_scrubbing : a boolean

Apply scrubbing.

atlas_info : a dictionary with keys which are any value and with values which are any value

Custom atlas information.

output_types : a list of items which are a string

Output types of the connectivity matrices.

parcellation_scheme : 'Lausanne2018' or 'NativeFreesurfer' or 'Custom'

Parcellation scheme. (Nipype default value: Lausanne2018)

roi_graphmls : a list of items which are a pathlike object or string representing an existing file

GraphML description file for ROI volumes (used only if parcellation_scheme == Lausanne2018).

roi_volumes : a list of items which are a pathlike object or string representing an existing file

ROI volumes registered to functional space.

avg_timeseries : a list of items which are a pathlike object or string representing an existing file

ROI average timeseries.

connectivity_matrices : a list of items which are a pathlike object or string representing an existing file

Functional connectivity matrices.

scrubbed_idx : a pathlike object or string representing an existing file

Scrubbed indices.
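Conceptually, the functional connectivity computation reduces to averaging the BOLD signal over the voxels of each ROI and correlating the resulting time-series. The following is a minimal numpy sketch of that idea only; the actual interface operates on NIfTI volumes and additionally handles scrubbing and multiple output formats:

```python
import numpy as np

def functional_connectivity(bold, labels):
    """Average the time-series within each ROI label, then correlate ROI pairs."""
    rois = np.unique(labels[labels > 0])           # label 0 = background
    ts = np.vstack([bold[labels == roi].mean(axis=0) for roi in rois])
    return np.corrcoef(ts)                         # (n_rois x n_rois) Pearson matrix

rng = np.random.default_rng(1)
bold = rng.standard_normal((10, 40))               # 10 voxels, 40 time points
labels = np.array([1] * 5 + [2] * 5)               # two ROIs of 5 voxels each
fc = functional_connectivity(bold, labels)
```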

cmtklib.connectome.cmat(intrk, roi_volumes=None, roi_graphmls=None, parcellation_scheme=None, compute_curvature=True, additional_maps=None, output_types=None, atlas_info=None)[source]

Create the connection matrix for each resolution using fibers and ROIs.

Parameters
  • intrk (TRK file) – Reconstructed tractogram

  • roi_volumes (list) – List of parcellation files for a given parcellation scheme

  • roi_graphmls (list) – List of graphmls files that describes parcellation nodes

  • parcellation_scheme (['NativeFreesurfer', 'Lausanne2018', 'Custom']) –

  • compute_curvature (Boolean) –

  • additional_maps (dict) – A dictionary of key/value for each additional map where the value is the path to the map

  • output_types (['gpickle','mat','graphml']) –

  • atlas_info (dict) – Dictionary storing information such as path to files related to a parcellation atlas / scheme.
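At its core, building a structural connection matrix amounts to looking up the parcellation label at each fiber's two endpoints and incrementing the corresponding cell. The sketch below illustrates only that counting step with hypothetical toy data; the real function additionally handles fiber filtering, curvature, additional maps and the various output formats:

```python
import numpy as np

def count_connections(endpoint_labels, n_rois):
    """Count streamlines between each ROI pair from (start, end) label pairs."""
    cmat = np.zeros((n_rois, n_rois), dtype=int)
    for start, end in endpoint_labels:
        if start > 0 and end > 0:          # label 0 = outside the parcellation
            i, j = start - 1, end - 1      # labels are 1-based
            cmat[i, j] += 1
            if i != j:
                cmat[j, i] += 1            # structural connectivity is symmetric
    return cmat

# Toy endpoint label pairs for four streamlines
labels = [(1, 2), (2, 1), (1, 1), (3, 0)]
cmat = count_connections(labels, n_rois=3)
```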

cmtklib.connectome.compute_curvature_array(fib)[source]

Computes the curvature array.

cmtklib.connectome.create_endpoints_array(fib, voxelSize, print_info)[source]

Create the endpoints arrays for each fiber.

Parameters
  • fib (the fibers data) –

  • voxelSize (3-tuple) – It contains the voxel size of the ROI image

  • print_info (bool) – If True, print extra information

Returns

  • endpoints (matrix of size [#fibers, 2, 3]) – Contains for each fiber the voxel indices of its first and last point

  • endpoints_mm (matrix of size [#fibers, 2, 3]) – Endpoints in millimeter coordinates
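The construction of both arrays can be sketched as follows: take the first and last point of each fiber (in mm) and divide by the voxel size to obtain voxel indices. This is an illustrative sketch with toy data, not the actual implementation:

```python
import numpy as np

def endpoints_from_fibers(fibers, voxel_size):
    """Return endpoint voxel indices and mm coordinates for each fiber."""
    n = len(fibers)
    endpoints = np.zeros((n, 2, 3), dtype=int)
    endpoints_mm = np.zeros((n, 2, 3))
    for i, points in enumerate(fibers):
        endpoints_mm[i, 0] = points[0]               # first point, in mm
        endpoints_mm[i, 1] = points[-1]              # last point, in mm
        endpoints[i] = endpoints_mm[i] / voxel_size  # mm -> voxel index
    return endpoints, endpoints_mm

fibers = [np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 6.0]])]
endpoints, endpoints_mm = endpoints_from_fibers(fibers,
                                                voxel_size=np.array([2.0, 2.0, 2.0]))
```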

cmtklib.connectome.group_analysis_sconn(output_dir, subjects_to_be_analyzed)[source]

Perform group level analysis of structural connectivity matrices.

cmtklib.connectome.save_fibers(oldhdr, oldfib, fname, indices)[source]

Store a new Trackvis file fname containing only the fibers at the given indices.

Parameters
  • oldhdr (the tractogram header) – Tractogram header to use as reference

  • oldfib (the fibers data) – Input fibers

  • fname (string) – Output tractogram filename

  • indices (list) – Indices of fibers included

cmtklib.data.parcellation.util module

Module that defines CMTK utility functions for retrieving Lausanne parcellation files.

cmtklib.data.parcellation.util.get_lausanne2018_parcellation_annot(scale='scale1', hemi='lh')[source]

Return the path of the Freesurfer .annot file corresponding to a specific scale and hemisphere.

Parameters
  • scale ({'scale1', 'scale2', 'scale3', 'scale4', 'scale5'}) – Lausanne 2018 parcellation scale

  • hemi ({'lh', 'rh'}) – Brain hemisphere

Returns

annot_file_path – Absolute path to the queried .annot file

Return type

string

cmtklib.data.parcellation.util.get_lausanne2018_parcellation_mni_coords(scale='scale1')[source]

Return label regions cut coordinates in MNI space (mm).

Parameters

scale ({'scale1', 'scale2', 'scale3', 'scale4', 'scale5'}) – Lausanne 2018 parcellation scale

Returns

coords – Label regions cut coordinates in MNI space (mm)

Return type

numpy.array

cmtklib.data.parcellation.viz module

Module that defines CMTK utility functions for plotting Lausanne parcellation files.

cmtklib.data.parcellation.viz.plot_lausanne2018_surface_ctx(roi_values, scale='scale1', cmap='Spectral', save_fig=False, output_dir='./', filename=None, fmt='png')[source]

Plots a set of values on the cortical surface of a given Lausanne 2018 parcellation scale.

Parameters
  • roi_values (numpy array) – The values to be plotted on the surface. The array should have as many values as regions of interest

  • scale ({'scale1', 'scale2', 'scale3', 'scale4', 'scale5'}) – Scale of the Lausanne 2018 atlas to be used

  • cmap (string) – Colormap to use for plotting, default “Spectral”

  • save_fig (bool) – Whether to save the generated figures, default: False

  • output_dir (string) – Directory to save the figure, only used when save_fig == True

  • filename (string) – Filename of the saved figure (without the extension), only used when save_fig == True

  • fmt (string) – Format in which the figure is saved (Default: “png”; “pdf”, “svg”, and others are also accepted, depending on the backend used)

cmtklib.eeg module

Module that defines CMTK utility functions for the EEG pipeline.

cmtklib.eeg.save_eeg_connectome_file(output_dir, output_basename, con_res, roi_labels, output_types=None)[source]

Save a dictionary of connectivity matrices, with keys corresponding to the metrics, in the multiple output formats of CMP3.

Parameters
  • output_dir (str) – Output directory for the connectome file(s)

  • output_basename (str) – Base name for the connectome file(s), i.e., sub-01_atlas-L2018_res-scale1_conndata-network_connectivity

  • con_res (dict) – Dictionary of connectivity metric / matrix pairs

  • roi_labels (list) – List of parcellation roi labels extracted from the epo.pkl file generated with MNE

  • output_types (['tsv', 'gpickle', 'mat', 'graphml']) – List of output formats in which to save the connectome files. (Default: None)
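Of the supported output types, 'tsv' is the simplest to reproduce. The snippet below is an illustrative sketch (a hypothetical helper, not CMP3's implementation) of writing one labeled TSV per connectivity metric:

```python
import csv
import os
import tempfile

def save_connectome_tsv(output_dir, basename, con_res, roi_labels):
    """Write one TSV per connectivity metric, one row per ROI."""
    paths = []
    for metric, matrix in con_res.items():
        path = os.path.join(output_dir, f"{basename}_{metric}.tsv")
        with open(path, "w", newline="") as f:
            writer = csv.writer(f, delimiter="\t")
            writer.writerow(["label"] + roi_labels)       # header row
            for label, row in zip(roi_labels, matrix):
                writer.writerow([label] + list(row))
        paths.append(path)
    return paths

outdir = tempfile.mkdtemp()
paths = save_connectome_tsv(outdir, "sub-01_conndata-network_connectivity",
                            {"imcoh": [[0.0, 0.5], [0.5, 0.0]]}, ["roi1", "roi2"])
```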

cmtklib.diffusion module

Module that defines CMTK utility functions for the diffusion pipeline.

ExtractPVEsFrom5TT

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Create Partial Volume Estimation maps for CSF, GM and WM tissues from an MRtrix3 5TT image.

Examples

>>> from cmtklib.diffusion import ExtractPVEsFrom5TT
>>> pves = ExtractPVEsFrom5TT()
>>> pves.inputs.in_5tt = 'sub-01_desc-5tt_dseg.nii.gz'
>>> pves.inputs.ref_image = 'sub-01_T1w.nii.gz'
>>> pves.inputs.pve_csf_file = '/path/to/output_csf_pve.nii.gz'
>>> pves.inputs.pve_gm_file = '/path/to/output_gm_pve.nii.gz'
>>> pves.inputs.pve_wm_file = '/path/to/output_wm_pve.nii.gz'
>>> pves.run()  
in_5tt : a pathlike object or string representing an existing file

Input 5TT (4D) image.

pve_csf_file : a pathlike object or string representing a file

CSF Partial Volume Estimation volume estimated from the 5TT image.

pve_gm_file : a pathlike object or string representing a file

GM Partial Volume Estimation volume estimated from the 5TT image.

pve_wm_file : a pathlike object or string representing a file

WM Partial Volume Estimation volume estimated from the 5TT image.

ref_image : a pathlike object or string representing an existing file

Reference 3D image to be used to save 3D PVE volumes.

partial_volume_files : a list of items which are a pathlike object or string representing a file

CSF/GM/WM Partial Volume Estimation images estimated from the 5TT image.

FlipBvec

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Return a diffusion bvec file with flipped axes, as specified by the flipping_axis input.

Examples

>>> from cmtklib.diffusion import FlipBvec
>>> flip_bvec = FlipBvec()
>>> flip_bvec.inputs.bvecs = 'sub-01_dwi.bvecs'
>>> flip_bvec.inputs.flipping_axis = ['x']
>>> flip_bvec.inputs.delimiter = ' '
>>> flip_bvec.inputs.header_lines = 0
>>> flip_bvec.inputs.orientation = 'h'
>>> flip_bvec.run()  
bvecs : a pathlike object or string representing an existing file

Input diffusion gradient bvec file.

delimiter : a string

Delimiter used in the table.

flipping_axis : a list of items which are any value

List of axes to be flipped.

header_lines : an integer

Number of header lines in the table.

orientation : 'v' or 'h'

Orientation of the table.

bvecs_flipped : a pathlike object or string representing an existing file

Output bvec file with flipped axes.

FlipTable

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Flip axis and rewrite a gradient table.

Examples

>>> from cmtklib.diffusion import FlipTable
>>> flip_table = FlipTable()
>>> flip_table.inputs.table = 'sub-01_mod-dwi_gradient.txt'
>>> flip_table.inputs.flipping_axis = ['x']
>>> flip_table.inputs.orientation = 'v'
>>> flip_table.inputs.delimiter = ','
>>> flip_table.run()  
delimiter : a string

Delimiter used in the table.

flipping_axis : a list of items which are any value

List of axes to be flipped.

header_lines : an integer

Number of header lines in the table.

orientation : 'v' or 'h'

Orientation of the table.

table : a pathlike object or string representing an existing file

Input diffusion gradient table.

table : a pathlike object or string representing an existing file

Output table with flipped axes.
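Flipping an axis amounts to negating one column (or row) of the gradient table. A minimal numpy sketch of that operation, assuming the 'v' orientation means one gradient per row (a simplification of what FlipTable and FlipBvec do on files):

```python
import numpy as np

AXIS_INDEX = {"x": 0, "y": 1, "z": 2}

def flip_gradient_table(table, flipping_axis, orientation="v"):
    """Negate the chosen axes of a gradient table ('v': one gradient per row)."""
    flipped = np.array(table, dtype=float)
    if orientation == "h":                 # one gradient per column
        flipped = flipped.T
    for axis in flipping_axis:
        flipped[:, AXIS_INDEX[axis]] *= -1
    if orientation == "h":
        flipped = flipped.T
    return flipped

table = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
flipped = flip_gradient_table(table, flipping_axis=["x"])
```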

Tck2Trk

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Convert a tractogram from the MRtrix TCK format to the Trackvis TRK format.

Examples

>>> from cmtklib.diffusion import Tck2Trk
>>> tck_to_trk = Tck2Trk()
>>> tck_to_trk.inputs.in_tracks = 'sub-01_tractogram.tck'
>>> tck_to_trk.inputs.in_image = 'sub-01_desc-preproc_dwi.nii.gz'
>>> tck_to_trk.inputs.out_tracks = 'sub-01_tractogram.trk'
>>> tck_to_trk.run()  
in_image : a pathlike object or string representing an existing file

Input image used to extract the header.

in_tracks : a pathlike object or string representing an existing file

Input track file in MRtrix .tck format.

out_tracks : a pathlike object or string representing a file

Output track file in Trackvis .trk format.

out_tracks : a pathlike object or string representing an existing file

Output track file in Trackvis .trk format.

UpdateGMWMInterfaceSeeding

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Add extra Lausanne2018 structures to the Gray-matter/White-matter interface for tractography seeding.

Examples

>>> from cmtklib.diffusion import UpdateGMWMInterfaceSeeding
>>> update_gmwmi = UpdateGMWMInterfaceSeeding()
>>> update_gmwmi.inputs.in_gmwmi_file = 'sub-01_label-gmwmi_desc-orig_dseg.nii.gz'
>>> update_gmwmi.inputs.out_gmwmi_file = 'sub-01_label-gmwmi_desc-modif_dseg.nii.gz'
>>> update_gmwmi.inputs.in_roi_volumes = ['sub-01_space-DWI_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                                       'sub-01_space-DWI_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                                       'sub-01_space-DWI_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                                       'sub-01_space-DWI_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                                       'sub-01_space-DWI_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> update_gmwmi.run()  
in_gmwmi_file : a pathlike object or string representing an existing file

Input GMWM interface image used for streamline seeding.

in_roi_volumes : a list of items which are a pathlike object or string representing an existing file

Input parcellation images.

out_gmwmi_file : a pathlike object or string representing a file

Output GM WM interface used for streamline seeding.

out_gmwmi_file : a pathlike object or string representing an existing file

Output GM WM interface used for streamline seeding.

cmtklib.diffusion.compute_length_array(trkfile=None, streams=None, savefname='lengths.npy')[source]

Computes the length of the fibers in a tractogram and returns an array of lengths.

Parameters
  • trkfile (TRK file) – Path to the tractogram in TRK format

  • streams (the fibers data) – The fibers from which we want to compute the length

  • savefname (string) – Output filename to write the length array

Returns

fibers_length – Array of fiber lengths

Return type

numpy.array
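The length of a fiber is the sum of Euclidean distances between its consecutive points. A minimal numpy sketch of this computation on toy streamlines (illustrative only; the real function can also read the fibers from a TRK file):

```python
import numpy as np

def streamline_length(points):
    """Sum of Euclidean distances between consecutive points of one fiber."""
    diffs = np.diff(points, axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def compute_lengths(streamlines):
    return np.array([streamline_length(s) for s in streamlines])

streamlines = [np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]]),
               np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])]
lengths = compute_lengths(streamlines)  # → [5.0, 2.0]
```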

cmtklib.diffusion.filter_fibers(intrk, outtrk='', fiber_cutoff_lower=20, fiber_cutoff_upper=500)[source]

Filters a tractogram based on lower / upper cutoffs.

Parameters
  • intrk (TRK file) – Path to a tractogram file in TRK format

  • outtrk (TRK file) – Output path for the filtered tractogram

  • fiber_cutoff_lower (int) – Lower fiber length cutoff (Default: 20)

  • fiber_cutoff_upper (int) – Upper fiber length cutoff (Default: 500)
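The selection step can be sketched with numpy: keep the indices of fibers whose length lies within the two cutoffs. This is an illustrative sketch on a plain array of lengths, not the file-based implementation:

```python
import numpy as np

def filter_by_length(lengths, lower=20, upper=500):
    """Return indices of fibers whose length is within [lower, upper]."""
    lengths = np.asarray(lengths)
    return np.flatnonzero((lengths >= lower) & (lengths <= upper))

kept = filter_by_length([5.0, 30.0, 120.0, 800.0])
```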

cmtklib.functionalMRI module

Module that defines CMTK Nipype interfaces for the Functional MRI pipeline.

Detrending

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Apply linear, quadratic or cubic detrending on the Functional MRI signal.

Examples

>>> from cmtklib.functionalMRI import Detrending
>>> detrend = Detrending()
>>> detrend.inputs.base_dir = '/my_directory'
>>> detrend.inputs.in_file = '/path/to/sub-01_task-rest_desc-preproc_bold.nii.gz'
>>> detrend.inputs.gm_file = ['/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> detrend.inputs.mode = 'quadratic'
>>> detrend.run()  
in_file : a string or os.PathLike object referring to an existing file

fMRI volume to detrend.

gm_file : a list of items which are a string or os.PathLike object referring to an existing file

ROI files registered to fMRI space.

mode : 'linear' or 'quadratic' or 'cubic'

Detrending order.

out_file : a string or os.PathLike object referring to an existing file

Detrended fMRI volume.
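For a single voxel time-series, polynomial detrending fits a polynomial of the requested order and subtracts it. A minimal numpy sketch (illustrative only; the interface applies this voxel-wise over a 4D volume), with the mode names mapped to polynomial orders:

```python
import numpy as np

ORDER = {"linear": 1, "quadratic": 2, "cubic": 3}

def detrend_timeseries(ts, mode="quadratic"):
    """Remove a fitted polynomial trend from a 1D time series."""
    t = np.arange(len(ts))
    trend = np.polyval(np.polyfit(t, ts, ORDER[mode]), t)
    return ts - trend

t = np.arange(50, dtype=float)
ts = 0.1 * t**2 + np.sin(t)          # quadratic drift + oscillatory signal
detrended = detrend_timeseries(ts, mode="quadratic")
```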

DiscardTP

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Discards the first n time frames in functional MRI data.

Examples

>>> from cmtklib.functionalMRI import DiscardTP
>>> discard = DiscardTP()
>>> discard.inputs.base_dir = '/my_directory'
>>> discard.inputs.in_file = '/path/to/sub-01_task-rest_desc-preproc_bold.nii.gz'
>>> discard.inputs.n_discard = 5
>>> discard.run()  
in_file : a string or os.PathLike object referring to an existing file

Input 4D fMRI image.

n_discard : an integer

Number of first frames to discard.

out_file : a string or os.PathLike object referring to an existing file

Output 4D fMRI image with discarded frames.

NuisanceRegression

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Regress out nuisance signals (WM, CSF, movements) through GLM.

Examples

>>> from cmtklib.functionalMRI import NuisanceRegression
>>> nuisance = NuisanceRegression()
>>> nuisance.inputs.base_dir = '/my_directory'
>>> nuisance.inputs.in_file = '/path/to/sub-01_task-rest_desc-preproc_bold.nii.gz'
>>> nuisance.inputs.wm_file = '/path/to/sub-01_space-meanBOLD_label-WM_dseg.nii.gz'
>>> nuisance.inputs.csf_file = '/path/to/sub-01_space-meanBOLD_label-CSF_dseg.nii.gz'
>>> nuisance.inputs.motion_file = '/path/to/sub-01_motions.par'
>>> nuisance.inputs.gm_file = ['/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                            '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> nuisance.inputs.global_nuisance = False
>>> nuisance.inputs.csf_nuisance = True
>>> nuisance.inputs.wm_nuisance = True
>>> nuisance.inputs.motion_nuisance = True
>>> nuisance.inputs.nuisance_motion_nb_reg = 36
>>> nuisance.inputs.n_discard = 5
>>> nuisance.run()  
brainfile : a string or os.PathLike object

Eroded brain mask registered to fMRI space.

csf_file : a string or os.PathLike object

Eroded CSF mask registered to fMRI space.

csf_nuisance : a boolean

If True, perform CSF nuisance regression.

global_nuisance : a boolean

If True, perform global nuisance regression.

gm_file : a list of items which are a string or os.PathLike object

GM atlas files registered to fMRI space.

in_file : a string or os.PathLike object referring to an existing file

Input fMRI volume.

motion_file : a string or os.PathLike object

Motion nuisance effect.

motion_nuisance : a boolean

If True, perform motion nuisance regression.

n_discard : an integer

Number of volumes discarded from the fMRI sequence during preprocessing.

nuisance_motion_nb_reg : an integer

Number of regressors to use in motion nuisance regression.

wm_file : a string or os.PathLike object

Eroded WM mask registered to fMRI space.

wm_nuisance : a boolean

If True, perform WM nuisance regression.

averageCSF_mat : a string or os.PathLike object

Output matrix of CSF regression.

averageCSF_npy : a string or os.PathLike object

Output of CSF regression in npy format.

averageGlobal_mat : a string or os.PathLike object

Output matrix of global regression.

averageGlobal_npy : a string or os.PathLike object

Output of global regression in npy format.

averageWM_mat : a string or os.PathLike object

Output matrix of WM regression.

averageWM_npy : a string or os.PathLike object

Output of WM regression in npy format.

out_file : a string or os.PathLike object referring to an existing file

Output fMRI volume.
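GLM-based nuisance regression fits the nuisance regressors to each voxel time-series by least squares and keeps the residual. A minimal numpy sketch of that core step on synthetic data (illustrative only; the interface builds its design matrix from the WM/CSF/global/motion signals listed above):

```python
import numpy as np

def regress_out(signal, nuisance):
    """Return the residual of signal after regressing out nuisance columns."""
    design = np.column_stack([nuisance, np.ones(len(signal))])  # add intercept
    beta, *_ = np.linalg.lstsq(design, signal, rcond=None)
    return signal - design @ beta

rng = np.random.default_rng(0)
nuisance = rng.standard_normal((100, 2))          # e.g. WM and CSF regressors
signal = nuisance @ np.array([2.0, -1.0]) + 0.5   # pure nuisance + offset
clean = regress_out(signal, nuisance)             # residual is ~0 here
```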

Scrubbing

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Computes scrubbing parameters: FD and DVARS.

Examples

>>> from cmtklib.functionalMRI import Scrubbing
>>> scrub = Scrubbing()
>>> scrub.inputs.base_dir = '/my_directory'
>>> scrub.inputs.in_file = '/path/to/sub-01_task-rest_desc-preproc_bold.nii.gz'
>>> scrub.inputs.gm_file = ['/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                         '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                         '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                         '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                         '/path/to/sub-01_space-meanBOLD_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> scrub.inputs.wm_mask = '/path/to/sub-01_space-meanBOLD_label-WM_dseg.nii.gz'
>>> scrub.inputs.motion_parameters = '/path/to/sub-01_motions.par'
>>> scrub.run()  
in_file : a string or os.PathLike object referring to an existing file

fMRI volume to scrub.

gm_file : a list of items which are a string or os.PathLike object referring to an existing file

ROI volumes registered to fMRI space.

motion_parameters : a string or os.PathLike object referring to an existing file

Motion parameters from preprocessing stage.

wm_mask : a string or os.PathLike object referring to an existing file

WM mask registered to fMRI space.

dvars_mat : a string or os.PathLike object referring to an existing file

DVARS matrix for scrubbing.

dvars_npy : a string or os.PathLike object referring to an existing file

DVARS in .npy format.

fd_mat : a string or os.PathLike object referring to an existing file

FD matrix for scrubbing.

fd_npy : a string or os.PathLike object referring to an existing file

FD in .npy format.
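Framewise displacement (FD) is commonly computed, following Power et al., as the sum of the absolute frame-to-frame changes of the six motion parameters, with rotations converted to millimeters on a sphere of assumed radius 50 mm. The sketch below illustrates that convention and assumes translations occupy the first three columns (the column order depends on the motion-correction software):

```python
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """FD per frame: sum of |Δ| over 3 translations + 3 rotations (rad -> mm)."""
    deltas = np.abs(np.diff(motion, axis=0))
    deltas[:, 3:] *= radius                  # rotations: radians -> arc length in mm
    return np.concatenate([[0.0], deltas.sum(axis=1)])

motion = np.zeros((3, 6))                    # 3 frames, 6 motion parameters
motion[1, 0] = 0.2                           # 0.2 mm translation at frame 1
motion[2, 3] = 0.01                          # 0.01 rad rotation at frame 2
fd = framewise_displacement(motion)
```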

cmtklib.parcellation module

Module that defines CMTK utility functions and Nipype interfaces for anatomical parcellation.

CombineParcellations

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Creates the final parcellation.

It combines the original cortical and subcortical parcellation with the following extra segmented structures:

  • Segmentation of the 8 thalamic nuclei per hemisphere

  • Segmentation of 14 hippocampal subfields per hemisphere

  • Segmentation of 3 brainstem sub-structures

It also generates by default the corresponding (1) description of the nodes in graphml format and (2) color lookup tables in FreeSurfer format that can be displayed in freeview.

Examples

>>> parc_combine = CombineParcellations()
>>> parc_combine.inputs.input_rois = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                                  '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                                  '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                                  '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                                  '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> parc_combine.inputs.lh_hippocampal_subfields = '/path/to/lh_hippocampal_subfields.nii.gz'
>>> parc_combine.inputs.rh_hippocampal_subfields = '/path/to/rh_hippocampal_subfields.nii.gz'
>>> parc_combine.inputs.brainstem_structures = '/path/to/brainstem_structures.nii.gz'
>>> parc_combine.inputs.thalamus_nuclei = '/path/to/thalamus_nuclei.nii.gz'
>>> parc_combine.inputs.create_colorLUT = True
>>> parc_combine.inputs.create_graphml = True
>>> parc_combine.inputs.subjects_dir = '/path/to/output_dir/freesurfer'
>>> parc_combine.inputs.subject_id = 'sub-01'
>>> parc_combine.run()  
brainstem_structures : a pathlike object or string representing a file

Brainstem segmentation file.

create_colorLUT : a boolean

If True, create the color lookup table in Freesurfer format.

create_graphml : a boolean

If True, create the parcellation node description files in graphml format.

input_rois : a list of items which are a pathlike object or string representing an existing file

Input parcellation files.

lh_hippocampal_subfields : a pathlike object or string representing a file

Input hippocampal subfields file for left hemisphere.

rh_hippocampal_subfields : a pathlike object or string representing a file

Input hippocampal subfields file for right hemisphere.

subject_id : a string

Freesurfer subject id.

subjects_dir : a pathlike object or string representing a directory

Freesurfer subjects dir.

thalamus_nuclei : a pathlike object or string representing a file

Thalamic nuclei segmentation file.

verbose_level : 1 or 2

Verbose level (1: partial (default) / 2: full).

aparc_aseg : a pathlike object or string representing a file

Modified Freesurfer aparc+aseg file.

colorLUT_files : a list of items which are a pathlike object or string representing an existing file

Color lookup table files in Freesurfer format.

graphML_files : a list of items which are a pathlike object or string representing an existing file

Parcellation node description files in graphml format.

output_rois : a list of items which are a pathlike object or string representing an existing file

Output parcellation with all structures combined.

CombineParcellations.ismember(b)[source]

ComputeParcellationRoiVolumes

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Computes the volumes of each ROI for each parcellation scale.

Examples

>>> compute_vol = ComputeParcellationRoiVolumes()
>>> compute_vol.inputs.roi_volumes = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.nii.gz',
>>>                                   '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.nii.gz',
>>>                                   '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.nii.gz',
>>>                                   '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.nii.gz',
>>>                                   '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.nii.gz']
>>> compute_vol.inputs.roi_graphmls = ['/path/to/sub-01_atlas-L2018_desc-scale1_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale2_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale3_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale4_dseg.graphml',
>>>                             '/path/to/sub-01_atlas-L2018_desc-scale5_dseg.graphml']
>>> compute_vol.inputs.parcellation_scheme = 'Lausanne2018'
>>> compute_vol.run()  
parcellation_scheme : 'NativeFreesurfer' or 'Lausanne2018' or 'Custom'

Parcellation scheme. (Nipype default value: Lausanne2018)

roi_graphMLs : a list of items which are a pathlike object or string representing an existing file

GraphML description of ROI volumes (Lausanne2018).

roi_volumes : a list of items which are a pathlike object or string representing an existing file

ROI volumes registered to diffusion space.

roi_volumes_stats : a list of items which are a pathlike object or string representing a file

TSV files with computed parcellation ROI volumes.

Parcellate

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Subdivides segmented ROI file into smaller subregions.

This interface wraps the CMTK parcellation functions available in the cmtklib.parcellation module for all parcellation resolutions of a given scheme.

Example

>>> from cmtklib.parcellation import Parcellate
>>> parcellate = Parcellate()
>>> parcellate.inputs.subjects_dir = '/path/to/output_dir/freesurfer'
>>> parcellate.inputs.subject_id = 'sub-01'
>>> parcellate.inputs.parcellation_scheme = 'Lausanne2018'
>>> parcellate.run()  
subject_id : a string

Subject ID.

erode_masks : a boolean

If True erode the masks.

parcellation_scheme : ‘Lausanne2018’ or ‘NativeFreesurfer’

Parcellation scheme. (Nipype default value: Lausanne2018)

subjects_dir : a pathlike object or string representing a directory

Freesurfer main directory.

T1 : a pathlike object or string representing a file

T1 image file.

aparc_aseg : a pathlike object or string representing a file

Aparc+aseg image file (in native space).

aseg : a pathlike object or string representing a file

Aseg image file (in native space).

brain : a pathlike object or string representing a file

Brain-masked T1 image file.

brain_eroded : a pathlike object or string representing a file

Eroded brain file in original space.

brain_mask : a pathlike object or string representing a file

Brain mask file.

csf_eroded : a pathlike object or string representing a file

Eroded CSF file in original space.

csf_mask_file : a pathlike object or string representing a file

Cerebrospinal fluid (CSF) mask file.

gray_matter_mask_file : a pathlike object or string representing a file

Cortical gray matter (GM) mask file.

ribbon_file : a pathlike object or string representing an existing file

Image file detailing the cortical ribbon.

roi_files_in_structural_space : a list of items which are a pathlike object or string representing an existing file

ROI image resliced to the dimensions of the original structural image.

white_matter_mask_file : a pathlike object or string representing a file

White matter (WM) mask file.

wm_eroded : a pathlike object or string representing a file

Eroded WM file in original space.

ParcellateBrainstemStructures

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Parcellates the brainstem sub-structures using Freesurfer [Iglesias2015Brainstem].

References

Iglesias2015Brainstem

Iglesias et al., NeuroImage, 113, June 2015, 184-195. <http://www.nmr.mgh.harvard.edu/~iglesias/pdf/Neuroimage_2015_brainstem.pdf>

Examples

>>> parc_bstem = ParcellateBrainstemStructures()
>>> parc_bstem.inputs.subjects_dir = '/path/to/derivatives/freesurfer'
>>> parc_bstem.inputs.subject_id = 'sub-01'
>>> parc_bstem.run()  
subject_id : a string

Subject ID.

subjects_dir : a pathlike object or string representing a directory

Freesurfer main directory.

brainstem_structures : a pathlike object or string representing a file

Parcellated brainstem structures file.

ParcellateHippocampalSubfields

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Parcellates the hippocampal subfields using Freesurfer [Iglesias2015Hippo].

References

Iglesias2015Hippo

Iglesias et al., Neuroimage, 115, July 2015, 117-137. <http://www.nmr.mgh.harvard.edu/~iglesias/pdf/subfieldsNeuroimage2015preprint.pdf>

Examples

>>> parc_hippo = ParcellateHippocampalSubfields()
>>> parc_hippo.inputs.subjects_dir = '/path/to/derivatives/freesurfer'
>>> parc_hippo.inputs.subject_id = 'sub-01'
>>> parc_hippo.run()  
subject_id : a string

Subject ID.

subjects_dir : a pathlike object or string representing a directory

Freesurfer main directory.

lh_hipposubfields : a pathlike object or string representing a file

Left hemisphere hippocampal subfields file.

rh_hipposubfields : a pathlike object or string representing a file

Right hemisphere hippocampal subfields file.

ParcellateThalamus

Link to code

Bases: nipype.interfaces.base.core.BaseInterface

Parcellates the thalamus into 8 nuclei using an atlas-based method [Najdenovska18].

References

Najdenovska18

Najdenovska et al., Sci Data 5, 180270 (2018). <https://doi.org/10.1038/sdata.2018.270>

Examples

>>> parc_thal = ParcellateThalamus()
>>> parc_thal.inputs.T1w_image = '/path/to/sub-01_T1w.nii.gz'
>>> parc_thal.inputs.bids_dir = '/path/to/bids_dir'
>>> parc_thal.inputs.subject = '01'
>>> parc_thal.inputs.template_image = '/path/to/atlas/T1w.nii.gz'
>>> parc_thal.inputs.thalamic_nuclei_maps = '/path/to/atlas/nuclei/probability/map.nii.gz'
>>> parc_thal.inputs.subjects_dir = '/path/to/output_dir/freesurfer'
>>> parc_thal.inputs.subject_id = 'sub-01'
>>> parc_thal.inputs.ants_precision_type = 'float'
>>> parc_thal.run()  
T1w_image : a pathlike object or string representing a file

T1w image to be parcellated.

subject_id : a string

Subject ID.

subjects_dir : a pathlike object or string representing a directory

Freesurfer main directory.

template_image : a pathlike object or string representing a file

Template T1w.

thalamic_nuclei_maps : a pathlike object or string representing a file

Probability maps of thalamic nuclei (4D image) in template space.

ants_precision_type : ‘double’ or ‘float’

Precision type used during computation.

bids_dir : a pathlike object or string representing a directory

BIDS root directory.

session : a string

Session id.

subject : a string

Subject id.

inverse_warped_image : a pathlike object or string representing a file

Inverse warped template.

max_prob_registered : a pathlike object or string representing a file

Max probability label image (native).

prob_maps_registered : a pathlike object or string representing a file

Probabilistic map of thalamus nuclei (native).

thalamus_mask : a pathlike object or string representing a file

Thalamus mask.

transform_file : a pathlike object or string representing a file

Transform file.

warp_file : a pathlike object or string representing a file

Deformation file.

warped_image : a pathlike object or string representing a file

Template registered to T1w image (native).

cmtklib.parcellation.create_T1_and_Brain(subject_id, subjects_dir)[source]

Generates T1, T1 masked and aseg+aparc Freesurfer images in NIFTI format.

Parameters
  • subject_id (string) – Freesurfer subject id

  • subjects_dir (string) – Freesurfer subjects dir (Typically /path/to/output_dir/freesurfer)

cmtklib.parcellation.create_roi(subject_id, subjects_dir, v=True)[source]

Iteratively creates the ROI_%s.nii.gz files from the Lausanne2018 parcellation information contained in the graphml networks.

Parameters
  • subject_id (string) – Freesurfer subject id

  • subjects_dir (string) – Freesurfer subjects dir (Typically /path/to/output_dir/freesurfer)

  • v (Boolean) – Verbose mode

cmtklib.parcellation.create_wm_mask(subject_id, subjects_dir, v=True)[source]

Creates the white-matter mask using the Freesurfer ribbon as basis in the Lausanne2018 framework.

Parameters
  • subject_id (string) – Freesurfer subject id

  • subjects_dir (string) – Freesurfer subjects dir (Typically /path/to/output_dir/freesurfer)

  • v (Boolean) – Verbose mode

cmtklib.parcellation.crop_and_move_WM_and_GM(subject_id, subjects_dir)[source]

Convert Freesurfer images back to original native space when NativeFreesurfer parcellation scheme is used.

Parameters
  • subject_id (string) – Freesurfer subject id

  • subjects_dir (string) – Freesurfer subjects dir (Typically /path/to/output_dir/freesurfer)

cmtklib.parcellation.crop_and_move_datasets(subject_id, subjects_dir)[source]

Convert Freesurfer images back to original native space when Lausanne2018 parcellation schemes are used.

Parameters
  • subject_id (string) – Freesurfer subject id

  • subjects_dir (string) – Freesurfer subjects dir (Typically /path/to/output_dir/freesurfer)

cmtklib.parcellation.erode_mask(fsdir, mask_file)[source]

Erodes the mask and saves it in the Freesurfer subject directory.

Parameters
  • fsdir (string) – Freesurfer subject directory

  • mask_file (string) – Path to mask file

cmtklib.parcellation.extract(Z, shape, position, fill)[source]

Extract voxel neighbourhood.

Parameters
  • Z (numpy.array) – The original data

  • shape (tuple) – Tuple containing neighbourhood dimensions

  • position (tuple) – Tuple containing central point indexes

  • fill (value) – Value for the padding of Z

Returns

R – The output neighbourhood of the specified point in Z

Return type

numpy.array
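Illustratively, this kind of neighbourhood extraction can be sketched with numpy padding (the helper name extract_neighbourhood and its implementation below are assumptions for illustration, not the library code):

```python
import numpy as np

def extract_neighbourhood(Z, shape, position, fill):
    # Pad the array so neighbourhoods at the border are filled with `fill`.
    pad = [s // 2 for s in shape]
    Zp = np.pad(Z, [(p, p) for p in pad], mode="constant", constant_values=fill)
    # In the padded frame, the block starting at `position` is centred on
    # the original voxel at `position`.
    slices = tuple(slice(pos, pos + s) for pos, s in zip(position, shape))
    return Zp[slices]

Z = np.arange(9).reshape(3, 3)
# 3x3 neighbourhood of the corner voxel (0, 0), padded with -1.
R = extract_neighbourhood(Z, shape=(3, 3), position=(0, 0), fill=-1)
```

The padding step is what makes the `fill` parameter necessary: without it, neighbourhoods near the array border would be out of bounds.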

cmtklib.parcellation.generate_WM_and_GM_mask(subject_id, subjects_dir)[source]

Generates the white-matter and gray-matter masks when NativeFreesurfer parcellation is used.

Parameters
  • subject_id (string) – Freesurfer subject id

  • subjects_dir (string) – Freesurfer subjects dir (Typically /path/to/output_dir/freesurfer)

cmtklib.parcellation.get_parcellation(parcel='NativeFreesurfer')[source]

Returns a dictionary containing atlas information.

Note

atlas_info often used in the code refers to such a dictionary.

Parameters

parcel (parcellation scheme) – It can be: ‘NativeFreesurfer’ or ‘Lausanne2018’

cmtklib.util module

Module that defines CMTK Utility functions.

class cmtklib.util.BColors[source]

Bases: object

Utility class holding ANSI color escape codes.

BOLD = '\x1b[1m'
ENDC = '\x1b[0m'
FAIL = '\x1b[91m'
HEADER = '\x1b[95m'
OKBLUE = '\x1b[94m'
OKGREEN = '\x1b[92m'
UNDERLINE = '\x1b[4m'
WARNING = '\x1b[93m'
cmtklib.util.check_directory_exists(mandatory_dir)[source]

Makes sure the mandatory directory exists.

Raises

FileNotFoundError – Raised when the directory is not found.
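The behaviour amounts to a simple existence check, which can be sketched as follows (illustrative; the actual implementation may differ):

```python
import os

def check_directory_exists(mandatory_dir):
    """Raise FileNotFoundError if the mandatory directory is missing."""
    if not os.path.isdir(mandatory_dir):
        raise FileNotFoundError(f"Directory not found: {mandatory_dir}")

# An existing directory passes silently.
check_directory_exists(os.getcwd())
```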

cmtklib.util.convert_list_to_tuple(lists)[source]

Convert list of files to tuple of files.

(Duplicated with preprocessing, could be moved to utils in the future)

Parameters

lists ([bvecs, bvals]) – List of files containing bvecs and bvals

Returns

out_tuple – Tuple of files containing bvecs and bvals

Return type

(bvecs, bvals)

cmtklib.util.extract_freesurfer_subject_dir(reconall_report, local_output_dir=None, debug=False)[source]

Extract Freesurfer subject directory from the report created by Nipype Freesurfer Recon-all node.

Parameters
  • reconall_report (string) – Path to the recon-all report

  • local_output_dir (string) – Local output / derivatives directory

  • debug (bool) – If True, show printed outputs

Returns

fs_subject_dir – Freesurfer subject directory

Return type

string

cmtklib.util.extract_reconall_base_dir(file)[source]

Extract Recon-all base directory from a file.

Parameters

file (File) – File generated by Recon-all

Returns

out_path – Recon-all base directory

Return type

string

cmtklib.util.find_toolbox_derivatives_containing_file(bids_dir, fname, debug=False)[source]

Find the toolbox derivatives directory in the derivatives folder of the BIDS dataset containing a file.

This function is used by the EEGPipeline.

Parameters
  • bids_dir (str) – Path the BIDS root directory

  • fname (str) – Filename to find

  • debug (bool) – If True, print the directory found

Returns

out – Toolbox derivatives directory containing the file

Return type

str

cmtklib.util.get_basename(path)[source]

Return os.path.basename() of a path.

Parameters

path (str) – Path to a file

Returns

base – Basename of the file (its final path component)

Return type

str
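Since the function simply wraps os.path.basename, its behaviour can be shown directly (the path below is a placeholder):

```python
import os

path = '/path/to/output_dir/freesurfer/sub-01'
base = os.path.basename(path)  # final path component, here 'sub-01'
print(base)
```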

cmtklib.util.get_freesurfer_subject_id(file)[source]

Extract Freesurfer subject ID from file generated by recon-all.

Parameters

file (str) – File generated by recon-all

Returns

out – Freesurfer subject ID

Return type

str

cmtklib.util.get_pipeline_dictionary_outputs(datasink_report, local_output_dir=None, debug=False)[source]

Read the Nipype datasink report and return a dictionary of pipeline outputs.

Parameters
  • datasink_report (string) – Path to the datasink report

  • local_output_dir (string) – Local output / derivatives directory

  • debug (bool) – If True, print output dictionary

Returns

dict_outputs – Dictionary of pipeline outputs

Return type

dict

cmtklib.util.isavailable(file)[source]

Check if file is available and return the file if it is.

Used for debugging.

Parameters

file (File) – Input file

Returns

file – Output file

Return type

File

cmtklib.util.length(xyz, along=False)[source]

Euclidean length of track line.

Parameters
  • xyz (array-like shape (N,3)) – array representing x,y,z of N points in a track

  • along (bool, optional) – If True, return array giving cumulative length along track, otherwise (default) return scalar giving total length.

Returns

L – scalar in case of along == False, giving total length, array if along == True, giving cumulative lengths.

Return type

scalar or array shape (N-1,)

Examples

>>> import numpy as np
>>> xyz = np.array([[1,1,1],[2,3,4],[0,0,0]])
>>> expected_lens = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2])
>>> length(xyz) == expected_lens.sum()
True
>>> len_along = length(xyz, along=True)
>>> np.allclose(len_along, expected_lens.cumsum())
True
>>> length([])
0
>>> length([[1, 2, 3]])
0
>>> length([], along=True)
array([0])
cmtklib.util.magn(xyz, n=1)[source]

Returns the vector magnitude

Parameters
  • xyz (vector) – Input vector

  • n (int) – Tile by n if n>1 before return

cmtklib.util.mean_curvature(xyz)[source]

Calculates the mean curvature of a curve.

Parameters

xyz (array-like shape (N,3)) – array representing x,y,z of N points in a curve

Returns

m – float representing the mean curvature

Return type

float

Examples

Create a straight line and a semi-circle and print their mean curvatures

>>> from cmtklib.util import mean_curvature
>>> import numpy as np
>>> x = np.linspace(0, 1, 100)
>>> y = 0 * x
>>> z = 0 * x
>>> xyz = np.vstack((x, y, z)).T
>>> m = mean_curvature(xyz)  # mean curvature of a straight line
>>> theta = np.pi * np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> xyz = np.vstack((x, y, z)).T
>>> m = mean_curvature(xyz)  # mean curvature of a semi-circle
cmtklib.util.print_blue(message)[source]

Print blue-colored message

Parameters

message (string) – The string of the message to be printed

cmtklib.util.print_error(message)[source]

Print red-colored error message

Parameters

message (string) – The string of the message to be printed

cmtklib.util.print_warning(message)[source]

Print yellow-colored warning message

Parameters

message (string) – The string of the message to be printed

cmtklib.util.return_button_style_sheet(image, image_disabled=None, verbose=False)[source]

Return Qt style sheet for QPushButton with image

Parameters
  • image (string) – Path to image to use as icon when button is enabled

  • image_disabled (string) – Path to image to use as icon when button is disabled

  • verbose (Bool) – If True, print the style sheet (Default: False)

Returns

button_style_sheet – Qt style sheet for QPushButton with image

Return type

string

cmtklib.util.unicode2str(text)[source]

Convert a unicode to a string using system’s encoding.

Parameters

text (bytes) – Unicode bytes representation of a string

Returns

out_str – Output string

Return type

str
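A minimal sketch of such a conversion (illustrative; the actual helper may handle more cases):

```python
import sys

def unicode2str(text):
    # Decode bytes using the system's default encoding; pass strings through.
    if isinstance(text, bytes):
        return text.decode(sys.getdefaultencoding())
    return str(text)

out_str = unicode2str(b'sub-01')
```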

Adopting Datalad for collaboration

Datalad is a powerful tool for versioning and sharing raw and processed data, as well as for tracking data provenance (i.e. recording how data was processed). This page shares how we adopted Datalad to manage and process datasets with Connectome Mapper 3 in our lab, following the YODA principles as closely as possible.

You may ask: "What are the YODA principles?" They are the basic principles behind creating, sharing, and publishing reproducible, understandable, and open data analysis projects with DataLad.

For more details and tutorials on Datalad and YODA, please check the Datalad Handbook and the YODA principles.

Happy Collaborative and Reproducible Connectome Mapping!

Prerequisites

  • Python3 must be installed with Datalad and all dependencies. You can use the conda environment py39cmp-gui for instance. See Installation of py39cmp-gui for more installation details.

  • A recent version of git-annex and liblzma (included in py39cmp-gui for Ubuntu/Debian).

  • Docker must be installed on systems running Connectome Mapper 3. See Prerequisites of Connectome Mapper 3 for more installation instructions.

Copy BIDS dataset to server

Copy the raw BIDS dataset using rsync:

rsync -P -avz -e 'ssh' \
--exclude 'derivatives' \
--exclude 'code' \
--exclude '.datalad' \
--exclude '.git' \
--exclude '.gitattributes' \
/path/to/ds-example/* \
<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example

where:

  • -P is used to show progress during transfer

  • -v increases verbosity

  • -e specifies the remote shell to use (ssh)

  • -a indicates archive mode

  • -z enables file data compression during the transfer

  • --exclude DIR_NAME excludes the specified DIR_NAME from the copy

Remote datalad dataset creation on Server

Connect to Server

To connect with SSH:

ssh <SERVER_USERNAME>@<SERVER_IP_ADDRESS>
Creation of Datalad dataset

Go to the source dataset directory:

cd /archive/data/ds-example

Initialize the Datalad dataset:

datalad create -f -c text2git -D "Original example dataset on lab server" -d .

where:

  • -f forces the creation of the datalad dataset even if the directory is not empty

  • -c text2git configures Datalad to use git to manage text files

  • -D gives a brief description of the dataset

  • -d specifies the location where the Datalad dataset is created

Track all files contained in the dataset with Datalad:

datalad save -m "Source (Origin) BIDS dataset" --version-tag origin

where:

  • -m MESSAGE is the description of the state or the changes made to the dataset

  • --version-tag tags the state of the Dataset

Report on the state of dataset content:

datalad status -r
git log

Processing using the Connectome Mapper BIDS App on Alice’s workstation

Processed dataset creation

Initialize a datalad dataset with the YODA procedure:

datalad create -c text2git -c yoda \
-D "Processed example dataset by Alice with CMP3" \
/home/alice/data/ds-example-processed

This will create a datalad dataset with:

  • a code directory in your dataset

  • files for human consumption (README.md, CHANGELOG.md)

  • everything in the code/ directory configured to be tracked by Git, not git-annex

  • README.md and CHANGELOG.md configured in the root of the dataset to be tracked by Git

  • text files configured to be tracked by Git

Go to the created dataset directory:

cd /home/alice/data/ds-example-processed

Create the derivatives output directory:

mkdir derivatives
Raw BIDS dataset installation

Install the remote datalad dataset ds-example in /home/alice/data/ds-example-processed/input/:

datalad install -d . -s ssh://<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example \
/home/alice/data/ds-example-processed/input/

where:

  • -s SOURCE specifies the URL or local path of the installation source

Get T1w and Diffusion images to be processed

For reproducibility, create and write datalad get commands to get_required_files_for_analysis.sh:

echo "datalad get input/sub-*/ses-*/anat/sub-*_T1w.nii.gz" > code/get_required_files_for_analysis.sh
echo "datalad get input/sub-*/ses-*/dwi/sub-*_dwi.nii.gz" >> code/get_required_files_for_analysis.sh
echo "datalad get input/sub-*/ses-*/dwi/sub-*_dwi.bvec" >> code/get_required_files_for_analysis.sh
echo "datalad get input/sub-*/ses-*/dwi/sub-*_dwi.bval" >> code/get_required_files_for_analysis.sh

Save the script to the dataset’s history:

datalad save -m "Add script to get the files required for analysis by Alice"

Execute the script:

sh code/get_required_files_for_analysis.sh
Run Connectome Mapper with Datalad

Run Connectome Mapper on all subjects:

datalad containers-run --container-name connectomemapper-bidsapp-<VERSION_TAG> \
--input code/ref_anatomical_config.json \
--input code/ref_diffusion_config.json \
--output derivatives \
/bids_dir /output_dir participant \
--anat_pipeline_config '/bids_dir/{inputs[0]}' \
--dwi_pipeline_config '/bids_dir/{inputs[1]}'

Note

datalad containers-run will take care of replacing each {inputs[i]} placeholder with the value specified by the i-th --input flag (indexing starts at 0).
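The substitution described in the note above can be illustrated with plain Python string formatting (a sketch of the behaviour, not DataLad's implementation):

```python
# Each --input value, in order, populates the corresponding {inputs[i]}
# placeholder in the command line.
inputs = ["code/ref_anatomical_config.json", "code/ref_diffusion_config.json"]
cmd = ("--anat_pipeline_config /bids_dir/{inputs[0]} "
       "--dwi_pipeline_config /bids_dir/{inputs[1]}")
resolved = cmd.format(inputs=inputs)
print(resolved)
```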

Save the state:

datalad save -m "Alice's test dataset on local \
workstation processed by connectomemapper-bidsapp:<VERSION_TAG>, {Date/Time}" \
--version-tag processed-<date>-<time>

Report on the state of dataset content:

datalad status -r
git log
Configure a datalad dataset target on the Server

Create a remote dataset repository and configure it as a dataset sibling to be used as a publication target:

datalad create-sibling --name remote -d . \
<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example-processed

See the documentation of datalad create-sibling command for more details.

Update the remote datalad dataset

Push the datalad dataset with data derivatives to the server:

datalad push -d . --to remote

Note

--to remote specifies the remote dataset sibling i.e. ssh://<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example-processed previously configured.

Uninstall all files accessible from the remote

With DataLad, you do not have to keep these inputs around: you can safely uninstall them without losing the ability to reproduce the analysis:

datalad uninstall input/sub-*/*

Local collaboration with Bob for Electrical Source Imaging

Processed dataset installation on Bob’s workstation

Install the processed datalad dataset ds-example-processed in /home/bob/data/ds-example-processed:

datalad install -s ssh://<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example-processed  \
/home/bob/data/ds-example-processed

Go to datalad dataset clone directory:

cd /home/bob/data/ds-example-processed
Get connectome mapper output files (Brain Segmentation and Multi-scale Parcellation) used by Bob in his analysis

For reproducibility, write datalad get commands to get_required_files_for_analysis_by_bob.sh:

echo "datalad get derivatives/cmp/sub-*/ses-*/anat/sub-*_mask.nii.gz" \
> code/get_required_files_for_analysis_by_bob.sh
echo "datalad get derivatives/cmp/sub-*/ses-*/anat/sub-*_class-*_dseg.nii.gz" \
>> code/get_required_files_for_analysis_by_bob.sh
echo "datalad get derivatives/cmp/sub-*/ses-*/anat/sub-*_scale*_atlas.nii.gz" \
>> code/get_required_files_for_analysis_by_bob.sh

Save the script to the dataset’s history:

datalad save -m "Add script to get the files required for analysis by Bob"

Execute the script:

sh code/get_required_files_for_analysis_by_bob.sh
Update derivatives

Update derivatives with data produced by Cartool:

cd /home/bob/data/ds-example
mkdir derivatives/cartool
cp [...]

Save the state:

datalad save -m "Bob's test dataset on local \
workstation processed by cartool:<CARTOOL_VERSION>, {Date/Time}" \
--version-tag processed-<date>-<time>

Report on the state of dataset content:

datalad status -r
git log
Update the remote datalad dataset

Update the remote datalad dataset with data derivatives:

datalad push -d . --to origin

Note

--to origin specifies the origin dataset sibling i.e. ssh://<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example-processed from which it was cloned.

Uninstall all files accessible from the remote

Again, with DataLad you do not have to keep these files around: you can safely uninstall them without losing the ability to reproduce the analysis:

datalad uninstall derivatives/cmp-*/*
datalad uninstall derivatives/freesurfer-*/*
datalad uninstall derivatives/nipype-*/*

Authors

Sebastien Tourbier

Version

Revision: 2.1 (Last modification: 2022 Feb 09)

Running on a cluster (HPC)

Connectome Mapper 3 BIDS App can be run on a cluster using Singularity.

For your convenience, the Singularity image is automatically built alongside the Docker image using Singularity 3.8.4 and deployed to Sylabs.io (the Singularity equivalent of DockerHub) during continuous integration on CircleCI. It can be freely downloaded with the following command:

$ singularity pull library://connectomicslab/default/connectomemapper-bidsapp:latest

If you prefer, you can still build the Singularity image yourself using one of the two methods described in Conversion to a Singularity image.

A list of useful singularity commands can be found in Useful singularity commands. For more documentation about Singularity, please check the official documentation website.

Happy Large-Scale Connectome Mapping!

Prerequisites

Running the singularity image

The following example shows how to call from the terminal the Singularity image of the CMP3 BIDS App to perform both anatomical and diffusion pipelines for sub-01, sub-02 and sub-03 of a BIDS dataset whose root directory is located at ${localDir}:

$ singularity run --containall \
        --bind ${localDir}:/bids_dir --bind ${localDir}/derivatives:/output_dir \
            library://connectomicslab/default/connectomemapper-bidsapp:|release| \
            /bids_dir /output_dir participant --participant_label 01 02 03 \
            --anat_pipeline_config /bids_dir/code/ref_anatomical_config.json \
            --dwi_pipeline_config /bids_dir/code/ref_diffusion_config.json \
            --fs_license /bids_dir/code/license.txt \
            --number_of_participants_processed_in_parallel 3

Note

As you can see, the singularity run command differs slightly from docker run. The docker option flag -v is replaced by the singularity flag --bind to map local folders inside the container. Last but not least, while docker containers are executed in total isolation, singularity images MUST be run with the option flag --containall; otherwise your $HOME and $TMP directories and your local environment variables might be shared inside the container.

Conversion to a Singularity image

Two options exist for converting a Docker container image to a Singularity image. Let’s say we want to store the Singularity-compatible image file in ~/Softwares/singularity/.

Option 2 : Using singularity directly
$ singularity build ~/Softwares/singularity/cmp-v3.2.0.simg  \
        docker://sebastientourbier/connectomemapper-bidsapp:v3.2.0

This command will directly download the v3.2.0 release of the Docker image from DockerHub and convert it to a Singularity image.

Advantage(s): Can be executed on the cluster directly

Disadvantage(s): Has been shown to fail because of some docker / singularity version incompatibilities

Useful singularity commands

  • Display a container’s metadata:

    $ singularity inspect ~/Softwares/singularity/cmp-v3.2.0.simg
  • Clean cache:

    $ singularity cache clean
    

Authors

Sebastien Tourbier

Version

Revision: 2 (Last modification: 2021 Jan 04)

Tutorial notebooks

BSD 3-Clause License

Copyright (C) 2009-2022, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Hospital Center and University of Lausanne (UNIL-CHUV), Switzerland, & Contributors, All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • Neither the name of the Ecole Polytechnique Fédérale de Lausanne (EPFL) and Hospital Center and University of Lausanne (UNIL-CHUV) nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL “Ecole Polytechnique Fédérale de Lausanne (EPFL) and Hospital Center and University of Lausanne (UNIL-CHUV), Switzerland & Contributors” BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Changes

Version 3.1.0

Date: MM DD, 2022

This version fully integrates the new pipeline dedicated to EEG modality inside the BIDS App and the GUI.

What’s Changed

Updates

  • The conda environment files for cmpbidsappmanager (conda/environment.yml and conda/environment_macosx.yml) have been greatly modified (PR #212). This includes the following updates:

    • python: from 3.7 to 3.9

    • pip: from 21.3.1 to 22.2

    • indexed_gzip: from 1.6.4 to 1.6.13

    • git-annex (conda/environment.yml only): from 8.20211123 to 10.20220724

    • qt/pyqt 5.15.4 installed via conda-forge

    • pyqt5-sip 12.9.0 (version compatible with qt/pyqt 5.15.4) installed via conda-forge

    In addition, the created environment has been renamed py39cmp-gui to be consistent with the new python version installed in the environment.

  • In all conda environment *.yml and requirements.txt files, datalad and its container extension have been updated to the following versions (PR #209):

    • datalad: from 0.15.4 to 0.17.2 (See Datalad changelog for more details).

    • datalad-container: from 1.1.5 to 1.1.6

New features

  • The new pipeline dedicated to the EEG modality has been integrated into the BIDS App and cmpbidsappmanager (PR #201 and PR #205). In particular:

    • EEG pipeline configuration files are passed to the BIDS App or its docker/singularity python wrapper via the option flag --eeg_pipeline.

    • A new tab has been added to the configurator window of cmpbidsappmanager for the setup and saving of configuration files for the EEG pipeline.

    • A new tab has also been added to the output inspector window of cmpbidsappmanager to enable the visual inspection of outputs generated by the EEG pipeline.

    • The EEG configuration file can now be specified in the BIDS App interface window of cmpbidsappmanager, and the command to run the BIDS App has been updated.

    • A new EEGConnectomeStage stage has been implemented that builds the connectivity matrices from the extracted ROI time-series using the function spectral_connectivity_epochs of MNE Connectivity.

    • A new utility script visualize_eeg_pipeline_outputs.py has been implemented in the cmp/cli module, which is called by the output inspector window of cmpbidsappmanager.

  • Option to apply or not band-pass filtering in fMRI pipeline. (PR #200)

Code refactoring

  • Major refactoring of all the code related to the EEG pipeline (PR #198). This includes:

    • Renaming EEGLoaderStage to EEGPreprocessingStage,

    • Refactoring inputs/outputs of all interfaces of cmtklib.eeg, cmtklib.interfaces.mne, and cmtklib.interfaces.pycartool modules

    • Refactoring of all inputs, outputs, and config traits of the different stages

    • Modification of (1) cmp.pipelines.functional.eeg.py and (2) the tutorial notebook for the EEG pipeline that integrates all previously mentioned changes

Bug fix

  • Problems to install and launch cmpbidsappmanager on Ubuntu. (PR #212)

  • Pin nibabel to 3.2.2, as the imported functions of nibabel.trackvis have been moved since 4.0.0 and caused errors. (PR #XX)

  • Fix problem of traits not updated while making the diffusion pipeline config with ACT. (PR #200)

Documentation

  • Update/add documentation for the EEG pipeline (PR #208). This includes:

    • Update the BIDS flowchart displayed in README and in docs/index.rst with the EEG pipeline. The SVG can be found inside the docs/images/svg directory.

    • Make appropriate changes to docs/index.rst and README around the EEG pipeline

    • Show call to --eeg_pipeline in docs/usage.rst

    • Show how to configure and check outputs of EEG pipeline in docs/bidsappmanager.rst

    • Add link to VEPCON dataset as example with EEG in docs/cmpbids.rst

Software development life cycle

  • Optimization of resources stored in the cache and in the workspace. (PR #201)

  • Add tests 10 and 11 that run the EEG pipeline with the MNE and Cartool ESI workflow respectively. (PR #201)

Contributors

More…

Please check the main PR #149 page for more details.

Version 3.0.4

Date: June 15, 2022

This version mainly addresses all points raised by the JOSS review (https://github.com/openjournals/joss-reviews/issues/4248).

What’s Changed

Updates

Bug fix

  • Add missing cmp.stages.eeg to setup_pypi.py. (PR #166)

  • Add missing package data for parcellation in setup_pypi.py. (PR #182)

  • Use HTTPS instead of SSH for datalad clone in notebooks. (PR #181)

  • Add missing condition to handle custom BIDS files with session. (PR #183)

  • Integrate fix from Napari project for issues with menubar on Mac. (PR #174)

  • Use the most recent PyQt5 instead of the older PySide2 as the graphical backend of cmpbidsappmanager, which provides a fix to run Qt-based GUIs on macOS Big Sur. (PR #188)

Documentation

  • Correct the conda env create instruction in the README. (PR #164)

  • Refer to contributing guidelines in the README. (PR #167)

  • Use sphinx-copybutton extension in the docs. (PR #168)

  • Add notes about docker image and conda environment size and time to download. (PR #169)

JOSS paper

  • Integrate minor wording tweaks by @jsheunis. (PR #162)

  • Add higher level summary and rename the old summary to “Overview of Functionalities”. (PR #175)

License

  • The license has been updated to a pure 3-clause BSD license to comply with JOSS. (PR #163)

Software development life cycle

  • Migrate from Ubuntu 16.04 (now deprecated) to 20.04 on CircleCI. (PR #172)

Contributors

Version 3.0.3

Date: Feb 18, 2022

This version introduces the new pipeline dedicated to EEG modality with a tutorial, updates Freesurfer to 7.1.1, and adds a new tutorial that shows how to analyze the CMP3 connectomes.

What’s Changed

New features

Updates

  • Freesurfer has been updated from 6.1.0 to 7.1.1. See PR #147 for more details.

Bug fix

  • FIX: Lists of outputs were empty in the inspector window of the parcellation and fmri_connectome stages. See PR #145 for more details.

  • Correct the way the GM mask is generated and clean code in cmtklib/parcellation.py.

  • Add an interface to copy 001.mgz using a hardlink.

Documentation

  • Add documentation of new classes and functions introduced by the EEG pipeline.

  • Add two IPython notebooks in docs/notebooks that are integrated directly into the docs with nbsphinx:

    • analysis_tutorial.ipynb: Show how to interact, analyze, and visualize CMP3 outputs.

    • EEG_pipeline_tutorial.ipynb: Show how to use the new API dedicated to the EEG pipeline.

Contributors

More…

Please check the main PR #146 page for more details.

Version 3.0.2

Date: Jan 31, 2022

This version mostly introduces the capability to estimate the carbon footprint of CMP3 execution and fixes the dependency conflicts encountered during the creation of the conda environment. It incorporates in particular the following changes.

New features

  • Allow the estimation of the carbon footprint while using the BIDS App python wrappers and the GUI. Estimations are conducted using codecarbon. All functions supporting this feature have been implemented in the new module cmtklib.carbonfootprint. See PR #136 for more details.

Code changes

  • Creation of init_subject_derivatives_dirs() for AnatomicalPipeline, DiffusionPipeline, and fMRIPipeline, which returns the paths to the Nipype and CMP derivatives folders of a given subject / session for a given pipeline. This removes all the implicated code from the process() method and improves modularity and readability. In the future, the different functions could be merged, as there is a lot of code duplication between them.

  • AnatomicalPipeline, DiffusionPipeline, and fMRIPipeline workflows are run with the MultiProc plugin.

Bug fix

  • Major update of conda/environment.yml and conda/environment_macosx.yml to correct the dependency conflicts present in the previous version, as reported in issue #137. This has resulted in the following package updates:

    • pip: 20.1.1 -> 21.3.1

    • numpy: 1.19.2 -> 1.21.5

    • matplotlib: 3.2.2 -> 3.5.1

    • traits: 6.2.0 -> 6.3.2

    • traitsui: 7.0.0 -> 7.2.0

    • graphviz: 2.40.1 -> 2.50.0

    • configparser: 5.0.0 -> 5.2.0

    • git-annex: 8.20210127 -> 8.20211123

    • pyside2: 5.9.0a1 -> 5.13.2

    • indexed_gzip: 1.2.0 -> 1.6.4

    • cvxpy: 1.1.7 -> 1.1.18

    • fsleyes: 0.33.0 -> 1.3.3

    • mrtrix3: 3.0.2 -> 3.0.3

    • duecredit: 0.8.0 -> 0.9.1

    • mne: 0.20.7 -> 0.24.1

    • datalad: 0.14.0 -> 0.15.4

    • datalad-container: 1.1.2 -> 1.1.5

    • statsmodels: 0.11.1 -> 0.13.1

    • networkx: 2.4 -> 2.6.3

    • pydicom: 2.0.0 -> 2.2.2

    See commit 483931f for more details.

Documentation

  • Add description of carbon footprint estimation feature.

  • Improve description on how to use already computed Freesurfer derivatives.

Misc

  • Add bootstrap CSS and jquery JS as resources to cmtklib/data/report/carbonfootprint. They are used to display the carbon footprint report in the GUI.

  • Clean the resources related to parcellation in cmtklib/data/parcellation and rename all files and mentions of lausanne2008 to lausanne2018.

  • Removed unused cmtklib.interfaces.camino, cmtklib.interfaces.camino2trackvis, and cmtklib.interfaces.diffusion modules

  • Mark with # pragma: no cover the parts of the code we know won't be executed, so that Coverage.py excludes them.

  • Create and use a coveragerc file that runs Coverage.py with --concurrency=multiprocessing, allowing code inside Nipype interfaces, now managed by multiprocessing, to be tracked.
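As an illustration of the # pragma: no cover marker mentioned above, Coverage.py skips any function or branch carrying the marker while still measuring the rest. The function names below are hypothetical:

```python
def launch_gui():  # pragma: no cover
    # Hypothetical GUI-only path: excluded from coverage reports
    # because it is never exercised in headless CI runs.
    raise RuntimeError("requires a display")

def add(a, b):
    # Regular code path, measured by Coverage.py as usual.
    return a + b

result = add(2, 3)
print(result)  # 5
```

In the .coveragerc file, the multiprocessing support is enabled with a `concurrency = multiprocessing` entry in the `[run]` section, which is what the --concurrency=multiprocessing flag corresponds to.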

Code style

  • Correct a number of code style issues with class names.

Contributors

More…

Please check the main PR #140 page for more details.

Version 3.0.1

Date: Jan 05, 2022

This version is mostly a bug fix release that allows the python packages of Connectome Mapper 3 to be available on PyPI. It incorporates Pull Request #132 which includes the following changes.

Bug fix

  • Rename the project in setup.py and setup_pypi.py from "cmp" to "connectomemapper". A project named "cmp" already existed on PyPI, which caused continuous integration on CircleCI to fail during the last v3.0.0 release while uploading the python packages of CMP3 to PyPI.

Code refactoring

  • Make cmp.bidsappmanager.gui.py more lightweight by splitting the classes defined there into different files. (See Issue #129 for more discussion details)

  • Split the create_workflow() method of the RegistrationStage into the create_ants_workflow(), create_flirt_workflow(), and create_bbregister_workflow() methods. (See Issue #95 for more discussion details)

Code style

  • Correct a number of code style issues with class names

Contributors

Please check the main pull request 132 page for more details.

Version 3.0.0

Date: Dec 24, 2021

This version corresponds to the first official release of Connectome Mapper 3 (CMP3). It incorporates Pull Request #88 (>450 commits) which includes the following changes.

Updates

  • traits has been updated from 6.0.0 to 6.2.0.

  • traitsui has been updated from 6.1.3 to 7.0.0.

  • pybids has been updated from 0.10.2 to 0.14.0.

  • nipype has been updated from 1.5.1 to 1.7.0.

  • dipy has been updated from 1.1.0 to 1.3.0.

  • obspy has been updated from 1.2.1 to 1.2.2.

New features

  • CMP3 can take custom segmentation (brain, white-matter, gray-matter and CSF masks, Freesurfer’s aparcaseg - used for ACT/PFT) and parcellation files, as long as they comply with the BIDS Derivatives specifications, by providing the label value for the different entities in the filename. This has led to the creation of the new module cmtklib.bids.io, which provides different classes to represent the diversity of custom input BIDS-formatted files. (PR #88)

  • CMP3 generates generic label-index mapping tsv files along with the parcellation files, in accordance with BIDS Derivatives. This has led to the creation of the CreateBIDSStandardParcellationLabelIndexMappingFile and CreateCMPParcellationNodeDescriptionFilesFromBIDSFile interfaces, which allow us to create the BIDS label-index mapping file from the parcellation node description files employed by CMP3 (which include _FreeSurferColorLUT.txt and _dseg.graphml), and vice versa.

  • CMP3 provides python wrappers to the Docker and Singularity container images (connectomemapper3_docker and connectomemapper3_singularity) that generate and execute the appropriate command to run the BIDS App. (PR #109)
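A BIDS label-index mapping file is a tab-separated table pairing each label value in the parcellation image with a region name. The following sketch writes such a table with the standard library; the ROI rows and column set are hypothetical (BIDS dseg.tsv requires at least index and name), and the real files are produced by the CreateBIDSStandardParcellationLabelIndexMappingFile interface:

```python
import csv
import io

# Hypothetical ROI descriptions: (index, name, color) triples.
rois = [
    (1, "ctx-lh-precentral", "#aa3300"),
    (2, "ctx-rh-precentral", "#aa3300"),
]

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
writer.writerow(["index", "name", "color"])  # BIDS-style dseg.tsv header
writer.writerows(rois)
tsv = buf.getvalue()
print(tsv)
```

In practice the buffer would be a file such as sub-01_atlas-L2018_dseg.tsv sitting next to the parcellation image it describes.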

Major changes

  • The Lausanne2018 parcellation has completely replaced the old Lausanne2008 parcellation. In brief, the new parcellation was introduced to provide (1) symmetry of labels between hemispheres, and (2) a more optimal generation of the volumetric parcellation images, which are now generated at once from annot files. This fixes the issue of overwritten labels encountered in the process of creating the Lausanne2008 parcellation. Any code and data related to Lausanne2008 have been removed. If one still wishes to use this old parcellation scheme, one should use CMP3 v3.0.0-RC4.

Output updates

  • Directories for the derivatives produced by cmp (cmp, freesurfer, nipype) were renamed to cmp-<version>, freesurfer-<version>, and nipype-<version> to comply with BIDS 1.4.0+. (PR #3 (fork))

Code refactoring

  • Creation in AnatomicalPipeline, DiffusionPipeline, fMRIPipeline of create_datagrabber_node() and create_datasinker_node() methods to reduce the code in create_workflow().

  • The run(command) function of cmp.bidsappmanager.core has been moved to cmtklib.process, which is used by the python wrappers in cmp.cli.

Pipeline Improvements

  • Better handling of existing Freesurfer outputs. In this case, CMP3 no longer re-creates mri/orig/001.mgz and does not connect the recon-all interface anymore.

  • The 5TT, gray-matter / white-matter interface, and partial volume map images are created in the preprocessing stage of the diffusion pipeline only if necessary.

Code Style

  • Clean code and remove a number of commented lines that are now obsolete. Code related to the connection of nodes in the Nipype Workflow adopts a specific format and is protected from being reformatted by BLACK with the # fmt: off and # fmt: on tags.
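The # fmt: off / # fmt: on tags delimit a region that Black leaves untouched, so hand-aligned node-connection tables keep their layout. A small sketch with hypothetical node and port names:

```python
# fmt: off
# Nipype-style node connections keep their hand-aligned layout here;
# Black skips everything between the fmt: off and fmt: on tags.
connections = [
    ("preproc", "outputnode.dwi",  "registration", "inputnode.dwi"),
    ("parcel",  "outputnode.rois", "connectome",   "inputnode.rois"),
]
# fmt: on

print(len(connections))  # 2
```

Without the tags, Black would reflow the aligned columns into its default style, which makes these connection tables harder to scan.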

Documentation

  • Add instructions to use custom segmentation and parcellation files as inputs.

  • Add description in contributing page of format for code related to the connection of the nodes in a Nipype Workflow.

  • Add instructions to use the python wrappers for running the BIDS App. (PR #115)

  • Add notification about the removal of the old Lausanne2008 parcellation, and remove any other mentions in the documentation.

Software container

  • Define multiple build stages in Dockerfile, which can be run in parallel at build with BUILDKIT. (PR #88)

Software development life cycle

  • Update the list of outputs of circleci tests with the new names of directories produced by cmp in output_dir/.

  • Following major changes in the pricing plans of CircleCI, but also to improve its readability, circleci/config.yml has been dramatically refactored, including:

    • Use of BUILDKIT in docker build to take advantage of the multi-stage build

    • Reordering and modularization of the tests:

    • tests 01-02 (Docker): anatomical pipeline for each parcellation scheme

    • tests 03-06 (Docker): diffusion pipeline for dipy/mrtrix deterministic/probabilistic tractography

    • tests 07-08 (Docker): fMRI pipeline for FLIRT and BBRegister registrations

    • test 09 (Singularity): anatomical pipeline for Lausanne2018 scheme

    • Creation of commands for steps that are shared between jobs to reduce code duplication

    (PR #88)

Contributors

Please check the main pull request 88 page for more details.

Version 3.0.0-RC4

Date: March 07, 2021

This version corresponds to the fourth and final release candidate of Connectome Mapper 3 (CMP3). It incorporates the relatively large Pull Request #74 (~270 commits), which includes the following changes and marks the end of the release candidate phase.

New features

  • CMP3 pipeline configuration files adopt JSON as their new format. (PR #76)

  • CMP3 is compatible with PyPI for installation. (PR #78)

  • The BIDS naming convention for data derived from parcellation atlases now adopts the new BIDS entity atlas-<atlas_label> to distinguish data derived from different parcellation atlases. The entity desc-<scale_label>, previously used to distinguish between parcellation scales, has been replaced by res-<scale_label>. (PR #79)
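BIDS entities are key-value pairs embedded in the filename, so the atlas and scale of a derivative can be recovered by simple pattern matching. A minimal sketch (the filename and helper are illustrative, not CMP3 API):

```python
import re

def get_entity(filename, key):
    """Extract a BIDS entity value (e.g. atlas, res) from a filename."""
    match = re.search(rf"{key}-([a-zA-Z0-9]+)", filename)
    return match.group(1) if match else None

# Hypothetical derivative filename following the new convention
fname = "sub-01_atlas-L2018_res-scale3_dseg.nii.gz"
print(get_entity(fname, "atlas"), get_entity(fname, "res"))  # L2018 scale3
```

Under the old convention the scale would instead have appeared as desc-scale3, which the new res-<scale_label> entity replaces.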

Updates

  • The content of dataset_description.json for each derivatives folder has been updated to conform to BIDS version 1.4.0. (PR #79)

Code refactoring

  • Major refactoring of the cmtklib.config module, with the addition and replacement of a number of methods to handle JSON configuration files. (See the full diff on GitHub) Configuration files in the old INI format can be converted automatically with the two new methods check_configuration_format() and convert_config_ini_2_json(), which detect whether configuration files are in the INI format and perform the conversion. (PR #76)

  • Major changes to make cmp and cmpbidsappmanager compatible with the Python Package Index (pip) for package distribution and installation. This includes merging setup.py and setup_gui.py into one setup.py with a major refactoring to make pip happy, as well as the creation of a new cmp.cli module, the migration to cmp.cli, and the refactoring of the scripts connectomemapper3, showmatrix_gpickle, and cmpbidsappmanager, with correction of code style issues and addition of missing docstrings. (PR #78)
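The idea behind the INI-to-JSON conversion can be sketched with the standard library alone. This is not the cmtklib.config implementation (the real methods are check_configuration_format() and convert_config_ini_2_json()); the section and key below are hypothetical:

```python
import configparser
import json

def ini_str_to_json(ini_text):
    """Rough sketch of an INI -> JSON configuration conversion."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    # Map each INI section to a JSON object of its key/value pairs.
    data = {section: dict(parser[section]) for section in parser.sections()}
    return json.dumps(data, indent=4)

ini = "[diffusion_stage]\nrecon_processing_tool = Dipy\n"
converted = ini_str_to_json(ini)
print(converted)
```

A detection step, as in check_configuration_format(), would simply decide per file whether it parses as INI or as JSON before choosing a loader.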

Improvements

  • Clean parameters to be saved in configuration files with the new API. (PR #74)

  • Clean output printed by the cmpbidsappmanager Graphical User Interface. (PR #74)

  • Add to cmtklib.config the three new functions print_error, print_blue, and print_warning, which use different colors to differentiate general info (default color), errors (red), commands or actions (blue), and highlights or warnings (yellow). (PR #74)

  • Clean code and remove a number of commented lines that are now obsolete. (PR #74, PR #79)

Documentation

  • Review usage and add a note regarding the adoption of the new JSON format for configuration files. (PR #76)

  • Update tutorial on using CMP3 and Datalad for collaboration. (PR #77)

  • Update the installation instructions of cmpbidsappmanager to use pip install . (PR #78)

  • Update the list of outputs following the newly introduced BIDS derivatives naming convention. (PR #79)

Bug fixes

  • Correct attributes related to the diffusion imaging model type multishell. (PR #74)

  • Review code in cmtklib/connectome.py for saving functional connectome files in GRAPHML format. (PR #74)

Software Updates

  • Update version of datalad and dependencies (PR #77):

    • datalad[full]==0.13.0 to datalad[full]==0.14.0.

    • datalad-container==0.3.1 to datalad-container==1.1.2.

    • datalad_neuroimaging==0.2.0 to datalad-neuroimaging==0.3.1.

    • git-annex=8.20200617 to git-annex=8.20210127.

    • datalad-revolution was removed.

Software development life cycle

  • Improve code coverage by calling the methods check_stages_execution() and fill_stages_outputs() on each pipeline when executed with coverage. (PR #75)

  • Improve code coverage by saving in test-01 structural connectome files in MAT and GRAPHML format. (PR #74)

  • Improve code coverage by saving in test-07 functional connectome files in GRAPHML format. (PR #74)

  • Update the list of outputs for all tests. (PR #74)

  • Add a test-python-install job that tests the build and installation of the cmp and cmpbidsappmanager packages compatible with pip. (PR #78)

Please check the main pull request 74 page for more details.

Version 3.0.0-RC3

Date: February 05, 2021

This version corresponds to the third release candidate of Connectome Mapper 3. In particular, it integrates Pull Request #62 which includes:

Updates

  • MRtrix3 has been updated from 3.0_RC3_latest to 3.0.2.

  • Numpy has been updated from 1.18.5 to 1.19.2.

  • Nipype has been updated from 1.5.0 to 1.5.1.

  • Dipy has been updated from 1.0.0 to 1.3.0.

  • CVXPY has been updated from 1.1.5 to 1.1.7.

Documentation

  • Update outdated screenshots in the GUI documentation page at readthedocs, as reported on the CMTK user group.

  • Correction of multiple typos.

Bug fixes

  • Update code for Dipy tracking with the DTI model following major changes in Dipy 1.0 (fixes reported issue #54).

  • The update to Dipy 1.3.0 removed the deprecation warnings related to CVXPY when using MAP_MRI. (#63)

  • No longer set OMP_NUM_THREADS at execution, due to allocation errors raised when using the numpy dot function in Dipy.

Software development life cycle

  • Add Test 08 that runs anatomical and fMRI pipelines with: Lausanne2018 parcellation, FSL FLIRT co-registration, all nuisance regression, linear detrending and scrubbing

  • Add Test 09 that runs anatomical and dMRI pipelines with: Lausanne2018 parcellation, FSL FLIRT, Dipy SHORE, MRtrix SD_Stream tracking, MRtrix SIFT tractogram filtering

  • Remove deploy_singularity_latest from the workflow for the sake of space on Sylabs.io.

Please check the main pull request 62 page for more details.

Version 3.0.0-RC2-patch1

Date: February 4, 2021

This version fixes bugs in the second release candidate of Connectome Mapper 3 (v3.0.0-RC2). In particular, it includes:

Bug fixes

Software development life cycle

  • Remove publication of the Singularity image to sylabs.io when the master branch is updated for the sake of space (11GB limit)

Commits

  • CI: remove publication of latest tag image on sylabs.io for space (2 days ago) - commit c765f79

  • Merge pull request #66 from connectomicslab/v3.0.0-RC2-hotfix1 (3 days ago) - commit 0a2603e

  • FIX: update g2.node to g2.nodes when saving connectomes as graphml (fix #65) (6 days ago) - commit d629eef

  • FIX: enabled/disabled gray-out button “Run BIDS App” with Qt Style sheet [skip ci] (3 weeks ago) - commit 10e78d9

  • MAINT: removed commented lines in cmpbidsappmanager/gui.py [skip ci] (3 weeks ago) - commit 4cc11e7

  • FIX: check availability of modalities in the BIDS App manager window [skip ci] (3 weeks ago) - commit 80fbee2

  • MAINT: update copyright year [skip ci] (3 weeks ago) - commit f7d0ffb

  • CI: delete previous container with latest TAG on sylabs.io [skip ci] (4 weeks ago) - commit 15c9b18

  • DOC: update tag to latest in runonhpc.rst [skip ci] (4 weeks ago) - commit 3165bcc

  • CI: comment lines related to version for singularity push (4 weeks ago) - commit 3952d46

Version 3.0.0-RC2

Date: December 24, 2020

This version corresponds to the second release candidate of Connectome Mapper 3. In particular, it integrates Pull Request #45 which includes:

New feature

  • Add SIFT2 tractogram filtering (requested in #48, PR #52).

  • Add a tracker to support us in seeking new funding. Users are free to opt out and disable it with the new option flag --notrack.

  • Add options suggested by Theaud G et al. (2020) to better control factors that have an impact on reproducibility. These include:

    • Set the number of ITK threads used by ANTs for registration (option flag --ants_number_of_threads).

    • Set the seed of the random number generator used by ANTs for registration (option flag --ants_random_seed).

    • Set the seed of the random number generator used by MRtrix for tractography seeding and track propagation (option flag --mrtrix_random_seed).

  • Full support of Singularity (see Software development life cycle).

Code refactoring

  • A number of classes describing interfaces to fsl and mrtrix3 have been moved from cmtklib/interfaces/util.py to cmtklib/interfaces/fsl.py and cmtklib/interfaces/mrtrix3.py.

  • Capitalize the first letter of a number of class names.

  • Lowercase a number of variable names in cmtklib/parcellation.py.

Graphical User Interface

  • Improve the display of QPushButtons with images in the GUI (PR #52).

  • Make the window to control BIDS App execution scrollable.

  • Allow specifying a custom output directory.

  • Tune new options in the window to control BIDS App multi-threading (OpenMP and ANTs) and random number generators (ANTs and MRtrix).

Documentation

  • Full code documentation with numpydoc-style docstrings.

  • API documentation page at readthedocs.

Bug fixes

  • Fix the error reported in #17, if it is still occurring.

  • Review statements for creating contents of BIDS App entrypoint scripts to fix issue with Singularity converted images reported in #47.

  • Install dc package inside the BIDS App to fix the issue with FSL BET reported in #50.

  • Install libopenblas package inside the BIDS App to fix the issue with FSL EDDY_OPENMP reported in #49.

Software development life cycle

  • Add a new job test_docker_fmri that tests the fMRI pipeline.

  • Add build_singularity, test_singularity_parcellation, deploy_singularity_latest, and deploy_singularity_release jobs to build, test and deploy the Singularity image in CircleCI (PR #56).

Please check the main pull request 45 page for more details.

Version 3.0.0-RC1

Date: August 03, 2020

This version corresponds to the first release candidate of Connectome Mapper 3. In particular, it integrates Pull Request #40, where the last major changes prior to the official release have been made, including:

Migration to Python 3

  • Fix, automatically with 2to3 and manually, a number of Python 2 statements invalid in Python 3, including the print() function

  • Automatically correct PEP8 code style issues with autopep8

  • Manually correct a number of code style issues reported by Codacy (bandit/pylint/flake8)

  • Major dependency upgrades including:

    • dipy 0.15 -> 1.0 and related code changes in cmtklib/interfaces/dipy (Check here for more details about Dipy 1.0 changes)

    Warning

    The interface for tractography based on the Dipy DTI model and EuDX tractography, which changed drastically in Dipy 1.0, has not been updated yet; it will be part of the next release candidate.

    • nipype 1.1.8 -> 1.5.0

    • pybids 0.9.5 -> 0.10.2

    • pydicom 1.4.2 -> 2.0.0

    • networkX 2.2 -> 2.4

    • statsmodels 0.9.0 -> 0.11.1

    • obspy 1.1.1 -> 1.2.1

    • traits 5.1 -> 6.0.0

    • traitsui 6.0.0 -> 6.1.3

    • numpy 1.15.4 -> 1.18.5

    • matplotlib 1.1.8 -> 1.5.0

    • fsleyes 0.27.3 -> 0.33.0

    • mne 0.17.1 -> 0.20.7

    • sphinx 1.8.5 -> 3.1.1

    • sphinx_rtd_theme 0.4.3 -> 0.5.0

    • recommonmark 0.5.0 -> 0.6.0

New feature

  • Option to run Freesurfer recon-all in parallel and to specify the number of threads used not only by Freesurfer but also by all software relying on OpenMP for multi-threading. This can be achieved by running the BIDS App with the new option flag --number_of_threads.

Changes in BIDS derivatives

Code refactoring

  • Functions to save and load pipeline configuration files have been moved to cmtklib/config.py

Bug fixes

  • Major changes to the inspection of stage/pipeline outputs with the graphical user interface (cmpbidsappmanager), which no longer worked after the migration to Python 3

  • Fixes to compute the structural connectivity matrices following the migration to Python 3

  • Fixes to compute ROI volumetry for the Lausanne2008 and NativeFreesurfer parcellation schemes

  • Add missing renaming of the ROI volumetry file for the NativeFreesurfer parcellation scheme following BIDS

  • Create the mask used for computing peaks from the Dipy CSD model when performing Particle Filtering Tractography (development still on-going)

  • Add missing renaming of Dipy tensor-related maps (AD, RD, MD) following BIDS

  • Remove all references to the use of custom segmentation / parcellation / diffusion FOD image / tractogram inputs, inherited from CMP2 but no longer functional following the adoption of the BIDS standard inside CMP3.

Software development life cycle

  • Use Codacy to support code reviews and monitor code quality over time.

  • Use coveragepy in CircleCI during regression tests of the BIDS app and create code coverage reports published on our Codacy project page.

  • Add new regression tests in CircleCI to improve code coverage:
    • Test 01: Lausanne2018 (full) parcellation + Dipy SHORE + Mrtrix3 SD_STREAM tractography

    • Test 02: Lausanne2018 (full) parcellation + Dipy SHORE + Mrtrix3 ACT iFOD2 tractography

    • Test 03: Lausanne2018 (full) parcellation + Dipy SHORE + Dipy deterministic tractography

    • Test 04: Lausanne2018 (full) parcellation + Dipy SHORE + Dipy Particle Filtering tractography

    • Test 05: Native Freesurfer (Desikan-Killiany) parcellation

    • Test 06: Lausanne2008 parcellation (as implemented in CMP2)

  • Moved pipeline configurations for regression tests in CircleCI from config/ to .circle/tests/configuration_files

  • Moved lists of expected regression test outputs in CircleCI from .circle/ to .circle/tests/expected_outputs

Please check the pull request 40 page for more details.

Version 3.0.0-beta-RC2

Date: June 02, 2020

This version integrates Pull Request #33 which corresponds to the last beta release that still relies on Python 2.7. It includes in particular:

Upgrade

  • Uses fsleyes instead of fslview (now deprecated), which is now included in the conda environment of the GUI (py27cmp-gui).

New feature

  • Computes ROI volumetry, stored in the <output_dir>/sub-<label>(/ses-<label>)/anat folder and recognized by the _stats.tsv file name suffix.

Improved replicability

  • Sets the MRTRIX_RNG_SEED environment variable (used by MRtrix) and the seed for the numpy random number generator (numpy.random.seed())
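A minimal sketch of fixing these two sources of randomness in Python; the seed value is arbitrary and the environment variable follows MRtrix3's naming, which MRtrix tools read at startup:

```python
import os
import numpy as np

# Seed read by MRtrix3 from the environment (value chosen arbitrarily here).
os.environ["MRTRIX_RNG_SEED"] = "1234"

# Seed numpy's global random number generator, then draw twice to show
# that the same seed reproduces the same sequence.
np.random.seed(1234)
a = np.random.rand(3)
np.random.seed(1234)
b = np.random.rand(3)
print(np.array_equal(a, b))  # True
```

Fixing both seeds makes re-runs of the stochastic steps (e.g. probabilistic tractography) reproducible bit-for-bit, at the cost of hiding run-to-run variability.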

Bug fixes

  • Fixes the output inspector window of cmpbidsappmanager (GUI), which failed to find existing outputs after the adoption of /bids_dir and /output_dir in the bidsapp docker image.

  • Fixes the way the list of networkx edge attributes is obtained in inspect_outputs() of ConnectomeStage for the output inspector window of cmpbidsappmanager (GUI)

  • Added missing package dependencies (fury and vtk), which fixes the dipy_CSD execution error raised when trying to import the actor module from dipy.viz to save the results in a png

  • Fixes a number of unresolved references identified by the PyCharm code inspection tool

Code refactoring

  • Interfaces for fMRI processing were moved to cmtklib/functionalMRI.py.

  • Interface for fMRI connectome creation (rsfmri_conmat) moved to cmtklib/connectome.py

Please check the pull request 33 page for change details.

Version 3.0.0-beta-RC1

Date: March 26, 2020

This version integrates Pull Request #28 which includes in summary:

  • A major revision of continuous integration testing and deployment with CircleCI, which closes Issue 14 and integrates an in-house dataset published and available on Zenodo @ https://doi.org/10.5281/zenodo.3708962.

  • Multiple bug fixes and enhancements, including the closing of Issue 30, the update of mrtrix3 to the RC3 version, the bids-app run command generated by the GUI, and the location of the configuration and log files, which is now more BIDS compliant.

  • A change in the tagging of beta versions, which otherwise might not be meaningful with respect to the release date (especially when the expected date is delayed due to unexpected errors that take longer to fix than expected).

Please check the pull request 28 page for a full list of changes.

Version 3.0.0-beta-20200227

Date: February 27, 2020

This version addresses multiple issues to make successful conversion and run of the CMP3 BIDS App on HPC (Clusters) using Singularity.

  • Revised the build of the master and BIDS App images:

    • Install locales and set $LC_ALL and $LANG to make the freesurfer hippocampal subfields and brainstem segmentation (matlab-based) modules work when run in the converted Singularity image

    • BIDS input and output directories inside the BIDS App container are no longer the /tmp and /tmp/derivatives folders but /bids_dir and /output_dir. Warning: this might affect the use of Datalad containers (to be confirmed).

    • Fix the branch of mrtrix3 to check out

    • Updated metadata

  • Fix the configuration of CircleCI to no longer use the Docker layer cache feature, as it is no longer included in the free plan for open source projects.

  • Improved documentation, in which the latest version is now dynamically generated everywhere it appears.

Version 3.0.0-beta-20200206

Date: February 06, 2020

  • Implementation of an in-house Nipype interface to AFNI 3dBandpass, which can check output as ...+orig.BRIK or as ...+tlrc.BRIK (the latter can occur with HCP preprocessed fMRI data)

Version 3.0.0-beta-20200124

Date: January 24, 2020

  • Updated multi-scale parcellation with a new symmetric version:

    1. The right hemisphere labels were projected onto the left hemisphere to create a symmetric version of the multiscale cortical parcellation proposed by Cammoun2012.

    2. For scale 1, the boundaries of the projected regions over the left hemisphere were matched to the boundaries of the original parcellation for the left hemisphere.

    3. This transformation was then applied to the rest of the scales.

  • Updated documentation with list of changes

Citing

Important

  • If you are using Connectome Mapper 3 in your work, please acknowledge this software with the following two entries:

    1. Tourbier S, Aleman-Gomez Y, Mullier E, Griffa A, Wirsich J, Tuncel MA, Jancovic J, Bach Cuadra M, Hagmann P, (2022). Connectome Mapper 3: A Flexible and Open-Source Pipeline Software for Multiscale Multimodal Human Connectome Mapping. Journal of Open Source Software, 7(74), 4248, https://doi.org/10.21105/joss.04248

      @article{TourbierJOSS2022,
          doi = {10.21105/joss.04248},
          url = {https://doi.org/10.21105/joss.04248},
          year = {2022},
          publisher = {{The Open Journal}},
          volume = {7},
          number = {74},
          pages = {4248},
          author = {Tourbier, Sebastien and
                    Rue Queralt, Joan and
                    Glomb, Katharina and
                    Aleman-Gomez, Yasser and
                    Mullier, Emeline and
                    Griffa, Alessandra and
                    Schöttner, Mikkel and
                    Wirsich, Jonathan and
                    Tuncel, Anil and
                    Jancovic, Jakub and
                    Bach Cuadra, Meritxell and
                    Hagmann, Patric},
          title = {{Connectome Mapper 3: A Flexible and Open-Source
                    Pipeline Software for Multiscale Multimodal Human
                    Connectome Mapping}},
          journal = {{Journal of Open Source Software}}
      }
      
    2. Tourbier S, Aleman-Gomez Y, Mullier E, Griffa A, Wirsich J, Tuncel MA, Jancovic J, Bach Cuadra M, Hagmann P. (2022). Connectome Mapper 3: A Flexible and Open-Source Pipeline Software for Multiscale Multimodal Human Connectome Mapping (v3.2.0). Zenodo. http://doi.org/10.5281/zenodo.6645256.

      @software{TourbierZenodo6645256,
          author = {Tourbier, Sebastien and
                    Rue Queralt, Joan and
                    Glomb, Katharina and
                    Aleman-Gomez, Yasser and
                    Mullier, Emeline and
                    Griffa, Alessandra and
                    Schöttner, Mikkel and
                    Wirsich, Jonathan and
                    Tuncel, Anil and
                    Jancovic, Jakub and
                    Bach Cuadra, Meritxell and
                    Hagmann, Patric},
          title = {{Connectome Mapper 3: A Flexible and Open-Source
                    Pipeline Software for Multiscale Multimodal Human
                    Connectome Mapping}},
          month = jun,
          year = 2022,
          publisher = {Zenodo},
          version = {v3.2.0},
          doi = {10.5281/zenodo.6645256},
          url = {https://doi.org/10.5281/zenodo.6645256}
      }
      

Poster

_images/Tourbier_et_al_poster1892_OHBM2020.png

Contributors

Thanks goes to these wonderful people (emoji key):


Sébastien Tourbier

💻 🎨 🚇 ⚠️ 💡 🤔 🧑‍🏫 📆 👀 📢

joanrue

🐛 💻 ⚠️ 🤔

Katharina Glomb

🐛 💻 ⚠️ 🤔

anilbey

💻 ⚠️ 🤔 📖

jwirsich

🐛 💻 🤔

kuba-fidel

💻 📖 🤔

Stefan

💻 🤔

Mikkel Schöttner

💻 🤔

yasseraleman

💻 🤔

agriffa

💻 🤔

Emeline Mullier

💻

Patric Hagmann

🤔 🔍

Thanks also go to all these wonderful people who contributed to the first two versions of Connectome Mapper:

  • Collaborators from Signal Processing Laboratory (LTS5), EPFL, Lausanne:

    • Jean-Philippe Thiran

    • Leila Cammoun

    • Adrien Birbaumer (abirba)

    • Alessandro Daducci (daducci)

    • Stephan Gerhard (unidesigner)

    • Christophe Chênes (Cwis)

    • Oscar Esteban (oesteban)

    • David Romascano (davidrs06)

    • Alia Lemkaddem (allem)

    • Xavier Gigandet

  • Collaborators from Children’s Hospital, Boston:

    • Ellen Grant

    • Daniel Ginsburg (danginsburg)

    • Rudolph Pienaar (rudolphpienaar)

    • Nicolas Rannou (NicolasRannou)

This project follows the all-contributors specification. Contributions of any kind are welcome!

See contributing page for more details about how to join us!

Contributing to Connectome Mapper 3

Philosophy

The development philosophy for this new version of the Connectome Mapper is to:

  1. Enhance interoperability by working with datasets structured following the Brain Imaging Data Structure (BIDS) standard.

  2. Keep the processing code as much as possible outside of the main Connectome Mapper code, through the use and extension of existing Nipype interfaces and an external library (dubbed cmtklib).

  3. Separate the code of the graphical interface from the main Connectome Mapper code through inheritance of the classes of the main stages and pipelines.

  4. Enhance portability by freezing the computing environment with all software dependencies installed, through the adoption of the BIDS App framework relying on light software container technologies.

  5. Adopt best modern open-source software practices, which include continuously testing the build and execution of the BIDS App with code coverage, and following the PEP8 and PEP257 conventions for Python code and docstring style. The use of an integrated development environment such as PyCharm or SublimeText with a Python linter (code style checker) is strongly recommended.

  6. Follow the all contributors specification to acknowledge any kind of contribution.

This means that contributions in many different ways (discussed in the following subsections) are welcome and will be properly acknowledged! If you have contributed to CMP3 and are not listed as a contributor, please add yourself and make a pull request.

This also means that further development, typically the addition of new tools and configuration options, should follow this direction.

Contribution Types

Report Bugs

Report bugs at https://github.com/connectomicslab/connectomemapper3/issues.

If you are reporting a bug, please include:

  • Your operating system name and version.

  • Any details about your local setup that might be helpful in troubleshooting.

  • Detailed steps to reproduce the bug.

Fix Bugs

Look through the GitHub issues for bugs. Anything tagged with “bug” and “help wanted” is open to whoever wants to implement it.

Implement Features

Look through the GitHub issues for features. Anything tagged with “enhancement” and “help wanted” is open to whoever wants to implement it.

Possible enhancements typically fall into one of the following categories:

  1. Adding a configuration option to an existing stage

  2. Adding a new interface to cmtklib

  3. Adding a new stage

  4. Adding a new pipeline

Adding a new configuration option to an existing stage should be self-explanatory. If the addition is large enough to be considered a “sub-module” of an existing stage, see the Diffusion stage example.

Adding a new stage implies adding the stage folder to the cmp/stages and cmp/bidsappmanager/stages directories, modifying the parent pipeline accordingly, and inserting a new image in cmp/bidsappmanager/stages. Copy-pasting an existing stage (such as the segmentation stage) is recommended. Note that CMP3 adopts a specific style for code dedicated to the connection of stages and interfaces, which is as follows:

[...]
# fmt: off
anat_flow.connect(
    [
        (seg_flow, parc_flow, [("outputnode.subjects_dir", "inputnode.subjects_dir"),
                               ("outputnode.subject_id", "inputnode.subject_id")]),
        (seg_flow, anat_outputnode, [("outputnode.subjects_dir", "subjects_dir"),
                                     ("outputnode.subject_id", "subject_id")]),
        [...]
    ]
)
# fmt: on
[...]

The # fmt: off and # fmt: on flags protect the enclosed lines from being reformatted by Black.

Adding a new pipeline implies the creation of a new pipeline script and folder in the cmp/pipelines and cmp/bidsappmanager/pipelines directories. Again, copy-pasting an existing pipeline is the best starting point. The cmp/project.py and cmp/bidsappmanager/project.py files also need to be modified.

Each new module, class, or function should be properly documented with a docstring in accordance with the NumPy docstring style.
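As an illustration, a minimal NumPy-style docstring might look like the following sketch (the function and its parameters are hypothetical, not part of the CMP3 API):

```python
def connectivity_density(matrix):
    """Compute the density of a connectivity matrix.

    Parameters
    ----------
    matrix : list of list of float
        Square connectivity matrix, where a non-zero entry
        indicates an existing connection between two regions.

    Returns
    -------
    float
        Fraction of non-zero off-diagonal entries.
    """
    n = len(matrix)
    nonzero = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and matrix[i][j] != 0
    )
    return nonzero / (n * (n - 1))
```

The Parameters and Returns sections, with types on the first line and an indented description below, are the core of the convention; tools such as Sphinx with the numpydoc extension render them automatically in the generated documentation.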

Write Documentation

CMP3 could always use more documentation, whether as part of the official CMP3 docs, in docstrings, or even on the web in blog posts, articles, and such.

When you commit changes related to the documentation, please always insert [skip ci] at the end of your message, so that CircleCI does not run the full continuous integration of the whole project.

Submit Feedback

The best way to send feedback is to create an issue at https://github.com/connectomicslab/connectomemapper3/issues.

If you are proposing a feature:

  • Explain in detail how it would work.

  • Keep the scope as narrow as possible, to make it easier to implement.

  • Remember that this is a volunteer-driven project, and that contributions are welcome :)

Get Started!

Ready to contribute? Here’s how to set up Connectome Mapper 3 for local development.

  1. Fork the connectomemapper3 repo on GitHub.

  2. Clone your fork locally:

    git clone git@github.com:your_name_here/connectomemapper3.git
    cd connectomemapper3
    
  3. Create a branch for local development:

    git checkout -b name-of-your-bugfix-or-feature
    
  4. Now you can make your changes locally. If you add a new node in a pipeline or a completely new pipeline, we encourage you to rebuild the BIDS App Docker image (See BIDS App build instructions).

Note

Please keep each commit specific to the change it describes. It is highly advised to track un-staged files with git status and to stage the files involved in the change one by one with git add <file>. The use of git add . is highly discouraged. When all the files for a given change are staged, commit them with a brief message using git commit -m "[COMMIT_TYPE]: Your detailed description of the change.", where [COMMIT_TYPE] can be [FIX] for a bug fix, [ENH] for a new feature, [MAINT] for code maintenance and typo fixes, [DOC] for documentation, [CI] for continuous integration testing, [UPD] for a dependency update, or [MISC] for miscellaneous changes.

  5. When you’re done making changes, push your branch to GitHub:

    git push origin name-of-your-bugfix-or-feature
    
  6. Submit a pull request through the GitHub website.
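The commit workflow described in the note above can be sketched end to end in a throwaway repository (the file name and commit message below are illustrative, not real CMP3 files):

```shell
# Sketch of the recommended commit workflow in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "you@example.com"
git config user.name "Your Name"
echo "pass" > test_segmentation.py
git status --short                # review un-staged files first
git add test_segmentation.py      # stage files one by one, never 'git add .'
git commit -q -m "[FIX]: Handle missing subjects_dir in segmentation stage"
subject=$(git log -1 --pretty=%s)
echo "$subject"
```

The [COMMIT_TYPE] prefix in the commit subject makes the change category visible at a glance in git log output.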

Pull Request Guidelines

Before you submit a pull request, check that it meets these guidelines:

  1. If the pull request adds functionality, the docs and tests should be updated (See documentation build instructions).

  2. Python code and docstring should comply with PEP8 and PEP257 standards.

  3. The pull request should pass all tests on GitHub.

How to build the BIDS App locally
  1. Go to the clone directory of your fork and run the script build_bidsapp.sh:

    cd connectomemapper3
    sh scripts/build_bidsapp.sh
    

Note

The version tag of the image is extracted from cmp/info.py. You might want to change the version in this file to avoid overwriting another existing image with the same version.

How to build the documentation locally

To generate the documentation:

  1. Install the CMP3 conda environment py39cmp-gui:

    $ cd connectomemapper3
    $ conda env create -f environment.yml
    
  2. Activate CMP3 conda environment py39cmp-gui:

    $ conda activate py39cmp-gui
    
  3. Install all dependencies required for the build, such as Sphinx and its extensions:

    (py39cmp-gui)$ pip install -r docs/requirements.txt
    
  4. Install connectomemapper3:

    (py39cmp-gui)$ pip install .
    
  5. Run the script scripts/build_docs.sh to generate the HTML documentation in docs/_build/html:

    (py39cmp-gui)$ sh scripts/build_docs.sh
    

Note

Make sure to have (1) activated the conda environment py39cmp-gui and (2) reinstalled connectomemapper3 with pip before running build_docs.sh.


Authors

Sebastien Tourbier, Adrien Birbaumer

Version

Revision: 2

Acknowledgments

We thank the authors of these great contributing guidelines, from which parts of this document were inspired and adapted.

Support, Bugs and New Feature Requests

All questions, support requests, bug reports, concerns, and enhancement requests for this software are managed on GitHub and can be submitted at https://github.com/connectomicslab/connectomemapper3/issues. (See Contributing to Connectome Mapper 3 for more details.)

Funding

Work supported by the SNF Sinergia Grant 170873 (http://p3.snf.ch/Project-170873).