This neuroimaging processing pipeline software is developed by the Connectomics Lab at the University Hospital of Lausanne (CHUV) for use within the SNF Sinergia Project 170873, as well as for open-source software distribution. Source code is hosted on GitHub.
Warning
THIS SOFTWARE IS FOR RESEARCH PURPOSES ONLY AND SHALL NOT BE USED FOR ANY CLINICAL USE. THIS SOFTWARE HAS NOT BEEN REVIEWED OR APPROVED BY THE FOOD AND DRUG ADMINISTRATION OR EQUIVALENT AUTHORITY, AND IS FOR NON-CLINICAL, IRB-APPROVED RESEARCH USE ONLY. IN NO EVENT SHALL DATA OR IMAGES GENERATED THROUGH THE USE OF THE SOFTWARE BE USED IN THE PROVISION OF PATIENT CARE.
Connectome Mapper 3 is an open-source Python 3 image processing pipeline software,
with a Graphical User Interface (GUI), that implements full anatomical, diffusion and
resting-state MRI processing pipelines, from raw T1 / diffusion / BOLD / preprocessed
EEG data to multi-resolution connection matrices based on a new version of the Lausanne
parcellation atlas, aka Lausanne2018.
Connectome Mapper 3 pipelines use a combination of tools from
well-known software packages, including FSL, FreeSurfer, ANTs,
MRtrix3, Dipy, AFNI, MNE, MNE-Connectivity, and PyCartool, empowered by
the Nipype dataflow library.
These pipelines are designed to provide the best software implementation
for each stage of processing at the time of conception, and can be
easily updated as newer and better neuroimaging software becomes available.
To enhance reproducibility and replicability, the processing pipelines
with all dependencies are encapsulated in a Docker image container, which handles datasets
organized following the BIDS standard and is distributed as a BIDS App at
Docker Hub.
For execution on high-performance computing clusters, a Singularity image is also made freely available at
Sylabs Cloud.
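For reference, pulling these container images typically looks as follows (the exact image names and tags are assumptions to be checked on Docker Hub and Sylabs Cloud):
$ docker pull sebastientourbier/connectomemapper-bidsapp:v3.2.0
$ singularity pull library://connectomicslab/default/connectomemapper-bidsapp:v3.2.0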
To enhance accessibility and reduce the risk of misconfiguration,
Connectome Mapper 3 comes with an interactive GUI, aka cmpbidsappmanager,
which supports the user in all the steps involved in the configuration of
the pipelines, the configuration and execution of the BIDS App, and
the control of the output quality. In addition, to facilitate the use
by users not familiar with Docker and Singularity containers,
Connectome Mapper 3 provides two Python commandline wrappers
(connectomemapper3_docker and connectomemapper3_singularity) that will
generate and run the appropriate command.
Since v3.1.0, CMP3 provides full support for EEG.
Please check this notebook for a demonstration
using the public VEPCON dataset.
Carbon footprint estimation of BIDS App run 🌍🌳✨
In support of the Organisation for Human Brain Mapping (OHBM) Sustainability and Environmental
Action (OHBM-SEA) group, CMP3 enables you, since v3.0.3, to be more aware of the adverse impact
of your processing on the environment!
With the new --track_carbon_footprint option of the connectomemapper3_docker and connectomemapper3_singularity BIDS App python wrappers, and
the new "Track carbon footprint" option of the BIDS Interface Window of cmpbidsappmanager, you can estimate the carbon footprint incurred by the
execution of the BIDS App. Estimations are conducted using codecarbon, which estimates the amount of carbon dioxide (CO2)
produced by the computing resources used to execute the code and saves the results in <bids_dir>/code/emissions.csv.
Then, to visualize, interpret and track the evolution of the CO2 emitted, you can use the visualization tool of codecarbon, aka carbonboard, which takes as input the csv created:
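A minimal sketch of such a call (the port is arbitrary and the path should be adapted to your dataset):
$ carbonboard --filepath="<bids_dir>/code/emissions.csv" --port=9999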
If you run into any problems or have any questions,
you can post to the CMTK-users group.
Code bugs can be reported by creating a “New Issue” on the
source code repository.
Connectome Mapper 3 is open-source and all kinds of contributions
(bug reporting, documentation, code, …) are welcome!
See Contributing to Connectome Mapper for more details.
The Connectome Mapper 3 is composed of a Docker image, namely the Connectome Mapper 3 BIDS App, and a Python Graphical User Interface, namely the Connectome Mapper BIDS App Manager.
Installation instructions for the Connectome Mapper 3 BIDS App are found in Installation.
Installation instructions for the Connectome Mapper 3 BIDS App Manager are found in Installation.
Make sure that you have installed the following prerequisites.
Important
On Mac and Windows, if you want to track the carbon emission incurred by the processing with the --track_carbon_footprint option flag, you will additionally need to install the Intel Power Gadget tool available here.
The Connectome Mapper 3 BIDSApp has been tested only on Ubuntu and MacOSX.
In principle, it should also run on Windows, but it might require a few patches
to make it work.
Manage Docker as a non-root user
Open a terminal
Create the docker group:
$ sudo groupadd docker
Add the current user to the docker group:
$ sudo usermod -G docker -a $USER
Reboot
After reboot, test if docker is managed as non-root:
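A common way to test this (using Docker's official test image) is:
$ docker run hello-world
If the image runs without sudo, Docker is correctly managed as non-root.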
Installation of the Connectome Mapper 3 has been facilitated through the distribution of a BIDSApp relying on the Docker software container technology.
Open a terminal
Download and extract the latest release (v3.2.0) of the BIDS App:
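Assuming the image is hosted under the repository used in previous releases (an assumption to verify on DockerHub), the pull command would look like:
$ docker pull sebastientourbier/connectomemapper-bidsapp:v3.2.0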
This can take some time depending on your connection speed and your machine.
The docker image of the BIDSApp has a compressed size of 6.28 GB on DockerHub and should take 17.6 GB of space on your machine after download and extraction.
To display all docker images available:
$ docker images
You should see that the docker image “connectomemapper-bidsapp” with tag “v3.2.0” is now available.
You are ready to use the Connectome Mapper 3 BIDS App from the terminal. See its commandline usage.
Download the Python 3 installer of miniconda3 corresponding to your 32/64-bit MacOSX/Linux/Windows system and
install it following the instructions at https://conda.io/miniconda.html.
The installation of Connectome Mapper 3, including cmpbidsappmanager, consists of the creation of a conda environment with all python dependencies installed, and the installation of connectomemapper via the Python Package Index (PyPI), as follows:
There seems to be no conda package for git-annex available on Mac.
For your convenience, we created an additional conda/environment_macosx.yml
miniconda3 environment file where the line - git-annex=XXXXXXX has been removed.
Git-annex should be installed on MacOSX using brew,
i.e. brew install git-annex. See https://git-annex.branchable.com/install/ for more details.
Note that git-annex is only necessary if you wish to use BIDS datasets managed by Datalad (https://www.datalad.org/).
Open a terminal.
Create a miniconda3 environment where all python dependencies will be installed:
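Assuming the environment file shipped with the source is conda/environment.yml (use conda/environment_macosx.yml on Mac, as noted above), this is typically done with:
$ conda env create -f conda/environment.yml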
This can take some time depending on your connection speed and your machine.
It should take around 2.8GB of space on your machine.
Activate the conda environment:
$ source activate py39cmp-gui
or:
$ conda activate py39cmp-gui
Finally, install the latest released version of Connectome Mapper 3 from the Python Package Index (PyPI) using pip:
(py39cmp-gui)$ pip install connectomemapper
You are ready to use the Connectome Mapper 3 (1) via its Graphical User Interface (GUI) aka CMP BIDS App Manager
(See Graphical User Interface for the user guide), (2) via its python connectomemapper3_docker and
connectomemapper3_singularity wrappers (See With the wrappers for commandline usage), or (3) by
interacting directly with the Docker / Singularity Engine (See With the Docker / Singularity Engine for commandline usage).
In the future
If you wish to update Connectome Mapper 3 and the Connectome Mapper 3 BIDS App Manager,
this can easily be done by running pip install connectomemapper==v3.X.Y.
If you run into any problems or have any questions, you can post to the CMTK-users group.
Code bugs can be reported by creating a “New Issue” on the source code repository.
Connectome Mapper 3 (CMP3) adopts the BIDS standard for data organization and is developed following the BIDS App standard with a Graphical User Interface (GUI).
This means CMP3 can be executed in two different ways:
By running the BIDS App container image directly from the terminal or a script (See Commandline Usage section for more details).
By using its Graphical User Interface, designed to facilitate the configuration of all pipeline stages, the configuration of the BIDS App run and its execution, and the inspection of the different stage outputs with appropriate viewers (See Graphical User Interface section for more details) .
Connectome Mapper 3 (CMP3) is distributed as a BIDS App which adopts the BIDS standard for data organization and takes as principal input the path of the dataset that is to be processed. The input dataset is required to be in valid BIDS format, and it must include at least a T1w or MPRAGE structural image and a DWI and/or resting-state fMRI image and/or preprocessed EEG data. See the Connectome Mapper 3 and the BIDS standard page that provides links for more information about BIDS and BIDS Apps, as well as an example of dataset organization and naming.
Warning
As of CMP3 v3.0.0-RC2, the BIDS App includes a tracking system that anonymously reports the run of the BIDS App. This feature has been introduced to support us in the task of securing funding for the future development of CMP3. However, users are still free to opt out using the --notrack commandline argument.
Important
Since v3.0.0-RC4, configuration files adopt the JSON format. If your configuration files are still in the old INI format,
do not worry: the CMP3 BIDS App will convert them to the new JSON format automatically for you.
bids_dir
The directory with the input dataset formatted according to the BIDS standard.
output_dir
The directory where the output files should be stored. If you are running group level analysis this folder should be prepopulated with the results of the participant level analysis.
analysis_level
Possible choices: participant, group
Level of the analysis that will be performed. Multiple participant level analyses can be run independently (in parallel) using the same output_dir.
--participant_label
The label(s) of the participant(s) that should be analyzed. The label corresponds to sub-<participant_label> from the BIDS spec (so it does not include “sub-”). If this parameter is not provided, all subjects will be analyzed. Multiple participants can be specified with a space-separated list.
--session_label
The label(s) of the session(s) that should be analyzed.
The label corresponds to ses-<session_label> from the
BIDS spec (so it does not include “ses-”). If this
parameter is not provided, all sessions will be
analyzed. Multiple sessions can be specified
with a space-separated list.
--anat_pipeline_config
Configuration .json file for processing stages of the anatomical MRI processing pipeline
--dwi_pipeline_config
Configuration .json file for processing stages of the diffusion MRI processing pipeline
--func_pipeline_config
Configuration .json file for processing stages of the fMRI processing pipeline
--eeg_pipeline_config
Configuration .json file for processing stages of the EEG processing pipeline
--number_of_threads
The number of OpenMP threads used for multi-threading by Freesurfer (Set to [Number of available CPUs -1] by default).
--number_of_participants_processed_in_parallel
The number of subjects to be processed in parallel (One by default).
Default: 1
--mrtrix_random_seed
Fix MRtrix3 random number generator seed to the specified value
--ants_random_seed
Fix ANTS random number generator seed to the specified value
--ants_number_of_threads
Fix the number of threads in ANTs. If not specified, ANTs will use the same number as the number of OpenMP threads (see the --number_of_threads option flag)
--fs_license
Freesurfer license.txt
--coverage
Run connectomemapper3 with coverage
Default: False
--notrack
Do not send events to Google Analytics to report BIDS App execution, which is enabled by default.
Default: False
-v, --version
show program’s version number and exit
Important
Before using any BIDS App, we highly recommend you to validate your BIDS structured dataset with the free, online BIDS Validator.
You can run CMP3 using the lightweight Docker or Singularity wrappers we created for convenience or you can interact directly with the Docker / Singularity Engine via the docker or singularity run command.
New in v3.0.2 ✨
You can now be aware of the adverse impact of your processing on the environment 🌍🌳!
With the new --track_carbon_footprint option of the connectomemapper3_docker and connectomemapper3_singularity BIDS App python wrappers, you can use codecarbon to estimate the amount of carbon dioxide (CO2) produced by the computing resources used to execute the code, and save the results in <bids_dir>/code/emissions.csv.
Then, to visualize, interpret and track the evolution of the CO2 emissions incurred, you can use the visualization tool of codecarbon, aka carbonboard, which takes as input the .csv created:
When you run connectomemapper3_docker, it will generate a Docker command line for you, print it out for reporting purposes, and then execute it without further action needed, e.g.:
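A minimal sketch of such a call (dataset path, participant label, and option values are placeholders based on the options documented above):
$ connectomemapper3_docker \
    /home/user/data/ds001 /home/user/data/ds001/derivatives \
    participant --participant_label 01 \
    --anat_pipeline_config /home/user/data/ds001/code/ref_anatomical_config.json \
    --fs_license /home/user/data/ds001/code/license.txt \
    --track_carbon_footprint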
When you run connectomemapper3_singularity, it will generate a Singularity command line for you, print it out for reporting purposes, and then execute it without further action needed, e.g.:
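The Singularity wrapper follows the same usage pattern, e.g. (placeholders as above):
$ connectomemapper3_singularity \
    /home/user/data/ds001 /home/user/data/ds001/derivatives \
    participant --participant_label 01 \
    --anat_pipeline_config /home/user/data/ds001/code/ref_anatomical_config.json \
    --fs_license /home/user/data/ds001/code/license.txt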
If you need a finer control over the container execution, or you feel comfortable with the Docker or Singularity Engine, avoiding the extra software layer of the wrapper might be a good decision.
The local directory of the input BIDS dataset (here: /home/user/data/ds001) and the output directory (here: /home/user/data/ds001/derivatives) used to process have to be mapped to the folders /bids_dir and /output_dir respectively using the docker -v / singularity --bind run option.
Important
The user is requested to use their own FreeSurfer license (available here). CMP expects by default to find a copy of the FreeSurfer license.txt in the code/ folder of the BIDS directory. However, one can also mount a FreeSurfer license.txt located anywhere on the computer (as in the example above, i.e. /usr/local/freesurfer/license.txt) to the code/ folder of the BIDS directory inside the container (i.e. /bids_dir/code/license.txt) with the docker -v / singularity --bind run option.
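As a sketch, a direct docker run call with these mounts could look like the following (image name/tag assumed from the installation section):
$ docker run -it --rm \
    -v /home/user/data/ds001:/bids_dir \
    -v /home/user/data/ds001/derivatives:/output_dir \
    -v /usr/local/freesurfer/license.txt:/bids_dir/code/license.txt \
    sebastientourbier/connectomemapper-bidsapp:v3.2.0 \
    /bids_dir /output_dir participant --participant_label 01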
Note
At least a configuration file describing the processing stages of the anatomical pipeline should be provided. The diffusion and/or functional MRI pipelines are performed only if a configuration file is set. The generation of such configuration files, the execution of the BIDS App docker image, and output inspection are facilitated through the use of the Connectome Mapper GUI, i.e. cmpbidsappmanager (see dedicated documentation page).
If you already have Freesurfer v5 / v6 output data available, CMP3 can use it if it is properly placed in your output / derivatives directory.
Since v3.0.3, CMP3 expects to find a freesurfer-7.1.1 folder, so make sure that your derivatives are organized as
follows:
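A sketch of the expected layout (subject label and subfolders for illustration; a standard FreeSurfer subject directory is assumed):
  <bids_dataset>/derivatives/
    freesurfer-7.1.1/
      sub-01/
        label/
        mri/
        surf/
        ...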
If you intend to run CMP3 on a remote system such as a high-performance computing cluster where Docker is not available due to root privileges, a Singularity image is also built for your convenience and available on Sylabs.io. Please see instructions at Running on a cluster (HPC).
Also, you will need to make your data available within that system first. Comprehensive solutions such as Datalad will handle data transfers with the appropriate settings and commands. Datalad also performs version control over your data. A tutorial is provided in Adopting Datalad for collaboration.
Connectome Mapper 3 comes with a Graphical User Interface, the Connectome Mapper BIDS App manager, designed to facilitate the configuration of all pipeline stages, the configuration of the BIDS App run and its execution, and the inspection of the different stage outputs with appropriate viewers.
Main window of the Connectome Mapper BIDS App Manager
Click on File -> Load BIDS dataset... in the menu bar of the main window. Note that on Mac, Qt turns this menu bar into the native menu bar (top of the screen).
The Connectome Mapper 3 BIDS App Manager gives you two different options:
Load BIDS dataset: load a BIDS dataset stored locally.
You only have to select the root directory of your valid BIDS dataset (see note below)
Install Datalad BIDS dataset: create a new datalad/BIDS dataset locally from an existing local or remote datalad/BIDS dataset (this is a feature under development)
If an SSH connection is used, make sure to enable the “install via ssh” option and to provide all connection details (IP address / remote host name, remote user, remote password)
Note
The input dataset MUST be a valid BIDS structured dataset and must include at least one T1w or MPRAGE structural image. We highly recommend that you validate your dataset with the free, online BIDS Validator.
From the main window, click on the left button to start the Configurator Window.
The window of the Connectome Mapper BIDS App Configurator will appear, which will assist you not only in configuring the pipeline stages (each pipeline has a tab panel), but also in creating the appropriate configuration files, which can be used outside the Graphical User Interface.
Prior to Lausanne parcellation, CMP3 relies on Freesurfer for the segmentation of the different brain tissues and the reconstruction of the cortical surfaces.
If you plan to use a custom segmentation, you will be required here to specify the pattern of the different existing segmentation files
that follow the BIDS derivatives specification (see Custom segmentation).
Freesurfer
Number of threads: used to specify how many threads are used for parallelization
Brain extraction tools: alternative brain extraction methods injected in Freesurfer
Freesurfer args: used to specify extra Freesurfer processing options
Note
If you already have Freesurfer v5 / v6 / v7 output data available, CMP3 can use it if it is placed in your output / derivatives directory.
Note however that since v3.0.3, CMP3 expects to find a freesurfer-7.1.1 folder, so make sure that your derivatives are organized as
follows:
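That is, the same layout as sketched earlier, e.g.:
  <bids_dataset>/derivatives/
    freesurfer-7.1.1/
      sub-01/
        mri/
        surf/
        ...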
You can use any parcellation scheme of your choice as long as you provide a list of segmentation files organized following the BIDS derivatives specifications for segmentation files, provide appropriate .tsv sidecar files that describe the index/label/color mapping of the parcellation, and adopt the atlas-<label> entity to encode the name of the atlas, i.e.:
The desc BIDS entity can be used to target specific mask and segmentation files.
For instance, the configuration above would allow us to re-use the outputs of the anatomical pipeline obtained with the previous v3.0.2 version of CMP3:
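For illustration, such files could be named as follows (file names hypothetical, built from the atlas-<label> entity described here):
  sub-01_atlas-Custom_dseg.nii.gz
  sub-01_atlas-Custom_dseg.tsv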
If you plan to use either Anatomically Constrained or Particle Filtering tractography, you will still need Freesurfer 7 output data available in your output / derivatives directory, as described in the *Freesurfer* note above.
Generates the Native Freesurfer or Lausanne2018 parcellation from Freesurfer data. Alternatively, since v3.0.3 you can use your own custom parcellation files.
Parcellation scheme
NativeFreesurfer:
Atlas composed of 83 regions from the Freesurfer aparc+aseg file
Lausanne2018:
New version of the Lausanne parcellation atlas, corrected, and extended with 7 thalamic nuclei, 12 hippocampal subfields, and 4 brainstem sub-structures per hemisphere
Since v3.0.0, the Lausanne2018 parcellation has completely replaced the old Lausanne2008 parcellation.
As it provides improvements in the way Lausanne parcellation labels are generated,
all code and data related to Lausanne2008 have been removed. If you still wish to
use this old parcellation scheme, please use v3.0.0-RC4, which is the last version
that supports it.
Custom:
You can use any parcellation scheme of your choice as long as it follows the BIDS derivatives specifications for segmentation files, provides appropriate .tsv sidecar files that describe the index/label/color mapping of the parcellation, and adopts the atlas-<label> entity to encode the name of the atlas, i.e.:
The res BIDS entity allows the differentiation between multiple scales of the same atlas.
For instance, the above configuration would allow us to re-use scale 1 of the Lausanne parcellation generated by the anatomical pipeline of the previous v3.0.2 version of CMP3, as illustrated below:
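For illustration, scale 1 of the Lausanne 2018 parcellation would then be targeted by files of the form (names illustrative, combining the atlas-<label> and res-<scale> entities):
  sub-01_atlas-L2018_res-scale1_dseg.nii.gz
  sub-01_atlas-L2018_res-scale1_dseg.tsv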
Preprocessing includes denoising, bias field correction, motion and eddy current correction for diffusion data.
Denoising
Remove noise from diffusion images using (1) MRtrix3 MP-PCA method or (2) Dipy Non-Local Mean (NLM) denoising with Gaussian or Rician noise models
Bias field correction
Remove intensity inhomogeneities due to the magnetic resonance bias field using (1) MRtrix3 N4 bias field correction or (2) the bias field correction provided by FSL FAST
Motion correction
Aligns diffusion volumes to the b0 volume using FSL’s MCFLIRT
Eddy current correction
Corrects for eddy current distortions using FSL’s Eddy correct tool
Resampling
Resample morphological and diffusion data to F0 x F1 x F2 mm^3
Perform diffusion reconstruction and local deterministic or probabilistic tractography based on several tools. ROI dilation is required to map brain connections when the tracking only operates in the white matter.
Probabilistic PFT tracking performed on SHORE or CSD reconstruction. Seeding from the gray matter / white matter interface is possible.
Note
We noticed a shift of the center of tractograms obtained by dipy. As a result, tractograms visualized in TrackVis are not correctly centered, despite the fact that the tractogram and the ROIs are properly aligned.
MRtrix: perform deterministic and probabilistic fiber tracking as well as anatomically-constrained tractography. ROI dilation is required to map brain connections when the tracking only operates in the white matter.
Deterministic tractography:
Deterministic tractography (SD_STREAM) performed on single tensor or CSD reconstruction
Deterministic ACT tracking performed on single tensor or CSD reconstruction. Seeding from the gray matter / white matter interface is possible. Backtrack option is not available in deterministic tracking.
Probabilistic tractography:
Probabilistic tractography (iFOD2) performed on SHORE or CSD reconstruction
Compute fiber length connectivity matrices. If DTI data is processed, an additional FA map is computed. In the case of DSI, additional maps include GFA and RTOP. In the case of MAP-MRI, additional maps are RTPP, RTOP, …
Output types
Select in which formats the connectivity matrices should be saved.
Preprocessing refers to processing steps prior to registration. It includes discarding volumes, despiking, slice timing correction and motion correction for fMRI (BOLD) data.
Discard n volumes
Discard n volumes from further analysis
Despiking
Perform despiking of the BOLD signal using AFNI.
Slice timing and Repetition time
Perform slice timing correction using FSL’s slicetimer.
Motion correction
Align BOLD volumes to the mean BOLD volume using FSL’s MCFLIRT.
Performs detrending, nuisance regression, and bandpass filtering.
Detrending
Detrending of BOLD signal using:
linear trend removal algorithm provided by the scipy library
quadratic trend removal algorithm provided by the obspy library
Nuisance regression
A number of options for removing nuisance signals are provided. They consist of:
Global signal regression
CSF regression
WM regression
Motion parameters regression
Bandpass filtering
Perform bandpass filtering of the time-series using AFNI’s 3dBandpass
EEG Preprocessing refers to steps that load, crop, and save the preprocessed EEG epochs data of a given task in fif format, the harmonized format used further in the pipeline.
EEG data can be provided as:
A mne.Epochs object already saved in fif format:
A set of the following files and parameters:
Preprocessed EEG recording: stores the Epochs * Electrodes dipole time-series in EEGLAB .set format
Recording events file in BIDS *_events.tsv format: describe timing and other properties of events recorded during the task
EEG Source Imaging refers to all the steps necessary to obtain the inverse solutions and extract ROI time-series for a given parcellation scheme.
Structural parcellation: specify the cmp derivatives directory, the parcellation scheme, and the scale (for Lausanne 2018) to retrieve the parcellation files
Tool: CMP3 can either leverage MNE to compute the inverse solutions or take inverse solutions already pre-computed with Cartool as input.
MNE
If MNE is selected, all steps necessary to reconstruct the inverse solutions are performed by leveraging MNE. In this case, the following files and parameters need to be provided:
MNE ESI method: Method to compute the inverse solutions
MNE ESI method SNR: SNR level used to regularize the inverse solutions
MNE electrode transform: Additional transform in MNE trans.fif format to be applied to electrode coordinates when Apply electrode transform is enabled
Cartool
If Cartool is selected, the following files (generated by this tool) and parameters need to be provided:
Source space file: *.spi text file with 3D-coordinates (x, y and z-dimension) with possible solution points necessary to obtain the sources or generators of ERPs
Inverse solution file: *.is binary file that includes number of electrodes and solution points
Cartool esi method: Method used to compute the inverse solutions (Cartool esi method)
Cartool esi lamb: Regularization level of inverse solutions
SVD for ROI time-courses extraction: start and end TOI parameters for the SVD algorithm that extracts single ROI time-series from dipoles.
You can save the pipeline stage configuration files in two different ways:
You can save all configuration files at once by clicking on Save All Pipeline Configuration Files. This will automatically save the configuration files of the anatomical / diffusion / fMRI / EEG pipelines to
<bids_dataset>/code/ref_anatomical_config.json, <bids_dataset>/code/ref_diffusion_config.json, <bids_dataset>/code/ref_fMRI_config.json, and <bids_dataset>/code/ref_EEG_config.json respectively.
You can save each pipeline configuration file individually and edit its filename in the File menu (File -> Save anatomical/diffusion/fMRI/EEG configuration file as…)
Connectome Mapper relies on Nipype.
All intermediate steps for the processing are saved in the corresponding
<bids_dataset/derivatives>/nipype/sub-<participant_label>/<pipeline_name>/<stage_name>
stage folder (See Nipype workflow outputs for more details).
Tune the number of subjects to be processed in parallel
Tune the advanced execution settings for each subject process. This includes finer control over the number of threads used by each process, as well as over the seed value of the ANTs and MRtrix random number generators.
Important
Make sure that the number of threads multiplied by the number of subjects processed in parallel does not exceed the number of CPUs available on your system.
Check/Uncheck the pipelines to be performed
Note
The list of pipelines might vary as it is automatically updated based on the availability of raw diffusion MRI, resting-state fMRI, and preprocessed EEG data.
Specify your Freesurfer license
Note
Your Freesurfer license will be copied to your dataset directory as <bids_dataset>/code/license.txt which will be mounted inside the BIDS App container image.
When the run is set up, you can click on the Check settings button.
If the setup is complete and valid, this will enable the Run BIDS App button.
For each subject, the execution output of the pipelines are redirected to a log file, written as <bids_dataset/derivatives>/cmp-v3.X.Y/sub-<subject_label>_log.txt. Execution progress can be checked by the means of these log files.
From the main window, click on the right button to start the Inspector Window.
The Connectome Mapper 3 Inspector Window will appear, which will assist you in inspecting outputs of the different pipeline stages (each pipeline has a tab panel).
Connectome Mapper 3 introduced a new BIDS entity atlas-<atlas_label>
(where <atlas_label>: Desikan / L2018), that is used
in combination with the res-<atlas_scale> (where <atlas_scale>:
scale1 / scale2 / scale3 / scale4 / scale5) entity to
distinguish data derived from different parcellation atlases and
different scales.
Main outputs produced by Connectome Mapper 3 are written to
cmp/sub-<subject_label>/. In this folder, a configuration file
generated for each modality pipeline (i.e. anatomical/diffusion/fMRI/EEG)
and used for processing each participant is saved as
sub-<subject_label>_anatomical/diffusion/fMRI/EEG_config.json.
It summarizes pipeline workflow options and parameters used for processing.
An execution log of the full workflow is saved as sub-<subject_label>_log.txt.
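For a participant sub-01, this folder would thus contain, for instance (illustrative):
  cmp/sub-01/sub-01_anatomical_config.json
  cmp/sub-01/sub-01_log.txt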
<atlas_label>: Desikan / L2018
is the parcellation scheme used
<scale_label>: scale1, scale2, scale3, scale4, scale5
corresponds to the parcellation scale if applicable
with two tsv side-car files that follow the BIDS derivatives specification, one describing the parcel label/index mapping (_dseg.tsv), one reporting the volumetry of the different parcels (_stats.tsv), and two files used internally by CMP3, one describing the parcel labels in graphml format (_dseg.graphml), one providing the color lookup table of the parcel labels in Freesurfer format, which can be used directly in freeview (_FreeSurferColorLUT.txt):
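For example, for sub-01 and scale 1 of the Lausanne 2018 atlas, this corresponds to files of the form (illustrative):
  sub-01_atlas-L2018_res-scale1_dseg.nii.gz
  sub-01_atlas-L2018_res-scale1_dseg.tsv
  sub-01_atlas-L2018_res-scale1_stats.tsv
  sub-01_atlas-L2018_res-scale1_dseg.graphml
  sub-01_atlas-L2018_res-scale1_FreeSurferColorLUT.txt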
The partial volume maps for white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) used
for Particle Filtering Tractography (PFT), generated from the 5TT image:
with derived Generalized Fractional Anisotropic (GFA),
Mean Squared Displacement (MSD), Return-to-Origin Probability (RTOP)
and Return-to-Plane Probability (RTPP) maps:
The execution of each Nipype workflow (pipeline) dedicated to the processing of one modality (i.e. anatomical/diffusion/fMRI/EEG) involves the creation of a number of intermediate outputs which are written to <bids_dataset/derivatives>/nipype/sub-<subject_label>/<anatomical/diffusion/fMRI/eeg>_pipeline respectively:
To enhance transparency on how data is processed, outputs include a pipeline execution graph saved as <anatomical/diffusion/fMRI/eeg>_pipeline/graph.svg, which summarizes all processing nodes involved in the given processing pipeline:
Execution details (data provenance) of each interface (node) of a given pipeline are reported in <anatomical/diffusion/fMRI/eeg>_pipeline/<stage_name>/<interface_name>/_report/report.rst
Note
Connectome Mapper 3 outputs are currently being updated to conform to BIDS v1.4.0.
Check if inputs of the anatomical pipeline are available.
Parameters
layout (bids.BIDSLayout) – Instance of BIDSLayout
gui (traits.Bool) – Boolean used to display different messages
but not really meaningful anymore since the GUI
components have been migrated to cmp.bidsappmanager
Class that extends a Pipeline and represents the processing pipeline for structural MRI.
It is composed of:
the preprocessing stage that can perform slice timing correction, despiking and motion correction
the registration stage that co-registers the anatomical T1w scan to the mean BOLD image
and projects the parcellations to the native fMRI space
the extra-preprocessing stage (FunctionalMRIStage) that can perform nuisance regression
and bandpass filtering
the connectome stage that extracts the time-series of each parcellation ROI and
computes the Pearson’s correlation coefficient between ROI time-series to create
the functional connectome.
Check if the inputs of the diffusion pipeline are available.
Parameters
layout (bids.BIDSLayout) – Instance of BIDSLayout
gui (traits.Bool) – Boolean used to display different messages
but not really meaningful anymore since the GUI
components have been migrated to cmp.bidsappmanager
Visualization of the connectivity matrix using a circular layout
that might be obsolete as this has been detached after creation
of the bidsappmanager (Default: False)
Visualization of the connectivity matrix using a circular layout
that might be obsolete as this has been detached after creation
of the bidsappmanager (Default: False)
Class that represents the preprocessing stage of an EEGPipeline.
This stage consists of converting EEGLAB .set EEG files to MNE Epochs in fif format, the format used in the rest of the pipeline,
by calling, if necessary, the following interface:
EEGLAB2fif: Reads EEGLAB data and converts them to MNE format (fif file extension).
Slice acquisition order for slice timing correction that can be:
“bottom-top interleaved”, “top-bottom interleaved”,
“bottom-top”, and “top-bottom”
(Default: “none”)
Interpolation type used by ANTs that can be:
‘Linear’, ‘NearestNeighbor’, ‘CosineWindowedSinc’, ‘WelchWindowedSinc’,
‘HammingWindowedSinc’, ‘LanczosWindowedSinc’, ‘BSpline’, ‘MultiLabel’,
or ‘Gaussian’
(Default: ‘Linear’)
A list of connectivity metrics to be stored. Valid connectivity_metrics are
‘Fiber number’, ‘Fiber length’, ‘Fiber density’, ‘Fiber proportion’,
‘Normalized fiber density’, ‘ADC’, ‘gFA’
A list of time/frequency connectivity metrics to be stored. Valid connectivity_metrics are
‘coh’, ‘cohy’, ‘imcoh’, ‘plv’, ‘ciplv’, ‘ppc’, ‘pli’, ‘wpli’, and ‘wpli2_debiased’
Visualization of the connectivity matrix using a circular layout
that might be obsolete as this has been detached after creation
of the bidsappmanager (Default: False)
Creates CMP graphml and FreeSurfer colorLUT files describing parcellation nodes from a list of BIDS TSV files
roi_bids_tsvs : a list of items which are a pathlike object or string representing an existing file
roi_colorluts : a list of items which are a pathlike object or string representing a file
roi_graphmls : a list of items which are a pathlike object or string representing a file
highpass : a float
Highpass.
Maps to a command-line argument: %f (position: -3).
in_file : a pathlike object or string representing an existing file
Input file to 3dBandpass.
Maps to a command-line argument: %s (position: -1).
lowpass : a float
Lowpass.
Maps to a command-line argument: %f (position: -2).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
automask : a boolean
Create a mask from the input dataset.
Maps to a command-line argument: -automask.
blur : a float
Blur (inside the mask only) with a filter width (FWHM) of ‘fff’ millimeters.
Maps to a command-line argument: -blur %f.
despike : a boolean
Despike each time series before other processing. Hopefully, you don’t actually need to do this, which is why it is optional.
Maps to a command-line argument: -despike.
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
localPV : a float
Replace each vector by the local Principal Vector (AKA first singular vector) from a neighborhood of radius ‘rrr’ millimeters. Note that the PV time series is L2 normalized. This option is mostly for Bob Cox to have fun with.
Maps to a command-line argument: -localPV %f.
mask : a pathlike object or string representing an existing file
Mask file.
Maps to a command-line argument: -mask %s (position: 2).
nfft : an integer
Set the FFT length [must be a legal value].
Maps to a command-line argument: -nfft %d.
no_detrend : a boolean
Skip the quadratic detrending of the input that occurs before the FFT-based bandpassing. ++ You would only want to do this if the dataset had been detrended already in some other program.
Maps to a command-line argument: -nodetrend.
normalize : a boolean
Make all output time series have L2 norm = 1 ++ i.e., sum of squares = 1.
Maps to a command-line argument: -norm.
notrans : a boolean
Don’t check for initial positive transients in the data: The test is a little slow, so skipping it is OK, if you KNOW the data time series are transient-free.
Maps to a command-line argument: -notrans.
num_threads : an integer
Set number of threads.
(Nipype default value: 1)
orthogonalize_dset : a pathlike object or string representing an existing file
Orthogonalize each voxel to the corresponding voxel time series in dataset ‘fset’, which must have the same spatial and temporal grid structure as the main input dataset. At present, only one ‘-dsort’ option is allowed.
Maps to a command-line argument: -dsort %s.
orthogonalize_file : a list of items which are a pathlike object or string representing an existing file
Also orthogonalize input to columns in f.1D. Multiple ‘-ort’ options are allowed.
Maps to a command-line argument: -ort %s.
out_file : a pathlike object or string representing a file
Output file from 3dBandpass.
Maps to a command-line argument: -prefix %s (position: 1).
outputtype : ‘NIFTI’ or ‘AFNI’ or ‘NIFTI_GZ’
AFNI output filetype.
tr : a float
Set time step (TR) in sec [default=from dataset header].
Maps to a command-line argument: -dt %f.
out_file : a pathlike object or string representing an existing file
reference_image : a string or os.PathLike object referring to an existing file
transforms : a list of items which are a string or os.PathLike object referring to an existing file
Transform files: will be applied in reverse order. For example, the last specified transform will be applied first.
default_value : a float
input_images : a list of items which are a string or os.PathLike object referring to an existing file
interpolation : ‘Linear’ or ‘NearestNeighbor’ or ‘CosineWindowedSinc’ or ‘WelchWindowedSinc’ or ‘HammingWindowedSinc’ or ‘LanczosWindowedSinc’ or ‘MultiLabel’ or ‘Gaussian’ or ‘BSpline’
(Nipype default value: Linear)
out_postfix : a string
(Nipype default value: _transformed)
output_images : a list of items which are a string or os.PathLike object
in_file : a pathlike object or string representing an existing file
subject_dir : a pathlike object or string representing an existing directory
out_brainmask_file : a pathlike object or string representing an existing file
out_brainmaskauto_file : a pathlike object or string representing an existing file
Use fslorient to get/set orientation information from an image’s header.
Advanced tool that reports or sets the orientation information in a file.
Note that only in NIfTI files can the orientation be changed -
Analyze files are always treated as “radiological” (meaning that they could be
simply rotated into the same alignment as the MNI152 standard images - equivalent
to the appropriate sform or qform in a NIfTI file having a negative determinant).
get_qform : a boolean
Gets the 16 elements of the qform matrix.
Maps to a command-line argument: -getqform (position: 1).
Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.
get_qformcode : a boolean
Gets the qform integer code.
Maps to a command-line argument: -getqformcode (position: 1).
Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.
get_sform : a boolean
Gets the 16 elements of the sform matrix.
Maps to a command-line argument: -getsform (position: 1).
Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.
get_sformcode : a boolean
Gets the sform integer code.
Maps to a command-line argument: -getsformcode (position: 1).
Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.
output_type : ‘NIFTI’ or ‘NIFTI_PAIR’ or ‘NIFTI_GZ’ or ‘NIFTI_PAIR_GZ’
FSL output type.
set_qform : a list of from 16 to 16 items which are a float
<m11 m12 … m44> sets the 16 elements of the qform matrix.
Maps to a command-line argument: -setqform %f (position: 1).
Mutually exclusive with inputs: get_orient, get_sform, get_qform, set_sform, set_qform, get_sformcode, get_qformcode, set_sformcode, set_qformcode, copy_sform2qform, copy_qform2sform, delete_orient, force_radiological, force_neurological, swap_orient.
input1 : a pathlike object or string representing an existing file
input2 : a pathlike object or string representing an existing file
out_tuple : a tuple of the form: (a pathlike object or string representing an existing file, a pathlike object or string representing an existing file)
Change the name of a file based on a mapped format string.
To use additional inputs that will be defined at run-time, the class
constructor must be called with the format template, and the fields
identified will become inputs to the interface.
Additionally, you may set the parse_string input, which will be run
over the input filename with a regular expression search, and will
fill in additional input fields from matched groups. Fields set with
inputs have precedence over fields filled in with the regexp match.
It corresponds to the nipype.interfaces.utility.base.Rename interface
that has been modified to force a hard link during copy.
Examples
>>> from nipype.interfaces.utility import Rename
>>> rename1 = Rename()
>>> rename1.inputs.in_file = os.path.join(datadir, "zstat1.nii.gz")  # datadir is a directory with exemplary files, defined in conftest.py
>>> rename1.inputs.format_string = "Faces-Scenes.nii.gz"
>>> res = rename1.run()
>>> res.outputs.out_file
'Faces-Scenes.nii.gz'
eeg_ts_file : a string or os.PathLike object referring to an existing file
EEG * epochs in .set format.
events_file : a string or os.PathLike object referring to an existing file
Epochs metadata in _behav.txt.
out_epochs_fif_fname : a string
Output filename for EEG * epochs in .fif format, e.g. sub-01_epo.fif.
electrodes_file : a string or os.PathLike object referring to an existing file
Positions of EEG electrodes in a txt file.
event_ids : a dictionary with keys which are any value and with values which are any value
The id of the events to consider in dict form. The keys of the dict can later be used to access associated events. If None, all events will be used and a dict is created with string integer names corresponding to the event id integers.
t_max : a float
End time of the epochs in seconds, relative to the time-locked event.
t_min : a float
Start time of the epochs in seconds, relative to the time-locked event.
epochs_file : a string or os.PathLike object referring to an existing file
extension : ‘mif’ or ‘nii’ or ‘float’ or ‘char’ or ‘short’ or ‘int’ or ‘long’ or ‘double’
“i.e. Bfloat”. Can be “char”, “short”, “int”, “long”, “float” or “double”.
(Nipype default value: mif)
in_files : a list of items which are a pathlike object or string representing an existing file
Files to be registered.
output_datatype : ‘float32’ or ‘float32le’ or ‘float32be’ or ‘float64’ or ‘float64le’ or ‘float64be’ or ‘int64’ or ‘uint64’ or ‘int64le’ or ‘uint64le’ or ‘int64be’ or ‘uint64be’ or ‘int32’ or ‘uint32’ or ‘int32le’ or ‘uint32le’ or ‘int32be’ or ‘uint32be’ or ‘int16’ or ‘uint16’ or ‘int16le’ or ‘uint16le’ or ‘int16be’ or ‘uint16be’ or ‘cfloat32’ or ‘cfloat32le’ or ‘cfloat32be’ or ‘cfloat64’ or ‘cfloat64le’ or ‘cfloat64be’ or ‘int8’ or ‘uint8’ or ‘bit’
stride : a list of from 3 to 4 items which are an integer
Three to four comma-separated numbers specifying the strides of the output data in memory. The actual strides produced will depend on whether the output image format can support it.
Maps to a command-line argument: -stride %s (position: 3).
converted_files : a list of items which are a pathlike object or string representing a file
Perform non-negativity constrained spherical deconvolution using dwi2fod.
Note that this program makes use of implied symmetries in the diffusion profile.
First, the fact that the signal attenuation profile is real implies that it has conjugate symmetry,
i.e. Y(l,-m) = Y(l,m)* (where * denotes the complex conjugate). Second, the diffusion profile should be
antipodally symmetric (i.e. S(x) = S(-x)), implying that all odd l components should be zero.
Therefore, this program only computes the even elements. Note that the spherical harmonics equations used here
differ slightly from those conventionally used, in that the (-1)^m factor has been omitted. This should be taken
into account in all subsequent calculations. Each volume in the output image corresponds to a different spherical
harmonic component, according to the following convention:
algorithm : ‘csd’ or ‘msmt_csd’
Select the CSD algorithm to be used for FOD estimation. Options are: csd (single shell single tissue) or msmt_csd (multi-shell multi-tissue).
Maps to a command-line argument: %s (position: 0).
in_file : a pathlike object or string representing an existing file
Diffusion-weighted image.
Maps to a command-line argument: %s (position: 1).
response_csf_file : a pathlike object or string representing an existing file
The diffusion-weighted signal response function for a single fibre population CSF (see EstimateResponse).
Maps to a command-line argument: %s (position: 6).
response_file : a pathlike object or string representing an existing file
The diffusion-weighted signal response function for a single fibre population WM (see EstimateResponse).
Maps to a command-line argument: %s (position: 2).
response_gm_file : a pathlike object or string representing an existing file
The diffusion-weighted signal response function for a single fibre population GM (see EstimateResponse).
Maps to a command-line argument: %s (position: 4).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
directions_file : a pathlike object or string representing an existing file
A text file containing the [ el az ] pairs for the directions: specify the directions over which to apply the non-negativity constraint (by default, the built-in 300 direction set is used).
Maps to a command-line argument: -directions %s (position: -2).
encoding_file : a pathlike object or string representing an existing file
Gradient encoding, supplied as a 4xN text file with each line in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix.
Maps to a command-line argument: -grad %s (position: 8).
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
filter_file : a pathlike object or string representing an existing file
A text file containing the filtering coefficients for each even harmonic order, i.e. the linear frequency filtering parameters used for the initial linear spherical deconvolution step (default = [ 1 1 1 0 0 ]).
Maps to a command-line argument: -filter %s (position: -2).
iterations : an integer
The maximum number of iterations to perform for each voxel (default = 50).
Maps to a command-line argument: -niter %s.
lambda_value : a float
The regularisation parameter lambda that controls the strength of the constraint (default = 1.0).
Maps to a command-line argument: -norm_lambda %s.
mask_image : a pathlike object or string representing an existing file
Only perform computation within the specified binary brain mask image.
Maps to a command-line argument: -mask %s (position: 2).
maximum_harmonic_order : an integer
Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images.
Maps to a command-line argument: -lmax %s.
out_csf_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: 7).
out_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: 3).
out_gm_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: 5).
threshold_value : a float
The threshold below which the amplitude of the FOD is assumed to be zero, expressed as a fraction of the mean value of the initial FOD (default = 0.1).
Maps to a command-line argument: -threshold %s.
csf_spherical_harmonics_image : a pathlike object or string representing an existing file
CSF spherical harmonics image.
gm_spherical_harmonics_image : a pathlike object or string representing an existing file
GM spherical harmonics image.
spherical_harmonics_image : a pathlike object or string representing an existing file
Perform non-negativity constrained spherical deconvolution using dwi2fod.
Note that this program makes use of implied symmetries in the diffusion profile.
First, the fact that the signal attenuation profile is real implies that it has conjugate symmetry,
i.e. Y(l,-m) = Y(l,m)* (where * denotes the complex conjugate). Second, the diffusion profile should be
antipodally symmetric (i.e. S(x) = S(-x)), implying that all odd l components should be zero.
Therefore, this program only computes the even elements. Note that the spherical harmonics equations used here
differ slightly from those conventionally used, in that the (-1)^m factor has been omitted. This should be taken
into account in all subsequent calculations. Each volume in the output image corresponds to a different spherical
harmonic component, according to the following convention:
algorithm : ‘csd’ or ‘msmt_csd’
Select the CSD algorithm to be used for FOD estimation. Options are: csd (single shell single tissue) or msmt_csd (multi-shell multi-tissue).
Maps to a command-line argument: %s (position: -4).
in_file : a pathlike object or string representing an existing file
Diffusion-weighted image.
Maps to a command-line argument: %s (position: -3).
response_file : a pathlike object or string representing an existing file
The diffusion-weighted signal response function for a single fibre population (see EstimateResponse).
Maps to a command-line argument: %s (position: -2).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
directions_file : a pathlike object or string representing an existing file
A text file containing the [ el az ] pairs for the directions: specify the directions over which to apply the non-negativity constraint (by default, the built-in 300 direction set is used).
Maps to a command-line argument: -directions %s (position: -2).
encoding_file : a pathlike object or string representing an existing file
Gradient encoding, supplied as a 4xN text file with each line in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix.
Maps to a command-line argument: -grad %s (position: 1).
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
filter_file : a pathlike object or string representing an existing file
A text file containing the filtering coefficients for each even harmonic order, i.e. the linear frequency filtering parameters used for the initial linear spherical deconvolution step (default = [ 1 1 1 0 0 ]).
Maps to a command-line argument: -filter %s (position: -2).
iterations : an integer
The maximum number of iterations to perform for each voxel (default = 50).
Maps to a command-line argument: -niter %s.
lambda_value : a float
The regularisation parameter lambda that controls the strength of the constraint (default = 1.0).
Maps to a command-line argument: -norm_lambda %s.
mask_image : a pathlike object or string representing an existing file
Only perform computation within the specified binary brain mask image.
Maps to a command-line argument: -mask %s (position: 2).
maximum_harmonic_order : an integer
Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images.
Maps to a command-line argument: -lmax %s.
out_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: -1).
threshold_value : a float
The threshold below which the amplitude of the FOD is assumed to be zero, expressed as a fraction of the mean value of the initial FOD (default = 0.1).
Maps to a command-line argument: -threshold %s.
spherical_harmonics_image : a pathlike object or string representing an existing file
in_file : a list of items which are a pathlike object or string representing an existing file
Diffusion-weighted images.
Maps to a command-line argument: %s (position: -2).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
debug : a boolean
Display debugging messages.
Maps to a command-line argument: -debug (position: 1).
encoding_file : a pathlike object or string representing a file
Encoding file, supplied as a 4xN text file with each line in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix().
Maps to a command-line argument: -grad %s (position: 2).
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
ignore_slice_by_volume : a list of from 2 to 2 items which are an integer
Requires two values (i.e. [34 1] for [Slice Volume]). Ignores the image slices specified when computing the tensor. Slice here means the z coordinate of the slice to be ignored.
Maps to a command-line argument: -ignoreslices %s (position: 2).
ignore_volumes : a list of at least 1 items which are an integer
Requires at least one value (i.e. [2 5 6] for [Volumes]). Ignores the image volumes specified when computing the tensor.
Maps to a command-line argument: -ignorevolumes %s (position: 2).
in_mask_file : a pathlike object or string representing an existing file
Input DWI mask.
Maps to a command-line argument: -mask %s (position: -3).
out_filename : a pathlike object or string representing a file
Output tensor filename.
Maps to a command-line argument: %s (position: -1).
quiet : a boolean
Do not display information messages or progress status.
Maps to a command-line argument: -quiet (position: 1).
tensor : a pathlike object or string representing an existing file
in_file : a pathlike object or string representing an existing file
The input image series to be corrected.
Maps to a command-line argument: %s (position: -2).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
debug : a boolean
Display debugging messages.
Maps to a command-line argument: -debug (position: 5).
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
force_writing : a boolean
Force file overwriting.
Maps to a command-line argument: -force (position: 4).
mask : a pathlike object or string representing a file
Manually provide a mask image for bias field estimation (optional).
Maps to a command-line argument: -mask %s (position: 2).
out_bias : a pathlike object or string representing a file
Output the estimated bias field.
Maps to a command-line argument: -bias %s (position: 3).
out_file : a pathlike object or string representing a file
The output corrected image series.
Maps to a command-line argument: %s (position: -1).
use_ants : a boolean
Use ANTS N4 to estimate the inhomogeneity field.
Maps to a command-line argument: ants (position: 1).
Mutually exclusive with inputs: use_ants, use_fsl.
use_fsl : a boolean
Use FSL FAST to estimate the inhomogeneity field.
Maps to a command-line argument: fsl (position: 1).
Mutually exclusive with inputs: use_ants, use_fsl.
out_bias : a pathlike object or string representing an existing file
Output estimated bias field.
out_file : a pathlike object or string representing an existing file
encoding_filea pathlike object or string representing an existing file
Gradient encoding, supplied as a 4xN text file with each line is in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix.
Maps to a command-line argument: -grad%s (position: -2).
in_5tt_filea pathlike object or string representing an existing file
Diffusion-weighted images.
Maps to a command-line argument: %s (position: 3).
in_file : a pathlike object or string representing an existing file
Diffusion-weighted images.
Maps to a command-line argument: %s (position: 2).
algorithm : ‘dhollander’ or ‘fa’ or ‘manual’ or ‘msmt_5tt’ or ‘tax’ or ‘tournier’
Select the algorithm to be used to derive the response function; additional details and options become available once an algorithm is nominated. Options are: dhollander, fa, manual, msmt_5tt, tax, tournier.
Maps to a command-line argument: %s (position: 1).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
debug : a boolean
Display debugging messages.
Maps to a command-line argument: -debug.
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
maximum_harmonic_order : an integer
Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images.
Maps to a command-line argument: -lmax %s (position: -3).
out_csf_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: 6).
out_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: 4).
out_gm_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: 5).
quiet : a boolean
Do not display information messages or progress status.
Maps to a command-line argument: -quiet.
response : a pathlike object or string representing an existing file
WM Spherical harmonics image.
response_csf : a pathlike object or string representing an existing file
CSF Spherical harmonics image.
response_gm : a pathlike object or string representing an existing file
GM Spherical harmonics image.
encoding_file : a pathlike object or string representing an existing file
Gradient encoding, supplied as a 4xN text file where each line is in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix.
Maps to a command-line argument: -grad %s (position: -2).
in_file : a pathlike object or string representing an existing file
Diffusion-weighted images.
Maps to a command-line argument: %s (position: 2).
mask_image : a pathlike object or string representing an existing file
Only perform computation within the specified binary brain mask image.
Maps to a command-line argument: -mask %s (position: -1).
algorithm : ‘dhollander’ or ‘fa’ or ‘manual’ or ‘msmt_5tt’ or ‘tax’ or ‘tournier’
Select the algorithm to be used to derive the response function; additional details and options become available once an algorithm is nominated. Options are: dhollander, fa, manual, msmt_5tt, tax, tournier.
Maps to a command-line argument: %s (position: 1).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
debug : a boolean
Display debugging messages.
Maps to a command-line argument: -debug.
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
maximum_harmonic_order : an integer
Set the maximum harmonic order for the output series. By default, the program will use the highest possible lmax given the number of diffusion-weighted images.
Maps to a command-line argument: -lmax %s (position: -3).
out_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: 3).
quiet : a boolean
Do not display information messages or progress status.
Maps to a command-line argument: -quiet.
response : a pathlike object or string representing an existing file
Spherical harmonics image.
in_file : a pathlike object or string representing an existing file
Input images to be read.
Maps to a command-line argument: %s (position: -2).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
out_grad_fsl : a tuple of the form: (a pathlike object or string representing a file, a pathlike object or string representing a file)
Export the DWI gradient table to files in FSL (bvecs / bvals) format.
Maps to a command-line argument: -export_grad_fsl %s %s.
out_grad_fsl : a tuple of the form: (a pathlike object or string representing an existing file, a pathlike object or string representing an existing file)
Outputs [bvecs, bvals] DW gradient scheme (FSL format) if set.
Perform conversion with mrconvert between different file types and optionally extract a subset of the input image.
If used correctly, this program can be a very useful workhorse.
In addition to converting images between different formats, it can
be used to extract specific studies from a data set, extract a specific
region of interest, flip the images, or scale the intensity of the images.
in_dir : a pathlike object or string representing an existing directory
Directory containing DICOM files.
Maps to a command-line argument: %s (position: -2).
Mutually exclusive with inputs: in_file, in_dir.
in_file : a pathlike object or string representing an existing file
Voxel-order data filename.
Maps to a command-line argument: %s (position: -2).
Mutually exclusive with inputs: in_file, in_dir.
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
extension : ‘mif’ or ‘nii’ or ‘float’ or ‘char’ or ‘short’ or ‘int’ or ‘long’ or ‘double’
The output file extension, which determines the output image format (i.e. Bfloat). Can be ‘char’, ‘short’, ‘int’, ‘long’, ‘float’ or ‘double’.
(Nipype default value: mif)
extract_at_axis : 1 or 2 or 3
Extract data only at the coordinates specified. This option specifies the axis. Must be used in conjunction with extract_at_coordinate.
Maps to a command-line argument: -coord %s (position: 1).
extract_at_coordinate : a list of from 1 to 3 items which are an integer
Extract data only at the coordinates specified. This option specifies the coordinates. Must be used in conjunction with extract_at_axis.
Maps to a command-line argument: %s (position: 2).
force_writing : a boolean
Force file overwriting.
Maps to a command-line argument: -force.
grad : a pathlike object or string representing an existing file
Gradient encoding, supplied as a 4xN text file where each line is in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix.
Maps to a command-line argument: -grad %s (position: 9).
grad_fsl : a tuple of the form: (a pathlike object or string representing an existing file, a pathlike object or string representing an existing file)
[bvecs, bvals] DW gradient scheme (FSL format).
Maps to a command-line argument: -fslgrad %s %s.
layout : ‘nii’ or ‘float’ or ‘char’ or ‘short’ or ‘int’ or ‘long’ or ‘double’
Specify the layout of the data in memory. The actual layout produced will depend on whether the output image format can support it.
Maps to a command-line argument: -output %s (position: 5).
offset_bias : a float
Apply offset to the intensity values.
Maps to a command-line argument: -scale %d (position: 7).
out_filename : a pathlike object or string representing a file
Output filename.
Maps to a command-line argument: %s (position: -1).
output_datatype : ‘float32’ or ‘float32le’ or ‘float32be’ or ‘float64’ or ‘float64le’ or ‘float64be’ or ‘int64’ or ‘uint64’ or ‘int64le’ or ‘uint64le’ or ‘int64be’ or ‘uint64be’ or ‘int32’ or ‘uint32’ or ‘int32le’ or ‘uint32le’ or ‘int32be’ or ‘uint32be’ or ‘int16’ or ‘uint16’ or ‘int16le’ or ‘uint16le’ or ‘int16be’ or ‘uint16be’ or ‘cfloat32’ or ‘cfloat32le’ or ‘cfloat32be’ or ‘cfloat64’ or ‘cfloat64le’ or ‘cfloat64be’ or ‘int8’ or ‘uint8’ or ‘bit’
The data type of the output image.
Maps to a command-line argument: -datatype %s.
prs : a boolean
Assume that the DW gradients are specified in the PRS frame (Siemens DICOM only).
Maps to a command-line argument: -prs (position: 3).
quiet : a boolean
Do not display information messages or progress status.
Maps to a command-line argument: -quiet.
replace_nan_with_zero : a boolean
Replace all NaN values with zero.
Maps to a command-line argument: -zero (position: 8).
resample : a float
Apply scaling to the intensity values.
Maps to a command-line argument: -scale %d (position: 6).
stride : a list of from 3 to 4 items which are an integer
Three to four comma-separated numbers specifying the strides of the output data in memory. The actual strides produced will depend on whether the output image format can support it.
Maps to a command-line argument: -stride %s (position: 3).
voxel_dims : a list of from 3 to 3 items which are a float
Three comma-separated numbers giving the size of each voxel in mm.
Maps to a command-line argument: -vox %s (position: 3).
converted : a pathlike object or string representing an existing file
The output converted image.
in_files : a list of items which are a pathlike object or string representing an existing file
Input images to be transformed.
Maps to a command-line argument: %s (position: -2).
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
debug : a boolean
Display debugging messages.
Maps to a command-line argument: -debug (position: 1).
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
flip_x : a boolean
Assume the transform is supplied assuming a coordinate system with the x-axis reversed relative to the MRtrix convention (i.e. x increases from right to left). This is required to handle transform matrices produced by FSL’s FLIRT command. This is only used in conjunction with the -reference option.
Maps to a command-line argument: -flipx (position: 1).
interp : ‘nearest’ or ‘linear’ or ‘cubic’ or ‘sinc’
Set the interpolation method to use when reslicing (choices: nearest, linear, cubic, sinc. Default: cubic).
Maps to a command-line argument: -interp %s.
invert : a boolean
Invert the specified transform before using it.
Maps to a command-line argument: -inverse (position: 1).
out_filename : a pathlike object or string representing a file
Output image.
Maps to a command-line argument: %s (position: -1).
quiet : a boolean
Do not display information messages or progress status.
Maps to a command-line argument: -quiet (position: 1).
reference_image : a pathlike object or string representing an existing file
In case the transform supplied maps from the input image onto a reference image, use this option to specify the reference. Note that this implicitly sets the -replace option.
Maps to a command-line argument: -reference %s (position: 1).
replace_transform : a boolean
Replace the current transform by that specified, rather than applying it to the current transform.
Maps to a command-line argument: -replace (position: 1).
template_image : a pathlike object or string representing an existing file
Reslice the input image to match the specified template image.
Maps to a command-line argument: -template %s (position: 1).
transformation_file : a pathlike object or string representing an existing file
The transform to apply, in the form of a 4x4 ASCII file.
Maps to a command-line argument: -transform %s (position: 1).
out_file : a pathlike object or string representing an existing file
The output transformed image.
in_file : a pathlike object or string representing an existing file
The image containing the source data. The type of data required depends on the type of tracking as set in the preceding argument. For DT methods, the base DWI are needed. For SD methods, the SH harmonic coefficients of the FOD are needed.
Maps to a command-line argument: %s (position: 2).
act_file : a pathlike object or string representing an existing file
Use the Anatomically-Constrained Tractography framework during tracking; the provided image must be in the 5TT (five-tissue-type) format.
Maps to a command-line argument: -act %s.
angle : a float
Set the maximum angle between successive steps (default is 90deg x stepsize / voxelsize).
Maps to a command-line argument: -angle %s.
args : a string
Additional parameters to the command.
Maps to a command-line argument: %s.
backtrack : a boolean
Allow tracks to be truncated.
Maps to a command-line argument: -backtrack.
crop_at_gmwmi : a boolean
Crop streamline endpoints more precisely as they cross the GM-WM interface.
Maps to a command-line argument: -crop_at_gmwmi.
cutoff_value : a float
Set the FA or FOD amplitude cutoff for terminating tracks (default is 0.5).
Maps to a command-line argument: -cutoff %s.
desired_number_of_tracks : an integer
Sets the desired number of tracks. The program will continue to generate tracks until this number of tracks have been selected and written to the output file (default is 100 for *_STREAM methods, 1000 for *_PROB methods).
Maps to a command-line argument: -select %d.
do_not_precompute : a boolean
Turns off precomputation of the Legendre polynomial values. Warning: this will slow down the algorithm by a factor of approximately 4.
Maps to a command-line argument: -noprecomputed.
environ : a dictionary with keys which are a bytes or None or a value of class ‘str’ and with values which are a bytes or None or a value of class ‘str’
Environment variables.
(Nipype default value: {})
gradient_encoding_file : a pathlike object or string representing an existing file
Gradient encoding, supplied as a 4xN text file where each line is in the format [ X Y Z b ], where [ X Y Z ] describe the direction of the applied gradient, and b gives the b-value in units (1000 s/mm^2). See FSL2MRTrix.
Maps to a command-line argument: -grad %s.
initial_cutoff_value : a float
Sets the minimum FA or FOD amplitude for initiating tracks (default is twice the normal cutoff).
Maps to a command-line argument: -seed_cutoff %s.
initial_direction : a list of from 2 to 2 items which are an integer
Specify the initial tracking direction as a vector.
Maps to a command-line argument: -seed_direction %s.
inputmodel : ‘FACT’ or ‘iFOD1’ or ‘iFOD2’ or ‘Nulldist1’ or ‘Nulldist2’ or ‘SD_Stream’ or ‘Seedtest’ or ‘Tensor_Det’ or ‘Tensor_Prob’
Specify the tractography algorithm to use. Valid choices are: FACT, iFOD1, iFOD2, Nulldist1, Nulldist2, SD_Stream, Seedtest, Tensor_Det, Tensor_Prob (default: iFOD2).
Maps to a command-line argument: -algorithm %s (position: -3).
(Nipype default value: FACT)
mask_file : a pathlike object or string representing an existing file
Mask file. Only tracks within mask.
Maps to a command-line argument: -mask %s.
maximum_number_of_seeds : an integer
Sets the maximum number of tracks to generate. The program will not generate more tracks than this number, even if the desired number of tracks hasn’t yet been reached (default is 1000 x number of streamlines).
Maps to a command-line argument: -seeds %d.
maximum_tract_length : a float
Sets the maximum length of any track in millimeters (default is 500 mm).
Maps to a command-line argument: -maxlength %s.
minimum_tract_length : a float
Sets the minimum length of any track in millimeters (default is 5 mm).
Maps to a command-line argument: -minlength %s.
out_file : a pathlike object or string representing a file
Output data file.
Maps to a command-line argument: %s (position: -1).
rk4 : a boolean
Use 4th-order Runge-Kutta integration (slower, but eliminates curvature overshoot in 1st-order deterministic methods).
Maps to a command-line argument: -rk4.
seed_file : a pathlike object or string representing an existing file
Seed file.
Maps to a command-line argument: -seed_image %s.
seed_gmwmi : a pathlike object or string representing an existing file
Seed from the grey matter - white matter interface (only valid if using ACT framework).
Maps to a command-line argument: -seed_gmwmi %s.
Requires inputs: act_file.
seed_spec : a list of from 4 to 4 items which are an integer
Seed specification in voxels and radius (x y z r).
Maps to a command-line argument: -seed_sphere %s.
step_size : a float
Set the step size of the algorithm in mm (default is 0.5).
Maps to a command-line argument: -step %s.
stop : a boolean
Stop track as soon as it enters any of the include regions.
Maps to a command-line argument: -stop.
unidirectional : a boolean
Track from the seed point in one direction only (default is to track in both directions).
Maps to a command-line argument: -seed_unidirectional.
tracked : a pathlike object or string representing an existing file
Output file containing the tracks generated.
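Each trait listed above maps onto a flag or positional argument of the underlying MRtrix command; setting a trait on the interface inputs builds the corresponding command line. A minimal usage sketch follows (the import path, class name and filenames are illustrative assumptions and may differ in your cmtklib version):
from cmtklib.interfaces.mrtrix3 import DWI2Tensor  # assumed module path

dwi2tensor = DWI2Tensor()
dwi2tensor.inputs.in_file = 'sub-01_dwi.mif'   # positional input image
dwi2tensor.inputs.encoding_file = 'grad.txt'   # maps to "-grad grad.txt"
dwi2tensor.inputs.out_filename = 'dt.mif'      # positional output (position: -1)
print(dwi2tensor.cmdline)                      # inspect the generated command line
# result = dwi2tensor.run()                    # would execute the MRtrix call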
It combines the original cortico-subcortical parcellation with
the following extra segmented structures:
Segmentation of the 8 thalamic nuclei per hemisphere
Segmentation of 14 hippocampal subfields per hemisphere
Segmentation of 3 brainstem sub-structures
It also generates by default the corresponding (1) description of the nodes in graphml
format and (2) color lookup tables in FreeSurfer format that can be displayed in freeview.
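For example, the graphml node description can be inspected with NetworkX (a sketch; the filename is illustrative and follows the atlas naming used in the tutorials below):
import networkx as nx

# Illustrative derivative filename for the scale-3 parcellation node description
nodes = nx.read_graphml('sub-01_atlas-L2018_res-scale3_dseg.graphml')
print(list(nodes.nodes(data=True))[:3])  # region names and attributes of the first labels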
Datalad is a powerful tool for the versioning and sharing of raw
and processed data, as well as for the tracking of data provenance
(i.e., recording how data was processed). This page was
created with the intention to share with the user how we adopted
Datalad to manage and process datasets with Connectome Mapper 3
in our lab, following the YODA principles as closely as possible.
You may ask “What are the YODA principles?”. They are basic principles
behind creating, sharing, and publishing reproducible, understandable,
and open data analysis projects with DataLad.
Python3 must be installed together with Datalad and all its dependencies.
You can use the conda environment py39cmp-gui for instance.
See Installation of py39cmp-gui
for more installation details.
A recent version of git-annex and liblzma is also required (both included in
py39cmp-gui for Ubuntu/Debian).
For reproducibility, create and write datalad get commands to get_required_files_for_analysis.sh:
echo"datalad get input/sub-*/ses-*/anat/sub-*_T1w.nii.gz">code/get_required_files_for_analysis.shecho"datalad get input/sub-*/ses-*/dwi/sub-*_dwi.nii.gz">>code/get_required_files_for_analysis.shecho"datalad get input/sub-*/ses-*/dwi/sub-*_dwi.bvec">>code/get_required_files_for_analysis.shecho"datalad get input/sub-*/ses-*/dwi/sub-*_dwi.bval">>code/get_required_files_for_analysis.sh
Save the script to the dataset’s history:
dataladsave-m"Add script to get the files required for analysis by Alice"
--call-fmt specifies a custom docker run command. The current directory
is assumed to be the dataset root directory, so the BIDS input is mapped with "$(pwd)"/input and the
output directory is inside the derivatives/ folder.
Important
The name of the container registered to Datalad cannot contain the dot
character, so a <VERSION_TAG> of v3.X.Y has to be rewritten as v3-X-Y.
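As a sketch, the container registration can also be scripted with the DataLad Python API (requires the datalad-container extension; the image URL, mount points and container name are illustrative):
import datalad.api as dl

# Equivalent of `datalad containers-add ... --url ... --call-fmt ...`
dl.containers_add(
    'connectomemapper-bidsapp-v3-X-Y',  # no dots allowed in the name
    url='dhub://sebastientourbier/connectomemapper-bidsapp:<VERSION_TAG>',
    call_fmt='docker run --rm -t '
             '-v "$(pwd)"/input:/bids_dir '
             '-v "$(pwd)"/derivatives:/output_dir '
             '{img} {cmd}',
    dataset='.',
)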
Copy existing reference pipeline configuration files to code folder:
dataladsave-m"Alice's test dataset on local \workstation ready for analysis with connectomemapper-bidsapp:<VERSION_TAG>" \
--version-tagready4analysis-<date>-<time>
datalad containers-run will take care of replacing the {inputs[i]} placeholders by the value
specified by the i-th --input flag (indexing starts at 0).
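A sketch of the corresponding call with the Python API (container name, inputs, outputs and BIDS App arguments are placeholders):
import datalad.api as dl

# Equivalent of `datalad containers-run --container-name ... --input ... --output ...`
dl.containers_run(
    '{inputs[0]} {outputs[0]} participant --participant_label 01',  # BIDS App arguments
    container_name='connectomemapper-bidsapp-v3-X-Y',
    inputs=['input'],           # referenced as {inputs[0]}
    outputs=['derivatives'],    # referenced as {outputs[0]}
    message='Process sub-01 with connectomemapper-bidsapp',
    dataset='.',
)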
Save the state:
dataladsave-m"Alice's test dataset on local \workstation processed by connectomemapper-bidsapp:<VERSION_TAG>, {Date/Time}" \
--version-tagprocessed-<date>-<time>
Push the datalad dataset with data derivatives to the server:
datalad push -d . --to remote
Note
--to remote specifies the remote dataset sibling, i.e.
ssh://<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example-processed
previously configured.
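The same push can be scripted with the Python API (the sibling name remote is the one configured above):
import datalad.api as dl

# Equivalent of `datalad push -d . --to remote`
dl.push(dataset='.', to='remote')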
Get the Connectome Mapper output files (brain segmentation and multi-scale parcellation) used by Bob in his analysis
For reproducibility, write datalad get commands to get_required_files_for_analysis_by_bob.sh:
echo"datalad get derivatives/cmp/sub-*/ses-*/anat/sub-*_mask.nii.gz" \
>code/get_required_files_for_analysis_by_bob.shecho"datalad get derivatives/cmp/sub-*/ses-*/anat/sub-*_class-*_dseg.nii.gz" \
>>code/get_required_files_for_analysis_by_bob.shecho"datalad get derivatives/cmp/sub-*/ses-*/anat/sub-*_scale*_atlas.nii.gz" \
>>code/get_required_files_for_analysis_by_bob.sh
Save the script to the dataset’s history:
dataladsave-m"Add script to get the files required for analysis by Bob"
Update the remote datalad dataset with data derivatives:
datalad push -d . --to origin
Note
--to origin specifies the origin dataset sibling, i.e.
ssh://<SERVER_USERNAME>@<SERVER_IP_ADDRESS>:/archive/data/ds-example-processed
from which it was cloned.
Connectome Mapper 3 BIDS App can be run on a cluster using Singularity.
For your convenience, the Singularity image is automatically built alongside
the Docker image using Singularity 3.8.4 and deployed to
Sylabs.io (the equivalent of DockerHub for Singularity)
during continuous integration on CircleCI, from where it can be freely
downloaded with the singularity pull command.
Calling the Singularity image of the CMP3 BIDS App from the
terminal, for instance to perform both anatomical and diffusion pipelines for
sub-01, sub-02 and sub-03 of a BIDS dataset whose
root directory is located at ${localDir}, works much like calling the Docker image,
with a few differences. The singularity run command replaces docker run, and the Docker option flag -v is replaced by the Singularity --bind flag to map local folders inside the container. Last but not least, while Docker containers are executed in total isolation, Singularity images MUST be run with the option flag --containall. Otherwise your $HOME and $TMP directories and your local environment variables might be shared inside the container.
There actually exist two options for Docker-to-Singularity container image conversion. Let’s say we want to store the Singularity-compatible image file in ~/Softwares/singularity/.
Option 1 (recommended): Using the Docker image docker2singularity
Disadvantage(s): the conversion has to be made a priori on a local workstation where Docker is installed, and the converted image then has to be uploaded to the cluster.
If you want to reproduce all the results of this notebook on your side, a conda environment.yml file can be downloaded at the following link: tutorial_environment.yml. The original .ipynb notebook file can be downloaded at the following link: analysis_tutorial.ipynb.
Once you have downloaded the conda environment file, install the environment as follows: conda env create -f tutorial_environment.yml
# General
import os
import sys
import subprocess
import copy
# Dataset management
import datalad.api as dl
# Data handling
import pandas as pd
import numpy as np
import nibabel as nib
import scipy.io as sio
# BIDS dataset handling
from bids import BIDSLayout
# Network / Graph
import pygsp
import networkx as nx
# Visualization
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import nilearn
from nilearn import plotting, image, datasets
/Applications/miniconda3/envs/cmp3-tutorial/lib/python3.7/site-packages/nilearn/datasets/__init__.py:96: FutureWarning: Fetchers from the nilearn.datasets module will be updated in version 0.9 to return python strings instead of bytes and Pandas dataframes instead of Numpy arrays.
"Numpy arrays.", FutureWarning)
For demonstration, we are going to use the latest version of the VEPCON dataset, available on OpenNeuro, which already contains output from Connectome Mapper v3.0.3. A full description of the dataset can be found in Pascucci, Tourbier, et al. 2022.
In case you want to rerun the notebook, make sure to remove any ds003505_demo folder in the directory of the notebook. Otherwise, datalad install will complain.
[2]:
%%time
# Download example dataset with datalad
dataset_dir = os.path.join(".", "ds003505_demo")
vepcon_data = dl.install(
path=dataset_dir,
source="https://github.com/OpenNeuroDatasets/ds003505.git"
)
[INFO] Cloning dataset to Dataset(/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo)
[INFO] Attempting to clone from https://github.com/OpenNeuroDatasets/ds003505.git to /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo
[INFO] Start enumerating objects
[INFO] Start receiving objects
[INFO] Start resolving deltas
[INFO] Completed clone attempts for Dataset(/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo)
[INFO] scanning for annexed files (this may take some time)
[INFO] Remote origin not usable by git-annex; setting annex-ignore
[INFO] https://github.com/OpenNeuroDatasets/ds003505.git/config download failed: Not Found
[INFO] access to 1 dataset sibling s3-PRIVATE not auto-enabled, enable with:
| datalad siblings -d "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo" enable -s s3-PRIVATE
CPU times: user 97 ms, sys: 74.8 ms, total: 172 ms
Wall time: 9.41 s
As the dataset is in BIDS, we can use Pybids to help us with the task of interacting with the files of the dataset.
[3]:
# Represent the BIDS dataset as a PyBIDS BIDSLayout
layout = BIDSLayout(vepcon_data.path)
# Add derivative folder containing the connectivity matrices
layout.add_derivatives(os.path.join(vepcon_data.path, "derivatives", "cmp-v3.0.3"))
Now we can easily query for the filenames of the files we are interested in using layout.get. We will ask for the connectivity matrix of subject 01, scale 3, in tsv format.
It will be returned as a list of file paths (in this case containing just one element). Note that at this stage the Datalad dataset contains mostly file annexes.
As the connectomes in TSV format are just text files, they are directly managed by Git, such that we do not have to retrieve their actual content before loading them.
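The query and loading cells themselves are not reproduced in this export; a minimal sketch, mirroring the gpickle query shown below, would be:
# Query the TSV connectome file path for subject 01, scale 3
conn_tsv_scale3 = layout.get(
    subject='01',
    datatype='dwi',
    atlas='L2018',
    res='scale3',
    suffix='connectivity',
    extension='tsv',
    return_type='filename'
)
# TSV connectomes are managed directly by Git, so no datalad get is needed
edges = pd.read_csv(conn_tsv_scale3[0], sep='\t')
edges.head()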
Using Networkx, we can convert this table to a network graph. From that, we can convert individual measures to a Numpy array. The array format is especially useful, as it allows us to plot the edge weights easily.
[6]:
G = nx.from_pandas_edgelist(edges, edge_attr=True)
A_fiber_density = nx.to_numpy_array(G, weight="fiber_density")
A_fiber_density
Alternatively, we can also load the matrices in network format, by reading the gpickle files using Networkx:
[7]:
# Retrieve content of the connectome file with datalad
vepcon_data.get('derivatives/cmp-v3.0.3/sub-01/dwi/sub-01_atlas-L2018_res-scale3_conndata-network_connectivity.gpickle')
# Query the connectome file path
conn_gpickle_scale3 = layout.get(
subject='01',
datatype='dwi',
atlas='L2018',
res='scale3',
suffix='connectivity',
extension='gpickle',
return_type='filename'
)
G = nx.read_gpickle(conn_gpickle_scale3[0]) # same format as with tsv
A_fiber_density = nx.to_numpy_array(G, weight="fiber_density")
A_fiber_density
Note that in these two situations, the connectome files are not directly managed by Git and their actual content needs to be retrieved first with the datalad get command.
Let’s plot some of those adjacency matrices using Matplotlib and Seaborn:
[10]:
# Create color map to handle zeros with log visualization
custom_cmap = copy.copy(plt.cm.get_cmap("inferno"))
# Copy the default cmap (0,0,0.5156)
custom_cmap.set_bad((0, 0, 0))
# Define the metrics to plot
cols_to_plot = ["number_of_fibers", "fiber_length_mean",
"fiber_proportion", "fiber_density",
"FA_mean", "normalized_fiber_density"]
# Plot with log-scale for all metrics except FA_mean
fig, axs = plt.subplots(3,2, figsize=(12,15))
axs = axs.flatten()
for c, ax in zip(cols_to_plot, axs):
A = nx.to_numpy_array(G, weight=c)
sns.heatmap(A, ax=ax, cmap=custom_cmap, norm=(LogNorm()
if c != "FA_mean"
else None))
ax.set_title(c)
Graph signal processing with structural connectivity and visualization
The package PyGSP offers a range of graph signal processing tools we can use on our structural connectivity data. In particular, we can do an eigendecomposition of the graph Laplacian to get the Fourier basis - the connectome harmonics.
Even though it is possible to also do this for subcortical regions, for the sake of plotting it is easier just to work with the cortical regions. To identify those, we need the parcellation labels.
[11]:
# Query the BIDS index/label mapping TSV file for the corresponding scale
label_path = layout.get(subject='01',
datatype='anat',
atlas='L2018',
res='scale3',
suffix='dseg',
extension='tsv',
return_type='filename')
print(f' BIDS index/label mapping TSV filepath: {label_path[0]}')
# Load the TSV content
labels = pd.read_csv(label_path[0], sep="\t", index_col=0)
# Reset index to start at 0
labels.reset_index(inplace=True)
# Select cortex labels
labels_ctx = labels["name"][[n.startswith("ctx") for n in labels["name"]]].copy()
idx = list(labels_ctx.index)
# Select rows with cortical areas
# A_fd_ctx = A_fiber_density[idx]
A = nx.to_numpy_array(G, weight="FA_mean")
A_fd_ctx = A[idx]
# Select columns with cortical areas
A_fd_ctx = A_fd_ctx[:,idx]
[13]:
# Display the shape of the matrix
A_fd_ctx.shape
[13]:
(216, 216)
Now we can compute the harmonics:
[14]:
np.fill_diagonal(A_fd_ctx, 0) # PyGSP does not support self-loops
G_fd = pygsp.graphs.Graph(A_fd_ctx) # PyGSP graph
G_fd.compute_laplacian(lap_type="normalized")
G_fd.compute_fourier_basis() # compute connectome harmonics
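Conceptually, compute_laplacian(lap_type="normalized") and compute_fourier_basis() boil down to the following NumPy sketch (up to the sign and ordering of eigenvectors; it assumes no isolated nodes):
# Normalized graph Laplacian: L = I - D^(-1/2) W D^(-1/2)
W = A_fd_ctx                                  # weighted adjacency, zero diagonal
d = W.sum(axis=1)                             # node degrees
L = np.eye(W.shape[0]) - W / np.sqrt(np.outer(d, d))
eigval, eigvec = np.linalg.eigh(L)            # eigenvalues in ascending order
# The columns of eigvec are the connectome harmonics, i.e. G_fd.U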
The harmonics have the same dimensions as our original adjacency matrix.
Nilearn offers a quick and easy way to plot them using plot_markers. For this, we need the center coordinates of each region in the parcellation in MNI space. For your convenience, they have been already computed and can be easily retrieved with the get_lausanne2018_parcellation_mni_coords(scale) utility function of CMP3.
[16]:
# Import the util function
from cmtklib.data.parcellation.util import get_lausanne2018_parcellation_mni_coords
[17]:
# Load coordinates with the utility function provided by CMP3
coords_ctx = get_lausanne2018_parcellation_mni_coords('scale3')
# Plot
plotting.plot_markers(G_fd.U[:,1], coords_ctx)
[17]:
<nilearn.plotting.displays.OrthoProjector at 0x7f847a3e5690>
A prettier version is to plot the connectome harmonics on a brain surface using Nilearn plot_surf_roi(). For your convenience, a multiple-view plot can be easily generated and saved with the plot_lausanne2018_surface_ctx() function of the cmtklib.data.parcellation.viz module of CMP3, by specifying the scale to use.
These figures take a few minutes to generate, so you might need to be a bit patient.
[18]:
# Import the viz function
from cmtklib.data.parcellation.viz import plot_lausanne2018_surface_ctx
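# The plotting call itself is not included in this export; a hypothetical
# invocation (signature assumed, check the cmtklib documentation), which
# produced the timing output below, could look like:
# plot_lausanne2018_surface_ctx(G_fd.U[:, 1], scale='scale3')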
CPU times: user 1min 37s, sys: 5.73 s, total: 1min 43s
Wall time: 1min 23s
Pretty, right? This concludes the tutorial. We hope you enjoyed it, and any feedback or suggestions to improve it are very welcome! Just open a new issue on GitHub and share your thoughts with us.
Human brain networks function in connectome-specific harmonic waves (Atasoy et al., 2016, link): Landmark paper that first applied graph signal processing to brain connectivity.
Functional harmonics reveal multi-dimensional basis functions underlying cortical organization (Glomb et al., 2021, link): Connectome harmonics of functional connectivity.
The connectome spectrum as a canonical basis for a sparse representation of fast brain activity (Rué-Queralt et al., 2021, link): EEG and connectome harmonics.
It is important to note that CMP3 does not include preprocessing of EEG data, so it is expected that you have your data ready to be analyzed.
Important: Note that the skull surfaces provided with the dataset (“bem”, see below), which are needed to create the head model, were obtained from non-defaced MRIs; you will not be able to reproduce them from the VEPCON dataset alone.
If you want to reproduce all the results of this notebook on your side, a conda environment.yml file can be downloaded at the following link: EEG_tutorial_environment.yml. The original .ipynb notebook file can be downloaded at the following link: EEG_pipeline_tutorial.ipynb.
Once you have downloaded the conda environment file, install the environment py37cmp-eeg as follows: conda env create -f EEG_tutorial_environment.yml
For demonstration, we are going to use the latest version of the VEPCON dataset, available on OpenNeuro, which already contains outputs from Connectome Mapper v3.0.3 and Freesurfer 7.1.1. A full description of the dataset can be found in Pascucci, Tourbier, et al. 2022.
In case you want to rerun the notebook, make sure to remove any ds003505_demo folder in the directory of the notebook. Otherwise, datalad install will complain.
[2]:
%%time
# Download example dataset with datalad
bids_dir = os.path.join(".", "ds003505_demo") # Adjust path to your BIDS dataset as needed
vepcon_data = dl.install(
path=bids_dir,
source="https://github.com/OpenNeuroDatasets/ds003505.git"
)
CPU times: user 9.95 ms, sys: 13.8 ms, total: 23.7 ms
Wall time: 58.6 ms
As of now, the EEG pipeline can only be run directly from the application programming interface (API) as demonstrated in this notebook. As soon as possible, we will integrate it into the graphical user interface (GUI) and the command line interface (CLI).
First, we need to configure the following user-defined arguments. Please modify them as needed.
[3]:
# Adjust path to your BIDS dataset as needed
bids_dir = vepcon_data.path
# Adjust path of the output directory as needed
output_dir = os.path.join(bids_dir, 'derivatives')
# Adjust the subject to be processed as needed
participant_label = 'sub-01'
# Adjust the name of the task to be considered
task_label = 'faces'
# Adjust path to the anatomical pipeline configuration file as needed
anat_pipeline_config = os.path.join(bids_dir, 'code', 'ConnectomeMapper-Docker', 'ref_anatomical_config.json')
# Adjust path to the MNE-based pipeline configuration file as needed
eeg_pipeline_config = os.path.join('.', 'ref_mne_eeg_config.json')
The EEG pipeline configuration .json file contains the information CMP3 needs to correctly load the EEG data and associated information, such as electrode positions, names of conditions, and which parcellation to use, as seen below:
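A quick way to inspect it from the notebook (a sketch; the full content also appears in the pipeline log dump further below):
import json

# Load and pretty-print the EEG pipeline configuration
with open(eeg_pipeline_config) as f:
    print(json.dumps(json.load(f), indent=2))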
Note: If you would like to run another subject (all available subjects except subjects 5 and 15 can be run), you will need to modify the config files (replacing sub-<label> accordingly).
Then, we need to tell datalad to download the actual content of the structural MRI and EEG files that will be input to the pipelines.
[5]:
%%time
# Raw MRI
vepcon_data.get(f'{participant_label}/anat/')
CPU times: user 7.09 ms, sys: 9.96 ms, total: 17 ms
Wall time: 135 ms
[5]:
[{'action': 'get',
'path': '/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/anat',
'type': 'directory',
'refds': '/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo',
'status': 'notneeded',
'message': ('nothing to get from %s',
'/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/anat')}]
[6]:
%%time
# CMP3 and Freesurfer derivatives
vepcon_data.get(f'derivatives/cmp-v3.0.3/{participant_label}/anat/')
vepcon_data.get(f'derivatives/freesurfer-7.1.1/{participant_label}/')
CPU times: user 7.82 ms, sys: 10.2 ms, total: 18.1 ms
Wall time: 165 ms
[6]:
[{'action': 'get',
'path': '/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01',
'type': 'directory',
'refds': '/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo',
'status': 'notneeded',
'message': ('nothing to get from %s',
'/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01')}]
[7]:
%%time
# Electrode position
vepcon_data.get(f'derivatives/{__cartool_directory__}/{participant_label}/' +
f'eeg/{participant_label}_eeg.xyz')
# Preprocessed EEG in EEGLab .fdt/.set format
vepcon_data.get(f'derivatives/{__eeglab_directory__}/{participant_label}/eeg/' +
f'{participant_label}_task-faces_desc-preproc_eeg.fdt')
vepcon_data.get(f'derivatives/{__eeglab_directory__}/{participant_label}/eeg/' +
f'{participant_label}_task-faces_desc-preproc_eeg.set')
CPU times: user 11.1 ms, sys: 14.5 ms, total: 25.6 ms
Wall time: 223 ms
In the latest version of VEPCON (v1.1.1) that we use in this tutorial, the dataset_description.json files in the derivatives/cartool-v3.80 and derivatives/eeglab-v14.1.1 folders are invalid and will raise an error if these directories are added to the BIDSLayout representation of the VEPCON dataset. We need to fix them by running a small helper function provided along with this tutorial.
# initialize project
project = cmp.project.ProjectInfo()
project.base_directory = os.path.abspath(bids_dir)
project.output_directory = os.path.abspath(output_dir)
project.subjects = ["{}".format(participant_label)]
project.subject = "{}".format(participant_label)
# VEPCON dataset does not have a subject/sessions structure
project.subject_sessions = [""]
project.subject_session = ""
# Set the path to the anatomical pipeline configuration file
project.anat_config_file = os.path.abspath(anat_pipeline_config)
As the dataset is in BIDS, we can use Pybids to help us with the task of interacting with the files of the dataset.
[10]:
# Represent the BIDS dataset as a PyBIDS BIDSLayout
bids_layout = BIDSLayout(project.base_directory)
Once set, we can run the anatomical pipeline in order to obtain, among other things, the Freesurfer derivatives necessary for the MNE pipeline.
Freesurfer and CMP3 derivatives are indeed provided with the VEPCON dataset, so we do not need to run it here; on a fresh dataset, however, you would have to set run = True in the cell below.
[11]:
%%time
# Do not run again the anatomical pipeline
# You will have to set it to True on a fresh dataset
run = False
# Initialize the anatomical pipeline reading the configuration file
anat_pipeline = cmp.project.init_anat_project(project, False)
if anat_pipeline is not None:
# Check if inputs to anatomical pipeline are valid
anat_valid_inputs = anat_pipeline.check_input(bids_layout, gui=False)
if anat_valid_inputs:
if run:
print(">> Process anatomical pipeline")
anat_pipeline.process()
else:
print_error(" .. ERROR: Invalid inputs")
exit_code = 1
# Check if outputs to anatomical pipeline are valid
if run:
anat_valid_outputs, msg = anat_pipeline.check_output()
else:
anat_valid_outputs = True
# Set the freesurfer subjects directory and the subject id
project.freesurfer_subjects_dir = anat_pipeline.stages['Segmentation'].config.freesurfer_subjects_dir
project.freesurfer_subject_id = anat_pipeline.stages['Segmentation'].config.freesurfer_subject_id
.. LOAD: Load anatomical config file : /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/code/ConnectomeMapper-Docker/ref_anatomical_config.json
.. WARNING: CMP3 version used to generate the configuration files (v3.0.2) and version of CMP3 used (v3.1.0) differ
**** Check Inputs ****
> Looking in /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo for....
/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/anat/sub-01_T1w.nii.gz
... t1_file : /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/anat/sub-01_T1w.nii.gz
/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/anat/sub-01_T1w.json
... t1_json_file : /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/anat/sub-01_T1w.json
Inputs check finished successfully.
Only anatomical data (T1) available.
CPU times: user 29.6 ms, sys: 6.12 ms, total: 35.7 ms
Wall time: 33.1 ms
In VEPCON, the electrode positions are provided in a file in the Cartool-derivatives folder, but CMP3 expects them in the EEGLAB-derivatives folder.
[12]:
# Note: kept for reference only; this manual copy is no longer needed
# Copy the file to the appropriate location
#cartool_file_location = os.path.join(
# bids_dir, 'derivatives', __cartool_directory__,
# participant_label,'eeg', participant_label + '_eeg.xyz'
#)
#eeglab_file_location = os.path.join(
# bids_dir, 'derivatives', 'eeglab-v14.1.1',
# participant_label, 'eeg', participant_label + '_eeg.xyz')
# if not os.path.exists(eeglab_file_location):
# _ = shutil.copyfile(cartool_file_location, eeglab_file_location)
Since we are using non-defaced MRIs, which are not exactly the same as the ones provided on OpenNeuro, we need an additional transform that will be applied to the electrode positions.
[13]:
# The following line creates the appropriate file with this transform in derivatives/cmp-v3.0.3:
create_trans_files(bids_dir, participant_label)
Overwriting existing file.
Finally, you can run the EEG pipeline.
[14]:
%%time
from cmtklib import config
# IF on MacOSX, add /usr/sbin to the $PATH
# which contains sysctl
# Otherwise, Nipype raises an "/bin/sh: sysctl: command not found" error
# when trying to get the system memory
if "darwin" in sys.platform:
os.environ["PATH"] = f'/usr/sbin/:{os.environ["PATH"]}'
# Note that "sysctl" can be located in a different place
# than "/usr/sbin".
# To know which path has to be added, you can run
# `locate sysctl`
# Set the path to the EEG pipeline configuration file
eeg_pipeline_config = 'ref_mne_eeg_config.json'
project.eeg_config_file = os.path.abspath(eeg_pipeline_config)
if anat_valid_outputs:
# Initialize the EEG pipeline reading the configuration file and
# check input validity
eeg_valid_inputs, eeg_pipeline = cmp.project.init_eeg_project(
project, False
)
if eeg_pipeline is not None:
eeg_pipeline.parcellation_scheme = anat_pipeline.parcellation_scheme
eeg_pipeline.atlas_info = anat_pipeline.atlas_info
eeg_pipeline.stages['EEGPreprocessing'].config.task_label = 'faces'
if eeg_valid_inputs:
print(">> Process EEG pipeline")
eeg_pipeline.process()
else:
print(" .. ERROR: Invalid inputs")
exit_code = 1
else:
print_error(f' .. ERROR: Invalid anatomical outputs for eeg pipeline')
print_error(f'{msg}')
exit_code = 1
**** Check Inputs ****
Base dir: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline
.. DEBUG : Generated file name = sub-01_atlas-L2018_res-scale1_dseg.nii.gz
.. DEBUG : Generated file name = sub-01_atlas-L2018_res-scale1_dseg.nii.gz
cmp-v3.0.3
220709-17:01:27,860 nipype.workflow INFO:
[Node] Setting-up "eeg_check_input" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_check_input".
220709-17:01:27,869 nipype.workflow INFO:
[Node] Executing "eeg_check_input" <nipype.interfaces.io.BIDSDataGrabber>
Load dataset_description for: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.0.3
Load dataset_description for: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/eeglab-v14.1.1
Load dataset_description for: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cartool-v3.80
220709-17:01:36,614 nipype.workflow INFO:
[Node] Finished "eeg_check_input", elapsed time 8.742065s.
.. Input file for "eeg_ts_file" key: ['/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/eeglab-v14.1.1/sub-01/eeg/sub-01_task-faces_desc-preproc_eeg.set']
.. Input file for "events_file" key: ['/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/eeg/sub-01_task-faces_events.tsv']
.. Input file for "electrodes_file" key: ['/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/eeg/sub-01_task-faces_electrodes.tsv']
.. Input file for "roi_volume_file" key: ['/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.0.3/sub-01/anat/sub-01_atlas-L2018_res-scale1_dseg.nii.gz', '/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.0.3/sub-01/anat/sub-01_space-DWI_atlas-L2018_res-scale1_dseg.nii.gz']
.. LOAD: Load EEG config file : /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ref_mne_eeg_config.json
.. INFO: Generated with the same CMP3 version
{'Global': {'process_type': 'EEG', 'subjects': ['sub-01'], 'subject': 'sub-01', 'version': 'v3.1.0'}, 'eeg_preprocessing_stage': {'task_label': 'faces', 'eeg_ts_file.extension': 'set', 'eeg_ts_file.toolbox_derivatives_dir': 'eeglab-v14.1.1', 'eeg_ts_file.datatype': 'eeg', 'eeg_ts_file.suffix': 'eeg', 'eeg_ts_file.desc': 'preproc', 'eeg_ts_file.task': 'faces', 'events_file.datatype': 'eeg', 'events_file.suffix': 'events', 'events_file.extension': 'tsv', 'events_file.task': 'faces', 'electrodes_file_fmt': 'Cartool', 'cartool_electrodes_file.toolbox_derivatives_dir': 'cartool-v3.80', 'cartool_electrodes_file.datatype': 'eeg', 'cartool_electrodes_file.suffix': 'eeg', 'cartool_electrodes_file.extension': 'xyz', 't_min': -0.2, 't_max': 0.5}, 'eeg_source_imaging_stage': {'esi_tool': 'MNE', 'mne_apply_electrode_transform': True, 'mne_electrode_transform_file.toolbox_derivatives_dir': 'cmp-v3.0.3', 'mne_electrode_transform_file.datatype': 'eeg', 'mne_electrode_transform_file.suffix': 'trans', 'mne_electrode_transform_file.extension': 'fif', 'parcellation_cmp_dir': 'cmp-v3.0.3', 'parcellation_scheme': 'Lausanne2018', 'lausanne2018_parcellation_res': 'scale1', 'mne_esi_method': 'sLORETA', 'mne_esi_method_snr': 3.0}, 'eeg_connectome_stage': {'connectivity_metrics': ['coh', 'cohy', 'imcoh', 'plv', 'ciplv', 'ppc', 'pli', 'wpli', 'wpli2_debiased'], 'output_types': ['tsv', 'gpickle', 'mat', 'graphml']}, 'Multi-processing': {'number_of_cores': 1}}
>> Process EEG pipeline
220709-17:01:36,634 nipype.interface INFO:
**** Processing ****
.. DEBUG : Generated file name = sub-01_atlas-L2018_res-scale1_dseg.nii.gz
.. DEBUG : Generated file path (no extension) = /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/eeg/sub-01_task-faces_events
.. DEBUG: Event_ids for Epochs extraction: {'SCRAMBLED': '0', 'FACES': '1'}
220709-17:01:37,164 nipype.workflow INFO:
Generated workflow graph: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/graph.svg (graph2use=colored, simple_form=True).
220709-17:01:37,205 nipype.workflow INFO:
Workflow eeg_pipeline settings: ['check', 'execution', 'logging', 'monitoring']
220709-17:01:37,219 nipype.workflow INFO:
Running in parallel.
220709-17:01:37,222 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 3 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:01:37,344 nipype.workflow INFO:
[Node] Outdated cache found for "eeg_pipeline.eeg_datasource".
220709-17:01:37,365 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_datasource" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_datasource".
220709-17:01:37,371 nipype.workflow INFO:
[Node] Outdated cache found for "eeg_pipeline.eeg_datasource".
220709-17:01:37,380 nipype.workflow INFO:
[Node] Executing "eeg_datasource" <nipype.interfaces.io.BIDSDataGrabber>
220709-17:01:39,229 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 2 jobs ready. Free memory (GB): 14.20/14.40, Free processors: 0/1.
Currently running:
* eeg_pipeline.eeg_datasource
Load dataset_description for: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/eeglab-v14.1.1
Load dataset_description for: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cartool-v3.80
Load dataset_description for: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.0.3
220709-17:01:46,693 nipype.workflow INFO:
[Node] Finished "eeg_datasource", elapsed time 9.308715s.
220709-17:01:47,241 nipype.workflow INFO:
[Job 0] Completed (eeg_pipeline.eeg_datasource).
220709-17:01:47,249 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 3 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:01:47,431 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_source_imaging_stage.mne_createbem" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createbem".
220709-17:01:47,437 nipype.workflow INFO:
[Node] Executing "mne_createbem" <cmtklib.interfaces.mne.CreateBEM>
Creating the BEM geometry...
Going from 5th to 4th subdivision of an icosahedron (n_tri: 20480 -> 5120)
Going from 5th to 4th subdivision of an icosahedron (n_tri: 20480 -> 5120)
Going from 5th to 4th subdivision of an icosahedron (n_tri: 20480 -> 5120)
outer skin CM is -1.05 -8.88 11.57 mm
outer skull CM is -1.05 -8.80 11.09 mm
inner skull CM is -1.05 -10.32 19.78 mm
Checking that surface outer skull is inside surface outer skin ...
220709-17:01:49,241 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 2 jobs ready. Free memory (GB): 14.20/14.40, Free processors: 0/1.
Currently running:
* eeg_pipeline.eeg_source_imaging_stage.mne_createbem
Checking that surface inner skull is inside surface outer skull ...
Checking distance between outer skin and outer skull surfaces...
Minimum distance between the outer skin and outer skull surfaces is approximately 1.6 mm
Checking distance between outer skull and inner skull surfaces...
Minimum distance between the outer skull and inner skull surfaces is approximately 1.8 mm
Surfaces passed the basic topology checks.
Complete.
Approximation method : Linear collocation
Three-layer model surfaces loaded.
Computing the linear collocation solution...
Matrix coefficients...
outer skin (2562) -> outer skin (2562) ...
outer skin (2562) -> outer skull (2562) ...
outer skin (2562) -> inner skull (2562) ...
outer skull (2562) -> outer skin (2562) ...
outer skull (2562) -> outer skull (2562) ...
outer skull (2562) -> inner skull (2562) ...
inner skull (2562) -> outer skin (2562) ...
inner skull (2562) -> outer skull (2562) ...
inner skull (2562) -> inner skull (2562) ...
Inverting the coefficient matrix...
IP approach required...
Matrix coefficients (homog)...
inner skull (2562) -> inner skull (2562) ...
Inverting the coefficient matrix (homog)...
Modify the original solution to incorporate IP approach...
Combining...
Scaling...
Solution ready.
BEM geometry computations complete.
220709-17:02:50,820 nipype.workflow INFO:
[Node] Finished "mne_createbem", elapsed time 63.377566s.
220709-17:02:51,340 nipype.workflow INFO:
[Job 1] Completed (eeg_pipeline.eeg_source_imaging_stage.mne_createbem).
220709-17:02:51,346 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 2 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:02:51,455 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_source_imaging_stage.mne_createsrc" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createsrc".
220709-17:02:51,460 nipype.workflow INFO:
[Node] Executing "mne_createsrc" <cmtklib.interfaces.mne.CreateSrc>
Setting up the source space with the following parameters:
SUBJECTS_DIR = /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1
Subject = sub-01
Surface = white
Octahedron subdivision grade 6
>>> 1. Creating the source space...
Doing the octahedral vertex picking...
Loading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/surf/lh.white...
Mapping lh sub-01 -> oct (6) ...
Triangle neighbors and vertex normals...
Loading geometry from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/surf/lh.sphere...
Setting up the triangulation for the decimated surface...
loaded lh.white 4098/149863 selected to source space (oct = 6)
Loading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/surf/rh.white...
Mapping rh sub-01 -> oct (6) ...
Triangle neighbors and vertex normals...
Loading geometry from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/surf/rh.sphere...
Setting up the triangulation for the decimated surface...
loaded rh.white 4098/147183 selected to source space (oct = 6)
Calculating source space distances (limit=inf mm)...
220709-17:02:53,344 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 1 jobs ready. Free memory (GB): 14.20/14.40, Free processors: 0/1.
Currently running:
* eeg_pipeline.eeg_source_imaging_stage.mne_createsrc
Computing patch statistics...
Patch information added...
Computing patch statistics...
Patch information added...
You are now one step closer to computing the gain matrix
Write a source space...
[done]
Write a source space...
[done]
2 source spaces written
220709-17:13:09,333 nipype.workflow INFO:
[Node] Finished "mne_createsrc", elapsed time 617.869614s.
220709-17:13:10,258 nipype.workflow INFO:
[Job 2] Completed (eeg_pipeline.eeg_source_imaging_stage.mne_createsrc).
220709-17:13:10,264 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 1 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:13:10,360 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_preprocessing_stage.eeglab2fif" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_preprocessing_stage/eeglab2fif".
220709-17:13:10,367 nipype.workflow INFO:
[Node] Executing "eeglab2fif" <cmtklib.interfaces.mne.EEGLAB2fif>
eeg_ts_file = /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/eeglab-v14.1.1/sub-01/eeg/sub-01_task-faces_desc-preproc_eeg.set
electrodes_file = /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cartool-v3.80/sub-01/eeg/sub-01_eeg.xyz
event_ids = {'SCRAMBLED': '0', 'FACES': '1'}
events_file = /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/sub-01/eeg/sub-01_task-faces_events.tsv
out_epochs_fif_fname = epo.fif
t_max = 0.5
t_min = -0.2
Extracting parameters from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/eeglab-v14.1.1/sub-01/eeg/sub-01_task-faces_desc-preproc_eeg.set...
Not setting metadata
Not setting metadata
588 matching events found
No baseline correction applied
0 projection items activated
Ready.
Applying baseline correction (mode: mean)
Adding average EEG reference projection.
1 projection items deactivated
Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.
.. INFO: montage_fname = /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cartool-v3.80/sub-01/eeg/sub-01_eeg.xyz
.. INFO: Create montage from Cartool electrodes file...
220709-17:13:11,693 nipype.workflow INFO:
[Node] Finished "eeglab2fif", elapsed time 1.322598s.
220709-17:13:12,258 nipype.workflow INFO:
[Job 3] Completed (eeg_pipeline.eeg_preprocessing_stage.eeglab2fif).
220709-17:13:12,264 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 2 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:13:12,361 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_source_imaging_stage.mne_createcov" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createcov".
220709-17:13:12,366 nipype.workflow INFO:
[Node] Executing "mne_createcov" <cmtklib.interfaces.mne.CreateCov>
Reading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_preprocessing_stage/eeglab2fif/epo.fif ...
Read a total of 1 projection items:
Average EEG reference (1 x 128) idle
Found the data of interest:
t = -200.00 ... 500.00 ms
0 CTF compensation matrices available
Not setting metadata
Not setting metadata
588 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 1)
1 projection items activated
Computing rank from data with rank=None
Using tolerance 3.2e-11 (2.2e-16 eps * 128 dim * 1.1e+03 max singular value)
Estimated rank (eeg): 127
EEG: rank 127 computed from 128 data channels with 1 projector
Created an SSP operator (subspace dimension = 1)
Setting small EEG eigenvalues to zero (without PCA)
Reducing data rank from 128 -> 127
Estimating covariance using SHRUNK
220709-17:13:14,260 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 1 jobs ready. Free memory (GB): 14.20/14.40, Free processors: 0/1.
Currently running:
* eeg_pipeline.eeg_source_imaging_stage.mne_createcov
Done.
Estimating covariance using EMPIRICAL
Done.
Using cross-validation to select the best estimator.
Number of samples used : 29988
log-likelihood on unseen data (descending order):
shrunk: 71.873
empirical: -373.964
selecting best estimator: shrunk
[done]
220709-17:13:15,592 nipype.workflow INFO:
[Node] Finished "mne_createcov", elapsed time 3.22245s.
220709-17:13:16,262 nipype.workflow INFO:
[Job 4] Completed (eeg_pipeline.eeg_source_imaging_stage.mne_createcov).
220709-17:13:16,270 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 1 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
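The covariance node above maps closely to mne.compute_covariance with cross-validated estimator selection, which is what produces the SHRUNK/EMPIRICAL log-likelihood comparison in the log. A minimal sketch, assuming the epo.fif written by the previous node:

    import mne

    # Read the epochs produced by eeglab2fif.
    epochs = mne.read_epochs("epo.fif")

    # Estimate the noise covariance on the pre-stimulus baseline (tmax=0),
    # letting cross-validation pick the best candidate estimator
    # ('shrunk' beats 'empirical' in the log above).
    noise_cov = mne.compute_covariance(epochs, tmax=0.0,
                                       method=["shrunk", "empirical"], rank=None)
    mne.write_cov("noise-cov.fif", noise_cov)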
220709-17:13:16,364 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_source_imaging_stage.mne_createfwd" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createfwd".
220709-17:13:16,370 nipype.workflow INFO:
[Node] Executing "mne_createfwd" <cmtklib.interfaces.mne.CreateFwd>
Reading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_preprocessing_stage/eeglab2fif/epo.fif ...
Read a total of 1 projection items:
Average EEG reference (1 x 128) idle
Found the data of interest:
t = -200.00 ... 500.00 ms
0 CTF compensation matrices available
Not setting metadata
Not setting metadata
588 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 1)
1 projection items activated
Source space : /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createsrc/src.fif
MRI -> head transform : /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.0.3/sub-01/eeg/sub-01_trans.fif
Measurement data : instance of Info
Conductor model : /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createbem/bem.fif
Accurate field computations
Do computations in head coordinates
Free source orientations
Reading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createsrc/src.fif...
Read 2 source spaces a total of 8196 active source locations
Coordinate transformation: MRI (surface RAS) -> head
1.000000 0.000000 0.000000 0.00 mm
0.000000 1.000000 0.000000 9.00 mm
0.000000 0.000000 1.000000 -11.00 mm
0.000000 0.000000 0.000000 1.00
Read 128 EEG channels from info
Head coordinate coil definitions created.
Source spaces are now in head coordinates.
Setting up the BEM model using /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createbem/bem.fif...
Loading surfaces...
Loading the solution matrix...
Three-layer model surfaces loaded.
Loaded linear_collocation BEM solution from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createbem/bem.fif
Employing the head->MRI coordinate transform with the BEM model.
BEM model bem.fif is now set up
Source spaces are in head coordinates.
Checking that the sources are inside the surface (will take a few...)
Skipping interior check for 1736 sources that fit inside a sphere of radius 53.7 mm
Skipping solid angle check for 0 points using Qhull
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
220709-17:13:18,262 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 0 jobs ready. Free memory (GB): 14.20/14.40, Free processors: 0/1.
Currently running:
* eeg_pipeline.eeg_source_imaging_stage.mne_createfwd
[Parallel(n_jobs=4)]: Done 2 out of 4 | elapsed: 8.9s remaining: 8.9s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 9.0s remaining: 0.0s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 9.0s finished
Skipping interior check for 1721 sources that fit inside a sphere of radius 53.7 mm
Skipping solid angle check for 0 points using Qhull
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=4)]: Done 2 out of 4 | elapsed: 0.3s remaining: 0.3s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 0.3s remaining: 0.0s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 0.3s finished
Setting up for EEG...
Computing EEG at 8196 source locations (free orientations)...
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=4)]: Done 2 out of 4 | elapsed: 1.1s remaining: 1.1s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 1.3s remaining: 0.0s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 1.3s finished
Finished.
Write a source space...
[done]
Write a source space...
[done]
2 source spaces written
220709-17:13:30,321 nipype.workflow INFO:
[Node] Finished "mne_createfwd", elapsed time 13.943436s.
220709-17:13:32,276 nipype.workflow INFO:
[Job 5] Completed (eeg_pipeline.eeg_source_imaging_stage.mne_createfwd).
220709-17:13:32,282 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 1 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
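The forward node above combines the source space (src.fif), the MRI -> head transform (sub-01_trans.fif), and the three-layer BEM solution (bem.fif) into an EEG-only leadfield with free source orientations. Roughly, in plain MNE (relative paths stand in for the full derivative paths shown in the log):

    import mne

    epochs = mne.read_epochs("epo.fif")
    src = mne.read_source_spaces("src.fif")
    bem = mne.read_bem_solution("bem.fif")

    # EEG-only forward solution with free source orientations; n_jobs=4
    # matches the LokyBackend messages in the log above.
    fwd = mne.make_forward_solution(epochs.info, trans="sub-01_trans.fif",
                                    src=src, bem=bem,
                                    meg=False, eeg=True, n_jobs=4)
    mne.write_forward_solution("fwd.fif", fwd, overwrite=True)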
220709-17:13:32,370 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_source_imaging_stage.mne_invsol" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_invsol".
220709-17:13:32,379 nipype.workflow INFO:
[Node] Executing "mne_invsol" <cmtklib.interfaces.mne.MNEInverseSolutionROI>
Reading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_preprocessing_stage/eeglab2fif/epo.fif ...
Read a total of 1 projection items:
Average EEG reference (1 x 128) idle
Found the data of interest:
t = -200.00 ... 500.00 ms
0 CTF compensation matrices available
Not setting metadata
Not setting metadata
588 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 1)
1 projection items activated
Reading forward solution from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_createfwd/fwd.fif...
Reading a source space...
Computing patch statistics...
Patch information added...
Distance information added...
[done]
Reading a source space...
Computing patch statistics...
Patch information added...
Distance information added...
[done]
2 source spaces read
Desired named matrix (kind = 3523) not available
Read EEG forward solution (8196 sources, 128 channels, free orientations)
Source spaces transformed to the forward solution coordinate frame
128 x 128 full covariance (kind = 1) found.
Read a total of 1 projection items:
Average EEG reference (1 x 128) active
Reading a source space...
Computing patch statistics...
Patch information added...
Distance information added...
[done]
Reading a source space...
Computing patch statistics...
Patch information added...
Distance information added...
[done]
2 source spaces read
Computing inverse operator with 128 channels.
128 out of 128 channels remain after picking
Selected 128 channels
Whitening the forward solution.
Created an SSP operator (subspace dimension = 1)
Computing rank from covariance with rank=None
Using tolerance 1.2e-14 (2.2e-16 eps * 128 dim * 0.43 max singular value)
Estimated rank (eeg): 127
EEG: rank 127 computed from 128 data channels with 1 projector
Setting small EEG eigenvalues to zero (without PCA)
Creating the source covariance matrix
Adjusting source covariance matrix.
Computing SVD of whitened and weighted lead field matrix.
220709-17:13:34,277 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 0 jobs ready. Free memory (GB): 14.20/14.40, Free processors: 0/1.
Currently running:
* eeg_pipeline.eeg_source_imaging_stage.mne_invsol
largest singular value = 6.48933
scaling factor to adjust the trace = 4.66048e+24 (nchan = 128 nzero = 1)
Write inverse operator decomposition in /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_source_imaging_stage/mne_invsol/inv.fif...
Write a source space...
[done]
Write a source space...
[done]
2 source spaces written
Writing inverse operator info...
Writing noise covariance matrix.
Writing source covariance matrix.
Writing orientation priors.
[done]
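At this point the node has assembled and written the regularized inverse operator (inv.fif). A minimal MNE sketch of the equivalent calls; the loose/depth parameters are MNE defaults, assumed rather than read from the log:

    import mne
    from mne.minimum_norm import make_inverse_operator, write_inverse_operator

    epochs = mne.read_epochs("epo.fif")
    fwd = mne.read_forward_solution("fwd.fif")
    noise_cov = mne.read_cov("noise-cov.fif")

    # Whitening uses the noise covariance; rank 127 = 128 channels minus
    # the average-reference projector, as reported in the log above.
    inv = make_inverse_operator(epochs.info, fwd, noise_cov,
                                loose=0.2, depth=0.8)
    write_inverse_operator("inv.fif", inv)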
Preparing the inverse operator for use...
Scaled noise and source covariance from nave = 1 to nave = 588
Created the regularized inverter
Created an SSP operator (subspace dimension = 1)
Created the whitener using a noise covariance matrix with rank 127 (1 small eigenvalues omitted)
Computing noise-normalization factors (sLORETA)...
[done]
Picked 128 channels from the data
Computing inverse...
Eigenleads need to be weighted ...
Processing epoch : 1 / 588
combining the current components...
Processing epoch : 2 / 588
combining the current components...
[... identical output for epochs 3 through 587 truncated ...]
Processing epoch : 588 / 588
combining the current components...
[done]
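The per-epoch loop above applies the prepared operator with sLORETA noise normalization, yielding one source estimate per epoch (588 in this run). In plain MNE this corresponds to apply_inverse_epochs; the SNR=3 regularization shown is the conventional choice and an assumption here:

    import mne
    from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator

    epochs = mne.read_epochs("epo.fif")
    inv = read_inverse_operator("inv.fif")

    snr = 3.0                # conventional value; the run's exact setting is assumed
    lambda2 = 1.0 / snr ** 2

    # One sLORETA-normalized SourceEstimate per epoch.
    stcs = apply_inverse_epochs(epochs, inv, lambda2, method="sLORETA")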
Reading labels from parcellation...
read 35 labels from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/label/lh.lausanne2018.scale1.annot
read 35 labels from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/label/rh.lausanne2018.scale1.annot
Extracting time courses for 70 labels (mode: pca_flip)
[... line repeated for each remaining epoch; output truncated ...]
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
Extracting time courses for 70 labels (mode: pca_flip)
220709-17:16:19,251 nipype.workflow INFO:
[Node] Finished "mne_invsol", elapsed time 166.864983s.
220709-17:16:20,461 nipype.workflow INFO:
[Job 6] Completed (eeg_pipeline.eeg_source_imaging_stage.mne_invsol).
220709-17:16:20,473 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 1 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:16:20,889 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_connectome_stage.eeg_compute_matrice" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_connectome_stage/eeg_compute_matrice".
220709-17:16:20,901 nipype.workflow INFO:
[Node] Executing "eeg_compute_matrice" <cmtklib.interfaces.mne.MNESpectralConnectivity>
Reading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_preprocessing_stage/eeglab2fif/epo.fif ...
Read a total of 1 projection items:
Average EEG reference (1 x 128) idle
Found the data of interest:
t = -200.00 ... 500.00 ms
0 CTF compensation matrices available
Not setting metadata
Not setting metadata
588 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 1)
1 projection items activated
220709-17:16:22,464 nipype.workflow INFO:
[MultiProc] Running 1 tasks, and 0 jobs ready. Free memory (GB): 14.20/14.40, Free processors: 0/1.
Currently running:
* eeg_pipeline.eeg_connectome_stage.eeg_compute_matrice
Save /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_connectome_stage/eeg_compute_matrice/conndata-network_connectivity.tsv...
Save /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_connectome_stage/eeg_compute_matrice/conndata-network_connectivity.gpickle...
Save /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_connectome_stage/eeg_compute_matrice/conndata-network_connectivity.mat...
Save /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_connectome_stage/eeg_compute_matrice/conndata-network_connectivity.graphml...
220709-17:17:14,972 nipype.workflow INFO:
[Node] Finished "eeg_compute_matrice", elapsed time 54.067281s.
220709-17:17:16,563 nipype.workflow INFO:
[Job 7] Completed (eeg_pipeline.eeg_connectome_stage.eeg_compute_matrice).
220709-17:17:16,570 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 1 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:17:16,685 nipype.workflow INFO:
[Node] Setting-up "eeg_pipeline.eeg_datasinker" in "/Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/nipype-1.7.0/sub-01/eeg_pipeline/eeg_datasinker".
220709-17:17:16,707 nipype.workflow INFO:
[Node] Executing "eeg_datasinker" <nipype.interfaces.io.DataSink>
220709-17:17:16,711 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/epo.fif -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_epo.fif
220709-17:17:16,716 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/bem.fif -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_bem.fif
220709-17:17:16,720 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/noisecov.fif -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_noisecov.fif
220709-17:17:16,724 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/src.fif -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_src.fif
220709-17:17:16,727 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/fwd.fif -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_fwd.fif
220709-17:17:16,731 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/inv.fif -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_inv.fif
220709-17:17:16,734 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/timeseries.pickle -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_atlas-L2018_res-scale1_timeseries.pickle
220709-17:17:16,736 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/conndata-network_connectivity.tsv -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_atlas-L2018_res-scale1_conndata-network_connectivity.tsv
220709-17:17:16,739 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/conndata-network_connectivity.gpickle -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_atlas-L2018_res-scale1_conndata-network_connectivity.gpickle
220709-17:17:16,742 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/conndata-network_connectivity.mat -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_atlas-L2018_res-scale1_conndata-network_connectivity.mat
220709-17:17:16,745 nipype.interface INFO:
sub: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/conndata-network_connectivity.graphml -> /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_atlas-L2018_res-scale1_conndata-network_connectivity.graphml
220709-17:17:16,749 nipype.workflow INFO:
[Node] Finished "eeg_datasinker", elapsed time 0.037833s.
220709-17:17:18,568 nipype.workflow INFO:
[Job 8] Completed (eeg_pipeline.eeg_datasinker).
220709-17:17:18,576 nipype.workflow INFO:
[MultiProc] Running 0 tasks, and 0 jobs ready. Free memory (GB): 14.40/14.40, Free processors: 1/1.
220709-17:17:21,115 nipype.interface INFO:
**** Processing finished ****
CPU times: user 16.5 s, sys: 1.98 s, total: 18.4 s
Wall time: 15min 58s
/Applications/miniconda3/envs/py37cmp-eeg/lib/python3.7/site-packages/joblib/externals/loky/backend/resource_tracker.py:320: UserWarning: resource_tracker: There appear to be 6 leaked folder objects to clean up at shutdown
(len(rtype_registry), rtype))
/Applications/miniconda3/envs/py37cmp-eeg/lib/python3.7/site-packages/joblib/externals/loky/backend/resource_tracker.py:333: UserWarning: resource_tracker: /var/folders/vy/0bw_1jvj54n8lvcgvdrtqb0c0000gn/T/joblib_memmapping_folder_47551_379a43977f814190bfe6f03e92cfdae6_3fedd64ce92b49b9a6481229dccca45d: FileNotFoundError(2, 'No such file or directory')
warnings.warn('resource_tracker: %s: %r' % (name, e))
/Applications/miniconda3/envs/py37cmp-eeg/lib/python3.7/site-packages/joblib/externals/loky/backend/resource_tracker.py:333: UserWarning: resource_tracker: /var/folders/vy/0bw_1jvj54n8lvcgvdrtqb0c0000gn/T/joblib_memmapping_folder_47551_379a43977f814190bfe6f03e92cfdae6_ae6b9e5318df4c718513b5ce780ef199: FileNotFoundError(2, 'No such file or directory')
warnings.warn('resource_tracker: %s: %r' % (name, e))
/Applications/miniconda3/envs/py37cmp-eeg/lib/python3.7/site-packages/joblib/externals/loky/backend/resource_tracker.py:333: UserWarning: resource_tracker: /var/folders/vy/0bw_1jvj54n8lvcgvdrtqb0c0000gn/T/joblib_memmapping_folder_47551_3d28f25931e34ead98c3102ee1bbe4be_e207a49f2faf4ad38c389a170ccc9af2: FileNotFoundError(2, 'No such file or directory')
warnings.warn('resource_tracker: %s: %r' % (name, e))
/Applications/miniconda3/envs/py37cmp-eeg/lib/python3.7/site-packages/joblib/externals/loky/backend/resource_tracker.py:333: UserWarning: resource_tracker: /var/folders/vy/0bw_1jvj54n8lvcgvdrtqb0c0000gn/T/joblib_memmapping_folder_47551_379a43977f814190bfe6f03e92cfdae6_5d99e23deff54a888a0efa2ed85d7b6f: FileNotFoundError(2, 'No such file or directory')
warnings.warn('resource_tracker: %s: %r' % (name, e))
/Applications/miniconda3/envs/py37cmp-eeg/lib/python3.7/site-packages/joblib/externals/loky/backend/resource_tracker.py:333: UserWarning: resource_tracker: /var/folders/vy/0bw_1jvj54n8lvcgvdrtqb0c0000gn/T/joblib_memmapping_folder_47551_ad22804ddb624e55b2ee83b946e96936_428a96c1454840a28c73e60ad46a69c8: FileNotFoundError(2, 'No such file or directory')
warnings.warn('resource_tracker: %s: %r' % (name, e))
/Applications/miniconda3/envs/py37cmp-eeg/lib/python3.7/site-packages/joblib/externals/loky/backend/resource_tracker.py:333: UserWarning: resource_tracker: /var/folders/vy/0bw_1jvj54n8lvcgvdrtqb0c0000gn/T/joblib_memmapping_folder_47551_379a43977f814190bfe6f03e92cfdae6_d7a66cd336f341f8909b02f329faff21: FileNotFoundError(2, 'No such file or directory')
warnings.warn('resource_tracker: %s: %r' % (name, e))
Let’s have a closer look at the outputs that the EEG pipeline produces in the derivatives/cmp-v3.1.0 directory.
First of all, Connectome Mapper works in two phases: the pipeline is first assembled and only afterwards executed. During the assembly phase, input and output variables are connected, and CMP3 produces a graph that visualizes these connections.
Each of the stages, again, has an input and an output node, as well as several nodes representing processing steps. Each processing step has its own “interface”, which you can find in cmtklib/interfaces (“mne” in parentheses indicates that it is defined in the file mne.py).
eeg_datasource is the input BIDSDataGrabber node and eeg_datasinker is the output DataSinker node. eeg_datasource takes care of querying and injecting the input files into the different stages of the EEG pipeline, while eeg_datasinker takes care of collecting, moving, and renaming all the files produced by the different stages into the derivatives/cmp-v3.1.0 directory.
In the following, we will go over the interfaces and show what output they produce.
The preprocessing stage consists of converting EEGLAB .set EEG files to MNE Epochs in .fif format, the format used in the rest of the pipeline, by calling, if necessary, the following interface:
EEGLAB2fif: Reads EEGLAB data and converts it to the MNE format (.fif file extension).
The information given by the config file regarding this stage is as follows:
If your data are not already in MNE format (.fif file extension), they have to be read and re-saved. The eeglab2fif interface does this for EEGLAB-format data (.set file extension).
The interface produces a file named sub-01_task-faces_epo.fif in the derivatives/cmp-v3.1.0 folder.
Critically, the saved epochs contain a montage, i.e., the sensor locations, which have to be supplied in a file named sub-01.xyz inside the subject’s EEGLAB derivatives folder (derivatives/eeglab-v14.1.1/sub-01/eeg/sub-01.xyz).
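For orientation, the conversion performed by eeglab2fif corresponds roughly to the following sketch; this is not the actual interface code (which lives in cmtklib/interfaces/mne.py), and the file paths are illustrative:
import mne

# Read the preprocessed EEGLAB epochs (illustrative path)
epochs = mne.read_epochs_eeglab('sub-01_task-faces_desc-preproc_eeg.set')
# Attach the sensor locations from the .xyz montage file
montage = mne.channels.read_custom_montage('sub-01.xyz')
epochs.set_montage(montage)
# Remove the baseline and crop according to start_t / end_t from the config file
epochs.apply_baseline((-0.2, 0))
epochs.crop(tmin=-0.2, tmax=0.6)
# Re-save in the MNE .fif format
epochs.save('sub-01_task-faces_epo.fif', overwrite=True)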
[16]:
# Let's have a look at the EEG data
with warnings.catch_warnings():  # suppress some irrelevant warnings coming from mne.read_epochs_eeglab()
    warnings.simplefilter("ignore")
    epochs_eeglab = mne.read_epochs_eeglab(
        os.path.join(output_dir, __eeglab_directory__,
                     participant_label, 'eeg',
                     participant_label + f'_task-{task_label}_desc-preproc_eeg.set')
    )  # sub-01_FACES_250HZ_prepd.set
# eeglab2fif removes a baseline and crops the epochs according to parameters start_t and end_t in config file
start_t = -0.2
end_t = 0.6
epochs_eeglab.apply_baseline((start_t, 0))
epochs_eeglab.crop(tmin=start_t, tmax=end_t)
evoked_eeglab = epochs_eeglab.average().pick('eeg')
# compare to what eeglab2fif saved
epochs_mne = mne.read_epochs(
    os.path.join(output_dir, __cmp_directory__,
                 participant_label, 'eeg',
                 participant_label + f'_task-{task_label}_epo.fif'))
evoked_mne = epochs_mne.average().pick('eeg')
Extracting parameters from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/eeglab-v14.1.1/sub-01/eeg/sub-01_task-faces_desc-preproc_eeg.set...
Not setting metadata
Not setting metadata
588 matching events found
No baseline correction applied
0 projection items activated
Ready.
Applying baseline correction (mode: mean)
Reading /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_epo.fif ...
Read a total of 1 projection items:
Average EEG reference (1 x 128) idle
Found the data of interest:
t = -200.00 ... 500.00 ms
0 CTF compensation matrices available
Not setting metadata
Not setting metadata
588 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 1)
1 projection items activated
[17]:
# plot and convince yourself it's the same
%matplotlib inline
fig = plt.figure()
plt.rcParams['figure.figsize'] = (15, 10)
_ = evoked_mne.plot(time_unit='s')
fig = plt.figure()
plt.rcParams['figure.figsize'] = (15, 10)
_ = evoked_eeglab.plot(time_unit='s')
This stage takes your data in .fif format from the “Preprocessing Stage”, the parcellation, and the previously generated electrode transform file as inputs. To compute inverse solutions and extract ROI time courses with MNE, its workflow consists of five processing interfaces:
CreateBEM: Create the boundary element model (BEM) of the head.
CreateSrc: Create the source space, i.e., the dipole locations along the surface of the brain.
CreateFwd: Create the forward solution (leadfield) from the BEM and the source space.
CreateCov: Create the noise covariance matrix from the data.
MNEInverseSolutionROI: Create the inverse operator and apply it, resulting in ROI time courses.
The following EEG source imaging algorithms can be used to compute the inverse solutions: “sLORETA”, “eLORETA”, “MNE”, and “dSPM”. The configuration file of this tutorial is set to use “sLORETA”.
The information given by the config file regarding this stage is as follows:
The BEM (boundary element model) is the head model we use; in our case, it is based on the individual’s structural MRI and, again, the related FreeSurfer derivatives. Its creation consists of two steps (see the sketch after this list):
The necessary surfaces (brain, inner skull, outer skull, and outer skin) are extracted using mne.bem.make_watershed_bem(). The surfaces are saved in the subject’s FreeSurfer directory in a new folder bem/watershed.
The model itself is created using mne.make_bem_model() and mne.make_bem_solution(). In this step, the surfaces and the tissue conductivities between the surfaces are used.
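A minimal sketch of these two steps, assuming a completed FreeSurfer reconstruction for sub-01 (the ico resolution and conductivity values shown are MNE defaults, not necessarily the ones CMP3 uses):
import mne

subjects_dir = project.freesurfer_subjects_dir

# Step 1: extract the BEM surfaces from the FreeSurfer recon
# (written to the subject's bem/watershed folder)
mne.bem.make_watershed_bem(subject='sub-01', subjects_dir=subjects_dir, overwrite=True)

# Step 2: build the model from the surfaces and the tissue conductivities
surfaces = mne.make_bem_model(
    subject='sub-01', ico=4,
    conductivity=(0.3, 0.006, 0.3),  # brain, skull, scalp (S/m)
    subjects_dir=subjects_dir)
bem = mne.make_bem_solution(surfaces)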
[18]:
# Let's visualize the BEM surfaces and source space
src = mne.read_source_spaces(
    os.path.join(
        output_dir, __cmp_directory__,
        participant_label, 'eeg', participant_label + f'_task-{task_label}_src.fif'))
# plot will appear in separate window
%matplotlib qt
# lines are the surfaces, pink dots are the sources (dipoles)
_ = mne.viz.plot_bem(
    subject=participant_label,
    subjects_dir=project.freesurfer_subjects_dir,
    brain_surfaces='white',
    src=src,
    orientation='sagittal'
)
Reading a source space...
Computing patch statistics...
Patch information added...
Distance information added...
[done]
Reading a source space...
Computing patch statistics...
Patch information added...
Distance information added...
[done]
2 source spaces read
Using surface: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/bem/inner_skull.surf
Using surface: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/bem/outer_skull.surf
Using surface: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/freesurfer-7.1.1/sub-01/bem/outer_skin.surf
MNE first computes a forward solution that describes how electrical currents propagate from the sources created earlier (via CreateSrc) through the tissues of the head modelled by the BEM (created via CreateBEM) to the electrodes. Thus, the electrode positions have to be known and aligned to the head model.
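The computation performed by CreateFwd corresponds roughly to the following sketch, which reads back the intermediate files produced above (the mindist value is an assumption):
import os
import mne

eeg_dir = os.path.join(output_dir, __cmp_directory__, participant_label, 'eeg')

# Read back the intermediate outputs produced by the previous interfaces
bem = mne.read_bem_solution(os.path.join(eeg_dir, f'{participant_label}_task-{task_label}_bem.fif'))
src = mne.read_source_spaces(os.path.join(eeg_dir, f'{participant_label}_task-{task_label}_src.fif'))
trans = mne.read_trans(os.path.join(eeg_dir, f'{participant_label}_trans.fif'))

# Compute the EEG leadfield from the sources, through the BEM, to the electrodes
fwd = mne.make_forward_solution(epochs_mne.info, trans=trans, src=src, bem=bem,
                                meg=False, eeg=True, mindist=5.0)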
[19]:
# Let's check the alignment between MRI and electrode positions.
trans = mne.read_trans(
    os.path.join(
        output_dir, __cmp_directory__,
        participant_label, 'eeg', participant_label + '_trans.fif'
    )
)
mne.viz.plot_alignment(
    epochs_mne.info,
    trans=trans,
    subject=participant_label,
    subjects_dir=project.freesurfer_subjects_dir,
    dig=False,
    surfaces=dict(head=0.95),
    coord_frame='mri')
Using pyvistaqt 3d backend.
Using outer_skin.surf for head surface.
Channel types:: eeg: 128
[19]:
<mne.viz.backends._pyvista._Figure at 0x7fc713dd9710>
MNE uses an estimate of the signal-to-noise ratio when creating the inverse solution. For that, it considers the pre-stimulus period of the EEG recordings.
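The CreateCov step boils down to estimating the noise covariance from these baseline samples, along the following lines (the estimator method used here is an assumption):
# Estimate the noise covariance from the pre-stimulus (baseline) samples
noise_cov = mne.compute_covariance(epochs_mne, tmin=None, tmax=0.0,
                                   method='shrunk', rank=None)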
[20]:
# Let's have a look at the noise covariance.
%matplotlib inline
noise_cov = mne.read_cov(
    os.path.join(
        output_dir, __cmp_directory__,
        participant_label, 'eeg', participant_label + f'_task-{task_label}_noisecov.fif'
    )
)
fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, epochs_mne.info)
128 x 128 full covariance (kind = 1) found.
Read a total of 1 projection items:
Average EEG reference (1 x 128) active
Computing rank from covariance with rank=None
Using tolerance 1.2e-14 (2.2e-16 eps * 128 dim * 0.43 max singular value)
Estimated rank (eeg): 127
EEG: rank 127 computed from 128 data channels with 0 projectors
Now, everything comes together to create the inverse operator, which is then applied to the EEG data to create source time courses. In the last step, the source time courses are converted to ROI-time courses according to the selected parcellation.
The outputs that are necessary for this step to work were created in the previous processing steps, namely:
the EEG epochs in .fif format
the electrode montage
the head model
the source point locations
the forward operator
the noise covariance
First, the inverse operator is created using mne.minimum_norm.make_inverse_operator(). We use the options loose=1, depth=None, and fixed=False to obtain full 3-dimensional dipoles whose orientation is neither fixed nor constrained to be (somewhat) orthogonal to the surface, and we do not apply any depth weighting. The solution is finally written to a file sub-01_task-faces_inv.fif in the same directory as the other outputs (derivatives/cmp-v3.1.0/sub-01/eeg).
In a subsequent step in the same interface, this inverse operator is then applied to the epochs (not the evoked time course averaged over trials) using mne.minimum_norm.apply_inverse_epochs.
The final step performed by this interface, and by the EEG pipeline, is to use mne.extract_label_time_course to create ROI time courses from the labels read with mne.read_labels_from_annot(). As given in the config file, we use “lausanne2018” scale 1, which corresponds to the Desikan-Killiany atlas. The time courses and the ROI names are stored in sub-01_task-faces_atlas-L2018_res-scale1_timeseries.pickle in pickle format.
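Condensed, the MNEInverseSolutionROI interface does something along these lines (the SNR value and the annot name passed to mne.read_labels_from_annot() are assumptions made for illustration):
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs

# Build the inverse operator from the forward model and the noise covariance
inv = make_inverse_operator(epochs_mne.info, fwd, noise_cov,
                            loose=1.0, depth=None, fixed=False)

# Apply it to every epoch (not to the trial-averaged evoked response)
snr = 3.0  # assumed regularization; lambda2 = 1 / snr**2
stcs = apply_inverse_epochs(epochs_mne, inv, lambda2=1.0 / snr ** 2,
                            method='sLORETA')

# Summarize the dipole time courses per parcellation label with the
# sign-flipped first principal component ('lausanne2018.scale1' is an
# assumed annot name)
labels = mne.read_labels_from_annot(participant_label, parc='lausanne2018.scale1',
                                    subjects_dir=project.freesurfer_subjects_dir)
roi_ts = mne.extract_label_time_course(stcs, labels, inv['src'], mode='pca_flip')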
Let’s have a look at the time courses.
[21]:
# Load the generated ROI time series file
roi_ts_fname = participant_label + f'_task-{task_label}_atlas-L2018_res-scale1_timeseries.pickle'
roi_ts_file = os.path.join(
    output_dir, __cmp_directory__,
    participant_label, 'eeg', roi_ts_fname
)
with open(roi_ts_file, 'rb') as f:
    rtc_epo = pickle.load(f)
# For some reason, MNE writes label time courses as lists. Convert to a numpy array
rtc_epo['data'] = np.array(rtc_epo['data'])
[22]:
# Sort labels to make the time courses look nicer
N = len(rtc_epo['labels']) - 2 # two "unknown" regions - do not plot
sorting = list(np.arange(0, N, 2)) + list(np.arange(1, N, 2)) # left and right always alternating
# List of ROI names
labels_list_left = [i.name for i in rtc_epo['labels'][0::2] if i.name != 'unknown -lh']
labels_list_right = [i.name for i in rtc_epo['labels'][1::2] if i.name != 'unknown -rh']
labels_list = labels_list_left + labels_list_right
We can see that some of the time courses are “flipped” (have the opposite sign of the others). This comes from the step where the dipole time courses within each brain region are summarized using PCA: the sign of the resulting component is not uniquely defined. We will not address this problem here.
This leads us to the last stage of the pipeline, the “Connectome Stage”.
CMP3 uses MNE-Connectivity to compute the functional connectivity matrices. Results can be saved in the same formats (['tsv','gPickle','mat','graphml']) as the diffusion MRI and resting-state fMRI pipelines.
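For reference, the underlying computation relies on spectral_connectivity_epochs() of MNE-Connectivity, roughly as follows (the frequency band is an assumption made for illustration):
from mne_connectivity import spectral_connectivity_epochs

# Compute debiased squared wPLI between the ROI time courses, averaged over
# an assumed alpha band (8-13 Hz)
con = spectral_connectivity_epochs(
    rtc_epo['data'],                  # epochs x ROIs x times
    method='wpli2_debiased',
    sfreq=epochs_mne.info['sfreq'],
    fmin=8., fmax=13., faverage=True)
conn_matrix = con.get_data(output='dense')[:, :, 0]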
Keep in mind that we only plot a single subject’s connectivity here, so it is not surprising if you do not see exactly what you would expect.
We can load the matrices in network format by reading the gpickle files with NetworkX:
[24]:
# Index the new CMP3 derivatives including the connectome files
# in the BIDSLayout representation
bids_layout.add_derivatives(os.path.join(project.base_directory, "derivatives", "cmp-v3.1.0"))
# Query the generated connectome gpickle file
bids_query = {
    "subject": participant_label.split('-')[-1],  # Keep the label only, e.g. "01"
    "datatype": 'eeg',
    "atlas": 'L2018',
    "res": 'scale1',
    "suffix": 'connectivity',
    "extension": 'gpickle',
    "return_type": 'filename'
}
cmat_file = bids_layout.get(**bids_query)[0]  # BIDSLayout always returns a list
# Load the wpli2_debiased connectivity matrix from the connectome gpickle file
weight = "wpli2_debiased"
print(f'Load {weight} connectivity matrix from {cmat_file}')
G = nx.read_gpickle(cmat_file)
A_wpli2_debiased = nx.to_numpy_array(G, weight=weight)
A_wpli2_debiased
Load dataset_description for: /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0
Load wpli2_debiased connectivity matrix from /Users/sebastientourbier/Documents/GitHub/connectomemapper3/docs/notebooks/ds003505_demo/derivatives/cmp-v3.1.0/sub-01/eeg/sub-01_task-faces_atlas-L2018_res-scale1_conndata-network_connectivity.gpickle
Then, we can load and order the names of the labels from the dictionary storing the ROI time series results, and visualize the connectivity matrix in a pretty circular layout with MNE-Connectivity’s viz.plot_connectivity_circle() as follows:
[25]:
%%time
label_names = [label.name for label in rtc_epo['labels']]
lh_labels = [name for name in label_names if name.endswith('lh')]
# Get the y-location of each label
label_ypos = list()
for name in lh_labels:
    idx = label_names.index(name)
    ypos = np.mean(rtc_epo['labels'][idx].pos[:, 1])
    label_ypos.append(ypos)
# Reorder the labels based on their location
lh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))]
# For the right hemisphere
rh_labels = [label[:-2] + 'rh' for label in lh_labels]
# Save the plot order and create a circular layout
node_order = list()
node_order.extend(lh_labels[::-1])  # reverse the order
node_order.extend(rh_labels)
node_angles = mnec.viz.circular_layout(label_names, node_order, start_pos=90,
                                       group_boundaries=[0, len(label_names) / 2])
# Plot the graph using node colors from the FreeSurfer parcellation.
# We only show the 300 strongest connections.
%matplotlib inline
mnec.viz.plot_connectivity_circle(A_wpli2_debiased, label_names, n_lines=300,
                                  node_angles=node_angles, node_colors='r',
                                  title='')
CPU times: user 8.77 s, sys: 47.8 ms, total: 8.82 s
Wall time: 8.41 s
[25]:
(<Figure size 576x576 with 2 Axes>, <PolarAxesSubplot:>)
This concludes the tutorial 🧠!
We hope you enjoyed it! Any feedback or suggestions to improve it are very welcome; just open a new issue on GitHub and share your thoughts with us.
Copyright (C) 2009-2022, Ecole Polytechnique Fédérale de Lausanne (EPFL) and
Hospital Center and University of Lausanne (UNIL-CHUV), Switzerland, & Contributors,
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
Neither the name of the Ecole Polytechnique Fédérale de Lausanne (EPFL)
and Hospital Center and University of Lausanne (UNIL-CHUV) nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL “Ecole Polytechnique Fédérale de Lausanne (EPFL) and
Hospital Center and University of Lausanne (UNIL-CHUV), Switzerland & Contributors”
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This version fully integrates the new pipeline dedicated to the EEG modality inside the BIDS App and the GUI.
What’s Changed
Updates
The conda environment files for cmpbidsappmanager
(conda/environment.yml and conda/environment_macosx.yml) have been greatly modified
(PR #212).
This includes the following updates:
python: from 3.7 to 3.9
pip: from 21.3.1 to 22.2
indexed_gzip: from 1.6.4 to 1.6.13
git-annex (conda/environment.yml only): from 8.20211123 to 10.20220724
qt/pyqt 5.15.4 installed via conda-forge
pyqt5-sip 12.9.0 (version compatible with qt/pyqt 5.15.4) installed via conda-forge
In addition, the created environment has been renamed py39cmp-gui to be consistent
with the new python version installed in the environment.
In all conda environment *.yml and requirements.txt files, datalad and its container extension have
been updated to the following versions
(PR #209):
datalad: from 0.15.4 to 0.17.2
(See Datalad changelog for more details).
datalad-container: from 1.1.5 to 1.1.6
New features
The new pipeline dedicated to the EEG modality has been integrated into the BIDS App
and cmpbidsappmanager
(PR #201 and
PR #205).
EEG pipeline configuration files are passed to the BIDS
App or its docker/singularity python wrapper via the option flag --eeg_pipeline.
A new tab has been added to the configurator window of cmpbidsappmanager for
the setup and saving of configuration files for the EEG pipeline. A new tab has
also been added to the output inspector window of cmpbidsappmanager to enable
the visual inspection of outputs generated by the EEG pipeline. The EEG
configuration file can now be specified in the BIDS App interface window of
cmpbidsappmanager, and the command to run the BIDS App has been updated. A new
EEGConnectomeStage stage has been implemented that builds the connectivity
matrices from the extracted ROI time-series using the function
spectral_connectivity_epochs of
MNE Connectivity. A new utility script visualize_eeg_pipeline_outputs.py has been
implemented in the cmp/cli module, which is called by the output inspector window
of cmpbidsappmanager.
New option to enable or disable band-pass filtering in the fMRI pipeline.
(PR #200)
Code refactoring
Major refactoring of all the code related to the EEG pipeline
(PR #198).
This includes:
Refactoring of all inputs, outputs, and config traits of the different stages
Modification of (1) cmp.pipelines.functional.eeg.py and (2) the tutorial
notebook for the EEG pipeline that integrates all previously mentioned changes
Bug fix
Problems to install and launch cmpbidsappmanager on Ubuntu. (PR #212)
Pin nibabel to 3.2.2, as the imported functions of nibabel.trackvis have been moved since 4.0.0 and caused errors.
(PR #XX)
Fix the problem of traits not being updated while making the diffusion pipeline config with ACT.
(PR #200)
Documentation
Update/add documentation for the EEG pipeline
(PR #208).
This includes:
Update the BIDS flowchart displayed in README and in docs/index.rst with the EEG pipeline. The SVG can be found inside the docs/images/svg directory.
Make appropriate changes to docs/index.rst and README around the EEG pipeline
Show call to --eeg_pipeline in docs/usage.rst
Show how to configure and check outputs of EEG pipeline in docs/bidsappmanager.rst
Add link to VEPCON dataset as example with EEG in docs/cmpbids.rst
Software development life cycle
Optimization of resources stored in the cache and in the workspace.
(PR #201)
Add tests 10 and 11 that run the EEG pipeline with the MNE and Cartool ESI workflows, respectively.
(PR #201)
Add missing package data for parcellation in setup_pypi.py. (PR #182)
Use HTTPS instead of SSH for datalad clone in notebooks. (PR #181)
Add missing condition to handle custom BIDS files with session. (PR #183)
Integrate fix from Napari project for issues with menubar on Mac. (PR #174)
Use the most recent PyQt5 instead of PySide2 (older) for graphical backend of cmpbidsappmanager, which provides a fix to run Qt-based GUI on MacOSX Big Sur. (PR #188)
Documentation
Correct the conda env create instruction in the README. (PR #164)
Refer to contributing guidelines in the README. (PR #167)
Use sphinx-copybutton extension in the docs. (PR #168)
Add notes about docker image and conda environment size and time to download. (PR #169)
JOSS paper
Integrate minor wording tweaks by @jsheunis. (PR #162)
Add higher level summary and rename the old summary to “Overview of Functionalities”. (PR #175)
License
The license has been updated to a pure 3-clause BSD license to comply with JOSS. (PR #163)
Software development life cycle
Migrate from Ubuntu 16.04 (now deprecated) to 20.04 on CircleCI. (PR #172)
This version introduces the new pipeline dedicated to the EEG modality with a tutorial, updates FreeSurfer to 7.1.1, and adds a new tutorial that shows how to analyze the CMP3 connectomes.
This version mostly introduces the capability to estimate the carbon footprint of CMP3 execution and fixes problems of conflicts during the creation of the conda environment.
It incorporates in particular the following changes.
New features
Allow the estimation of the carbon footprint while using the BIDS App python wrappers and the GUI.
Estimations are conducted using codecarbon. All functions supporting
this feature have been implemented in the new module cmtklib.carbonfootprint.
See PR #136 for more details.
Code changes
Creation of init_subject_derivatives_dirs() for AnatomicalPipeline, DiffusionPipeline, and fMRIPipeline
that returns the paths to the Nipype and CMP derivatives folders of a given subject / session for a given pipeline.
This removed all the implicated code from the process() method and improves modularity and readability.
In the future, the different functions could be merged as there is a lot of code duplication between them.
AnatomicalPipeline, DiffusionPipeline, and fMRIPipeline workflows are run with the MultiProc plugin.
Bug fix
Major update of the conda/environment.yml and conda/environment_macosx.yml to correct the problems of conflicts in the previous version,
as reported in issue #137. This has resulted in the following package updates:
Documentation
Add description of the carbon footprint estimation feature.
Improve the description of how to use already-computed FreeSurfer derivatives.
Misc
Add bootstrap CSS and jquery JS as resources to cmtklib/data/report/carbonfootprint.
They are used to display the carbon footprint report in the GUI.
Clean the resources related to parcellation in cmtklib/data/parcellation and rename all files and mentions of lausanne2008 to lausanne2018.
Removed the unused cmtklib.interfaces.camino, cmtklib.interfaces.camino2trackvis,
and cmtklib.interfaces.diffusion modules.
Mark the parts of the code we know won’t be executed with #pragma:nocover so that Coverage.py ignores them.
Create and use a coveragerc file to run Coverage.py with --concurrency=multiprocessing,
to allow tracking of code inside the Nipype interfaces, which are now managed by multiprocessing.
Code style
Correct a number of code style issues with class names.
This version is mostly a bug fix release that allows the python packages of Connectome Mapper 3 to be available on PyPI.
It incorporates Pull Request #132 which includes the following changes.
Bug fix
Rename the project in setup.py and setup_pypi.py from "cmp" to "connectomemapper".
A project named "cmp" already existed on PyPI, which caused continuous integration on CircleCI to fail while uploading the python packages of CMP3 during the v3.0.0 release.
Code refactoring
Make cmp.bidsappmanager.gui.py more lightweight by splitting the classes defined there into different files.
(See Issue #129 for more discussion details)
Split the create_workflow() method of the RegistrationStage into create_ants_workflow(), create_flirt_workflow(), and create_bbregister_workflow() methods.
(See Issue #95 for more discussion details)
Code style
Correct a number of code style issues with class names.
This version corresponds to the first official release of Connectome Mapper 3 (CMP3).
It incorporates Pull Request #88 (>450 commits)
which includes the following changes.
Updates
traits has been updated from 6.0.0 to 6.2.0.
traitsui has been updated from 6.1.3 to 7.0.0.
pybids has been updated from 0.10.2 to 0.14.0.
nipype has been updated from 1.5.1 to 1.7.0.
dipy has been updated from 1.1.0 to 1.3.0.
obspy has been updated from 1.2.1 to 1.2.2.
New features
CMP3 can take custom segmentation (brain, white-matter, gray-matter, and
CSF masks, and Freesurfer's aparc+aseg - used for ACT for PFT) and parcellation
files as long as they comply with the BIDS Derivatives specifications,
by providing the label value for the different entities in the filename.
This has led to the creation of the new module cmtklib.bids.io,
which provides different classes to represent the diversity of custom input
BIDS-formatted files.
(PR #88)
CMP3 generates generic label-index mapping TSV files along with the parcellation
files, in accordance with
BIDS Derivatives.
This has led to the creation of the CreateBIDSStandardParcellationLabelIndexMappingFile
and CreateCMPParcellationNodeDescriptionFilesFromBIDSFile interfaces, which allow us to
create the BIDS label-index mapping file from the parcellation node description files employed
by CMP3 (which include _FreeSurferColorLUT.txt and _dseg.graphml), and vice versa.
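For illustration, such a tab-separated label-index mapping file might look as follows (hypothetical region labels; per the BIDS Derivatives specification, the required columns are index and name)::

    index   name
    1       ctx-lh-bankssts
    2       ctx-lh-caudalanteriorcingulate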
CMP3 provides python wrappers for the Docker and Singularity container images
(connectomemapper3_docker and connectomemapper3_singularity)
that will generate and execute the appropriate command to run the BIDS App.
(PR #109)
Lausanne2018 parcellation has completely replaced the old Lausanne2008 parcellation.
In brief, the new parcellation was introduced to provide (1) symmetry of labels
between hemispheres, and (2) a more optimal generation of the volumetric parcellation images,
that now are generated at once from annot files. This fixes the issue of overwritten labels
encountered in the process of creating the Lausanne2008 parcellation. Any code and data
related to Lausanne2008 have been removed. If one still wishes to use this old parcellation scheme,
one should use CMP3 (v3.0.0-RC4).
Output updates
Directories for the derivatives produced by cmp (cmp, freesurfer, nipype)
were renamed to cmp-<version>, freesurfer-<version>, and
nipype-<version> to comply with BIDS 1.4.0+.
(PR #3 (fork))
Code refactoring
Creation in AnatomicalPipeline, DiffusionPipeline, fMRIPipeline of
create_datagrabber_node() and create_datasinker_node() methods to
reduce the code in create_workflow().
The run(command) function of cmp.bidsappmanager.core has been moved to
cmtklib.process, which is used by the python wrappers in cmp.cli.
Pipeline Improvements
Better handling of existing Freesurfer outputs. In this case, CMP3 no longer
re-creates mri/orig/001.mgz or connects the recon-all interface.
The creation of the 5TT, gray-matter / white-matter interface, and partial volume map images
is performed in the preprocessing stage of the diffusion pipeline only when
necessary.
Code Style
Clean code and remove a number of commented lines that are now obsolete.
Code related to the connection of nodes in a Nipype Workflow adopts a
specific format and is protected from being reformatted by BLACK with
the # fmt: off and # fmt: on tags.
Documentation
Add instructions to use custom segmentation and parcellation files as inputs.
Add a description in the contributing page of the format for code related to
the connection of the nodes in a Nipype Workflow.
Add instructions to use the python wrappers for running the BIDS App.
(PR #115)
Add notification about the removal of the old Lausanne2008 parcellation, and
remove any other mentions in the documentation.
Software container
Define multiple build stages in the Dockerfile, which can be run in parallel at build time
with BUILDKIT.
(PR #88)
Software development life cycle
Update the list of outputs of the CircleCI tests with the new names of the
directories produced by cmp in output_dir/.
Following major changes in the pricing plans of CircleCI, but also to improve its readability,
circleci/config.yml has been dramatically refactored, including:
* Use of BUILDKIT in docker build to take advantage of the multi-stage build
* Reordering and modularization of the tests:
  tests 01-02 (Docker): anatomical pipeline for each parcellation scheme
  tests 03-06 (Docker): diffusion pipeline for dipy/mrtrix deterministic/probabilistic tractography
  tests 07-08 (Docker): fMRI pipeline for FLIRT and BBRegister registrations
  test 09 (Singularity): anatomical pipeline for the Lausanne2018 scheme
* Creation of commands for steps that are shared between jobs to reduce code duplication
This version corresponds to the fourth and final release
candidate of Connectome Mapper 3 (CMP3).
It incorporates the relatively large
Pull Request #74 (~270 commits),
which includes the following changes and marks
the end of the release candidate phase.
New features
CMP3 pipeline configuration files adopt JSON as their new format.
(PR #76)
CMP3 is compatible with PyPI for installation.
(PR #78)
BIDS-convention naming of data derived from a parcellation atlas now adopts the new BIDS
entity atlas-<atlas_label> to distinguish data derived from different parcellation
atlases. The use of the entity desc-<scale_label> to distinguish between
parcellation scales has been replaced by the entity res-<scale_label>.
(PR #79)
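For illustration, with these entities a parcellation image for the third scale of the Lausanne2018 atlas might be named as follows (hypothetical subject label)::

    sub-01_atlas-L2018_res-scale3_dseg.nii.gz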
Updates
Content of dataset_description.json for each derivatives folder has been updated
to conform to BIDS version 1.4.0.
(PR #79)
Code refactoring
Major refactoring of the cmtklib.config module with the addition and
replacement of a number of new methods to handle JSON configuration files.
(See full diff on GitHub)
Configuration files in the old INI format can be converted automatically
with the help of the two new methods check_configuration_format()
and convert_config_ini_2_json() to detect if configuration files are
in the INI format and to make the conversion.
(PR #76)
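For reference, the intended use of these two methods might look roughly as follows (a minimal sketch; the exact signatures and return values are assumptions, not the documented API)::

    from cmtklib.config import (
        check_configuration_format,
        convert_config_ini_2_json,
    )

    config_path = "sub-01_anatomical_config.ini"  # hypothetical file
    # Detect whether the file is in the old INI format...
    if check_configuration_format(config_path) == ".ini":
        # ...and convert it to the new JSON format
        config_path = convert_config_ini_2_json(config_path)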
Major changes to make cmp and cmpbidsappmanager compatible with the
Python Package Index (pip) for package distribution and installation.
This includes the merge of setup.py and setup_gui.py
into a single setup.py, a major refactoring to make
pip happy, the creation of a new cmp.cli module, and the
migration of the scripts connectomemapper3, showmatrix_gpickle,
and cmpbidsappmanager to the cmp.cli module and their refactoring,
with correction of code style issues and addition of missing docstrings.
(PR #78)
Improvements
Clean parameters to be saved in configuration files with the new API.
(PR #74)
Clean output printed by the cmpbidsappmanager Graphical User Interface.
(PR #74)
Add in cmtklib.config the three new functions print_error, print_blue,
and print_warning, which use different colors to differentiate general info
(default color), errors (red), commands or actions (blue), and highlights or
warnings (yellow).
(PR #74)
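For illustration, these helpers would be used roughly as follows (a minimal sketch, assuming they are importable from cmtklib.config as stated above; the messages are hypothetical)::

    from cmtklib.config import print_blue, print_error, print_warning

    print_blue("> docker run ...")            # command or action (blue)
    print_warning("Output directory exists")  # highlight / warning (yellow)
    print_error("Pipeline execution failed")  # error (red)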
Clean code and remove a number of commented lines that are now obsolete.
(PR #74,
PR #79)
Documentation
Review usage and add a note regarding the adoption of the new JSON format
for configuration files.
(PR #76)
Update tutorial on using CMP3 and Datalad for collaboration.
(PR #77)
Update the installation instructions of cmpbidsappmanager using pip install.
(PR #78)
Update list of outputs following the new BIDS derivatives naming convention introduced.
(PR #79)
Bug fixes
Correct attributes related to the diffusion imaging model type multishell.
(PR #74)
Review code in cmtklib/connectome.py for saving functional connectome files
in GRAPHML format.
(PR #74)
Software Updates
Update version of datalad and dependencies
(PR #77):
datalad[full]==0.13.0 to datalad[full]==0.14.0.
datalad-container==0.3.1 to datalad-container==1.1.2.
datalad_neuroimaging==0.2.0 to datalad-neuroimaging==0.3.1.
git-annex=8.20200617 to git-annex=8.20210127.
datalad-revolution was removed.
Software development life cycle
Improve code coverage by calling the methods check_stages_execution()
and fill_stages_outputs()
on each pipeline when executed with coverage.
(PR #75)
Improve code coverage by saving in test-01 structural connectome files in MAT
and GRAPHML format.
(PR #74)
Improve code coverage by saving in test-07 functional connectome files
in GRAPHML format.
(PR #74)
Update the list of outputs for all tests.
(PR #74)
Add a test-python-install job that tests the build and installation of the cmp
and cmpbidsappmanager packages compatible with pip.
(PR #78)
Update code for Dipy tracking with the DTI model following major changes in Dipy 1.0 (fixes reported issue #54).
The update to Dipy 1.3.0 removed the deprecation warnings related to CVXPY when using MAP-MRI (#63).
OMP_NUM_THREADS is no longer set at execution time, due to allocation errors raised when the numpy dot function is used in Dipy.
Software development life cycle
Add Test08 that runs the anatomical and fMRI pipelines with:
Lausanne2018 parcellation, FSL FLIRT co-registration, all nuisance regressions, linear detrending, and scrubbing
A number of classes describing interfaces to fsl and mrtrix3 have been moved from cmtklib/interfaces/util.py to cmtklib/interfaces/fsl.py and cmtklib/interfaces/mrtrix3.py.
Capitalize the first letter of a number of class names.
Lowercase a number of variable names in cmtklib/parcellation.py.
Graphical User Interface
Improve display of QPushButton widgets with images in the GUI (PR #52).
Make the window to control BIDS App execution scrollable.
Allow to specify a custom output directory.
Tune new options in the window to control BIDS App multi-threading (OpenMP and ANTs) and random number generators (ANTs and MRtrix).
Documentation
Full code documentation with numpydoc-style docstrings.
Fix the error reported in #17 if it is still occurring.
Review statements for creating contents of BIDS App entrypoint scripts to fix issue with Singularity converted images reported in #47.
Install dc package inside the BIDS App to fix the issue with FSL BET reported in #50.
Install libopenblas package inside the BIDS App to fix the issue with FSL EDDY_OPENMP reported in #49.
Software development life cycle
Add a new job test_docker_fmri that tests the fMRI pipeline.
Add build_singularity, test_singularity_parcellation, deploy_singularity_latest, and deploy_singularity_release jobs to build, test and deploy the Singularity image in CircleCI (PR #56).
This version corresponds to the first release candidate of Connectome Mapper 3. It integrates Pull Request #40, where the last major changes prior to the official release were made, including:
Migration to Python 3
Fix, automatically with 2to3 and manually, a number of Python 2 statements that are invalid in Python 3, including print statements converted to the print() function
Correct PEP8 code style issues automatically with autopep8
Correct manually a number of code style issues reported by Codacy (bandit/pylint/flake8)
Major dependency upgrades including:
dipy 0.15 -> 1.0 and related code changes in cmtklib/interfaces/dipy (check here for more details about Dipy 1.0 changes)
Warning
The interface for tractography based on the Dipy DTI model and EuDX tractography, which has been drastically changed in Dipy 1.0, has not been updated yet. It will be part of the next release candidate.
nipype 1.1.8 -> 1.5.0
pybids 0.9.5 -> 0.10.2
pydicom 1.4.2 -> 2.0.0
networkX 2.2 -> 2.4
statsmodels 0.9.0 -> 0.11.1
obspy 1.1.1 -> 1.2.1
traits 5.1 -> 6.0.0
traitsui 6.0.0 -> 6.1.3
numpy 1.15.4 -> 1.18.5
matplotlib 1.1.8 -> 1.5.0
fsleyes 0.27.3 -> 0.33.0
mne 0.17.1 -> 0.20.7
sphinx 1.8.5 -> 3.1.1
sphinx_rtd_theme 0.4.3 -> 0.5.0
recommonmark 0.5.0 -> 0.6.0
New feature
Option to run Freesurfer recon-all in parallel and to specify the number of threads used not only by Freesurfer but also by all software relying on OpenMP for multi-threading. This can be achieved by running the BIDS App with the new option flag --number_of_threads.
Functions to save and load pipeline configuration files have been moved to cmtklib/config.py
Bug fixes
Major changes to how the graphical user interface (cmpbidsappmanager) inspects stage/pipeline outputs, which no longer worked after the migration to Python 3
Fixes to compute the structural connectivity matrices following the migration to Python 3
Fixes to compute ROI volumetry for the Lausanne2008 and NativeFreesurfer parcellation schemes
Add missing renaming of the ROI volumetry file for the NativeFreesurfer parcellation scheme following BIDS
Create the mask used for computing peaks from the Dipy CSD model when performing Particle Filtering Tractography (development still ongoing)
Add missing renaming of Dipy tensor-related maps (AD, RD, MD) following BIDS
Remove all references to the use of custom segmentation / parcellation / diffusion FOD image / tractogram inputs, inherited from CMP2 but no longer functional following the adoption of the BIDS standard inside CMP3.
Software development life cycle
Use Codacy to support code reviews and monitor code quality over time.
Use Coverage.py in CircleCI during regression tests of the BIDS App and create code coverage reports published on our Codacy project page.
Add new regression tests in CircleCI to improve code coverage:
This version integrates Pull Request #33 which corresponds to the last beta release that still relies on Python 2.7. It includes in particular:
Upgrade
Use fsleyes instead of fslview (now deprecated), which is now included in the conda environment of the GUI (py27cmp-gui).
New feature
Computation of ROI volumetry, stored in the <output_dir>/sub-<label>(/ses-<label>)/anat folder and recognized by the _stats.tsv filename suffix.
Improved replicability
Sets the MATRIX_RNG_SEED environment variable (used by MRtrix) and the seed of the numpy random number generator (numpy.random.seed())
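In practice, this amounts to something like the following (a minimal sketch; the seed value is hypothetical, not the one used by CMP3)::

    import os

    import numpy

    # Fix the random number generator seed used by MRtrix tools...
    os.environ["MATRIX_RNG_SEED"] = "1234"
    # ...and the seed of numpy's global random number generator
    numpy.random.seed(1234)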
Bug fixes
Fixes the output inspector window of cmpbidsappmanager (GUI) that failed to find existing outputs after the adoption of /bids_dir and /output_dir in the bidsapp docker image.
Fixes the way the list of networkx edge attributes is obtained in inspect_outputs() of ConnectomeStage for the output inspector window of cmpbidsappmanager (GUI)
Added missing package dependencies (fury and vtk), which fixes the dipy_CSD execution error raised when importing the actor module from dipy.viz to save the results in a PNG
Fixes a number of unresolved references identified by the PyCharm code inspection tool
Code refactoring
Interfaces for fMRI processing were moved to cmtklib/functionalMRI.py.
Interface for fMRI connectome creation (rsfmri_conmat) moved to cmtklib/connectome.py.
This version integrates Pull Request #28 which includes in summary:
A major revision of continuous integration testing and deployment with CircleCI, which closes Issue #14 and integrates an in-house dataset published and available on Zenodo at https://doi.org/10.5281/zenodo.3708962.
Multiple bug fixes and enhancements, including the closing of Issue #30, the update of mrtrix3 to the RC3 version, fixes to the bids-app run command generated by the GUI, and a more BIDS-compliant location for the configuration and log files.
A change in how beta versions are tagged, which otherwise might not be meaningful with respect to the release date (especially when the expected date is delayed due to unexpected errors that take longer to fix than expected).
This version addresses multiple issues to make successful conversion and run of the CMP3 BIDS App on HPC (Clusters) using Singularity.
Revised the build of the master and BIDS App images:
Install locales and set $LC_ALL and $LANG to make the freesurfer hippocampal subfields and brainstem segmentation (matlab-based) modules work when run in the converted Singularity image
BIDS input and output directories inside the BIDS App container are no longer the /tmp and /tmp/derivatives folders but /bids_dir and /output_dir.
.. warning:: This might affect the use of Datalad containers (to be confirmed).
Fix the branch of mrtrix3 to check out
Updated metadata
Fix the configuration of CircleCI to no longer use the Docker layer cache feature, as it is no longer included in the free plan for open-source projects.
Improved documentation, in which the latest version is now dynamically generated everywhere it appears.
Implementation of an in-house Nipype interface to AFNI 3dBandpass, which can handle checking for output as ..++orig.BRIK or as ..tlrc.BRIK (the latter can occur with HCP preprocessed fMRI data)
Updated multi-scale parcellation with a new symmetric version:
The right hemisphere labels were projected in the left hemisphere to create a symmetric version of the multiscale cortical parcellation proposed by Cammoun2012.
For scale 1, the boundaries of the projected regions over the left hemisphere were matched to the boundaries of the original parcellation for the left hemisphere.
This transformation was applied for the rest of the scales.
If you are using Connectome Mapper 3 in your work, please acknowledge this software with the following two entries:
Tourbier S, Aleman-Gomez Y, Mullier E, Griffa A, Wirsich J, Tuncel MA, Jancovic J, Bach Cuadra M, Hagmann P, (2022). Connectome Mapper 3: A Flexible and Open-Source Pipeline Software for Multiscale Multimodal Human Connectome Mapping. Journal of Open Source Software, 7(74), 4248, https://doi.org/10.21105/joss.04248
@article{TourbierJOSS2022,
  doi = {10.21105/joss.04248},
  url = {https://doi.org/10.21105/joss.04248},
  year = {2022},
  publisher = {{The Open Journal}},
  volume = {7},
  number = {74},
  pages = {4248},
  author = {Tourbier, Sebastien and Rue Queralt, Joan and Glomb, Katharina and Aleman-Gomez, Yasser and Mullier, Emeline and Griffa, Alessandra and Schöttner, Mikkel and Wirsich, Jonathan and Tuncel, Anil and Jancovic, Jakub and Bach Cuadra, Meritxell and Hagmann, Patric},
  title = {{Connectome Mapper 3: A Flexible and Open-Source Pipeline Software for Multiscale Multimodal Human Connectome Mapping}},
  journal = {{Journal of Open Source Software}}
}
Tourbier S, Aleman-Gomez Y, Mullier E, Griffa A, Wirsich J, Tuncel MA, Jancovic J, Bach Cuadra M, Hagmann P. (2022). Connectome Mapper 3: A Flexible and Open-Source Pipeline Software for Multiscale Multimodal Human Connectome Mapping (v3.2.0). Zenodo. http://doi.org/10.5281/zenodo.6645256.
@software{TourbierZenodo6645256,
  author = {Tourbier, Sebastien and Rue Queralt, Joan and Glomb, Katharina and Aleman-Gomez, Yasser and Mullier, Emeline and Griffa, Alessandra and Schöttner, Mikkel and Wirsich, Jonathan and Tuncel, Anil and Jancovic, Jakub and Bach Cuadra, Meritxell and Hagmann, Patric},
  title = {{Connectome Mapper 3: A Flexible and Open-Source Pipeline Software for Multiscale Multimodal Human Connectome Mapping}},
  month = jun,
  year = 2022,
  publisher = {Zenodo},
  version = {v3.0.4},
  doi = {10.5281/zenodo.6645256},
  url = {https://doi.org/10.5281/zenodo.6645256}
}
The development philosophy for this new version of the Connectome Mapper is to:
Enhance interoperability by working with datasets structured following the Brain Imaging Data Structure (BIDS) standard.
Keep the code of the processing as much as possible outside of the actual main Connectome Mapper code,
through the use and extension of existing Nipype interfaces and an external library (dubbed cmtklib).
Separate the code of the graphical interface from the actual main Connectome Mapper code
through inheritance of the classes of the actual main stages and pipelines.
Enhance portability by freezing the computing environment with all software dependencies installed,
through the adoption of the BIDS App framework relying on light software container technologies.
Adopt best modern open-source software practices, which include continuously testing the build and execution of the BIDS App
with code coverage, and following the PEP8 and PEP257 conventions for Python code and docstring style. The use
of an integrated development environment such as PyCharm or Sublime Text with a python linter (code style checker) is strongly recommended.
Follow the all contributors specification to acknowledge any kind of contribution.
This means that contributions in many different ways (discussed in the following subsections) are welcome and will be properly acknowledged!
If you have contributed to CMP3 and are not listed as a contributor, please add yourself and make a pull request.
This also means that further development, typically additions of other tools and configuration options should go in this direction.
Adding new configuration options to existing stages should be self-explanatory. If the addition is large enough to be considered a "sub-module" of an existing stage, see the Diffusion stage example.
Adding a new stage implies the addition of the stage folder to the cmp/stages and cmp/bidsappmanager/stages directories and the corresponding modification of the parent pipeline, along with the insertion of a new image in cmp/bidsappmanager/stages. Copy-pasting an existing stage (such as the segmentation stage) is recommended. Note that CMP3 adopts a specific style for code dedicated to the connection of stages and interfaces, which is as follows:
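A minimal runnable sketch of this connection style (the workflow, node, and field names here are hypothetical, not taken from the CMP3 code base)::

    import nipype.pipeline.engine as pe
    from nipype.interfaces.utility import IdentityInterface

    anat_flow = pe.Workflow(name="anatomical_pipeline")
    seg_node = pe.Node(IdentityInterface(fields=["subjects_dir"]), name="segmentation")
    parc_node = pe.Node(IdentityInterface(fields=["subjects_dir"]), name="parcellation")
    # fmt: off
    anat_flow.connect(
        [
            (seg_node, parc_node, [("subjects_dir", "subjects_dir")]),
        ]
    )
    # fmt: on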
The # fmt: off and # fmt: on flags protect the enclosed lines from being reformatted by BLACK.
Adding a new pipeline implies the creation of a new pipeline script and folder in the cmp/pipelines and cmp/bidsappmanager/pipelines directories. Again, copy-pasting an existing pipeline is the better idea here. Modification of the cmp/project.py and cmp/bidsappmanager/project.py files is also needed.
Each new module, class, or function should be properly documented with a docstring in accordance with the Numpy docstring style.
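For illustration, a docstring in this style might look as follows (a hypothetical helper function, not part of the CMP3 API)::

    def compute_node_degree(connectome, node):
        """Return the degree of a node in a connectome graph.

        Parameters
        ----------
        connectome : networkx.Graph
            Graph whose nodes are brain regions.
        node : str
            Label of the region of interest.

        Returns
        -------
        int
            Number of edges incident to ``node``.
        """
        return connectome.degree(node)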
CMP3 could always use more documentation, whether as part of the official CMP3 docs, in docstrings, or even on the web in blog posts, articles, and such.
When you commit changes related to the documentation, please always insert [skip ci] at the end of your message so that continuous integration of the whole project is not performed with CircleCI.
Now you can make your changes locally. If you add a new node in a pipeline or a completely new pipeline, we encourage you to rebuild the BIDS App Docker image (See BIDS App build instructions).
Note
Please keep each commit specific to the change it describes. It is highly advised to track un-staged files with git status, and to stage the files involved in the change one by one with git add <file>. The use of git add . is highly discouraged. When all the files for a given change are staged, commit the files with a brief message using git commit -m "[COMMIT_TYPE]: Your detailed description of the change.", where [COMMIT_TYPE] can be [FIX] for a bug fix, [ENH] for a new feature, [MAINT] for code maintenance and typo fixes, [DOC] for documentation, [CI] for continuous integration testing, [UPD] for a dependency update, or [MISC] for miscellaneous.
When you’re done making changes, push your branch to GitHub:
Go to the clone directory of your fork and run the script build_bidsapp.sh
cd connectomemapper3
sh scripts/build_bidsapp.sh
Note
The tag of the version of the image is extracted from cmp/info.py. You might want to change the version in this file to avoid overwriting another existing image with the same version.