Contains code to create the dataframe for selective recruitment testing. For more information, see (selective recruitment paper).
This module uses the Functional_Fusion and cortico_cereb_connectivity packages and assumes that your data are organized according to the directory structure defined by the Functional_Fusion framework. See https://github.com/DiedrichsenLab/Functional_Fusion for information on dependencies, data structures, and how to organize your dataset.
First, clone the Functional_Fusion and cortico_cereb_connectivity repositories:
git clone https://github.com/DiedrichsenLab/Functional_Fusion.git
git clone https://github.com/DiedrichsenLab/cortico_cereb_connectivity.git
Second, clone the selective recruitment repository:
git clone https://github.com/DiedrichsenLab/selective_recruitment.git
Third, open your .bashrc with a text editor and add the paths to these repositories to your PYTHONPATH. For example:
export PYTHONPATH="${PYTHONPATH}:/home/ROBARTS/lshahsha/Documents/Projects/Functional_Fusion"
export PYTHONPATH="${PYTHONPATH}:/home/ROBARTS/lshahsha/Documents/Projects/cortico_cereb_connectivity"
export PYTHONPATH="${PYTHONPATH}:/home/ROBARTS/lshahsha/Documents/Projects/selective_recruitment"
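After saving, reload your shell configuration (or open a new terminal) so the new paths take effect:
source ~/.bashrc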
Next, cd into your local copy of the selective_recruitment repository, create a virtual environment, activate it, and install the required dependencies:
python3 -m venv ./env
source ./env/bin/activate
pip install -r requirements.txt
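To check that the setup worked, you can try importing the packages from within the activated environment; the top-level module names below are assumed to mirror the repository names:
python -c "import Functional_Fusion, cortico_cereb_connectivity, selective_recruitment"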
Data must first be extracted using the Functional_Fusion framework; see extract_.py under scripts. For example, to extract the session used below:
extract_wmfs(ses_id='ses-02', type='CondAll', atlas='fs32k')  # extract ses-02 (all conditions) in the fs32k atlas space
Then use selective_recruitment.recruite_ana to build the summary dataframe. For a whole-brain summary (no ROI restriction):
import selective_recruitment.recruite_ana as ra
# whole-brain summary: no cerebellar or cortical ROI specified
D_whole = ra.get_summary(dataset="WMFS",
                         ses_id='ses-02',
                         type="CondAll",
                         cerebellum_roi=None,
                         cortex_roi=None,
                         add_rest=True)
# you can save the dataframe on your computer
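For example, assuming the summary is a pandas DataFrame, a minimal way to write it to disk (the file name and tab-separated format are illustrative only):
D_whole.to_csv('wmfs_ses-02_CondAll_summary.tsv', sep='\t', index=False)  # illustrative output path
To restrict the summary to specific regions of interest, pass the cerebellar and cortical ROI names: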
D_roi = ra.get_summary(dataset="WMFS",
                       ses_id='ses-02',
                       type="CondAll",
                       cerebellum_roi="Verbal2Back",
                       cortex_roi="Verbal2Back.32k",
                       add_rest=True)
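To take a quick look at the result, you can print the first rows and column names (nothing about the specific columns is assumed here):
print(D_roi.head())              # first few rows of the ROI-based summary
print(D_roi.columns.tolist())    # columns available in the dataframe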
Finally, use script_prep_sc under scripts to create the summary dataframe for the connectivity-based approach.