General linear model#

0. Introduction#

This notebook contains the code for the general linear model (GLM).
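
As a brief refresher (a minimal toy sketch with synthetic data, not part of the analysis below): the GLM models each voxel's time series y as a weighted sum of the design matrix columns plus noise, y = Xβ + ε, and the weights β are estimated by least squares.

import numpy as np

# toy GLM: 227 "scans", 3 regressors, known weights
rng = np.random.default_rng(0)
X_toy = rng.standard_normal((227, 3))                # design matrix (one column per regressor)
beta_true = np.array([2.0, -1.0, 0.5])
y_toy = X_toy @ beta_true + rng.standard_normal(227) # voxel time series = signal + noise

beta_hat = np.linalg.pinv(X_toy) @ y_toy             # least-squares estimate of the weights
print(beta_hat)                                      # close to beta_true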

First we will need to load the data. We are now dealing with the preprocessed data; the preprocessing was done with fMRIPrep. fMRIPrep describes itself as follows: “(…)It performs basic processing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skullstripping etc.) providing outputs that can be easily submitted to a variety of group level analyses, including task-based or resting-state fMRI, graph theory measures, surface or volume-based statistics, etc.”

Thus, we will now set up our first-level model. The predictors are the respective categories of the stimuli; the dependent variable is the brain activity in a given voxel. So let's start by loading the data! For now we are dealing with the five runs of subject one from the first session of the imagery task.

import os
os.chdir('/mnt/c/Users/janos/git/sub01-ses-01-imagery')

We can now take a look at all the files we have. In total, we have five runs. For each run, we have a .json file, a .nii.gz file and a confounds.tsv file. In the Dataset_Exploration notebook we had a similar structure, but instead of the events.tsv file we now have the confounds file. The confounds file contains the regressors of no interest. Those are variables computed during the preprocessing that we add to our first-level model so that their variance is accounted for, even though we are not interested in their effects. We also see that we are dealing with preprocessed data, because the respective file names carry the additional “preproc”.

ls
sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.json*
sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.nii.gz*
sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_confounds.tsv*
sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_bold.json*
sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_bold.nii.gz*
sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_confounds.tsv*
sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_bold.json*
sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_bold.nii.gz*
sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_confounds.tsv*
sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_bold.json*
sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_bold.nii.gz*
sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_confounds.tsv*
sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_bold.json*
sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_bold.nii.gz*
sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_confounds.tsv*
z_maps/

We can now load one sample confounds.tsv file to see what the regressors of no interest look like.

import pandas as pd
confounds=pd.read_csv('sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_confounds.tsv',sep='\t')
confounds
global_signal global_signal_derivative1 global_signal_derivative1_power2 global_signal_power2 csf csf_derivative1 csf_power2 csf_derivative1_power2 white_matter white_matter_derivative1 ... rot_z_derivative1 rot_z_power2 rot_z_derivative1_power2 motion_outlier00 motion_outlier01 motion_outlier02 motion_outlier03 motion_outlier04 motion_outlier05 motion_outlier06
0 554.498550 NaN NaN 307468.641921 826.870090 NaN 683714.146374 NaN 551.701398 NaN ... NaN 0.000031 NaN 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 552.545617 -1.952933 3.813947 305306.659011 815.862929 -11.007161 665632.319586 121.157593 551.526335 -0.175063 ... -0.000709 0.000040 5.032484e-07 1.0 0.0 0.0 0.0 0.0 0.0 0.0
2 550.823228 -1.722389 2.966623 303406.228820 810.248197 -5.614732 656502.140689 31.525220 551.675734 0.149399 ... -0.000045 0.000040 2.061160e-09 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 548.659178 -2.164050 4.683113 301026.893801 807.405975 -2.842222 651904.408184 8.078227 551.932003 0.256269 ... 0.000220 0.000038 4.852768e-08 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 547.315987 -1.343191 1.804162 299554.789688 806.497248 -0.908726 650437.811788 0.825784 550.669163 -1.262840 ... 0.000058 0.000037 3.414065e-09 0.0 0.0 0.0 0.0 0.0 0.0 0.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
222 554.966041 -1.277939 1.633127 307987.307133 813.088753 -6.370867 661113.320100 40.587943 552.662723 -0.652898 ... 0.000004 0.000019 1.436410e-11 0.0 0.0 0.0 0.0 0.0 0.0 0.0
223 554.111798 -0.854243 0.729731 307039.885019 810.995302 -2.093451 657713.379057 4.382539 552.448883 -0.213841 ... 0.000335 0.000016 1.119772e-07 0.0 0.0 0.0 0.0 0.0 0.0 0.0
224 553.453749 -0.658049 0.433028 306311.052626 815.053154 4.057853 664311.643996 16.466168 552.643977 0.195094 ... -0.000573 0.000021 3.285238e-07 0.0 0.0 0.0 0.0 0.0 0.0 0.0
225 551.820319 -1.633431 2.668096 304505.664025 808.053376 -6.999778 652950.258208 48.996896 552.455372 -0.188605 ... -0.000682 0.000028 4.651786e-07 0.0 0.0 0.0 0.0 0.0 0.0 0.0
226 553.363106 1.542787 2.380193 306210.727138 809.010798 0.957423 654498.471869 0.916658 552.985391 0.530019 ... -0.000365 0.000032 1.334295e-07 0.0 0.0 0.0 0.0 0.0 0.0 0.0

227 rows × 243 columns

While there are certainly a lot of confound variables (243, to be exact), we are not including all of them in our first-level analysis. For the purposes of this course, we will only include the rotational parameters pitch, yaw and roll [rot_x, rot_y and rot_z], the translations along the x, y and z axes [trans_x, trans_y, trans_z] and the global signal.

We can extract the respective columns from our confounds dataframe and create a new one that only contains the regressors of no interest.

reg_no_interest = confounds[['global_signal', 'trans_x','trans_z','trans_y','rot_x','rot_y','rot_z']].copy()
reg_no_interest
global_signal trans_x trans_z trans_y rot_x rot_y rot_z
0 554.498550 0.152997 0.059195 0.124130 -0.004563 0.007716 -0.005596
1 552.545617 0.174779 0.158409 0.017678 -0.003366 0.007310 -0.006305
2 550.823228 0.171488 0.189918 0.002432 -0.002752 0.007173 -0.006351
3 548.659178 0.168912 0.155673 0.054476 -0.003509 0.007052 -0.006131
4 547.315987 0.171913 0.199063 0.030412 -0.003196 0.007388 -0.006072
... ... ... ... ... ... ... ...
222 554.966041 -0.271419 0.030273 -0.364504 0.006790 -0.017684 -0.004396
223 554.111798 -0.240987 -0.066789 -0.252943 0.005830 -0.016664 -0.004061
224 553.453749 -0.195186 -0.108477 -0.237718 0.005214 -0.015303 -0.004634
225 551.820319 -0.225104 -0.083225 -0.247957 0.004335 -0.015039 -0.005316
226 553.363106 -0.237460 -0.057182 -0.257723 0.004266 -0.015373 -0.005682

227 rows × 7 columns
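
As an optional sanity check before modelling, we can eyeball the motion traces (a quick sketch; to the best of my knowledge fMRIPrep reports translations in mm and rotations in radians):

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, figsize=(10, 5), sharex=True)
reg_no_interest[['trans_x','trans_y','trans_z']].plot(ax=axes[0], title='Translations (mm)')
reg_no_interest[['rot_x','rot_y','rot_z']].plot(ax=axes[1], title='Rotations (rad)')
axes[1].set_xlabel('scan number')
plt.tight_layout()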

This looks good! Now that the confounds are taken care of, it's time to import nilearn's FirstLevelModel class. This class creates the design matrix from the information provided by the events file and fits the model.

The class takes a number of parameters.

from nilearn.glm.first_level import FirstLevelModel
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/nilearn/glm/__init__.py:56: FutureWarning: The nilearn.glm module is experimental. It may change in any future release of Nilearn.
  'It may change in any future release of Nilearn.', FutureWarning)

For the purposes of this course, we will mostly stick to the default settings. Only t_r (the repetition time of the acquisitions) and the hrf_model (the hemodynamic response model) need to be adapted.

We get the repetition time from the respective .json file!

import json
with open('sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.json') as json_file:
    meta_file = json.load(json_file)
meta_file  
{'DelayTime': 0.09749999999999992,
 'RepetitionTime': 2.0,
 'SkullStripped': False,
 'SliceTimingCorrected': True,
 'StartTime': 0.951,
 'TaskName': 'imagery'}
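
One aside: the metadata shows SliceTimingCorrected: True with a StartTime of 0.951 s. For slice-time-corrected data it is sometimes recommended to align the model to that reference via slice_time_ref = StartTime / RepetitionTime. We keep nilearn's default of 0.0 below, but for completeness this is how such a value would be derived (an assumption about the convention, not used further):

slice_time_ref_alt = meta_file['StartTime'] / meta_file['RepetitionTime']
print(slice_time_ref_alt)  # 0.4755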

The t_r is 2.0 seconds. With this parameter we are almost good to go; the HRF model is set to 'spm', so we have this information as well.

fmri_glm = FirstLevelModel(t_r = 2.0, hrf_model = 'spm',
                           slice_time_ref=0.0, 
                           drift_model='cosine',
                           high_pass=.01, 
                           noise_model='ar1',
                           minimize_memory = False)

As mentioned before, we need the events.tsv file for the FirstLevelModel and thus the design matrix. Let's load this file so we can continue.

events=pd.read_csv('/home/jpauli/ds001506/sub-01/ses-imagery01/func/sub-01_ses-imagery01_task-imagery_run-01_events.tsv',sep='\t')
events
onset duration trial_no event_type category_id category_name category_index response_time evaluation
0 0.0 32.0 1.0 -1 NaN NaN NaN NaN NaN
1 32.0 4.0 2.0 1 1976957.0 n01976957 7.0 44.967050 4.0
2 36.0 8.0 2.0 2 1976957.0 n01976957 7.0 44.967050 4.0
3 44.0 3.0 2.0 3 1976957.0 n01976957 7.0 44.967050 4.0
4 47.0 1.0 2.0 4 1976957.0 n01976957 7.0 44.967050 4.0
... ... ... ... ... ... ... ... ... ...
101 432.0 4.0 27.0 1 1943899.0 n01943899 6.0 445.498226 2.0
102 436.0 8.0 27.0 2 1943899.0 n01943899 6.0 445.498226 2.0
103 444.0 3.0 27.0 3 1943899.0 n01943899 6.0 445.498226 2.0
104 447.0 1.0 27.0 4 1943899.0 n01943899 6.0 445.498226 2.0
105 448.0 6.0 28.0 -2 NaN NaN NaN NaN NaN

106 rows × 9 columns

From this file, we need to save the onset, the duration and the trial_type. However, we do not have a trial_type column yet. This column should tell us which kind of trial we are dealing with at a given timestamp. This information is already given in the event_type column, but we need to translate the integers into informative strings. Also, we cannot simply assign every imagery trial the same value: we want to differentiate between the different stimuli. So before extracting the onset and duration columns, we will first create the trial_type column and then proceed. This column indicates when rest, cue presentation, imagery evaluation, or inter-trial and post-rest periods happened, and it tells us which exact stimulus was imagined.

trial_type = ['ABC']*len(events['event_type'])
category = events['category_id']
for idx, x in enumerate(events['event_type']):
    if x == -1:
        trial_type[idx] = 'Rest'
    if x == 1:
        trial_type[idx] = 'Cue presentation'
    if x == 2:
        trial_type[idx] = 'imagery' + str(category[idx])
    if x == 3:
        trial_type[idx] = 'Imagery evaluation'
    if x == 4 or x == -2:
        trial_type[idx] = 'inter-trial and post rest period'

event_sub_ses_run_01 = events[['onset','duration']].copy()  # .copy() avoids the SettingWithCopyWarning
event_sub_ses_run_01['trial_type'] = trial_type
event_sub_ses_run_01
onset duration trial_type
0 0.0 32.0 Rest
1 32.0 4.0 Cue presentation
2 36.0 8.0 imagery1976957.0
3 44.0 3.0 Imagery evaluation
4 47.0 1.0 inter-trial and post rest period
... ... ... ...
101 432.0 4.0 Cue presentation
102 436.0 8.0 imagery1943899.0
103 444.0 3.0 Imagery evaluation
104 447.0 1.0 inter-trial and post rest period
105 448.0 6.0 inter-trial and post rest period

106 rows × 3 columns
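
A quick sanity check using the numbers visible above: the final event starts at 448 s and lasts 6 s, so the run ends at 454 s, which corresponds exactly to 227 scans at a TR of 2.0 s (we will meet the number 227 again when building design matrices by hand):

run_end = event_sub_ses_run_01['onset'].iloc[-1] + event_sub_ses_run_01['duration'].iloc[-1]
print(run_end, run_end / 2.0)  # 454.0 seconds -> 227.0 scans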

The last thing we need is the fMRI image itself. We load the functional image and additionally apply nilearn's mean_img function to it; the resulting mean image will serve as a background for plotting.

path_func_img = '/mnt/c/Users/janos/git/sessions_new/sub01-ses-01-imagery' 
fmri_img = os.path.join(path_func_img ,'sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.nii.gz')
from nilearn.image import mean_img 
from nilearn.plotting import plot_stat_map, plot_anat, plot_img, show, plot_glass_brain
mean_img_ = mean_img(fmri_img)
plot_img(mean_img_)
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f02b0e9b908>
../_images/6685ea4bc9565de9c87a387677b7a5caae6bf32ee59ae258681f29eef570e198.png

1.0 Running the model#

Now it is time to run our defined first-level model. We fit it on our fMRI image together with our event and confound files. This results in a design matrix: each column holds the expected response of one regressor at every scan number. We have a repetition time of 2 seconds and the whole run took about 450 seconds (see: Dataset_Exploration.ipynb), which is why we have 227 scan numbers on our y-axis.

fmri_glm = fmri_glm.fit(fmri_img, event_sub_ses_run_01, reg_no_interest)
import matplotlib.pyplot as plt
%matplotlib inline
design_matrix = fmri_glm.design_matrices_[0]
from nilearn.plotting import plot_design_matrix
plot_design_matrix(design_matrix)
<AxesSubplot:label='conditions', ylabel='scan number'>
../_images/5aa4f378352f26d46a0202a0de401ccc409089c812cf52ad09ade1bce8d60530.png
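
It can also be instructive to peek at the column names of the design matrix; their ordering matters when we build contrast vectors by hand in section 2.0 below (the condition regressors come first, followed by the confound and drift regressors plus a constant):

print(design_matrix.columns.tolist()[:6])
print(design_matrix.shape)  # (227, 47)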

We can also plot the expected response for a given category. This tells us at which times a BOLD response is expected for the respective category.

plt.plot(design_matrix['imagery1443537.0'])
plt.xlabel('time')
plt.title('Expected Response for category 1443537.0')
Text(0.5, 1.0, 'Expected Response for category 1443537.0')
../_images/242404b926dd9c01cbd3e4fcfa336ccfd76d31a35447f38f16ed0320d51206e7.png
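
Under the hood, each of these columns is the condition's onsets and durations convolved with the HRF. A minimal sketch of this, using nilearn's compute_regressor and assuming the column name 'imagery1443537.0' from above:

import numpy as np
from nilearn.glm.first_level import compute_regressor

# events of one imagery condition: onsets, durations, amplitudes
cond = event_sub_ses_run_01[event_sub_ses_run_01['trial_type'] == 'imagery1443537.0']
exp_condition = np.vstack((cond['onset'], cond['duration'], np.ones(len(cond))))

frame_times_ = np.arange(227) * 2.0  # 227 scans, TR = 2.0 s
signal, names = compute_regressor(exp_condition, 'spm', frame_times_)
plt.plot(frame_times_, signal)
plt.xlabel('time')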

2.0 Detection of significant voxels#

In order to test the estimated betas of the GLM, we need to define contrasts for all conditions. A contrast weights the design matrix columns so that the associated statistics can be computed.

This requires some data wrangling. First, we need to extract all unique category values from our event file, so that we are left with exactly the stimuli presented in the paradigm. Beforehand, we have to sort the values, because our design matrix follows an ascending order. Then we remove the one NaN from this list. After that, we create the variable conditions: a list in which each condition name carries the prefix “active”, so we know we are dealing with an activated brain region.

Then an array of the length of all regressors is created, and for each of our regressors of interest, i.e. for all imagery stimuli, a weight is assigned. One thing is very important here: our design matrix tells us that the first three regressors are NOT the imagery conditions but rather the cue presentation, rest and imagery evaluation. Because of this, the first weight has to go to the fourth entry of the array (index 3). The result is combined into a dictionary, so it can serve as an input to the nilearn function plot_contrast_matrix.

import math
import numpy as np

events.sort_values(by=['category_id'], inplace=True, ascending=True)
con = events.category_id.unique().tolist()
con_no_NaN = [item for item in con if not math.isnan(item)]

conditions = ['active -' + str(x) for x in con_no_NaN]

# one one-hot contrast vector per imagery condition; the imagery
# regressors start at column index 3 of the design matrix
arr = []
for i in range(3, len(conditions) + 3):
    vec = np.zeros(design_matrix.shape[1], dtype=int)
    vec[i] = 1
    arr.append(vec)

condition = dict(zip(conditions, arr))
condition
{'active -1443537.0': array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -1621127.0': array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -1677366.0': array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -1846331.0': array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -1858441.0': array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -1943899.0': array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -1976957.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2071294.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2128385.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2139199.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2190790.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2274259.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2416519.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2437136.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2437971.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2690373.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2797295.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2824058.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2882301.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2916179.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2950256.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -2951358.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -3064758.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -3122295.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -3124170.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0]),
 'active -3237416.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0])}

After finishing the data wrangling, it is finally time to compute the contrasts and plot the coefficients.

For now, we will plot and compute the contrast matrix for the condition 'active -1677366.0'.

from nilearn.plotting import plot_contrast_matrix
plot_contrast_matrix(condition['active -1677366.0'], design_matrix=design_matrix)
<AxesSubplot:label='conditions'>
../_images/02fec068624d4c2df0bdd5eef22ad9c4cd9b47b9fae159e640960b759b66191b.png
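
For a single condition like this one, the same contrast vector can also be built more directly by looking up the column index by name (assuming the 'imagery<id>.0' naming of the design matrix columns seen above):

import numpy as np

col = list(design_matrix.columns).index('imagery1677366.0')
contrast_vec = np.zeros(design_matrix.shape[1])
contrast_vec[col] = 1
# contrast_vec should equal condition['active -1677366.0']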

We can also compute the effect size for the contrast. This alone is not enough, because the raw BOLD effect carries no statistical guarantee: the associated variance is not taken into account. We therefore compute a t-test and z-transform the values, so that under the null hypothesis they have mean 0 and variance 1 across voxels.

eff_map = fmri_glm.compute_contrast(condition['active -1677366.0'],
                                    output_type='effect_size')
z_map = fmri_glm.compute_contrast(condition['active -1677366.0'],
                                  output_type='z_score')

Combined with the condition labels, the z-maps will later serve as the input for nilearn's decoding pipeline.

It's time to plot the z-map on top of the fMRI image. There are several approaches to defining statistical thresholds. We will control the false discovery rate, i.e. the proportion of false discoveries among all detections.

from nilearn.glm.thresholding import threshold_stats_img
_, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
print('False Discovery rate = 0.05 threshold: %.3f' % threshold)
plot_stat_map(z_map, bg_img=mean_img_, threshold=threshold,
              display_mode='ortho', cut_coords=None, black_bg=True,
              title='active -1677366.0 (fdr=0.05)')
False Discovery rate = 0.05 threshold: 2.599
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f02ac138ac8>
../_images/806cf472b4a8ffc386db229cc30769dfbd7720d66f0995acaa614cabfb5ebee5.png
plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False,
                 title='active -1677366.0(fdr=0.05)')
<nilearn.plotting.displays._projectors.OrthoProjector at 0x7f02a57be908>
../_images/f242e95dadae9b2a0e401d1efcc36f5216eb5306fb6a0595862d1556d77dfa55.png

3.0 Performing the GLM for all runs#

Now that we have demonstrated the process for a single run, we want to execute the GLM for all runs we have. We start by collecting all functional images in one list.

fmri_img_all_runs = []

# sorted() guarantees that the runs line up with the confound and event
# lists collected below, regardless of the os.listdir() ordering
for file in sorted(os.listdir()):
    if file.endswith(".gz"):
        fmri_img_all_runs.append(file)
fmri_img_all_runs
['sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_bold.nii.gz']

Compute the mean_img for every image we just saved in the fmri_img_all_runs variable.

mean_img_all = []
for img in fmri_img_all_runs:
    mean_img_all.append(mean_img(img))

Save all confounds

confounds_all_runs = []

for file in sorted(os.listdir()):
    if file.endswith(".tsv"):
        confounds_all_runs.append(pd.read_csv(file, sep='\t'))
os.listdir()
['sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.json',
 'sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_confounds.tsv',
 'sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_bold.json',
 'sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_confounds.tsv',
 'sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_bold.json',
 'sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_confounds.tsv',
 'sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_bold.json',
 'sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_confounds.tsv',
 'sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_bold.json',
 'sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_confounds.tsv',
 'z_maps']

And then extract all regressors of no interest!

reg_no_interest_all = []

for x in confounds_all_runs:
    reg_no_interest_all.append(x[['global_signal', 'trans_x','trans_z','trans_y','rot_x','rot_y','rot_z']].copy())

Load all event files

os.chdir('/home/jpauli/ds001506/sub-01/ses-imagery01/func')
events_all_runs = []

for file in sorted(os.listdir()):
    if file.endswith(".tsv"):
        events_all_runs.append(pd.read_csv(file, sep='\t'))

Once again we have to get the trial type, i.e. the stimuli, separated from the event files. For this purpose I created the trialtype function. After that, we extract onset and duration and create a new list called regressors_of_interest, which contains the onset, duration and trial type of our regressors.

def trialtype(event):
    trial_type = ['SPACE']*len(event['event_type'])
    category = event['category_id']
    for idx, x in enumerate(event['event_type']):
        if x == -1:
            trial_type[idx] = 'Rest'
        if x == 1:
            trial_type[idx] = 'Cue presentation'
        if x == 2:
            trial_type[idx] = 'imagery' + str(category[idx])
        if x == 3:
            trial_type[idx] = 'Imagery evaluation'
        if x == 4 or x == -2:
            trial_type[idx] = 'inter-trial and post rest period'
    return trial_type
stimuli_all = [trialtype(ev) for ev in events_all_runs]

regressors_of_interest = []
for i, ev in enumerate(events_all_runs):
    run_events = ev[['onset','duration']].copy()  # .copy() avoids the SettingWithCopyWarning
    run_events['trial_type'] = stimuli_all[i]
    regressors_of_interest.append(run_events)

To proceed, it is necessary to define the design matrices for the respective runs. This is done with the make_first_level_design_matrix function from the nilearn.glm.first_level module.

First, we need to define some parameters: the repetition time (T_R), taken from the bold.json file, and the frame times, i.e. the acquisition onsets of the scans, which are simply the scan indices multiplied by the repetition time. The number of scans is the duration of the whole run divided by T_R.

from numpy import arange
T_R = 2.0
n_scans = 227 
frame_times = arange(n_scans)*T_R
from nilearn.glm.first_level import make_first_level_design_matrix

Then, we just loop through our predefined variables and create a design matrix for each run.

hrf_model = 'glover'
runs = [0, 1, 2, 3, 4]
X = []
for run in runs:
    X.append(make_first_level_design_matrix(
        frame_times, regressors_of_interest[run],
        drift_model='polynomial', drift_order=3,
        add_regs=reg_no_interest_all[run],
        add_reg_names=None, hrf_model=hrf_model))

The next step is to create our first-level model and fit it on the respective fMRI images and the design matrices we just calculated. Note that since we pass precomputed design matrices to fit, those are used directly.

glm = FirstLevelModel(t_r = T_R, hrf_model = 'spm',
                           slice_time_ref=0.0, 
                           drift_model='cosine',
                           high_pass=.01, 
                           noise_model='ar1',
                           minimize_memory = False)
os.chdir('/mnt/c/Users/janos/git/sub01-ses-01-imagery')
glm = glm.fit(fmri_img_all_runs, design_matrices=X)

Now we need to compute the contrasts. Since the stimuli are the same for every run, we can just reuse the conditions list from above.

CAVE:

In the first example of running the GLM for only one run, the design matrix contained 47 regressors, because the cosine drift model added the variables drift_1 to drift_9. The per-run design matrices above use a polynomial drift model of order 3 and therefore only contain drift_1 to drift_3. That's why the contrast dictionary from run one needs to be adapted to the new width. A quick check of the two widths follows after the next code cell.
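
print(design_matrix.shape[1])                     # 47 columns (cosine drift model)
print(X[0].shape[1])                              # 41 columns (polynomial drift, order 3)
print([c for c in X[0].columns if 'drift' in c])  # drift_1 ... drift_3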

import numpy as np

arr = []
for i in range(3, len(conditions) + 3):
    vec = np.zeros(X[0].shape[1], dtype=int)
    vec[i] = 1
    arr.append(vec)

condition = dict(zip(conditions, arr))
condition
{'active -1443537.0': array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -1621127.0': array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -1677366.0': array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -1846331.0': array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -1858441.0': array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -1943899.0': array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -1976957.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2071294.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2128385.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2139199.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2190790.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2274259.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2416519.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2437136.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2437971.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2690373.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2797295.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2824058.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2882301.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2916179.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2950256.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -2951358.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -3064758.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -3122295.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -3124170.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
 'active -3237416.0': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])}
from nilearn import plotting
from os import path
write_dir = '/mnt/c/Users/janos/git/sub01-ses-01-imagery/z_maps'
os.makedirs(write_dir, exist_ok=True)  # exist_ok: the z_maps/ folder already shows up in the listing above
print('Computing contrasts...')
for index, (contrast_id, contrast_val) in enumerate(condition.items()):
    print('  Contrast % 2i out of %i: %s' % (
        index + 1, len(condition), contrast_id))
    # Estimate the contrasts. Note that the model implicitly computes a fixed
    # effect across the five runs
    z_map = glm.compute_contrast(
        contrast_val, output_type='z_score')

    # write the resulting stat images to file
    z_image_path = path.join(write_dir, '%s_z_map.nii.gz' % contrast_id)
    z_map.to_filename(z_image_path)
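
Optionally, we can read one of the saved z-maps back to verify the files are intact (the filename follows the pattern used in the loop above):

import nibabel as nib
img = nib.load(path.join(write_dir, 'active -1443537.0_z_map.nii.gz'))
print(img.shape)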

4.0 Quality check#

4.1 Model evaluation#

Since we have now calculated all contrasts, it is time to check the quality of the GLM we produced and applied. Let's start with the model evaluation: we evaluate the model by inspecting R-squared.

First, an anatomical reference image is plotted. To check that everything was modelled correctly, we expect only the voxels within grey matter areas to respond to the general linear model, leading to increased R-squared values there.
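
As a reminder of what is being plotted (a toy illustration with synthetic data, analogous to the sketch in the introduction): R-squared is the fraction of a voxel's variance that is explained by the model.

import numpy as np

rng = np.random.default_rng(0)
X_toy = rng.standard_normal((227, 3))                 # toy design matrix
y_toy = X_toy @ np.array([1.0, 0.5, 0.0]) + rng.standard_normal(227)

beta, *_ = np.linalg.lstsq(X_toy, y_toy, rcond=None)  # OLS fit
y_hat = X_toy @ beta
r_square = 1 - ((y_toy - y_hat) ** 2).sum() / ((y_toy - y_toy.mean()) ** 2).sum()
print(r_square)
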
path_ref_img = '/home/jpauli/ds001506/sub-01/ses-anatomy/anat'
coordinates_anat = (-30,-30,16)
ref_anat_T1 = os.path.join(path_ref_img ,'sub-01_ses-anatomy_T1w.nii.gz')
plot_anat(ref_anat_T1, display_mode='ortho', cut_coords = coordinates_anat)
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f02956af828>
../_images/cb16f145d688bb2100354983f5beae50739d0b4d2f0bee21b4b9553083b31368.png
from nilearn import plotting
coordinates_func = (-20,-13,35)
plotting.plot_stat_map(glm.r_square[0], bg_img=mean_img_all[0], threshold=.1,
                       display_mode='ortho', cut_coords=coordinates_func, cmap='magma')
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f8a910e3e10>
../_images/a661910d0c806858eff62d287fac5f520f65c9b4365fcc5234126709f0c81407.png
plotting.plot_stat_map(glm.r_square[1], bg_img=mean_img_all[1], threshold=.1,
                       display_mode='ortho', cut_coords=coordinates_func, cmap='magma')
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f8a8765ef60>
../_images/4992059eae0461eb1aef69b535b84981ae53b2795abc80d8dd923dad6946d04c.png
plotting.plot_stat_map(glm.r_square[2], bg_img=mean_img_all[2], threshold=.1,
                       display_mode='ortho', cut_coords=coordinates_func, cmap='magma')
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f8a64aeb9e8>
../_images/7498b89bf79cec2b04e7fdd431cf2a2d95735fdd162fc9d00efa4bb75ebad702.png
plotting.plot_stat_map(glm.r_square[3], bg_img=mean_img_all[3], threshold=.1,
                       display_mode='ortho', cut_coords=coordinates_func, cmap='magma')
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f8a1aa6ec88>
../_images/e3a983214e3c8370e8c703bd1dde786503d2e61c0b526c67f8a8c0caf3602898.png
plotting.plot_stat_map(glm.r_square[4], bg_img=mean_img_all[4], threshold=.1,
                       display_mode='ortho', cut_coords=coordinates_func, cmap='magma')
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f8a16b30f98>
../_images/7a831d0a986152af212687b6ded06c58ca29e2e83f1a3f90e8b2c3ac095a8dee.png

We can observe the pattern that R-squared is relatively low in brain areas that mostly contain white matter. We want to inspect neural activity and are thus interested in grey matter areas. Indeed, mainly the voxels within grey matter respond to the general linear model, i.e. show higher R-squared values. This can be seen in all 5 plots, meaning our GLM did exactly what it needed to do.

Thus, we can continue with the quality check.

4.2 Comparing the results with literature#

The next step is to check whether the results of all contrasts across runs are consistent with the literature on mental imagery. In other words: can a similar activation pattern be seen in the z-map plots when compared to activation patterns based on other studies and meta-analyses of mental imagery? To investigate this, I will plot the contrasts and then compare them to the activation maps on neurosynth.

os.chdir('/mnt/c/Users/janos/git/sub01-ses-01-imagery/z_maps')
for file in os.listdir():
    _, threshold = threshold_stats_img(file, alpha=.05, height_control='fdr')
    plot_stat_map(file, bg_img=mean_img_, threshold=threshold,
            display_mode='ortho', cut_coords=coordinates_func, black_bg=True,
            title='{activation} (fdr=0.05)'.format(activation = file))
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/nilearn/plotting/displays/_slicers.py:145: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  facecolor=facecolor)
../_images/88a597b47ec437ccee328d2aa623540e79ee1b502286114e9e22e9cb09421bd1.png ../_images/348d803cb9761816d30f5f3d0526e47829f6f0f728c9f3f2465c27cc0a446a73.png ../_images/72af8b1ec6f79cfb066d6642443d5ba267acce00e4a58ca4f0fc1f66e7dcc4e2.png ../_images/d5343b9589500e6e1faec5b70c1faa3a8c4c2ed5f67f871edca168f9bf4a8bb7.png ../_images/5d1e73b3a4554269ca56bf0beb1183e1779e3918763041c5c9f5725d5a18673a.png ../_images/1c65e6f5d8f62d333c6b16bcc07a6efc60424d2e5722282d7c3594e6e9c4ee8a.png ../_images/89438c8442c1df1620b90d859cbda47261a77ab285a6692a9bdf177a931711dc.png ../_images/80fabe4247b95cdc4445aa6ff5d6c69a83f4fb0a85bccaee152f8c87aff0e350.png ../_images/dcc61d9bba9c1e62a3dd56ad56b76116942dd71d162b5098897008ecf2079363.png ../_images/365f45289303974efcd3a60850635cc5c6b3e36db483377d7ded317531bdad6a.png ../_images/b5497477b4a306db9565bd4d15b58dadbf877b163471ab3fcc931cda94b5dc57.png ../_images/1ecf2c8fbe48731e2a9838e896d9aeadff03fbf35e07c0819708f63622f5ec87.png ../_images/36b4a159c40d1441599401780dd2dcc61f3a27cc699b1f83400752ffc7460797.png ../_images/c1ff53a6ff1455611b2a124b04b3ac9fb692c22f1a7c88eff22ce1d9188f0c1b.png ../_images/d3c520c86f0e40373039ea466d0aed1901df356bfff11c2dbb10634b7cc1f026.png ../_images/fa0add7608246dc598a2e7fbe5a1487ae46c4da519b755fd890a6f3452b5f470.png ../_images/ee73d527b1f6b357124fb26b361a4d686bbc257226517bbc19f21293f6fe5c58.png ../_images/ced075aee646bd2b63d56aa98da08d5c889b6f896234c78e08b1c32bb7dd5e1a.png ../_images/cde76736fcfb81a91066b082a0590614ec7fc09768720ed360016ff5919582fa.png ../_images/668608752272c9d1f4e50d56e002c16f3dd14b5a2cc7d2518419371f7b438d2e.png ../_images/823e2e5fbb67e46f864073d79c6356682cbdcd72bfd2f2ebef8c19aaa8b6156b.png ../_images/ef743534841e5a3f823b1c591b293a9d28c5a05dd21a69137634dfdd0545a0ae.png ../_images/a987372da772b83719219dea1590b0a285876515492eb46dc73d77aa230fb0af.png ../_images/226ad23325410faa5c20ef7da7f5958fb2663631d1d0eadba5913025e76aa5f7.png ../_images/a53df649d8eed1690f0e593e48a9e6420af5a2bdf5092d02bb383355b398faec.png ../_images/a8f08f173a505786bfaef9346b0c2788a6722c4c7e626dd26d06f147fd8eaea4.png

neurosynth

When looking at all 26 plots and comparing them to the one from neurosynth, we see some similarities in the activity patterns. Especially in the occipital lobe (see the posterior portion of the x plot) we observe significant activation in our plots.

Since the activation patterns from the plotted z-maps seem to be in accordance with the results from neurosynth, we can conclude that the results are as expected and match prior research.

4.3 Plotting design matrices#

For the last step, we should make sure that the design matrices show the same pattern of stimuli onset as the event files.

for run in runs:
    design_matrix = glm.design_matrices_[run]
    plot_design_matrix(design_matrix)
../_images/db3dbb06747fadd72a6d0f945a3a9e6c41b542b1944eb99378448e4d36dda1ad.png ../_images/fbf1645c8c9a4292d004cc9a7daa33c908e9ebe6b7a5a6a2e39f6d5de1fbd9c7.png ../_images/41d795e81c0263153b33c380b27f711a10651e294d0c42e73ebb31fbafe4ebf3.png ../_images/6de54b56355c447b35e2ae5b6ae109493d7ef373f75df512383044383d9be564.png ../_images/10f728c7fc55489b2427aa3cdfdf7139235c85bba6e4dfc244cfe3d7bdefa52c.png
import seaborn as sns
for run in runs:
    # sort values, because the values are also sorted in the design matrix
    events_all_runs[run].sort_values(by=['category_id'], inplace=True, ascending=True)
    categories = events_all_runs[run]['category_id']
    categories_no_nan = categories.dropna()
    cat_string = [None]*len(categories)

    for idx, cat in enumerate(categories_no_nan):
        cat_string[idx] = str(cat)
    events_all_runs[run]['stimulus'] = cat_string

    plt.figure(figsize=(10,10))
    g = sns.scatterplot(data=events_all_runs[run], x='stimulus', y='onset')
    sns.despine(left=True)
    g.invert_yaxis()
    g.xaxis.tick_top()
    g.xaxis.set_label_position('top')
    plt.xticks(rotation=45);
    plt.title('Imagery session sub01-ses01-run{number}'.format(number = run+1));
../_images/044e3b11ef0f8c333722b7bcd9e51f919849aa61cc632547e3cb5f4a445ada64.png ../_images/f16fef22a4d92d087868af86940e8095902a63c48f9fde00eb5d7fa323eae62b.png ../_images/359269508f74d91b21f42074e7b3a2efb6148c8df7d6ce65f2014dc674bb22de.png ../_images/2b17fbe2ad2e23638f87c5d9597e39ac78619b039e740e5cfd0b8f34db8ef68d.png ../_images/262e5c469890dd899ef3e01ea98209734f05ff4be0239b491ed9ab41aae0caf2.png

Since the patterns in the design matrices match those of the respective event files, we can now proceed with the other sessions.

5.0 Calculating z-maps per run for all sessions#

Since we have the data for three more sessions, we will apply the contrast computation per run for each session. This way we can split the resulting z-maps into training and testing data, i.e. cross-validate our model.

We will simply repeat all the steps from section 3.0, so I won't provide a detailed explanation of the individual steps. The code is merely adapted a bit to iterate over all session folders and apply the same steps from section 3.0 to each folder.

Please note that the functional files have been reprocessed: they are now in the respective subject's anatomical space.

A further important notice: we will calculate the GLM PER RUN and not across runs. This way we end up with more z-maps, i.e. more data for the machine learning algorithm.

import os
os.chdir("/mnt/c/Users/janos/git/Sessions_new")
func_images = []

for session in ["01","02","03","04"]:
    for run in ["01","02","03","04","05"]:
        func_images.append('sub-01_ses-imagery{}_task-imagery_run-{}_desc-preproc_bold.nii.gz'.format(session,run))

func_images
['sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery02_task-imagery_run-01_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery02_task-imagery_run-02_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery02_task-imagery_run-03_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery02_task-imagery_run-04_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery02_task-imagery_run-05_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery03_task-imagery_run-01_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery03_task-imagery_run-02_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery03_task-imagery_run-03_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery03_task-imagery_run-04_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery03_task-imagery_run-05_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery04_task-imagery_run-01_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery04_task-imagery_run-02_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery04_task-imagery_run-03_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery04_task-imagery_run-04_desc-preproc_bold.nii.gz',
 'sub-01_ses-imagery04_task-imagery_run-05_desc-preproc_bold.nii.gz']
os.mkdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_1_perrun')
os.mkdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_2_perrun')
os.mkdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_3_perrun')
os.mkdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_4_perrun')

ses_1 = '/mnt/c/Users/janos/git/Sessions_new/z_maps_1_perrun'
ses_2 = '/mnt/c/Users/janos/git/Sessions_new/z_maps_2_perrun'
ses_3 = '/mnt/c/Users/janos/git/Sessions_new/z_maps_3_perrun'
ses_4 = '/mnt/c/Users/janos/git/Sessions_new/z_maps_4_perrun'
glm = FirstLevelModel(t_r = T_R, hrf_model = 'spm',
                      slice_time_ref=0.0,
                      drift_model='cosine',
                      high_pass=.01,
                      noise_model='ar1',
                      minimize_memory = False)
X = []
hrf_model = 'glover'
run_num = 0
ses_dirs = {"01": ses_1, "02": ses_2, "03": ses_3, "04": ses_4}

for session in ["01","02","03","04"]:
    for run in ["01","02","03","04","05"]:
        confounds__ = pd.read_table('/mnt/c/Users/janos/git/sessions_new/sub01-ses-{}-imagery/sub-01_ses-imagery{}_task-imagery_run-{}_desc-preproc_confounds.tsv'.format(session,session,run))
        reg_no_interest__ = confounds__[['global_signal', 'trans_x','trans_z','trans_y','rot_x','rot_y','rot_z']].copy()
        events_all_ = pd.read_table('/home/jpauli/ds001506/sub-01/ses-imagery{}/func/sub-01_ses-imagery{}_task-imagery_run-{}_events.tsv'.format(session,session,run))
        stimuli_all_ = trialtype(events_all_)
        regressors_of_interest_all_ = events_all_[['onset','duration']].copy()  # .copy() avoids the SettingWithCopyWarning
        regressors_of_interest_all_['trial_type'] = stimuli_all_
        X.append(make_first_level_design_matrix(
            frame_times, regressors_of_interest_all_,
            drift_model='polynomial', drift_order=3,
            add_regs=reg_no_interest__, add_reg_names=None, hrf_model=hrf_model))
        os.chdir("/mnt/c/Users/janos/git/sessions_new/sub01-ses-{}-imagery".format(session))
        glm_ses = glm.fit(func_images[run_num], design_matrices = X[run_num])
        print(func_images[run_num])
        run_num = run_num + 1

        # compute and save all contrast z-maps for this run
        for index, (contrast_id, contrast_val) in enumerate(condition.items()):
            z_maps_new = glm_ses.compute_contrast(contrast_val, output_type='z_score')
            z_image_path = path.join(ses_dirs[session], '%s_%s_%s_z_map.nii.gz' % (session, contrast_id, run_num))
            z_maps_new.to_filename(z_image_path)
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery01_task-imagery_run-01_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery01_task-imagery_run-02_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery01_task-imagery_run-03_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery01_task-imagery_run-04_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery01_task-imagery_run-05_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery02_task-imagery_run-01_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery02_task-imagery_run-02_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery02_task-imagery_run-03_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery02_task-imagery_run-04_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery02_task-imagery_run-05_desc-preproc_bold.nii.gz
/home/jpauli/miniconda3/envs/neuro_ai/lib/python3.7/site-packages/ipykernel_launcher.py:13: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  del sys.path[0]
sub-01_ses-imagery03_task-imagery_run-01_desc-preproc_bold.nii.gz
sub-01_ses-imagery03_task-imagery_run-02_desc-preproc_bold.nii.gz
sub-01_ses-imagery03_task-imagery_run-03_desc-preproc_bold.nii.gz
sub-01_ses-imagery03_task-imagery_run-04_desc-preproc_bold.nii.gz
sub-01_ses-imagery03_task-imagery_run-05_desc-preproc_bold.nii.gz
sub-01_ses-imagery04_task-imagery_run-01_desc-preproc_bold.nii.gz
sub-01_ses-imagery04_task-imagery_run-02_desc-preproc_bold.nii.gz
sub-01_ses-imagery04_task-imagery_run-03_desc-preproc_bold.nii.gz
sub-01_ses-imagery04_task-imagery_run-04_desc-preproc_bold.nii.gz
sub-01_ses-imagery04_task-imagery_run-05_desc-preproc_bold.nii.gz
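As an aside: the SettingWithCopyWarning is raised by pandas because a value is assigned into a slice (a view) of the confounds DataFrame inside the loop. A minimal sketch of the usual fix, using hypothetical column names from the confounds file (take an explicit copy, then assign through .loc):

# Hypothetical illustration: copy the slice explicitly so pandas does not
# operate on a view, then assign through .loc
selected = confounds[['csf_derivative1', 'white_matter_derivative1']].copy()
selected.loc[:, 'csf_derivative1'] = selected['csf_derivative1'].fillna(0)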
Great! We have now calculated a z-map for each stimulus per run! It makes sense to check whether the voxel patterns are still comparable.

The preprocessed functional data are no longer in ‘functional subject space’ but in the subject’s anatomical space, which is important for comparability.
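To verify that the runs indeed share one space, we can compare image shapes and affines. A minimal sketch, assuming func_images is the list of per-run images used for the GLM above:

from nilearn.image import load_img
import numpy as np

# After preprocessing, every run should live on the same voxel grid (shape)
# and have the same orientation/position (affine)
ref = load_img(func_images[0])
for img in func_images[1:]:
    img = load_img(img)
    assert img.shape[:3] == ref.shape[:3], 'voxel grid differs between runs'
    assert np.allclose(img.affine, ref.affine), 'affine differs between runs'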

from nilearn.image import mean_img
from nilearn.plotting import plot_img

# Compute the temporal mean of the first run's EPI to use as a background image
os.chdir("/mnt/c/Users/janos/git/Sessions_new/sub01-ses-01-imagery")
mean_img_new = mean_img(func_images[0])
plot_img(mean_img_new)
<nilearn.plotting.displays._slicers.OrthoSlicer at 0x7f3c4af67940>
[Figure: mean functional image of the first run, orthogonal view]
Please note: the loops below use a counter variable 'c'. The per-run z-map folders contain many z-maps, and plotting every single map would exceed the computational capacity of my device. Therefore only the first 15 maps, i.e. the z-maps for the first three stimuli, are plotted.
os.chdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_1_perrun')
c = 0
for z_map in os.listdir():
    # FDR-corrected threshold (q = .05) for this z-map
    _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
    plot_stat_map(z_map, bg_img=mean_img_new, threshold=threshold,
            display_mode='ortho', cut_coords=coordinates_func, black_bg=True,
            title='{activation} (fdr=0.05)'.format(activation = z_map))
    c += 1
    if c == 15:  # only plot the first 15 maps (first three stimuli)
        break
[Figures: the first 15 FDR-thresholded z-maps from z_maps_1_perrun]
os.chdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_2_perrun')
c = 0
for z_map in os.listdir():
    _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
    # cut_coords=coordinates_func keeps the slices comparable across folders
    plot_stat_map(z_map, bg_img=mean_img_new, threshold=threshold,
            display_mode='ortho', cut_coords=coordinates_func, black_bg=True,
            title='{activation} (fdr=0.05)'.format(activation = z_map))
    c += 1
    if c == 15:
        break
[Figures: the first 15 FDR-thresholded z-maps from z_maps_2_perrun]
os.chdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_3_perrun')
c = 0
for z_map in os.listdir():
    _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
    plot_stat_map(z_map, bg_img=mean_img_new, threshold=threshold,
            display_mode='ortho', cut_coords=coordinates_func, black_bg=True,
            title='{activation} (fdr=0.05)'.format(activation = z_map))
    c += 1
    if c == 15:
        break
[Figures: the first 15 FDR-thresholded z-maps from z_maps_3_perrun]
os.chdir('/mnt/c/Users/janos/git/Sessions_new/z_maps_4_perrun')
c = 0
for z_map in os.listdir():
    _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
    plot_stat_map(z_map, bg_img=mean_img_new, threshold=threshold,
            display_mode='ortho', cut_coords=coordinates_func, black_bg=True,
            title='{activation} (fdr=0.05)'.format(activation = z_map))
    c += 1
    if c == 15:
        break
[Figures: the first 15 FDR-thresholded z-maps from z_maps_4_perrun]
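The four plotting cells above differ only in the folder name. A minimal consolidation sketch, assuming mean_img_new and coordinates_func are defined as above and the folders follow the z_maps_<n>_perrun naming:

import os
from nilearn.glm import threshold_stats_img
from nilearn.plotting import plot_stat_map

def plot_first_zmaps(folder, n_maps=15):
    # Plot the first n_maps FDR-thresholded (q = .05) z-maps in a folder
    os.chdir(folder)
    for z_map in sorted(os.listdir())[:n_maps]:  # sorted: os.listdir order is OS-dependent
        _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
        plot_stat_map(z_map, bg_img=mean_img_new, threshold=threshold,
                      display_mode='ortho', cut_coords=coordinates_func,
                      black_bg=True, title='{} (fdr=0.05)'.format(z_map))

for n in range(1, 5):
    plot_first_zmaps('/mnt/c/Users/janos/git/Sessions_new/z_maps_{}_perrun'.format(n))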

The activation patterns look reasonably comparable across runs. We can now continue with the machine-learning analysis.

VERY IMPORTANT INFORMATION:

This notebook went through several attempts. Please take a look at the respective GitHub repository folder to understand the different attempts that were made.

For example, the first idea was to run the GLM across runs rather than per run. This left too few data points, so the GLM was run per run instead. Data from subject two was also tried in addition to subject one; this led to very similar problems, which you will encounter in the following notebooks. Please understand that it is not possible to include every attempt in this Jupyter Book. If you are interested in the methods I tried, please contact me (see: Welcome page) and I will be very happy to discuss everything with you.

6.0 Logistic Regression#

Now that we have completed the GLM for all sessions, we continue with the logistic regression. Please follow along in this Jupyter Book.
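As a brief preview, here is a minimal, hypothetical sketch of the kind of classifier the next step builds. The feature matrix X (one row of voxel z-values per z-map) and the stimulus labels y are placeholders here; the real construction follows in the next notebook.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the real z-map features and labels
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))   # hypothetical: 50 z-maps x 200 voxels
y = rng.integers(0, 3, size=50)      # hypothetical: 3 stimulus categories

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())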