Save and load GLM results

This is an example of how to save and load functional near-infrared spectroscopy (fNIRS) GLM results from an analysis in MNE-NIRS. As computation can be expensive and time-consuming, it can be useful to store computed results to disk so that you can query them later, for example to remake a figure or answer a new scientific question.

For a description of the analysis itself, see the MNE-NIRS individual GLM tutorial and the MNE-NIRS group GLM tutorial. This tutorial focuses only on saving and loading the results.

The data used in this example comes from the fnirs_motor_group dataset, which is downloaded automatically below. It is a finger-tapping experiment and is briefly described below. The dataset contains 5 participants and is in BIDS format, so it already contains information about triggers, condition names, etc.

Note

This tutorial uses data stored in the BIDS format [1].

MNE-Python also allows you to process fNIRS data that is not in BIDS format: simply swap in the read_raw_* function that matches your data type. See the data importing tutorial to learn how to use your data with MNE-Python.
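
For instance, a minimal sketch of reading a non-BIDS NIRx recording directly (the path below is hypothetical; replace it with your own recording):

import mne

# Hypothetical path to a raw NIRx recording; substitute your own data.
raw_intensity = mne.io.read_raw_nirx("/path/to/nirx/recording", preload=True)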

# Authors: Robert Luke <mail@robertluke.net>
#
# License: BSD (3-clause)

# Import common libraries
from os.path import join

import pandas as pd

# Import MNE functions
from mne.preprocessing.nirs import beer_lambert_law, optical_density

# Import MNE-BIDS processing
from mne_bids import BIDSPath, get_entity_vals, read_raw_bids

from mne_nirs.datasets import fnirs_motor_group
from mne_nirs.experimental_design import make_first_level_design_matrix

# Import MNE-NIRS functions
from mne_nirs.statistics import read_glm, run_glm

Set up directories

First we will define where the raw data is stored. We will analyse a BIDS dataset; note that the BIDS specification for NIRS data was still under development when this tutorial was written, so you may need to install the development branch of MNE-BIDS.

We first define the root directory of our dataset.

root = fnirs_motor_group.data_path()
print(root)
/home/circleci/mne_data/fNIRS-motor-group

And as we are using MNE-BIDS, we can create a BIDSPath, which handles all of the path wrangling for us.

dataset = BIDSPath(root=root, task="tapping")
print(dataset.directory)
/home/circleci/mne_data/fNIRS-motor-group

For example, we can automatically query the subjects; the same call works for other entities such as tasks and sessions, as sketched after this snippet.

subjects = get_entity_vals(root, "subject")
print(subjects)
['01', '02', '03', '04', '05']
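
The same helper can query any BIDS entity; a small sketch (output omitted, as it depends on the dataset):

# Query other entities in the same way, e.g. tasks and (if present) sessions.
tasks = get_entity_vals(root, "task")
sessions = get_entity_vals(root, "session")
print(tasks, sessions)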

But for this example we will only process the first two subjects.

subjects = subjects[:2]

Define individual analysis

First we define the analysis that will be applied to each file. This is a GLM analysis as described in the individual GLM tutorial, so this example will skim over the individual-level details.

def individual_analysis(bids_path):
    raw_intensity = read_raw_bids(bids_path=bids_path, verbose=False)
    # Delete annotations labeled 15, as these just signify the experiment start and end.
    raw_intensity.annotations.delete(raw_intensity.annotations.description == "15.0")
    raw_intensity.pick(picks=range(20)).crop(200).resample(0.3)  # Reduce load
    raw_haemo = beer_lambert_law(optical_density(raw_intensity), ppf=0.1)
    design_matrix = make_first_level_design_matrix(raw_haemo)
    glm_est = run_glm(raw_haemo, design_matrix)

    return glm_est

Run analysis and save to disk

Next we loop through the selected measurements and run the individual analysis on each. We will then save the GLM results to disk as .h5 files.

for sub in subjects:
    # Create path to file based on experiment info
    data_path = dataset.update(
        subject=sub, datatype="nirs", suffix="nirs", extension=".snirf"
    )

    # Analyse data and return GLM results
    glm = individual_analysis(data_path)

    # Next we create a location to store the results.
    # In BIDS fashion we will store this in a subdirectory called derivatives.
    # And we can use the BIDSPath type from above to handle the path details.

    save_path = dataset.copy().update(
        root=join(root, "derivatives"),
        datatype="nirs",
        suffix="glm",
        extension=".h5",
        check=False,
    )
    # Ensure the folder exists, and make it if not.
    save_path.fpath.parent.mkdir(exist_ok=True, parents=True)

    # Finally we save the results to disk as an HDF5 file
    glm.save(save_path.fpath, overwrite=True)
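
As a quick sanity check (this listing is illustrative and not part of the original script), we can confirm that one results file per subject was written into the derivatives tree; the filenames follow the BIDSPath pattern used above, e.g. sub-01_task-tapping_glm.h5:

from pathlib import Path

# List the GLM result files written by the loop above.
deriv_root = Path(root) / "derivatives"
for p in sorted(deriv_root.rglob("*_glm.h5")):
    print(p.relative_to(deriv_root))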

Reload results and extract summary statistics

Next we loop through the same measurements and reload the saved results.

# Create a dataframe to store results
df = pd.DataFrame()

for sub in subjects:
    # Point to the correct subject
    save_path = save_path.update(subject=sub)

    # Read the data
    results = read_glm(save_path.fpath)

    # Extract results from data as dataframe
    individual_results = results.to_dataframe()

    # Indicate the subject ID
    individual_results["ID"] = sub

    # Append individual results to larger dataframe
    df = pd.concat([df, individual_results], ignore_index=True)

View the resulting dataframe

Finally, we can view the resulting dataframe, which contains data from all processed subjects.

df

variable      Condition    df           mse       p_value            se         t         theta  Source  Detector Chroma  Significant    ch_name  ID
0               Control  59.0  1.869524e-10  5.993412e-01  7.698598e-06 -0.528201 -4.066406e-06       1         1    hbo        False  S1_D1 hbo  01
1          Tapping/Left  59.0  1.869524e-10  6.043981e-09  7.413595e-06  6.793526  5.036445e-05       1         1    hbo         True  S1_D1 hbo  01
2         Tapping/Right  59.0  1.869524e-10  7.593816e-14  7.530295e-06  9.712051  7.313461e-05       1         1    hbo         True  S1_D1 hbo  01
3              constant  59.0  1.869524e-10  3.294925e-01  9.559640e-07 -0.983264 -9.399650e-07       1         1    hbo        False  S1_D1 hbo  01
4               drift_1  59.0  1.869524e-10  5.296594e-11  2.727251e-05  8.008358  2.184080e-04       1         1    hbo         True  S1_D1 hbo  01
...                 ...   ...           ...           ...           ...       ...           ...     ...       ...    ...          ...        ...  ..
2135            drift_5  48.0  1.877388e-11  4.550922e-01  5.775801e-06 -0.753059 -4.349520e-06       3         3    hbr        False  S3_D3 hbr  02
2136            drift_6  48.0  1.877388e-11  4.718397e-03  5.773074e-06  2.963815  1.711032e-05       3         3    hbr         True  S3_D3 hbr  02
2137            drift_7  48.0  1.877388e-11  1.512092e-03  5.779047e-06 -3.365168 -1.944746e-05       3         3    hbr         True  S3_D3 hbr  02
2138            drift_8  48.0  1.877388e-11  2.830068e-02  5.772148e-06 -2.261437 -1.305335e-05       3         3    hbr         True  S3_D3 hbr  02
2139            drift_9  48.0  1.877388e-11  1.224099e-04  5.773596e-06  4.180832  2.413844e-05       3         3    hbr         True  S3_D3 hbr  02

2140 rows × 13 columns
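
Because everything now lives in a single dataframe, the stored results can be queried without re-running any GLMs. For example, a minimal sketch using the columns shown above:

# Average beta estimates (theta) for the tapping conditions, per chromophore.
tapping = df[df["Condition"].isin(["Tapping/Left", "Tapping/Right"])]
print(tapping.groupby(["Condition", "Chroma"])["theta"].mean())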


