Save and load GLM results#
This is an example of how to save and load functional near-infrared spectroscopy (fNIRS) GLM results from analysis in MNE-NIRS. As computation can be expensive and time-consuming, it can be useful to store computed results to disk so that you can query them later, for example to remake a figure or answer a new scientific question.
For a description of the analysis in this tutorial see the MNE-NIRS individual GLM tutorial and MNE-NIRS group GLM tutorial. This tutorial will simply focus on saving and loading data.
The data used in this example is available at this location. It is a finger tapping example and is briefly described below. The dataset contains 5 participants. The example dataset is in BIDS format and therefore already contains information about triggers, condition names, etc.
Note
This tutorial uses data stored in the BIDS format [1]. MNE-Python also allows you to process fNIRS data that is not in BIDS format: simply modify the read_raw_ function to match your data type. See the data importing tutorial to learn how to use your data with MNE-Python.
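For instance, a minimal sketch of reading a single non-BIDS SNIRF file directly with MNE-Python (the file name here is hypothetical):

import mne

raw = mne.io.read_raw_snirf("my_recording.snirf")  # hypothetical file name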
# Authors: Robert Luke <mail@robertluke.net>
#
# License: BSD (3-clause)
# Import common libraries
from os.path import join
import pandas as pd
# Import MNE functions
from mne.preprocessing.nirs import beer_lambert_law, optical_density
# Import MNE-BIDS processing
from mne_bids import BIDSPath, get_entity_vals, read_raw_bids
from mne_nirs.datasets import fnirs_motor_group
from mne_nirs.experimental_design import make_first_level_design_matrix
# Import MNE-NIRS functions
from mne_nirs.statistics import read_glm, run_glm
Set up directories#
First we will define where the raw data is stored. We will analyse a BIDS dataset; note that the BIDS specification for NIRS data is still under development, so you may need to install a recent development version of MNE-BIDS.
We first define the root directory of our dataset.
root = fnirs_motor_group.data_path()
print(root)
/home/circleci/mne_data/fNIRS-motor-group
As we are using MNE-BIDS we can create a BIDSPath, which helps to handle all the path wrangling.
dataset = BIDSPath(root=root, task="tapping")
print(dataset.directory)
/home/circleci/mne_data/fNIRS-motor-group
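A BIDSPath can also be updated with further entities to point at a specific file. A small sketch, using entity values that mirror the ones used later in this example:

example_path = dataset.copy().update(
    subject="01", datatype="nirs", suffix="nirs", extension=".snirf"
)
print(example_path.fpath)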
For example, we can automatically query the subjects, tasks, and sessions.
subjects = get_entity_vals(root, "subject")
print(subjects)
['01', '02', '03', '04', '05']
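The same query works for the other entities; for instance, the task names:

tasks = get_entity_vals(root, "task")
print(tasks)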
In this example we will process all five subjects.
Define individual analysis#
First we define the analysis that will be applied to each file. This is a GLM analysis as described in the individual GLM tutorial, so this example will skim over the individual-level details.
def individual_analysis(bids_path):
    raw_intensity = read_raw_bids(bids_path=bids_path, verbose=False)
    # Delete annotations labeled 15.0, as these just signify the experiment start and end.
    raw_intensity.annotations.delete(raw_intensity.annotations.description == "15.0")
    raw_intensity.pick(picks=range(20)).crop(200).resample(0.3)  # Reduce load
    raw_haemo = beer_lambert_law(optical_density(raw_intensity), ppf=0.1)
    design_matrix = make_first_level_design_matrix(raw_haemo)
    glm_est = run_glm(raw_haemo, design_matrix)
    return glm_est
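As an optional sanity check, you could run the function on a single recording and glance at the estimates. This sketch reuses the hypothetical example_path defined above and the to_dataframe() method of the returned results object:

example_glm = individual_analysis(example_path)
print(example_glm.to_dataframe().head())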
Run analysis and save to disk#
Next we loop through the five measurements and run the individual analysis on each. We will then save the GLM results to disk as .h5 files.
for sub in subjects:
    # Create path to file based on experiment info
    data_path = dataset.update(
        subject=sub, datatype="nirs", suffix="nirs", extension=".snirf"
    )

    # Analyse the data and return GLM results
    glm = individual_analysis(data_path)

    # Next we create a location to store the results.
    # In BIDS fashion we will store this in a subdirectory called derivatives.
    # And we can use the BIDSPath type from above to handle the path details.
    save_path = dataset.copy().update(
        root=join(root, "derivatives"),
        datatype="nirs",
        suffix="glm",
        extension=".h5",
        check=False,
    )

    # Ensure the folder exists, and make it if not.
    save_path.fpath.parent.mkdir(exist_ok=True, parents=True)

    # Finally we save the results to disk as an HDF5 file
    glm.save(save_path.fpath, overwrite=True)
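To confirm the files were written where expected, you can list the derivatives tree with plain pathlib (a minimal sketch):

from pathlib import Path

for fname in sorted(Path(root, "derivatives").rglob("*.h5")):
    print(fname)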
Reload results and extract summary statistics#
Next we loop through the five measurements and reload all the results.
# Create a dataframe to store results
df = pd.DataFrame()

for sub in subjects:
    # Point to the correct subject
    save_path = save_path.update(subject=sub)

    # Read the data
    results = read_glm(save_path.fpath)

    # Extract results from data as dataframe
    individual_results = results.to_dataframe()

    # Indicate the subject ID
    individual_results["ID"] = sub

    # Append individual results to larger dataframe
    df = pd.concat([df, individual_results], ignore_index=True)
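From here, summary statistics are a one-liner with pandas. For example, assuming the dataframe contains the Condition and theta columns produced by to_dataframe() in current MNE-NIRS versions, you could average the GLM estimates per condition across subjects:

# Mean estimate per experimental condition across all subjects
print(df.groupby("Condition")["theta"].mean())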
View the resulting dataframe#
Finally we can view the resulting dataframe, which contains data from all subjects.
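For example, a quick preview of the first few rows:

print(df.head())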