Group Level GLM Analysis#

This is an example of a group level GLM based functional near-infrared spectroscopy (fNIRS) analysis in MNE-NIRS.

Individual level analysis of this data is described in the MNE fNIRS waveform tutorial and the MNE-NIRS fNIRS GLM tutorial, so this example will skim over the individual level details and focus on the group level aspects of the analysis. Here we describe how to process multiple measurements and summarise group level effects, both as summary statistics and visually.

The data used in this example is available at this location. It is a finger tapping example and is briefly described below. The dataset contains 5 participants. The example dataset is in BIDS format and therefore already contains information about triggers, condition names, etc.

Note

This tutorial uses data stored using the BIDS format [1].

MNE-Python allows you to process fNIRS data that is not in BIDS format. Simply modify the read_raw_ function to match your data type. See the data importing tutorial to learn how to use your data with MNE-Python.
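For data that is not in BIDS format, a minimal sketch of direct reading might look like the following (the path and trigger codes are hypothetical; without BIDS metadata the condition names must be set manually):

import mne

raw_intensity = mne.io.read_raw_nirx("/path/to/recording", verbose=False)
# or, for a SNIRF file:
# raw_intensity = mne.io.read_raw_snirf("/path/to/recording.snirf")

# Name the triggers yourself, as BIDS metadata would otherwise do for you
raw_intensity.annotations.rename({"1.0": "Control", "2.0": "Tapping_Left"})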

Note

Optodes were placed over the motor cortex using the standard NIRX motor montage, but with 8 short channels added (see their web page for details). To view the sensor locations run raw_intensity.plot_sensors(). A sound was presented to indicate which hand the participant should tap. Participants tapped their thumb to their fingers for 5 s. Conditions were presented in a random order with a randomised inter-stimulus interval.

# sphinx_gallery_thumbnail_number = 2

# Authors: Robert Luke <mail@robertluke.net>
#
# License: BSD (3-clause)

# Import common libraries
import matplotlib as mpl

# Import Plotting Library
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

# Import StatsModels
import statsmodels.formula.api as smf

# Import MNE processing
from mne.preprocessing.nirs import beer_lambert_law, optical_density

# Import MNE-BIDS processing
from mne_bids import BIDSPath, get_entity_vals, read_raw_bids

from mne_nirs.channels import get_long_channels, get_short_channels, picks_pair_to_idx
from mne_nirs.datasets import fnirs_motor_group
from mne_nirs.experimental_design import make_first_level_design_matrix
from mne_nirs.io.fold import fold_channel_specificity

# Import MNE-NIRS processing
from mne_nirs.statistics import run_glm, statsmodels_to_results
from mne_nirs.visualisation import plot_glm_group_topo, plot_glm_surface_projection

Set up directories#

First we will define where the raw data is stored. We will analyse a BIDS dataset. This ensures we have all the metadata we require without manually specifying the trigger names etc. We first define where the root directory of our dataset is. In this example we use the example dataset fnirs_motor_group.

root = fnirs_motor_group.data_path()
print(root)
/home/circleci/mne_data/fNIRS-motor-group

And as we are using MNE-BIDS we can create a BIDSPath object. This class helps to handle all the path wrangling. We inform the software that we are analysing NIRS data that is saved in the SNIRF format.

dataset = BIDSPath(
    root=root, task="tapping", datatype="nirs", suffix="nirs", extension=".snirf"
)

print(dataset.directory)
/home/circleci/mne_data/fNIRS-motor-group/nirs

For example, we can automatically query the subjects, tasks, and sessions.

subjects = get_entity_vals(root, "subject")
print(subjects)
['01', '02', '03', '04', '05']
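The same function can query the other BIDS entities. As a short sketch, the following should list the task and session values for this dataset (the expected output, shown as comments, is an assumption based on the dataset description):

tasks = get_entity_vals(root, "task")
sessions = get_entity_vals(root, "session")
print(tasks)  # expected: ['tapping']
print(sessions)  # expected: [], as this dataset has no session level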

Define individual analysis#

First we define the analysis that will be applied to each file. This is a GLM analysis as described in the individual GLM tutorial, so this example will skim over the individual level details.

The analysis extracts a response estimate for each channel, each region of interest, and computes a contrast between left and right finger tapping. We return the raw object and data frames for the computed results. Information about channels, triggers and their meanings are stored in the BIDS structure and are automatically obtained when importing the data.

Here we also resample to a 0.3 Hz sample rate just to speed up the example and use less memory; resampling to 0.6 Hz is a better choice for full analyses.

Note

The nilearn library does not allow the forward slash character in condition names, so we must replace each forward slash with an underscore to ensure the GLM computation is successful. Hopefully future versions of MNE-NIRS will automatically handle these characters; see mne-tools/mne-nirs#420 for more information. In the meantime, use the following code to replace the illegal characters.

def individual_analysis(bids_path, ID):
    raw_intensity = read_raw_bids(bids_path=bids_path, verbose=False)
    # Delete annotation labeled 15, as these just signify the experiment start and end.
    raw_intensity.annotations.delete(raw_intensity.annotations.description == "15.0")
    # sanitize event names
    raw_intensity.annotations.description[:] = [
        d.replace("/", "_") for d in raw_intensity.annotations.description
    ]

    # Convert signal to haemoglobin and resample
    raw_od = optical_density(raw_intensity)
    raw_haemo = beer_lambert_law(raw_od, ppf=0.1)
    raw_haemo.resample(0.3)

    # Cut out just the short channels for creating a GLM regressor
    sht_chans = get_short_channels(raw_haemo)
    raw_haemo = get_long_channels(raw_haemo)

    # Create a design matrix
    design_matrix = make_first_level_design_matrix(raw_haemo, stim_dur=5.0)

    # Append short channels mean to design matrix
    design_matrix["ShortHbO"] = np.mean(
        sht_chans.copy().pick(picks="hbo").get_data(), axis=0
    )
    design_matrix["ShortHbR"] = np.mean(
        sht_chans.copy().pick(picks="hbr").get_data(), axis=0
    )

    # Run GLM
    glm_est = run_glm(raw_haemo, design_matrix)

    # Define channels in each region of interest
    # List the channel pairs manually
    left = [[4, 3], [1, 3], [3, 3], [1, 2], [2, 3], [1, 1]]
    right = [[8, 7], [5, 7], [7, 7], [5, 6], [6, 7], [5, 5]]
    # Then generate the correct indices for each pair
    groups = dict(
        Left_Hemisphere=picks_pair_to_idx(raw_haemo, left, on_missing="ignore"),
        Right_Hemisphere=picks_pair_to_idx(raw_haemo, right, on_missing="ignore"),
    )

    # Extract channel metrics
    cha = glm_est.to_dataframe()

    # Compute region of interest results from channel data
    roi = glm_est.to_dataframe_region_of_interest(
        groups, design_matrix.columns, demographic_info=True
    )

    # Define left vs right tapping contrast
    contrast_matrix = np.eye(design_matrix.shape[1])
    basic_conts = dict(
        [(column, contrast_matrix[i]) for i, column in enumerate(design_matrix.columns)]
    )
    contrast_LvR = basic_conts["Tapping_Left"] - basic_conts["Tapping_Right"]

    # Compute defined contrast
    contrast = glm_est.compute_contrast(contrast_LvR)
    con = contrast.to_dataframe()

    # Add the participant ID to the dataframes
    roi["ID"] = cha["ID"] = con["ID"] = ID

    # Convert to uM for nicer plotting below.
    cha["theta"] = [t * 1.0e6 for t in cha["theta"]]
    roi["theta"] = [t * 1.0e6 for t in roi["theta"]]
    con["effect"] = [t * 1.0e6 for t in con["effect"]]

    return raw_haemo, roi, cha, con

Run analysis on all participants#

Next we loop through the five measurements and run the individual analysis on each. We append the individual results into a large dataframe that will contain the results from all measurements. We create a group dataframe for the region of interest, channel level, and contrast results.

df_roi = pd.DataFrame()  # To store region of interest results
df_cha = pd.DataFrame()  # To store channel level results
df_con = pd.DataFrame()  # To store channel level contrast results

for sub in subjects:  # Loop from first to fifth subject
    # Create path to file based on experiment info
    bids_path = dataset.update(subject=sub)

    # Analyse data and return both ROI and channel results
    raw_haemo, roi, channel, con = individual_analysis(bids_path, sub)

    # Append individual results to all participants
    df_roi = pd.concat([df_roi, roi], ignore_index=True)
    df_cha = pd.concat([df_cha, channel], ignore_index=True)
    df_con = pd.concat([df_con, con], ignore_index=True)
Reading 0 ... 23238  =      0.000 ...  2974.464 secs...
Reading 0 ... 18877  =      0.000 ...  2416.256 secs...
Reading 0 ... 18874  =      0.000 ...  2415.872 secs...
Reading 0 ... 23120  =      0.000 ...  2959.360 secs...
Reading 0 ... 23006  =      0.000 ...  2944.768 secs...

Visualise Individual results#

First we visualise the results from each individual to ensure the data values look reasonable. Here we see that we have data from five participants. We plot just the HbO values and observe that they are in the expected range. We can already see that the control condition is always near zero, and that the responses look to be contralateral to the tapping hand.

grp_results = df_roi.query("Condition in ['Control', 'Tapping_Left', 'Tapping_Right']")
grp_results = grp_results.query("Chroma in ['hbo']")

sns.catplot(
    x="Condition",
    y="theta",
    col="ID",
    hue="ROI",
    data=grp_results,
    col_wrap=5,
    errorbar=None,
    palette="muted",
    height=4,
    s=10,
)
[Figure: per-participant response estimates; panels ID = 01, 02, 03, 04, 05]
<seaborn.axisgrid.FacetGrid object at 0x7ff02d275f00>

Compute group level results#

Next we use a linear mixed effects model to examine the relation between conditions and our response estimate (theta). Combinations of three fixed effects will be evaluated: ROI (left vs right), condition (control, tapping/left, tapping/right), and chromophore (HbO, HbR), with a random effect of subject. Alternatively, you could export the group dataframe (df_roi.to_csv()) and analyse it in your favorite stats program.

We do not explore the modeling procedure in depth here, as topics such as model selection and examining residuals are beyond the scope of this example (see relevant literature). Alternatively, we could use a robust linear model with roi_model = rlm('theta ~ -1 + ROI:Condition:Chroma', grp_results).fit().
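As a hedged sketch, the two alternatives mentioned above would look something like this (neither is run here; the CSV filename is hypothetical):

# Export the group dataframe for analysis in an external stats program
df_roi.to_csv("group_roi_results.csv")

# Or fit a robust linear model instead of the mixed effects model
roi_model_robust = smf.rlm("theta ~ -1 + ROI:Condition:Chroma", grp_results).fit()
print(roi_model_robust.summary())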

grp_results = df_roi.query("Condition in ['Control','Tapping_Left', 'Tapping_Right']")

roi_model = smf.mixedlm(
    "theta ~ -1 + ROI:Condition:Chroma", grp_results, groups=grp_results["ID"]
).fit(method="nm")
roi_model.summary()
/home/circleci/python_env/lib/python3.10/site-packages/statsmodels/regression/mixed_linear_model.py:2237: ConvergenceWarning: The MLE may be on the boundary of the parameter space.
  warnings.warn(msg, ConvergenceWarning)
Model: MixedLM Dependent Variable: theta
No. Observations: 60 Method: REML
No. Groups: 5 Scale: 5.3964
Min. group size: 12 Log-Likelihood: -118.2233
Max. group size: 12 Converged: Yes
Mean group size: 12.0
Coef. Std.Err. z P>|z| [0.025 0.975]
ROI[Left_Hemisphere]:Condition[Control]:Chroma[hbo] -0.242 1.039 -0.233 0.816 -2.278 1.794
ROI[Right_Hemisphere]:Condition[Control]:Chroma[hbo] -0.720 1.039 -0.693 0.489 -2.756 1.317
ROI[Left_Hemisphere]:Condition[Tapping_Left]:Chroma[hbo] 1.965 1.039 1.892 0.059 -0.071 4.002
ROI[Right_Hemisphere]:Condition[Tapping_Left]:Chroma[hbo] 7.199 1.039 6.929 0.000 5.162 9.235
ROI[Left_Hemisphere]:Condition[Tapping_Right]:Chroma[hbo] 6.678 1.039 6.428 0.000 4.641 8.714
ROI[Right_Hemisphere]:Condition[Tapping_Right]:Chroma[hbo] 2.983 1.039 2.871 0.004 0.947 5.019
ROI[Left_Hemisphere]:Condition[Control]:Chroma[hbr] 0.231 1.039 0.222 0.824 -1.805 2.267
ROI[Right_Hemisphere]:Condition[Control]:Chroma[hbr] 0.126 1.039 0.122 0.903 -1.910 2.163
ROI[Left_Hemisphere]:Condition[Tapping_Left]:Chroma[hbr] -1.441 1.039 -1.387 0.166 -3.477 0.595
ROI[Right_Hemisphere]:Condition[Tapping_Left]:Chroma[hbr] -2.754 1.039 -2.651 0.008 -4.790 -0.717
ROI[Left_Hemisphere]:Condition[Tapping_Right]:Chroma[hbr] -2.833 1.039 -2.727 0.006 -4.870 -0.797
ROI[Right_Hemisphere]:Condition[Tapping_Right]:Chroma[hbr] -1.241 1.039 -1.194 0.232 -3.277 0.795
Group Var 0.000 0.159



Second level analysis with covariates#

It is simple to extend these models to include covariates. This dataset is small, so including additional factors may not be appropriate. However, for instructional purposes, we will include a covariate of sex. There are 3 females and 2 males in this dataset. Also for instructional purposes, we modify the model above to only explore the difference between the two tapping conditions in the HbO signal in the right hemisphere.

From the model result we observe that HbO responses in the right hemisphere are smaller when the right hand was used (as expected for these contralaterally dominant responses) and that there is no significant effect of sex.

grp_results = df_roi.query("Condition in ['Tapping_Left', 'Tapping_Right']")
grp_results = grp_results.query("Chroma in ['hbo']")
grp_results = grp_results.query("ROI in ['Right_Hemisphere']")

roi_model = smf.mixedlm(
    "theta ~ Condition + Sex", grp_results, groups=grp_results["ID"]
).fit(method="nm")
roi_model.summary()
Model: MixedLM Dependent Variable: theta
No. Observations: 10 Method: REML
No. Groups: 5 Scale: 2.6753
Min. group size: 2 Log-Likelihood: -18.3513
Max. group size: 2 Converged: Yes
Mean group size: 2.0
Coef. Std.Err. z P>|z| [0.025 0.975]
Intercept 5.714 1.846 3.096 0.002 2.097 9.331
Condition[T.Tapping_Right] -4.216 1.034 -4.075 0.000 -6.243 -2.188
Sex[T.male] 3.711 2.801 1.325 0.185 -1.779 9.202
Group Var 8.079 6.219



Visualise group results#

Now we can summarise the output of the second level model. This figure shows that the control condition has small responses that are not significantly different to zero for both HbO and HbR in both hemispheres, whereas clear significant responses are shown for the two tapping conditions. We also observe that the tapping response is larger in the contralateral hemisphere. Filled symbols represent HbO, unfilled symbols represent HbR.

# Regenerate the results from the original group model above
grp_results = df_roi.query("Condition in ['Control','Tapping_Left', 'Tapping_Right']")
roi_model = smf.mixedlm(
    "theta ~ -1 + ROI:Condition:Chroma", grp_results, groups=grp_results["ID"]
).fit(method="nm")

df = statsmodels_to_results(roi_model)

sns.catplot(
    x="Condition",
    y="Coef.",
    hue="ROI",
    data=df.query("Chroma == 'hbo'"),
    errorbar=None,
    palette="muted",
    height=4,
    s=10,
)
/home/circleci/python_env/lib/python3.10/site-packages/statsmodels/regression/mixed_linear_model.py:2237: ConvergenceWarning: The MLE may be on the boundary of the parameter space.
  warnings.warn(msg, ConvergenceWarning)

<seaborn.axisgrid.FacetGrid object at 0x7fefe8506140>

Group topographic visualisation#

We can also view the topographic representation of the data (rather than the ROI summary above). Here we just plot the oxyhaemoglobin for the two tapping conditions. First we compute the mixed effects model for each channel (rather than region of interest as above). Then we pass these results to the topomap function.

fig, axes = plt.subplots(
    nrows=2, ncols=2, figsize=(10, 10), gridspec_kw=dict(width_ratios=[1, 1])
)

# Cut down the dataframe just to the conditions we are interested in
ch_summary = df_cha.query("Condition in ['Tapping_Left', 'Tapping_Right']")
ch_summary = ch_summary.query("Chroma in ['hbo']")

# Run group level model and convert to dataframe
ch_model = smf.mixedlm(
    "theta ~ -1 + ch_name:Chroma:Condition", ch_summary, groups=ch_summary["ID"]
).fit(method="nm")
ch_model_df = statsmodels_to_results(ch_model)

# Plot the two conditions
plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo"),
    ch_model_df.query("Condition in ['Tapping_Left']"),
    colorbar=False,
    axes=axes[0, 0],
    vlim=(0, 20),
    cmap=mpl.cm.Oranges,
)

plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo"),
    ch_model_df.query("Condition in ['Tapping_Right']"),
    colorbar=True,
    axes=axes[0, 1],
    vlim=(0, 20),
    cmap=mpl.cm.Oranges,
)

# Cut down the dataframe just to the conditions we are interested in
ch_summary = df_cha.query("Condition in ['Tapping_Left', 'Tapping_Right']")
ch_summary = ch_summary.query("Chroma in ['hbr']")

# Run group level model and convert to dataframe
ch_model = smf.mixedlm(
    "theta ~ -1 + ch_name:Chroma:Condition", ch_summary, groups=ch_summary["ID"]
).fit(method="nm")
ch_model_df = statsmodels_to_results(ch_model)

# Plot the two conditions
plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbr"),
    ch_model_df.query("Condition in ['Tapping_Left']"),
    colorbar=False,
    axes=axes[1, 0],
    vlim=(-10, 0),
    cmap=mpl.cm.Blues_r,
)
plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbr"),
    ch_model_df.query("Condition in ['Tapping_Right']"),
    colorbar=True,
    axes=axes[1, 1],
    vlim=(-10, 0),
    cmap=mpl.cm.Blues_r,
)
[Figure: group topomaps; top row HbO, bottom row HbR; left column Tapping_Left, right column Tapping_Right]
<Axes: title={'center': 'Tapping_Right'}>

Contrasts#

Finally we can examine the difference between the left and right hand tapping conditions by viewing the contrast results in a topographic representation.

fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
con_summary = df_con.query("Chroma in ['hbo']")

# Run group level model and convert to dataframe
con_model = smf.mixedlm(
    "effect ~ -1 + ch_name:Chroma", con_summary, groups=con_summary["ID"]
).fit(method="nm")
con_model_df = statsmodels_to_results(
    con_model, order=raw_haemo.copy().pick(picks="hbo").ch_names
)

plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo"), con_model_df, colorbar=True, axes=axes
)
[Figure: Contrast topomap]
/home/circleci/python_env/lib/python3.10/site-packages/statsmodels/regression/mixed_linear_model.py:2237: ConvergenceWarning: The MLE may be on the boundary of the parameter space.
  warnings.warn(msg, ConvergenceWarning)

<Axes: title={'center': 'Contrast'}>

Or we can view only the left hemisphere for the contrast, and set all channels that don't have a significant response to zero.

plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo").pick(picks=range(10)),
    con_model_df,
    colorbar=True,
    threshold=True,
)
[Figure: Contrast topomap, left hemisphere only]
Reducing GLM results to match MNE data

<Axes: title={'center': 'Contrast'}>

Cortical Surface Projections#

The topographic plots above can sometimes be difficult to interpret with respect to the underlying cortical locations. It is also possible to present the data by projecting the channel level GLM values to the nearest cortical surface. This can make it easier to understand the spatial aspects of your data. Note, however, that this is not a complete forward model with photon migration simulations. In the figure below we project the group results from the two conditions to the cortical surface, and also present the contrast results in the same fashion. As in the topo plots above, you can see that the activity is predominantly contralateral to the side of finger tapping.

# Generate brain figure from data
clim = dict(kind="value", pos_lims=(0, 8, 11))
brain = plot_glm_surface_projection(
    raw_haemo.copy().pick("hbo"),
    con_model_df,
    clim=clim,
    view="dorsal",
    colorbar=True,
    size=(800, 700),
)
brain.add_text(0.05, 0.95, "Left-Right", "title", font_size=16, color="k")

# Run model code as above
clim = dict(kind="value", pos_lims=(0, 11.5, 17))
for idx, cond in enumerate(["Tapping_Left", "Tapping_Right"]):
    # Run same model as explained in the sections above
    ch_summary = df_cha.query("Condition in [@cond]")
    ch_summary = ch_summary.query("Chroma in ['hbo']")
    ch_model = smf.mixedlm(
        "theta ~ -1 + ch_name", ch_summary, groups=ch_summary["ID"]
    ).fit(method="nm")
    model_df = statsmodels_to_results(
        ch_model, order=raw_haemo.copy().pick("hbo").ch_names
    )

    # Generate brain figure from data
    brain = plot_glm_surface_projection(
        raw_haemo.copy().pick("hbo"),
        model_df,
        clim=clim,
        view="dorsal",
        colorbar=True,
        size=(800, 700),
    )
    brain.add_text(0.05, 0.95, cond, "title", font_size=16, color="k")
[Figures: cortical surface projections titled Left-Right, Tapping_Left, and Tapping_Right]

Table of channel level results#

Sometimes a reviewer wants a long table of results per channel. This can be generated from the statistics dataframe.

ch_summary = df_cha.query("Condition in ['Tapping_Left', 'Tapping_Right']")
ch_summary = ch_summary.query("Chroma in ['hbo']")

# Run group level model and convert to dataframe
ch_model = smf.mixedlm(
    "theta ~ -1 + ch_name:Chroma:Condition", ch_summary, groups=ch_summary["ID"]
).fit(method="nm")

# Here we can use the order argument to ensure the channel name order
ch_model_df = statsmodels_to_results(
    ch_model, order=raw_haemo.copy().pick(picks="hbo").ch_names
)
# And make the table prettier
ch_model_df.reset_index(drop=True, inplace=True)
ch_model_df = ch_model_df.set_index(["ch_name", "Condition"])
ch_model_df
Coef. Std.Err. z P>|z| [0.025 0.975] Chroma Significant
ch_name Condition
S1_D1 hbo Tapping_Left 2.300806 2.603911 0.883596 3.769142e-01 -2.802765 7.404377 hbo False
Tapping_Right 4.832504 2.603911 1.855864 6.347295e-02 -0.271067 9.936075 hbo False
S1_D2 hbo Tapping_Left 1.441410 2.603911 0.553556 5.798828e-01 -3.662161 6.544981 hbo False
Tapping_Right 5.200520 2.603911 1.997196 4.580388e-02 0.096949 10.304091 hbo True
S1_D3 hbo Tapping_Left 2.170590 2.603911 0.833589 4.045129e-01 -2.932981 7.274161 hbo False
Tapping_Right 11.935955 2.603911 4.583857 4.564758e-06 6.832384 17.039526 hbo True
S2_D1 hbo Tapping_Left 0.310979 2.603911 0.119428 9.049367e-01 -4.792593 5.41455 hbo False
Tapping_Right 0.952344 2.603911 0.365736 7.145620e-01 -4.151227 6.055915 hbo False
S2_D3 hbo Tapping_Left 1.214184 2.603911 0.466293 6.410061e-01 -3.889387 6.317755 hbo False
Tapping_Right 5.060555 2.603911 1.943444 5.196251e-02 -0.043016 10.164126 hbo False
S2_D4 hbo Tapping_Left -0.472056 2.603911 -0.181287 8.561421e-01 -5.575627 4.631515 hbo False
Tapping_Right -0.016825 2.603911 -0.006462 9.948445e-01 -5.120396 5.086746 hbo False
S3_D2 hbo Tapping_Left 3.694683 2.603911 1.418898 1.559288e-01 -1.408888 8.798254 hbo False
Tapping_Right 7.237216 2.603911 2.779364 5.446543e-03 2.133645 12.340787 hbo True
S3_D3 hbo Tapping_Left 2.799423 2.603911 1.075084 2.823371e-01 -2.304148 7.902994 hbo False
Tapping_Right 9.676907 2.603911 3.716298 2.021635e-04 4.573336 14.780478 hbo True
S4_D3 hbo Tapping_Left 2.553132 2.603911 0.980499 3.268399e-01 -2.550439 7.656703 hbo False
Tapping_Right 7.322194 2.603911 2.811999 4.923469e-03 2.218623 12.425765 hbo True
S4_D4 hbo Tapping_Left 4.423935 2.603911 1.698958 8.932705e-02 -0.679636 9.527507 hbo False
Tapping_Right 5.354420 2.603911 2.056300 3.975365e-02 0.250849 10.457992 hbo True
S5_D5 hbo Tapping_Left 4.454096 2.603911 1.710541 8.716590e-02 -0.649475 9.557667 hbo False
Tapping_Right 2.474102 2.603911 0.950148 3.420368e-01 -2.629469 7.577673 hbo False
S5_D6 hbo Tapping_Left 2.297986 2.603911 0.882513 3.774993e-01 -2.805585 7.401557 hbo False
Tapping_Right 1.478123 2.603911 0.567655 5.702693e-01 -3.625448 6.581694 hbo False
S5_D7 hbo Tapping_Left 13.375966 2.603911 5.136876 2.793433e-07 8.272395 18.479537 hbo True
Tapping_Right 3.622637 2.603911 1.391229 1.641559e-01 -1.480934 8.726208 hbo False
S6_D5 hbo Tapping_Left 1.981211 2.603911 0.760860 4.467408e-01 -3.12236 7.084782 hbo False
Tapping_Right 1.651362 2.603911 0.634185 5.259599e-01 -3.452209 6.754933 hbo False
S6_D7 hbo Tapping_Left 9.014428 2.603911 3.461881 5.364150e-04 3.910857 14.117999 hbo True
Tapping_Right 3.793460 2.603911 1.456832 1.451628e-01 -1.310112 8.897031 hbo False
S6_D8 hbo Tapping_Left 6.993697 2.603911 2.685844 7.234689e-03 1.890126 12.097269 hbo True
Tapping_Right 4.373042 2.603911 1.679413 9.307154e-02 -0.730529 9.476613 hbo False
S7_D6 hbo Tapping_Left 6.571715 2.603911 2.523787 1.160984e-02 1.468144 11.675286 hbo True
Tapping_Right 3.420110 2.603911 1.313451 1.890310e-01 -1.683462 8.523681 hbo False
S7_D7 hbo Tapping_Left 8.158717 2.603911 3.133255 1.728789e-03 3.055146 13.262288 hbo True
Tapping_Right 3.112211 2.603911 1.195207 2.320063e-01 -1.99136 8.215782 hbo False
S8_D7 hbo Tapping_Left 11.167793 2.603911 4.288854 1.795973e-05 6.064222 16.271364 hbo True
Tapping_Right 5.021596 2.603911 1.928482 5.379515e-02 -0.081975 10.125167 hbo False
S8_D8 hbo Tapping_Left 8.682680 2.603911 3.334477 8.546003e-04 3.579109 13.786251 hbo True
Tapping_Right 6.716025 2.603911 2.579207 9.902734e-03 1.612454 11.819596 hbo True
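If a table like this is destined for a manuscript, it can be exported directly from the dataframe. A minimal sketch (the filename is hypothetical):

# Round for readability and write to disk
ch_model_df.round(3).to_csv("channel_level_results.csv")
# Or produce a LaTeX version for a supplement:
# print(ch_model_df.round(3).to_latex())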


Relating Responses to Brain Landmarks#

It can be useful to understand what brain structures the measured response may have resulted from. Here we illustrate how to report the brain structures/landmarks that the source detector pair with the largest response was sensitive to.

First we determine the channel with the largest response.

largest_response_channel = ch_model_df.loc[ch_model_df["Coef."].idxmax()]
print(largest_response_channel)

Coef.          13.375966
Std.Err.        2.603911
z               5.136876
P>|z|                0.0
[0.025          8.272395
0.975]         18.479537
Chroma               hbo
Significant         True
Name: (S5_D7 hbo, Tapping_Left), dtype: object

Next we use information from the fOLD toolbox to report the channel specificity to different brain regions. For licensing reasons, these files are not distributed with MNE-NIRS. To set up your system to use the fOLD functions, see the Notes section of mne_nirs.io.fold_channel_specificity().
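As a sketch of one possible setup, assuming the fOLD spreadsheets have already been downloaded, the MNE config can be pointed at them (the path is hypothetical and the config key should be checked against the Notes section referenced above):

import mne

# Register the folder containing the downloaded fOLD .xls files
mne.set_config("MNE_NIRS_FOLD_PATH", "/path/to/fOLD/Supplementary")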

raw_channel = raw_haemo.copy().pick(largest_response_channel.name[0])
fold_channel_specificity(raw_channel)[0]
Channel Source Detector Distance (mm) brainSens X (mm) Y (mm) Z (mm) Landmark Specificity BestSource BestDetector BestMatchDistance MatchDistance
0 100.0 C4 C2 40.0 7.944795 42.0 -21.0 62.0 R Precentral Gyrus 48.935362 C2 C4 0.003683 0.003683
1 100.0 C4 C2 40.0 7.944795 42.0 -21.0 62.0 R Postcentral Gyrus 40.399862 C2 C4 0.003683 0.003683
2 100.0 C4 C2 40.0 7.944795 42.0 -21.0 62.0 R Inferior Parietal Lobule 2.960631 C2 C4 0.003683 0.003683
3 100.0 C4 C2 40.0 7.944795 42.0 -21.0 62.0 R Superior Frontal Gyrus 2.944289 C2 C4 0.003683 0.003683
4 100.0 C4 C2 40.0 7.944795 42.0 -21.0 62.0 R SupraMarginal Gyrus 2.083585 C2 C4 0.003683 0.003683
5 100.0 C4 C2 40.0 7.944795 42.0 -21.0 62.0 R Middle Frontal Gyrus 1.981430 C2 C4 0.003683 0.003683


We observe that the channel with the largest response to tapping had the greatest specificity to the Precentral Gyrus, which is the site of the primary motor cortex. This is consistent with the expectation for a finger tapping task.

Conclusion#

This example has demonstrated how to perform a group level analysis using a GLM approach. We observed that the responses were evoked primarily contralateral to the tapping hand and most likely originate from the primary motor cortex.
