Group Level GLM Analysis#

This is an example of a group level GLM based functional near-infrared spectroscopy (fNIRS) analysis in MNE-NIRS.

Individual level analysis of this data is described in the MNE fNIRS waveform tutorial and the MNE-NIRS fNIRS GLM tutorial, so this example will skim over the individual level details and focus on the group level aspects of the analysis. Here we describe how to process multiple measurements and summarise group level effects, both as summary statistics and visually.

The data used in this example is available at this location. It is a finger tapping experiment with 5 participants, briefly described below. The example dataset is in BIDS format and therefore already contains information about triggers, condition names, etc.

Note

This tutorial uses data stored using the BIDS format [1].

MNE-Python allows you to process fNIRS data that is not in BIDS format: simply modify the read_raw_ function to match your data type, as sketched below. See the data importing tutorial to learn how to use your data with MNE-Python.
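
For instance, for a recording made with a NIRx device that is not organised in BIDS, a minimal sketch might look like the following. The path and the trigger-to-condition mapping are illustrative assumptions, not part of this dataset:

import mne

# Hypothetical non-BIDS import: read a NIRx recording directly from its folder
raw_intensity = mne.io.read_raw_nirx("/path/to/nirx/recording", verbose=False)
# Without BIDS metadata, trigger codes must be mapped to condition names manually
raw_intensity.annotations.rename(
    {"1.0": "Control", "2.0": "Tapping/Left", "3.0": "Tapping/Right"}
)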

Note

Optodes were placed over the motor cortex using the standard NIRx motor montage, but with 8 short channels added (see their web page for details). To view the sensor locations run raw_intensity.plot_sensors(). A sound was presented to indicate which hand the participant should tap. Participants tapped their thumb to their fingers for 5 s. Conditions were presented in a random order with a randomised inter-stimulus interval.

# sphinx_gallery_thumbnail_number = 2

# Authors: Robert Luke <mail@robertluke.net>
#
# License: BSD (3-clause)

# Import common libraries
import matplotlib as mpl

# Import Plotting Library
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

# Import StatsModels
import statsmodels.formula.api as smf

# Import MNE processing
from mne.preprocessing.nirs import beer_lambert_law, optical_density

# Import MNE-BIDS processing
from mne_bids import BIDSPath, get_entity_vals, read_raw_bids

from mne_nirs.channels import get_long_channels, get_short_channels, picks_pair_to_idx
from mne_nirs.datasets import fnirs_motor_group
from mne_nirs.experimental_design import make_first_level_design_matrix
from mne_nirs.io.fold import fold_channel_specificity

# Import MNE-NIRS processing
from mne_nirs.statistics import run_glm, statsmodels_to_results
from mne_nirs.visualisation import plot_glm_group_topo, plot_glm_surface_projection

Set up directories#

First we define where the raw data is stored. We will analyse a BIDS dataset, which ensures we have all the metadata we require without manually specifying trigger names etc. We define the root directory of our dataset; in this example we use the example dataset fnirs_motor_group.

root = fnirs_motor_group.data_path()
print(root)
/home/circleci/mne_data/fNIRS-motor-group

And as we are using MNE-BIDS we can create a BIDSPath object. This class helps to handle all the path wrangling. We inform the software that we are analysing NIRS data saved in the SNIRF format.

dataset = BIDSPath(
    root=root, task="tapping", datatype="nirs", suffix="nirs", extension=".snirf"
)

print(dataset.directory)
/home/circleci/mne_data/fNIRS-motor-group/nirs

For example, we can automatically query the subjects, tasks, and sessions.

subjects = get_entity_vals(root, "subject")
print(subjects)
['01', '02', '03', '04', '05']
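
The same function can query the other BIDS entities. A small sketch; the printed values are what this dataset should contain, not verified output:

tasks = get_entity_vals(root, "task")
sessions = get_entity_vals(root, "session")
print(tasks)     # expected: ['tapping']
print(sessions)  # expected: [] as this dataset has no session level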

Define individual analysis#

First we define the analysis that will be applied to each file. This is a GLM analysis as described in the individual GLM tutorial, so this example will skim over the individual level details.

The analysis extracts a response estimate for each channel and each region of interest, and computes a contrast between left and right finger tapping. We return the raw object and dataframes for the computed results. Information about channels, triggers, and their meanings is stored in the BIDS structure and is automatically obtained when importing the data.

Here we also resample to a 0.3 Hz sample rate just to speed up the example and use less memory; resampling to 0.6 Hz is a better choice for full analyses.

Note

The nilearn library does not allow forward slash characters in condition names, so we must replace the slash with an underscore to ensure the GLM computation is successful. Hopefully future versions of MNE-NIRS will handle these characters automatically; see mne-tools/mne-nirs#420 for more information. In the meantime, use the following code to replace the illegal characters.

def individual_analysis(bids_path, ID):
    raw_intensity = read_raw_bids(bids_path=bids_path, verbose=False)
    # Delete annotation labeled 15, as these just signify the experiment start and end.
    raw_intensity.annotations.delete(raw_intensity.annotations.description == "15.0")
    # sanitize event names
    raw_intensity.annotations.description[:] = [
        d.replace("/", "_") for d in raw_intensity.annotations.description
    ]

    # Convert signal to haemoglobin and resample
    raw_od = optical_density(raw_intensity)
    raw_haemo = beer_lambert_law(raw_od, ppf=0.1)
    raw_haemo.resample(0.3)

    # Cut out just the short channels for creating a GLM regressor
    sht_chans = get_short_channels(raw_haemo)
    raw_haemo = get_long_channels(raw_haemo)

    # Create a design matrix
    design_matrix = make_first_level_design_matrix(raw_haemo, stim_dur=5.0)

    # Append short channels mean to design matrix
    design_matrix["ShortHbO"] = np.mean(
        sht_chans.copy().pick(picks="hbo").get_data(), axis=0
    )
    design_matrix["ShortHbR"] = np.mean(
        sht_chans.copy().pick(picks="hbr").get_data(), axis=0
    )

    # Run GLM
    glm_est = run_glm(raw_haemo, design_matrix)

    # Define channels in each region of interest
    # List the channel pairs manually
    left = [[4, 3], [1, 3], [3, 3], [1, 2], [2, 3], [1, 1]]
    right = [[8, 7], [5, 7], [7, 7], [5, 6], [6, 7], [5, 5]]
    # Then generate the correct indices for each pair
    groups = dict(
        Left_Hemisphere=picks_pair_to_idx(raw_haemo, left, on_missing="ignore"),
        Right_Hemisphere=picks_pair_to_idx(raw_haemo, right, on_missing="ignore"),
    )

    # Extract channel metrics
    cha = glm_est.to_dataframe()

    # Compute region of interest results from channel data
    roi = glm_est.to_dataframe_region_of_interest(
        groups, design_matrix.columns, demographic_info=True
    )

    # Define left vs right tapping contrast
    contrast_matrix = np.eye(design_matrix.shape[1])
    basic_conts = dict(
        [(column, contrast_matrix[i]) for i, column in enumerate(design_matrix.columns)]
    )
    contrast_LvR = basic_conts["Tapping_Left"] - basic_conts["Tapping_Right"]

    # Compute defined contrast
    contrast = glm_est.compute_contrast(contrast_LvR)
    con = contrast.to_dataframe()

    # Add the participant ID to the dataframes
    roi["ID"] = cha["ID"] = con["ID"] = ID

    # Convert to uM for nicer plotting below.
    cha["theta"] = [t * 1.0e6 for t in cha["theta"]]
    roi["theta"] = [t * 1.0e6 for t in roi["theta"]]
    con["effect"] = [t * 1.0e6 for t in con["effect"]]

    return raw_haemo, roi, cha, con

Run analysis on all participants#

Next we loop through the five measurements and run the individual analysis on each. We append the individual results into a large dataframe that will contain the results from all measurements. We create group dataframes for the region of interest, channel level, and contrast results.

df_roi = pd.DataFrame()  # To store region of interest results
df_cha = pd.DataFrame()  # To store channel level results
df_con = pd.DataFrame()  # To store channel level contrast results

for sub in subjects:  # Loop from first to fifth subject
    # Create path to file based on experiment info
    bids_path = dataset.update(subject=sub)

    # Analyse data and return both ROI and channel results
    raw_haemo, roi, channel, con = individual_analysis(bids_path, sub)

    # Append individual results to all participants
    df_roi = pd.concat([df_roi, roi], ignore_index=True)
    df_cha = pd.concat([df_cha, channel], ignore_index=True)
    df_con = pd.concat([df_con, con], ignore_index=True)
Reading 0 ... 23238  =      0.000 ...  2974.464 secs...
Reading 0 ... 18877  =      0.000 ...  2416.256 secs...
Reading 0 ... 18874  =      0.000 ...  2415.872 secs...
Reading 0 ... 23120  =      0.000 ...  2959.360 secs...
Reading 0 ... 23006  =      0.000 ...  2944.768 secs...

Visualise Individual results#

First we visualise the results from each individual to ensure the data values look reasonable. Here we see that we have data from five participants. We plot just the HbO values and observe that they are in the expected range. We can already see that the control condition is always near zero, and that the responses look to be contralateral to the tapping hand.

grp_results = df_roi.query("Condition in ['Control', 'Tapping_Left', 'Tapping_Right']")
grp_results = grp_results.query("Chroma in ['hbo']")

sns.catplot(
    x="Condition",
    y="theta",
    col="ID",
    hue="ROI",
    data=grp_results,
    col_wrap=5,
    errorbar=None,
    palette="muted",
    height=4,
    s=10,
)
[Figure: one panel per participant (ID = 01 to 05) showing HbO response estimates per condition, coloured by ROI]

Compute group level results#

Next we use a linear mixed effects model to examine the relation between conditions and our response estimate (theta). Combinations of three fixed effects will be evaluated: ROI (left vs right), condition (control, tapping left, tapping right), and chromophore (HbO, HbR), with a random effect of subject. Alternatively, you could export the group dataframe (df_roi.to_csv()) and analyse it in your favorite stats program.

We do not explore the modelling procedure in depth here, as topics such as model selection and examining residuals are beyond the scope of this example (see relevant literature). Alternatively, a robust linear model could be fit with the same formula; a sketch of that variant follows.
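
A minimal sketch of the robust variant, reusing the grp_results dataframe defined in the next code block (note that an RLM has no random effect of subject):

# Robust alternative: same formula, fit with statsmodels RLM
roi_model_robust = smf.rlm("theta ~ -1 + ROI:Condition:Chroma", grp_results).fit()
print(roi_model_robust.summary())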

grp_results = df_roi.query("Condition in ['Control','Tapping_Left', 'Tapping_Right']")

roi_model = smf.mixedlm(
    "theta ~ -1 + ROI:Condition:Chroma", grp_results, groups=grp_results["ID"]
).fit(method="nm")
roi_model.summary()
/home/circleci/python_env/lib/python3.10/site-packages/statsmodels/regression/mixed_linear_model.py:2237: ConvergenceWarning: The MLE may be on the boundary of the parameter space.
  warnings.warn(msg, ConvergenceWarning)
                     Mixed Linear Model Regression Results
===========================================================================
Model:                 MixedLM      Dependent Variable:      theta
No. Observations:      60           Method:                  REML
No. Groups:            5            Scale:                   10.1637
Min. group size:       12           Log-Likelihood:          -133.4174
Max. group size:       12           Converged:               Yes
Mean group size:       12.0
---------------------------------------------------------------------------
                                                            Coef. Std.Err.    z   P>|z| [0.025 0.975]
---------------------------------------------------------------------------
ROI[Left_Hemisphere]:Condition[Control]:Chroma[hbo]        -0.320    1.426 -0.224 0.823 -3.114  2.475
ROI[Right_Hemisphere]:Condition[Control]:Chroma[hbo]       -0.930    1.426 -0.652 0.514 -3.725  1.864
ROI[Left_Hemisphere]:Condition[Tapping_Left]:Chroma[hbo]    2.506    1.426  1.758 0.079 -0.288  5.301
ROI[Right_Hemisphere]:Condition[Tapping_Left]:Chroma[hbo]   9.576    1.426  6.717 0.000  6.782 12.370
ROI[Left_Hemisphere]:Condition[Tapping_Right]:Chroma[hbo]   8.851    1.426  6.208 0.000  6.057 11.646
ROI[Right_Hemisphere]:Condition[Tapping_Right]:Chroma[hbo]  3.862    1.426  2.709 0.007  1.067  6.656
ROI[Left_Hemisphere]:Condition[Control]:Chroma[hbr]         0.296    1.426  0.208 0.836 -2.499  3.090
ROI[Right_Hemisphere]:Condition[Control]:Chroma[hbr]        0.159    1.426  0.112 0.911 -2.635  2.954
ROI[Left_Hemisphere]:Condition[Tapping_Left]:Chroma[hbr]   -1.927    1.426 -1.352 0.176 -4.722  0.867
ROI[Right_Hemisphere]:Condition[Tapping_Left]:Chroma[hbr]  -3.756    1.426 -2.634 0.008 -6.550 -0.961
ROI[Left_Hemisphere]:Condition[Tapping_Right]:Chroma[hbr]  -3.755    1.426 -2.634 0.008 -6.549 -0.960
ROI[Right_Hemisphere]:Condition[Tapping_Right]:Chroma[hbr] -1.692    1.426 -1.187 0.235 -4.486  1.102
Group Var                                                   0.000    0.226
===========================================================================



Second level analysis with covariates#

It is simple to extend these models to include covariates. This dataset is small, so including additional factors may not be appropriate. However, for instructional purposes, we will include a covariate of sex (there are three females and two males in this dataset). Also for instructional purposes, we modify the model above to explore only the difference between the two tapping conditions in the HbO signal in the right hemisphere.

From the model result we observe that HbO responses in the right hemisphere are smaller when the right hand was used (as expected for these contralaterally dominant responses), and there is no significant effect of sex.

grp_results = df_roi.query("Condition in ['Tapping_Left', 'Tapping_Right']")
grp_results = grp_results.query("Chroma in ['hbo']")
grp_results = grp_results.query("ROI in ['Right_Hemisphere']")

roi_model = smf.mixedlm(
    "theta ~ Condition + Sex", grp_results, groups=grp_results["ID"]
).fit(method="nm")
roi_model.summary()
         Mixed Linear Model Regression Results
========================================================
Model:              MixedLM   Dependent Variable: theta
No. Observations:   10        Method:             REML
No. Groups:         5         Scale:              5.8912
Min. group size:    2         Log-Likelihood:     -20.8959
Max. group size:    2         Converged:          Yes
Mean group size:    2.0
--------------------------------------------------------
                            Coef. Std.Err.    z   P>|z| [0.025 0.975]
--------------------------------------------------------
Intercept                   7.496    2.562  2.926 0.003  2.474 12.518
Condition[T.Tapping_Right] -5.714    1.535 -3.723 0.000 -8.723 -2.706
Sex[T.male]                 5.200    3.865  1.345 0.179 -2.376 12.776
Group Var                  14.982    7.978
========================================================



Visualise group results#

Now we can summarise the output of the second level model. This figure shows that the control condition has small responses that are not significantly different from zero for both HbO and HbR in both hemispheres, whereas clear significant responses are shown for the two tapping conditions. We also observe that the tapping response is larger in the contralateral hemisphere. Filled symbols represent HbO, unfilled symbols represent HbR.

# Regenerate the results from the original group model above
grp_results = df_roi.query("Condition in ['Control','Tapping_Left', 'Tapping_Right']")
roi_model = smf.mixedlm(
    "theta ~ -1 + ROI:Condition:Chroma", grp_results, groups=grp_results["ID"]
).fit(method="nm")

df = statsmodels_to_results(roi_model)

sns.catplot(
    x="Condition",
    y="Coef.",
    hue="ROI",
    data=df.query("Chroma == 'hbo'"),
    errorbar=None,
    palette="muted",
    height=4,
    s=10,
)
[Figure: group-level HbO response estimates per condition and hemisphere]
/home/circleci/python_env/lib/python3.10/site-packages/statsmodels/regression/mixed_linear_model.py:2237: ConvergenceWarning: The MLE may be on the boundary of the parameter space.
  warnings.warn(msg, ConvergenceWarning)

Group topographic visualisation#

We can also view the topographic representation of the data (rather than the ROI summary above). Here we plot the oxyhaemoglobin and deoxyhaemoglobin results for the two tapping conditions. First we compute the mixed effects model for each channel (rather than for each region of interest as above). Then we pass these results to the topomap function.

fig, axes = plt.subplots(
    nrows=2, ncols=2, figsize=(10, 10), gridspec_kw=dict(width_ratios=[1, 1])
)

# Cut down the dataframe just to the conditions we are interested in
ch_summary = df_cha.query("Condition in ['Tapping_Left', 'Tapping_Right']")
ch_summary = ch_summary.query("Chroma in ['hbo']")

# Run group level model and convert to dataframe
ch_model = smf.mixedlm(
    "theta ~ -1 + ch_name:Chroma:Condition", ch_summary, groups=ch_summary["ID"]
).fit(method="nm")
ch_model_df = statsmodels_to_results(ch_model)

# Plot the two conditions
plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo"),
    ch_model_df.query("Condition in ['Tapping_Left']"),
    colorbar=False,
    axes=axes[0, 0],
    vlim=(0, 20),
    cmap=mpl.cm.Oranges,
)

plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo"),
    ch_model_df.query("Condition in ['Tapping_Right']"),
    colorbar=True,
    axes=axes[0, 1],
    vlim=(0, 20),
    cmap=mpl.cm.Oranges,
)

# Cut down the dataframe just to the conditions we are interested in
ch_summary = df_cha.query("Condition in ['Tapping_Left', 'Tapping_Right']")
ch_summary = ch_summary.query("Chroma in ['hbr']")

# Run group level model and convert to dataframe
ch_model = smf.mixedlm(
    "theta ~ -1 + ch_name:Chroma:Condition", ch_summary, groups=ch_summary["ID"]
).fit(method="nm")
ch_model_df = statsmodels_to_results(ch_model)

# Plot the two conditions
plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbr"),
    ch_model_df.query("Condition in ['Tapping_Left']"),
    colorbar=False,
    axes=axes[1, 0],
    vlim=(-10, 0),
    cmap=mpl.cm.Blues_r,
)
plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbr"),
    ch_model_df.query("Condition in ['Tapping_Right']"),
    colorbar=True,
    axes=axes[1, 1],
    vlim=(-10, 0),
    cmap=mpl.cm.Blues_r,
)
[Figure: group topographic maps; top row HbO, bottom row HbR, for Tapping_Left (left column) and Tapping_Right (right column)]

Contrasts#

Finally we can examine the difference between the left and right hand tapping conditions by viewing the contrast results in a topographic representation.

fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
con_summary = df_con.query("Chroma in ['hbo']")

# Run group level model and convert to dataframe
con_model = smf.mixedlm(
    "effect ~ -1 + ch_name:Chroma", con_summary, groups=con_summary["ID"]
).fit(method="nm")
con_model_df = statsmodels_to_results(
    con_model, order=raw_haemo.copy().pick(picks="hbo").ch_names
)

plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo"), con_model_df, colorbar=True, axes=axes
)
[Figure: topographic map of the Tapping_Left minus Tapping_Right contrast]
/home/circleci/python_env/lib/python3.10/site-packages/statsmodels/regression/mixed_linear_model.py:2237: ConvergenceWarning: The MLE may be on the boundary of the parameter space.
  warnings.warn(msg, ConvergenceWarning)

Or we can view only the left hemisphere for the contrast, and set all channels that don't have a significant response to zero.

plot_glm_group_topo(
    raw_haemo.copy().pick(picks="hbo").pick(picks=range(10)),
    con_model_df,
    colorbar=True,
    threshold=True,
)
Reducing GLM results to match MNE data

[Figure: thresholded contrast map over the left hemisphere channels, with non-significant channels set to zero]

Cortical Surface Projections#

The topographic plots above can sometimes be difficult to interpret with respect to the underlying cortical locations. It is also possible to present the data by projecting the channel level GLM values to the nearest cortical surface. This can make it easier to understand the spatial aspects of your data. Note, however, that this is not a complete forward model with photon migration simulations. In the figure below we project the group results from the two conditions to the cortical surface, and also present the contrast results in the same fashion. As in the topo plots above, you can see that the activity is predominantly contralateral to the side of finger tapping.

# Generate brain figure from data
clim = dict(kind="value", pos_lims=(0, 8, 11))
brain = plot_glm_surface_projection(
    raw_haemo.copy().pick("hbo"),
    con_model_df,
    clim=clim,
    view="dorsal",
    colorbar=True,
    size=(800, 700),
)
brain.add_text(0.05, 0.95, "Left-Right", "title", font_size=16, color="k")

# Run model code as above
clim = dict(kind="value", pos_lims=(0, 11.5, 17))
for idx, cond in enumerate(["Tapping_Left", "Tapping_Right"]):
    # Run same model as explained in the sections above
    ch_summary = df_cha.query("Condition in [@cond]")
    ch_summary = ch_summary.query("Chroma in ['hbo']")
    ch_model = smf.mixedlm(
        "theta ~ -1 + ch_name", ch_summary, groups=ch_summary["ID"]
    ).fit(method="nm")
    model_df = statsmodels_to_results(
        ch_model, order=raw_haemo.copy().pick("hbo").ch_names
    )

    # Generate brain figure from data
    brain = plot_glm_surface_projection(
        raw_haemo.copy().pick("hbo"),
        model_df,
        clim=clim,
        view="dorsal",
        colorbar=True,
        size=(800, 700),
    )
    brain.add_text(0.05, 0.95, cond, "title", font_size=16, color="k")
[Figures: cortical surface projections of the Left-Right contrast and of the Tapping_Left and Tapping_Right group results]

Table of channel level results#

Sometimes a reviewer wants a long table of results per channel. This can be generated from the statistics dataframe.

ch_summary = df_cha.query("Condition in ['Tapping_Left', 'Tapping_Right']")
ch_summary = ch_summary.query("Chroma in ['hbo']")

# Run group level model and convert to dataframe
ch_model = smf.mixedlm(
    "theta ~ -1 + ch_name:Chroma:Condition", ch_summary, groups=ch_summary["ID"]
).fit(method="nm")

# Here we can use the order argument to ensure the channel name order
ch_model_df = statsmodels_to_results(
    ch_model, order=raw_haemo.copy().pick(picks="hbo").ch_names
)
# And make the table prettier
ch_model_df.reset_index(drop=True, inplace=True)
ch_model_df = ch_model_df.set_index(["ch_name", "Condition"])
ch_model_df
Coef. Std.Err. z P>|z| [0.025 0.975] Chroma Significant
ch_name Condition
S1_D1 hbo Tapping_Left 3.032008 3.536881 0.857255 3.913041e-01 -3.900152 9.964168 hbo False
Tapping_Right 6.415630 3.536881 1.813923 6.968959e-02 -0.51653 13.34779 hbo False
S1_D2 hbo Tapping_Left 1.777202 3.536881 0.502477 6.153319e-01 -5.154958 8.709362 hbo False
Tapping_Right 6.927322 3.536881 1.958596 5.016009e-02 -0.004838 13.859482 hbo False
S1_D3 hbo Tapping_Left 2.682615 3.536881 0.758469 4.481702e-01 -4.249545 9.614775 hbo False
Tapping_Right 15.816878 3.536881 4.471984 7.749711e-06 8.884718 22.749038 hbo True
S2_D1 hbo Tapping_Left 0.327063 3.536881 0.092472 9.263229e-01 -6.605097 7.259223 hbo False
Tapping_Right 1.270148 3.536881 0.359115 7.195088e-01 -5.662011 8.202308 hbo False
S2_D3 hbo Tapping_Left 1.471616 3.536881 0.416077 6.773534e-01 -5.460544 8.403776 hbo False
Tapping_Right 6.643579 3.536881 1.878372 6.033028e-02 -0.288581 13.575739 hbo False
S2_D4 hbo Tapping_Left -0.770107 3.536881 -0.217736 8.276347e-01 -7.702267 6.162053 hbo False
Tapping_Right -0.036873 3.536881 -0.010425 9.916820e-01 -6.969033 6.895287 hbo False
S3_D2 hbo Tapping_Left 4.722142 3.536881 1.335115 1.818388e-01 -2.210018 11.654302 hbo False
Tapping_Right 9.560889 3.536881 2.703198 6.867589e-03 2.628729 16.493049 hbo True
S3_D3 hbo Tapping_Left 3.624990 3.536881 1.024911 3.054050e-01 -3.30717 10.55715 hbo False
Tapping_Right 12.830392 3.536881 3.627600 2.860675e-04 5.898232 19.762552 hbo True
S4_D3 hbo Tapping_Left 3.286504 3.536881 0.929209 3.527805e-01 -3.645656 10.218664 hbo False
Tapping_Right 9.610416 3.536881 2.717201 6.583667e-03 2.678256 16.542576 hbo True
S4_D4 hbo Tapping_Left 5.673063 3.536881 1.603973 1.087199e-01 -1.259097 12.605223 hbo False
Tapping_Right 7.008687 3.536881 1.981601 4.752393e-02 0.076527 13.940847 hbo True
S5_D5 hbo Tapping_Left 5.856888 3.536881 1.655947 9.773257e-02 -1.075272 12.789048 hbo False
Tapping_Right 3.251541 3.536881 0.919324 3.579260e-01 -3.680619 10.183701 hbo False
S5_D6 hbo Tapping_Left 3.062157 3.536881 0.865779 3.866114e-01 -3.870003 9.994317 hbo False
Tapping_Right 1.885486 3.536881 0.533093 5.939693e-01 -5.046674 8.817646 hbo False
S5_D7 hbo Tapping_Left 17.880876 3.536881 5.055549 4.291550e-07 10.948716 24.813036 hbo True
Tapping_Right 4.686052 3.536881 1.324911 1.852007e-01 -2.246107 11.618212 hbo False
S6_D5 hbo Tapping_Left 2.600497 3.536881 0.735251 4.621864e-01 -4.331663 9.532657 hbo False
Tapping_Right 2.180645 3.536881 0.616544 5.375352e-01 -4.751515 9.112804 hbo False
S6_D7 hbo Tapping_Left 11.862971 3.536881 3.354077 7.963031e-04 4.930811 18.795131 hbo True
Tapping_Right 4.851630 3.536881 1.371725 1.701489e-01 -2.08053 11.78379 hbo False
S6_D8 hbo Tapping_Left 9.425000 3.536881 2.664777 7.703938e-03 2.49284 16.35716 hbo True
Tapping_Right 5.667118 3.536881 1.602292 1.090910e-01 -1.265042 12.599278 hbo False
S7_D6 hbo Tapping_Left 8.757646 3.536881 2.476093 1.328291e-02 1.825486 15.689806 hbo True
Tapping_Right 4.375866 3.536881 1.237210 2.160090e-01 -2.556294 11.308026 hbo False
S7_D7 hbo Tapping_Left 10.806506 3.536881 3.055377 2.247777e-03 3.874346 17.738666 hbo True
Tapping_Right 4.022244 3.536881 1.137229 2.554426e-01 -2.909916 10.954404 hbo False
S8_D7 hbo Tapping_Left 14.968237 3.536881 4.232044 2.315773e-05 8.036077 21.900397 hbo True
Tapping_Right 6.547861 3.536881 1.851309 6.412509e-02 -0.384299 13.480021 hbo False
S8_D8 hbo Tapping_Left 11.497395 3.536881 3.250716 1.151150e-03 4.565235 18.429555 hbo True
Tapping_Right 8.740161 3.536881 2.471149 1.346796e-02 1.808001 15.672321 hbo True
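
If a reviewer wants this table as a file, it can be exported directly from the dataframe. A minimal sketch; the file name is an assumption:

# Export the channel level results, e.g. for a manuscript supplement
ch_model_df.to_csv("channel_level_results.csv")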


Relating Responses to Brain Landmarks#

It can be useful to understand which brain structures the measured response may have resulted from. Here we illustrate how to report the brain structures/landmarks that the source-detector pair with the largest response was sensitive to.

First we determine the channel with the largest response.

Next, we query the fOLD dataset to determine the brain landmarks that this channel is most sensitive to. MNE-NIRS does not distribute the fOLD toolbox or the data that they provide. See the Notes section of mne_nirs.io.fold_channel_specificity() for more information.

largest_response_channel = ch_model_df.loc[ch_model_df["Coef."].idxmax()]
print(largest_response_channel)

Coef.          17.880876
Std.Err.        3.536881
z               5.055549
P>|z|                0.0
[0.025         10.948716
0.975]         24.813036
Chroma               hbo
Significant         True
Name: (S5_D7 hbo, Tapping_Left), dtype: object

Next we use information from the fOLD toolbox to report the channel specificity to different brain regions. For licensing reasons, these files are not distributed with MNE-NIRS. To set up your system to use the fOLD functions, see the Notes section of mne_nirs.io.fold_channel_specificity().
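
A minimal setup sketch, assuming the fOLD spreadsheet files have been downloaded and unpacked locally; the config key and path below are assumptions, so verify them against the Notes section of mne_nirs.io.fold_channel_specificity():

import mne

# Assumed setup: point MNE-NIRS at a local copy of the fOLD .xls files
mne.set_config("MNE_NIRS_FOLD_PATH", "/path/to/fOLD-public/Supplementary")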

raw_channel = raw_haemo.copy().pick(largest_response_channel.name[0])
fold_channel_specificity(raw_channel)[0]
   Channel  Source  Detector  Distance (mm)  brainSens  X (mm)  Y (mm)  Z (mm)  Landmark                    Specificity  BestSource  BestDetector  BestMatchDistance  MatchDistance
0    100.0      C4        C2           40.0   7.944795    42.0   -21.0    62.0  R Precentral Gyrus            48.935362          C2            C4           0.003683       0.003683
1    100.0      C4        C2           40.0   7.944795    42.0   -21.0    62.0  R Postcentral Gyrus           40.399862          C2            C4           0.003683       0.003683
2    100.0      C4        C2           40.0   7.944795    42.0   -21.0    62.0  R Inferior Parietal Lobule     2.960631          C2            C4           0.003683       0.003683
3    100.0      C4        C2           40.0   7.944795    42.0   -21.0    62.0  R Superior Frontal Gyrus       2.944289          C2            C4           0.003683       0.003683
4    100.0      C4        C2           40.0   7.944795    42.0   -21.0    62.0  R SupraMarginal Gyrus          2.083585          C2            C4           0.003683       0.003683
5    100.0      C4        C2           40.0   7.944795    42.0   -21.0    62.0  R Middle Frontal Gyrus         1.981430          C2            C4           0.003683       0.003683


We observe that the channel with the largest response to tapping had the greatest specificity to the Precentral Gyrus, which is the site of the primary motor cortex. This is consistent with the expectation for a finger tapping task.

Conclusion#

This example has demonstrated how to perform a group level analysis using a GLM approach. We observed that the responses were evoked primarily contralateral to the tapping hand and most likely originate from the primary motor cortex.

Total running time of the script: (0 minutes 22.176 seconds)

Estimated memory usage: 862 MB
