Receptive Field Estimation and Prediction#

This example reproduces figures from Lalor et al.'s mTRF toolbox in MATLAB [1]. We will show how the mne.decoding.ReceptiveField class can perform a similar function together with scikit-learn. We will first fit a linear encoding model that uses the continuously varying speech envelope to predict activity recorded by a 128-channel EEG system. Then we will take the reverse approach and try to predict the speech envelope from the EEG (known in the literature as a decoding model, or simply stimulus reconstruction).
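Concretely, the encoding (forward) model fit below is the standard TRF formulation from [1]: each channel's response is modeled as a weighted sum of time-lagged copies of the stimulus,

    r(t, n) = \sum_{\tau} w(\tau, n) \, s(t - \tau) + \varepsilon(t, n)

where (in our notation, not taken from this example) s is the speech envelope, r(t, n) the EEG at channel n and time t, w(\tau, n) the receptive-field weights over lags \tau, and \varepsilon(t, n) the residual. The decoding model at the end of this example simply swaps the roles of s and r.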

# Authors: Chris Holdgraf <choldgraf@gmail.com>
#          Eric Larson <larson.eric.d@gmail.com>
#          Nicolas Barascud <nicolas.barascud@ens.fr>
#
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.

from os.path import join

import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale

import mne
from mne.decoding import ReceptiveField

Load the data from the publication#

First we will load the data collected in [1]. In this experiment subjects listened to natural speech. Raw EEG and the speech stimulus are provided. We will load these below, downsampling the data in order to speed up computation since we know that our features are primarily low-frequency in nature. Then we’ll visualize both the EEG and speech envelope.

path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, "speech_data.mat"))
raw = data["EEG"].T
speech = data["envelope"].T
sfreq = float(data["Fs"].item())
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, method="polyphase")
raw = mne.filter.resample(raw, down=decim, method="polyphase")

# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.make_standard_montage("biosemi128")
info = mne.create_info(montage.ch_names, sfreq, "eeg").set_montage(montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)

# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots(layout="constrained")
t_sample = np.arange(800) / sfreq  # time axis for the first 800 samples
lns = ax.plot(t_sample, scale(raw[:, :800][0].T), color="k", alpha=0.1)
ln1 = ax.plot(t_sample, scale(speech[0, :800]), color="r", lw=2)
ax.legend([lns[0], ln1[0]], ["EEG", "Speech Envelope"], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
[Figure: Sample activity, showing EEG channels (black) and the speech envelope (red)]
Polyphase resampling neighborhood: ±2 input samples
Polyphase resampling neighborhood: ±2 input samples
Creating RawArray with float64 data, n_channels=128, n_times=7677
    Range : 0 ... 7676 =      0.000 ...   119.938 secs
Ready.
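The downsampling above relies on the claim that the envelope is dominated by low frequencies. As an optional sanity check (a minimal sketch that is not part of the original example, reusing the speech and sfreq variables from the script), one could inspect its spectrum with SciPy's Welch estimator:

# Optional sanity check (not in the original example): the speech envelope
# should carry little power above a few Hz, which is what justifies downsampling.
from scipy.signal import welch

freqs, psd = welch(speech[0], fs=sfreq, nperseg=int(4 * sfreq))
fig, ax = plt.subplots(layout="constrained")
ax.semilogy(freqs, psd, color="r")
ax.set(title="Speech envelope spectrum", xlabel="Frequency (Hz)", ylabel="PSD (a.u.)")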

Create and fit a receptive field model#

We will construct an encoding model to find the linear relationship between a time-delayed version of the speech envelope and the EEG signal. This allows us to make predictions about the response to new stimuli.
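Under the hood, the time-delaying step amounts to building a lagged design matrix from the stimulus and then solving a regularized least-squares problem per channel. The following is only a conceptual sketch of that idea on a toy signal (our simplification, not MNE's actual implementation):

# Conceptual sketch (not MNE's implementation): stack time-shifted copies of a
# 1-D stimulus into a (n_times, n_delays) design matrix.
def lagged_design(stim, delays):
    n_times = stim.shape[0]
    X = np.zeros((n_times, len(delays)))
    for jj, delay in enumerate(delays):
        if delay >= 0:
            X[delay:, jj] = stim[: n_times - delay]
        else:
            X[:delay, jj] = stim[-delay:]
    return X

toy_stim = np.random.RandomState(0).randn(1000)
print(lagged_design(toy_stim, delays=np.arange(-2, 4)).shape)  # (1000, 6)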

# Define the delays that we will use in the receptive field
tmin, tmax = -0.2, 0.4

# Initialize the model
rf = ReceptiveField(
    tmin, tmax, sfreq, feature_names=["envelope"], estimator=1.0, scoring="corrcoef"
)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2

n_splits = 3
cv = KFold(n_splits)

# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:]  # Outputs for the model
Y = Y.T

# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
    print(f"split {ii + 1} / {n_splits}")
    rf.fit(speech[train], Y[train])
    scores[ii] = rf.score(speech[test], Y[test])
    # coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
    coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)

# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)

# Plot mean prediction scores across all channels
fig, ax = plt.subplots(layout="constrained")
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls="--", color="r")
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
[Figure: Mean prediction score across channels]

split 1 / 3
Fitting 1 epochs, 1 channels
split 2 / 3
Fitting 1 epochs, 1 channels
split 3 / 3
Fitting 1 epochs, 1 channels
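Because rf still holds the fit from the last split, we can also directly compare its prediction with the measured EEG at the best-scoring channel. This is an optional sketch, not part of the original example:

# Optional sketch (not in the original example): predicted vs. measured EEG at
# the channel with the highest cross-validated score.
best_ch = int(np.argmax(mean_scores))
Y_pred = rf.predict(speech[test])[rf.valid_samples_]
Y_true = Y[test][rf.valid_samples_]
n_show = int(5 * sfreq)  # first five seconds of the held-out segment
t_show = np.arange(n_show) / sfreq
fig, ax = plt.subplots(layout="constrained")
ax.plot(t_show, scale(Y_true[:n_show, best_ch]), color="k", label="Measured EEG")
ax.plot(t_show, scale(Y_pred[:n_show, best_ch]), color="r", label="Predicted EEG")
ax.legend(frameon=False)
ax.set(
    title=f"Channel {raw.ch_names[best_ch]} (r = {mean_scores[best_ch]:.2f})",
    xlabel="Time (s)",
)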

Investigate model coefficients#

Finally, we will look at how the linear coefficients (sometimes referred to as beta values) are distributed across time delays as well as across the scalp. We will recreate figure 1 and figure 2 from [1].

# Plot mean coefficients across all time delays / channels (see Fig 1)
time_plot = 0.180  # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8), layout="constrained")
max_coef = mean_coefs.max()
ax.pcolormesh(
    times,
    ix_chs,
    mean_coefs,
    cmap="RdBu_r",
    vmin=-max_coef,
    vmax=max_coef,
    shading="gouraud",
)
ax.axvline(time_plot, ls="--", color="k", lw=2)
ax.set(
    xlabel="Delay (s)",
    ylabel="Channel",
    title="Mean Model\nCoefficients",
    xlim=times[[0, -1]],
    ylim=[len(ix_chs) - 1, 0],
    xticks=np.arange(tmin, tmax + 0.2, 0.2),
)
plt.setp(ax.get_xticklabels(), rotation=45)

# Make a topographic map of coefficients for a given delay (see Fig 2C)
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots(layout="constrained")
mne.viz.plot_topomap(
    mean_coefs[:, ix_plot], pos=info, axes=ax, show=False, vlim=(-max_coef, max_coef)
)
ax.set(title="Topomap of model coefficients\nfor delay %s" % time_plot)
[Figure: Mean Model Coefficients across delays and channels]
[Figure: Topomap of model coefficients for delay 0.18]
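Another view of the same coefficients (an optional sketch, not one of the original figures) is the TRF time course at a single sensor, for example the channel with the highest prediction score:

# Optional sketch (not in the original example): TRF waveform at the
# best-scoring channel, with the highlighted delay marked.
best_ch = int(np.argmax(mean_scores))
fig, ax = plt.subplots(layout="constrained")
ax.plot(times, mean_coefs[best_ch], color="k")
ax.axvline(time_plot, ls="--", color="r")
ax.axhline(0, ls=":", color="grey")
ax.set(
    title=f"TRF for channel {raw.ch_names[best_ch]}",
    xlabel="Delay (s)",
    ylabel="Coefficient (a.u.)",
)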

Create and fit a stimulus reconstruction model#

We will now demonstrate another use case for the mne.decoding.ReceptiveField class as we try to predict the stimulus activity from the EEG data. This is known in the literature as a decoding, or stimulus reconstruction, model [1]. A decoding model aims to find the relationship between the speech signal and a time-delayed version of the EEG. This can be useful because it exploits all of the available neural data in a multivariate context, in contrast to the encoding case, which treats each M/EEG channel as an independent feature. Decoding models may therefore provide a better quality of fit (at the expense of not controlling for stimulus covariance), especially for low-SNR stimuli such as speech.
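The decoder below uses a fixed ridge penalty (estimator=1e4, similar to Crosse et al.). In practice this hyperparameter is usually tuned by cross-validation; the following is only a rough sketch of such a search, with an illustrative alpha grid of our own choosing:

# Optional sketch (not in the original example): choose the ridge penalty for
# the decoder by cross-validating over a small grid of alphas.
alphas = np.logspace(1, 6, 6)  # illustrative grid
mean_r = np.zeros_like(alphas)
for aa, alpha in enumerate(alphas):
    model = ReceptiveField(
        -0.2, 0.0, sfreq, feature_names=raw.ch_names, estimator=alpha,
        scoring="corrcoef",
    )
    fold_r = []
    for train, test in KFold(3).split(speech):
        model.fit(Y[train], speech[train])
        fold_r.append(model.score(Y[test], speech[test])[0])
    mean_r[aa] = np.mean(fold_r)
print(f"best alpha: {alphas[np.argmax(mean_r)]:g} (r = {mean_r.max():.3f})")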

# We use the same lags as in [1]. Negative lags now index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -0.2, 0.0

# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section and [2]). We'll use a ridge regression estimator
# with an alpha value similar to Crosse et al.
sr = ReceptiveField(
    tmin,
    tmax,
    sfreq,
    feature_names=raw.ch_names,
    estimator=1e4,
    scoring="corrcoef",
    patterns=True,
)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2

n_splits = 3
cv = KFold(n_splits)

# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
    print(f"split {ii + 1} / {n_splits}")
    sr.fit(Y[train], speech[train])
    scores[ii] = sr.score(Y[test], speech[test])[0]
    # coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
    coefs[ii] = sr.coef_[0, :, :]
    patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)

# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
split 1 / 3
Fitting 1 epochs, 128 channels
split 2 / 3
Fitting 1 epochs, 128 channels
split 3 / 3
Fitting 1 epochs, 128 channels

Visualize stimulus reconstruction#

To get a sense of our model performance, we can plot the actual and predicted stimulus envelopes side by side.

y_pred = sr.predict(Y[test])
time = np.arange(int(5 * sfreq)) / sfreq  # first five seconds of the test segment
fig, ax = plt.subplots(figsize=(8, 4), layout="constrained")
ln_env = ax.plot(
    time, speech[test][sr.valid_samples_][: int(5 * sfreq)], color="grey", lw=2, ls="--"
)
ln_rec = ax.plot(time, y_pred[sr.valid_samples_][: int(5 * sfreq)], color="r", lw=2)
ax.legend([ln_env[0], ln_rec[0]], ["Envelope", "Reconstruction"], frameon=False)
ax.set(title="Stimulus reconstruction", xlabel="Time (s)")
[Figure: Stimulus reconstruction, actual vs. reconstructed envelope]
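The visual overlap above can be quantified (an optional sketch, not in the original example) as the Pearson correlation between reconstructed and actual envelope on this held-out segment, which essentially recomputes the last split's score:

# Optional sketch (not in the original example): reconstruction accuracy on the
# held-out segment as a Pearson correlation.
recon = y_pred[sr.valid_samples_].ravel()
actual = speech[test][sr.valid_samples_].ravel()
r_test = np.corrcoef(recon, actual)[0, 1]
print(f"Reconstruction accuracy on this test segment: r = {r_test:.3f}")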

Investigate model coefficients#

Finally, we will look at how the decoding model coefficients are distributed across the scalp. We will attempt to recreate figure 5 from [1]. The decoding model weights reflect the channels that contribute most toward reconstructing the stimulus signal, but are not directly interpretable in a neurophysiological sense. Here we also look at the coefficients obtained via an inversion procedure [2], which have a more straightforward interpretation as their value (and sign) directly relates to the stimulus signal’s strength (and effect direction).
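For a single reconstructed output, the inversion described in [2] reduces to multiplying the decoding weights by the covariance of the (lagged) EEG features and normalizing by the variance of the model output. The following numpy sketch illustrates that idea in our own simplified notation on a toy problem; it is not MNE's internal implementation:

# Conceptual sketch (not MNE's implementation) of the Haufe et al. inversion for
# a single output: pattern = cov(X) @ w / var(X @ w).
def filters_to_pattern(X, w):
    Xc = X - X.mean(axis=0)
    y_hat = Xc @ w
    cov_X = (Xc.T @ Xc) / (len(Xc) - 1)
    return cov_X @ w / y_hat.var(ddof=1)

rng = np.random.RandomState(0)
X_toy = rng.randn(500, 4)  # toy "EEG" with 4 channels
w_toy = rng.randn(4)       # toy decoding weights
print(filters_to_pattern(X_toy, w_toy))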

time_plot = (-0.140, -0.125)  # To average between two timepoints.
ix_plot = np.arange(
    np.argmin(np.abs(time_plot[0] - times)), np.argmin(np.abs(time_plot[1] - times))
)
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(
    np.mean(mean_coefs[:, ix_plot], axis=1),
    pos=info,
    axes=ax[0],
    show=False,
    vlim=(-max_coef, max_coef),
)
ax[0].set(title=f"Model coefficients\nbetween delays {time_plot[0]} and {time_plot[1]}")

mne.viz.plot_topomap(
    np.mean(mean_patterns[:, ix_plot], axis=1),
    pos=info,
    axes=ax[1],
    show=False,
    vlim=(-max_patterns, max_patterns),
)
ax[1].set(
    title=(
        f"Inverse-transformed coefficients\nbetween delays {time_plot[0]} and "
        f"{time_plot[1]}"
    )
)
[Figure: Model coefficients and inverse-transformed coefficients between delays -0.14 and -0.125]

References#

[1] Crosse, M. J., Di Liberto, G. M., Bednar, A., & Lalor, E. C. (2016). The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB toolbox for relating neural signals to continuous stimuli. Frontiers in Human Neuroscience, 10, 604.

[2] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96-110.
