Receptive Field Estimation and Prediction

This example reproduces figures from Lalor et al.'s mTRF toolbox in MATLAB [1]. We will show how the mne.decoding.ReceptiveField class can perform a similar function along with scikit-learn. We will first fit a linear encoding model using the continuously-varying speech envelope to predict activity of a 128-channel EEG system. Then, we will take the reverse approach and try to predict the speech envelope from the EEG (known in the literature as a decoding model, or simply stimulus reconstruction).
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Nicolas Barascud <nicolas.barascud@ens.fr>
#
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.
from os.path import join
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
import mne
from mne.decoding import ReceptiveField
Load the data from the publication
First we will load the data collected in [1]. In this experiment subjects listened to natural speech. Raw EEG and the speech stimulus are provided. We will load these below, downsampling the data in order to speed up computation since we know that our features are primarily low-frequency in nature. Then we’ll visualize both the EEG and speech envelope.
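The dataset ships a precomputed speech envelope, so no envelope extraction is needed here. For readers starting from a raw audio waveform, a broadband amplitude envelope is commonly approximated via the Hilbert transform; the sketch below illustrates that idea (the `amplitude_envelope` helper and the 8 Hz cutoff are our own choices, not necessarily the procedure used in the publication):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_envelope(audio, sfreq, lowpass=8.0):
    """Magnitude of the analytic signal, low-pass filtered to keep
    only the slow amplitude fluctuations of the waveform."""
    env = np.abs(hilbert(audio))
    b, a = butter(2, lowpass / (sfreq / 2.0), btype="low")
    return filtfilt(b, a, env)

# Toy check: a 440 Hz carrier amplitude-modulated at 3 Hz
t = np.arange(0, 2.0, 1 / 1000.0)
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
env = amplitude_envelope(modulation * np.sin(2 * np.pi * 440 * t), sfreq=1000.0)
# env should follow the slow 3 Hz modulation, not the 440 Hz carrier
```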
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, "speech_data.mat"))
raw = data["EEG"].T
speech = data["envelope"].T
sfreq = float(data["Fs"].item())
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, method="polyphase")
raw = mne.filter.resample(raw, down=decim, method="polyphase")
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.make_standard_montage("biosemi128")
info = mne.create_info(montage.ch_names, sfreq, "eeg").set_montage(montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots(layout="constrained")
lns = ax.plot(np.arange(800) / sfreq, scale(raw[:, :800][0].T), color="k", alpha=0.1)
ln1 = ax.plot(np.arange(800) / sfreq, scale(speech[0, :800]), color="r", lw=2)
ax.legend([lns[0], ln1[0]], ["EEG", "Speech Envelope"], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
Polyphase resampling neighborhood: ±2 input samples
Polyphase resampling neighborhood: ±2 input samples
Creating RawArray with float64 data, n_channels=128, n_times=7677
Range : 0 ... 7676 = 0.000 ... 119.938 secs
Ready.
Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between a time-delayed version of the speech envelope and the EEG signal. This allows us to make predictions about the response to new stimuli.
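Conceptually, this encoding model is regularized linear regression on a time-lagged copy of the stimulus. The sketch below shows the core computation with plain numpy (the `lag_matrix` helper is ours, and it ignores the edge handling and scaling that ReceptiveField performs):

```python
import numpy as np

def lag_matrix(x, n_lags):
    """Stack delayed copies of a 1-D stimulus into (n_times, n_lags)."""
    n = len(x)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[: n - k]
    return X

rng = np.random.default_rng(0)
stim = rng.standard_normal(1000)
true_kernel = np.array([0.0, 1.0, 0.5, -0.25, 0.0])
# Simulated "EEG": stimulus convolved with the kernel, plus noise
eeg = np.convolve(stim, true_kernel)[: len(stim)] + 0.1 * rng.standard_normal(1000)

X = lag_matrix(stim, n_lags=5)
alpha = 1.0  # ridge regularization, analogous to ``estimator=1.0`` below
coef = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ eeg)
# coef should closely recover true_kernel
```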
# Define the delays that we will use in the receptive field
tmin, tmax = -0.2, 0.4
# Initialize the model
rf = ReceptiveField(
    tmin, tmax, sfreq, feature_names=["envelope"], estimator=1.0, scoring="corrcoef"
)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
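As a sanity check, the inclusive-endpoint bookkeeping can be verified by hand (a sketch assuming the delays span the rounded sample indices of tmin and tmax inclusively, which is what the comment above describes; sfreq here is the 64 Hz rate after decimation):

```python
import numpy as np

sfreq = 64.0  # 128 Hz raw rate divided by decim = 2
tmin, tmax = -0.2, 0.4
# One tap per sample from round(tmin * sfreq) to round(tmax * sfreq), inclusive
delays = np.arange(int(np.round(tmin * sfreq)), int(np.round(tmax * sfreq)) + 1)
n_delays = int((tmax - tmin) * sfreq) + 2
```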
n_splits = 3
cv = KFold(n_splits)
# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:] # Outputs for the model
Y = Y.T
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
    print(f"split {ii + 1} / {n_splits}")
    rf.fit(speech[train], Y[train])
    scores[ii] = rf.score(speech[test], Y[test])
    # coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
    coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)
# Plot mean prediction scores across all channels
fig, ax = plt.subplots(layout="constrained")
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls="--", color="r")
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
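The "corrcoef" score reported here is the Pearson correlation between predicted and observed signals, computed independently per output channel. A sketch of the equivalent computation (our own helper, for illustration):

```python
import numpy as np

def corrcoef_score(y_true, y_pred):
    """Pearson r per column of (n_times, n_channels) arrays."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    num = (yt * yp).sum(axis=0)
    denom = np.sqrt((yt**2).sum(axis=0) * (yp**2).sum(axis=0))
    return num / denom

rng = np.random.default_rng(1)
truth = rng.standard_normal((500, 3))
noisy = truth + rng.standard_normal((500, 3))  # SNR near 1 -> r near 0.7
r = corrcoef_score(truth, noisy)
```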
split 1 / 3
Fitting 1 epochs, 1 channels
split 2 / 3
Fitting 1 epochs, 1 channels
split 3 / 3
Fitting 1 epochs, 1 channels
Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes referred to as beta values) are distributed across time delays as well as across the scalp. We will recreate figure 1 and figure 2 from [1].
# Print mean coefficients across all time delays / channels (see Fig 1)
time_plot = 0.180 # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8), layout="constrained")
max_coef = mean_coefs.max()
ax.pcolormesh(
    times,
    ix_chs,
    mean_coefs,
    cmap="RdBu_r",
    vmin=-max_coef,
    vmax=max_coef,
    shading="gouraud",
)
ax.axvline(time_plot, ls="--", color="k", lw=2)
ax.set(
    xlabel="Delay (s)",
    ylabel="Channel",
    title="Mean Model\nCoefficients",
    xlim=times[[0, -1]],
    ylim=[len(ix_chs) - 1, 0],
    xticks=np.arange(tmin, tmax + 0.2, 0.2),
)
plt.setp(ax.get_xticklabels(), rotation=45)
# Make a topographic map of coefficients for a given delay (see Fig 2C)
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots(layout="constrained")
mne.viz.plot_topomap(
    mean_coefs[:, ix_plot], pos=info, axes=ax, show=False, vlim=(-max_coef, max_coef)
)
ax.set(title=f"Topomap of model coefficients\nfor delay {time_plot}")
Create and fit a stimulus reconstruction model

We will now demonstrate another use case for the mne.decoding.ReceptiveField class as we try to predict the stimulus activity from the EEG data. This is known in the literature as a decoding, or stimulus reconstruction model [1]. A decoding model aims to find the relationship between the speech signal and a time-delayed version of the EEG. This can be useful as we exploit all of the available neural data in a multivariate context, compared to the encoding case which treats each M/EEG channel as an independent feature. Therefore, decoding models might provide a better quality of fit (at the expense of not controlling for stimulus covariance), especially for low SNR stimuli such as speech.
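The gain from the multivariate view can be seen in a toy simulation: every channel is a noisy copy of the stimulus, and regressing on all channels jointly reconstructs it far better than any single channel does (the channel count and noise level below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n_times, n_channels = 2000, 32
stim = rng.standard_normal(n_times)
# Every simulated channel sees the stimulus buried in strong noise
eeg = stim[:, None] + 3.0 * rng.standard_normal((n_times, n_channels))

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

# Encoding-style view: any single channel tracks the stimulus poorly
r_single = pearson(stim, eeg[:, 0])

# Decoding-style view: regress on all channels jointly
w, *_ = np.linalg.lstsq(eeg, stim, rcond=None)
r_multi = pearson(stim, eeg @ w)
```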
# We use the same lags as in [1]. Negative lags now index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -0.2, 0.0
# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section and [2]).
# We'll use a ridge regression estimator with an alpha value similar to
# Crosse et al.
sr = ReceptiveField(
    tmin,
    tmax,
    sfreq,
    feature_names=raw.ch_names,
    estimator=1e4,
    scoring="corrcoef",
    patterns=True,
)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
    print(f"split {ii + 1} / {n_splits}")
    sr.fit(Y[train], speech[train])
    scores[ii] = sr.score(Y[test], speech[test])[0]
    # coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
    coefs[ii] = sr.coef_[0, :, :]
    patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
split 1 / 3
Fitting 1 epochs, 128 channels
split 2 / 3
Fitting 1 epochs, 128 channels
split 3 / 3
Fitting 1 epochs, 128 channels
Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted stimulus envelopes side by side.
y_pred = sr.predict(Y[test])
time = np.linspace(0, 5.0, 5 * int(sfreq))  # five seconds of samples
fig, ax = plt.subplots(figsize=(8, 4), layout="constrained")
ln_true = ax.plot(
    time, speech[test][sr.valid_samples_][: int(5 * sfreq)], color="grey", lw=2, ls="--"
)
ln_pred = ax.plot(time, y_pred[sr.valid_samples_][: int(5 * sfreq)], color="r", lw=2)
ax.legend([ln_true[0], ln_pred[0]], ["Envelope", "Reconstruction"], frameon=False)
ax.set(title="Stimulus reconstruction")
ax.set_xlabel("Time (s)")
Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed across the scalp. We will attempt to recreate figure 5 from [1]. The decoding model weights reflect the channels that contribute most toward reconstructing the stimulus signal, but are not directly interpretable in a neurophysiological sense. Here we also look at the coefficients obtained via an inversion procedure [2], which have a more straightforward interpretation as their value (and sign) directly relates to the stimulus signal’s strength (and effect direction).
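For a single reconstructed output, the inversion of [2] amounts to pattern = cov(X) @ w / var(X @ w). The toy sketch below mirrors that formula on simulated data (our own simulation, illustrating the idea rather than MNE's exact implementation); note how the pattern can differ sharply from the filter weights:

```python
import numpy as np

rng = np.random.default_rng(3)
n_times = 5000
X = rng.standard_normal((n_times, 4))  # four "channels"
X[:, 1] += X[:, 0]                     # channel 1 = own noise + channel 0
w = np.array([1.0, -1.0, 0.0, 0.0])    # backward-model filter weights
y = X @ w                              # reconstructed "stimulus"

# Haufe et al. (2014): pattern = cov(X) @ w / var(y)
pattern = np.cov(X.T) @ w / np.var(y, ddof=1)
# The filter weights channels 0 and 1 equally in magnitude, but the
# pattern concentrates on channel 1 -- the channel whose variance the
# reconstruction actually draws on.
```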
time_plot = (-0.140, -0.125) # To average between two timepoints.
ix_plot = np.arange(
    np.argmin(np.abs(time_plot[0] - times)), np.argmin(np.abs(time_plot[1] - times))
)
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(
    np.mean(mean_coefs[:, ix_plot], axis=1),
    pos=info,
    axes=ax[0],
    show=False,
    vlim=(-max_coef, max_coef),
)
ax[0].set(title=f"Model coefficients\nbetween delays {time_plot[0]} and {time_plot[1]}")
mne.viz.plot_topomap(
    np.mean(mean_patterns[:, ix_plot], axis=1),
    pos=info,
    axes=ax[1],
    show=False,
    vlim=(-max_patterns, max_patterns),
)
ax[1].set(
    title=(
        f"Inverse-transformed coefficients\nbetween delays {time_plot[0]} and "
        f"{time_plot[1]}"
    )
)
References

[1] Crosse, M. J., Di Liberto, G. M., Bednar, A., & Lalor, E. C. (2016). The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli. Frontiers in Human Neuroscience, 10, 604.

[2] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96-110.