Decoding sensor space data with generalization across time and conditions
This example runs the analysis described in [1]. It illustrates how one can fit a linear classifier to identify a discriminative topography at a given time instant, and then assess whether this model accurately predicts all time samples of a second set of conditions.
# Authors: Jean-Rémi King <jeanremi.king@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
meg_path = data_path / "MEG" / "sample"
raw_fname = meg_path / "sample_audvis_filt-0-40_raw.fif"
events_fname = meg_path / "sample_audvis_filt-0-40_raw-eve.fif"
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude="bads") # Pick MEG channels
raw.filter(1.0, 30.0, fir_design="firwin")  # Band-pass filter the signals
events = mne.read_events(events_fname)
event_id = {
"Auditory/Left": 1,
"Auditory/Right": 2,
"Visual/Left": 3,
"Visual/Right": 4,
}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(
raw,
events,
event_id=event_id,
tmin=tmin,
tmax=tmax,
proj=True,
picks=picks,
baseline=None,
preload=True,
reject=dict(mag=5e-12),
decim=decim,
verbose="error",
)
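With `decim=2`, every second sample is kept, so each epoch ends up with 35 time points. A back-of-the-envelope check of that count (a sketch only: the ~150.15 Hz sampling rate of the filtered sample data and the round-to-nearest sample selection are assumptions; MNE's exact rounding may differ):

```python
import numpy as np

sfreq = 150.153  # assumed sampling rate of the filtered sample data (Hz)
tmin, tmax, decim = -0.050, 0.400, 2

# Inclusive sample range covered by the epoch, then keep every `decim`-th sample
first = int(round(tmin * sfreq))
last = int(round(tmax * sfreq))
n_times = last - first + 1
n_decimated = int(np.ceil(n_times / decim))
print(n_decimated)  # 35 time points per epoch
```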
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
Read a total of 4 projection items:
PCA-v1 (1 x 102) idle
PCA-v2 (1 x 102) idle
PCA-v3 (1 x 102) idle
Average EEG reference (1 x 60) idle
Range : 6450 ... 48149 = 42.956 ... 320.665 secs
Ready.
Reading 0 ... 41699 = 0.000 ... 277.709 secs...
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 1 - 30 Hz
FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 1.00
- Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
- Upper passband edge: 30.00 Hz
- Upper transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 33.75 Hz)
- Filter length: 497 samples (3.310 s)
We will train the classifier on all left visual vs auditory trials and test on all right visual vs auditory trials.
clf = make_pipeline(
StandardScaler(),
LogisticRegression(solver="liblinear"), # liblinear is faster than lbfgs
)
time_gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=None, verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# The label y is True for visual trials (event id > 2) and False for auditory.
time_gen.fit(X=epochs["Left"].get_data(copy=False), y=epochs["Left"].events[:, 2] > 2)
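Conceptually, GeneralizingEstimator fits one classifier per time point and evaluates each of them at every time point, yielding an (n_train_times, n_test_times) score matrix. A minimal sketch of that loop with plain scikit-learn on synthetic data (illustration only: it scores on the training set, unlike the cross-condition evaluation used in this example):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 40, 8, 5
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = np.repeat([0, 1], n_epochs // 2)
X[y == 1, :, 3:] += 1.0  # inject a class difference late in the epoch

# One classifier per training time, scored at every testing time
scores = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
    clf.fit(X[:, :, t_train], y)
    for t_test in range(n_times):
        y_score = clf.decision_function(X[:, :, t_test])
        scores[t_train, t_test] = roc_auc_score(y, y_score)
```

Classifiers trained before the injected effect should stay near chance (0.5) everywhere, while those trained after it should score well at the late test times, producing the characteristic block structure of a generalization matrix.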
100%|██████████| Fitting GeneralizingEstimator : 35/35 [00:00<00:00, 78.67it/s]
Score on the epochs where the stimulus was presented to the right.
scores = time_gen.score(
X=epochs["Right"].get_data(copy=False), y=epochs["Right"].events[:, 2] > 2
)
100%|██████████| Scoring GeneralizingEstimator : 1225/1225 [00:03<00:00, 330.90it/s]
Plot
fig, ax = plt.subplots(layout="constrained")
im = ax.matshow(
scores,
vmin=0,
vmax=1.0,
cmap="RdBu_r",
origin="lower",
extent=epochs.times[[0, -1, 0, -1]],
)
ax.axhline(0.0, color="k")
ax.axvline(0.0, color="k")
ax.xaxis.set_ticks_position("bottom")
ax.set_xlabel(
'Condition: "Right"\nTesting Time (s)',
)
ax.set_ylabel('Condition: "Left"\nTraining Time (s)')
ax.set_title("Generalization across time and condition", fontweight="bold")
fig.colorbar(im, ax=ax, label="Performance (ROC AUC)")
plt.show()
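The diagonal of the generalization matrix (training time equals testing time) is the usual time-resolved decoding curve. A sketch of how one might plot it, using a synthetic stand-in matrix (the real `scores` array from above would drop in directly; the stand-in values are invented for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt

times = np.linspace(-0.05, 0.4, 35)  # matches the epochs' time axis
# Stand-in matrix: chance (0.5) everywhere, plus a bump after stimulus onset
bump = np.exp(-((times[None, :] - times[:, None]) ** 2) / 0.005)
scores = 0.5 + 0.35 * bump * (times[:, None] > 0.1)
diag = np.diag(scores)

fig, ax = plt.subplots()
ax.plot(times, diag, label="train time == test time")
ax.axhline(0.5, color="k", linestyle="--", label="chance")
ax.set_xlabel("Time (s)")
ax.set_ylabel("ROC AUC")
ax.legend()
```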
References
[1] King, J.-R., & Dehaene, S. (2014). Characterizing the dynamics of mental representations: the temporal generalization method. Trends in Cognitive Sciences, 18(4), 203-210.
Total running time of the script: (0 minutes 6.074 seconds)