Decoding sensor space data with generalization across time and conditions
This example runs the analysis described in [1]. It illustrates how one can fit a linear classifier to identify a discriminative topography at a given time instant, and subsequently assess whether this linear model can accurately predict all of the time samples of a second set of conditions.
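The core idea can be sketched outside of MNE on synthetic data: fit one classifier per training time point and evaluate it at every testing time point, yielding a (training time × testing time) score matrix. Every array shape and value below is made up purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 40, 8, 5
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)
# Inject a class-dependent offset at every time point so decoding can succeed.
X[y == 1] += 1.0

# Fit one classifier per training time, evaluate it at every testing time.
scores = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression().fit(X[:, :, t_train], y)
    for t_test in range(n_times):
        scores[t_train, t_test] = clf.score(X[:, :, t_test], y)

print(scores.shape)  # (n_times, n_times): training time x testing time
```

`GeneralizingEstimator` below does essentially this, but vectorized, with cross-validated scoring and progress reporting.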
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
meg_path = data_path / "MEG" / "sample"
raw_fname = meg_path / "sample_audvis_filt-0-40_raw.fif"
events_fname = meg_path / "sample_audvis_filt-0-40_raw-eve.fif"
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude="bads") # Pick MEG channels
raw.filter(1.0, 30.0, fir_design="firwin") # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {
    "Auditory/Left": 1,
    "Auditory/Right": 2,
    "Visual/Left": 3,
    "Visual/Right": 4,
}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(
    raw,
    events,
    event_id=event_id,
    tmin=tmin,
    tmax=tmax,
    proj=True,
    picks=picks,
    baseline=None,
    preload=True,
    reject=dict(mag=5e-12),
    decim=decim,
    verbose="error",
)
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
Read a total of 4 projection items:
    PCA-v1 (1 x 102) idle
    PCA-v2 (1 x 102) idle
    PCA-v3 (1 x 102) idle
    Average EEG reference (1 x 60) idle
Range : 6450 ... 48149 = 42.956 ... 320.665 secs
Ready.
Reading 0 ... 41699 = 0.000 ... 277.709 secs...
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 1 - 30 Hz
FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 1.00
- Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
- Upper passband edge: 30.00 Hz
- Upper transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 33.75 Hz)
- Filter length: 497 samples (3.310 s)
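The filter summary above can be reproduced approximately with SciPy. This is a sketch of a comparable 1–30 Hz band-pass FIR design (Hamming window, 497 taps), not MNE's exact filter; the sampling rate follows from the log above (497 samples over 3.310 s ≈ 150.15 Hz).

```python
import numpy as np
from scipy.signal import firwin, freqz

sfreq = 150.15  # implied by the 497-sample / 3.310 s filter length above
taps = firwin(497, [1.0, 30.0], pass_zero=False, window="hamming", fs=sfreq)
w, h = freqz(taps, worN=4096, fs=sfreq)

gain = np.abs(h)
passband = gain[(w > 2.0) & (w < 29.0)]  # well inside 1-30 Hz
stopband = gain[w > 40.0]                # well above the upper edge
# Passband gain stays near 1; stopband gain is tiny (~-53 dB for Hamming).
print(round(float(passband.min()), 3), round(float(stopband.max()), 4))
```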
We will train the classifier on all left visual vs auditory trials and test on all right visual vs auditory trials.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(solver="liblinear"),  # liblinear is faster than lbfgs
)
time_gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=None, verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs["Left"].get_data(), y=epochs["Left"].events[:, 2] > 2)
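A standalone sketch of how the boolean target vector is built: event codes 1 and 2 are auditory, 3 and 4 are visual, so comparing the third column of the events array against 2 marks visual trials as the positive class (the event sequence below is hypothetical).

```python
import numpy as np

# The third column of an MNE events array holds the event code; this
# particular sequence is made up, just to show the thresholding.
event_codes = np.array([1, 3, 2, 4, 1, 4])
y = event_codes > 2  # True for visual (3, 4), False for auditory (1, 2)
print(y.tolist())  # [False, True, False, True, False, True]
```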
Score on the epochs where the stimulus was presented to the right.
scores = time_gen.score(
    X=epochs["Right"].get_data(), y=epochs["Right"].events[:, 2] > 2
)
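`scores` is a (n_training_times, n_testing_times) matrix; its diagonal corresponds to ordinary decoding where training and testing times coincide. A toy stand-in matrix illustrates the indexing (the real values come from `time_gen.score` above):

```python
import numpy as np

# Toy stand-in for the generalization matrix: rows are training times,
# columns are testing times.
scores_demo = np.array([[0.90, 0.60, 0.55],
                        [0.58, 0.85, 0.62],
                        [0.52, 0.61, 0.80]])
diag = np.diag(scores_demo)  # train time == test time
print(diag.tolist())  # [0.9, 0.85, 0.8]
```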
Plot the full (training time × testing time) generalization matrix.
fig, ax = plt.subplots(constrained_layout=True)
im = ax.matshow(
    scores,
    vmin=0.0,
    vmax=1.0,
    cmap="RdBu_r",
    origin="lower",
    extent=epochs.times[[0, -1, 0, -1]],
)
ax.axhline(0.0, color="k")
ax.axvline(0.0, color="k")
ax.xaxis.set_ticks_position("bottom")
ax.set_xlabel('Condition: "Right"\nTesting Time (s)')
ax.set_ylabel('Condition: "Left"\nTraining Time (s)')
ax.set_title("Generalization across time and condition", fontweight="bold")
fig.colorbar(im, ax=ax, label="Performance (ROC AUC)")
plt.show()
References

[1] King, J.-R., & Dehaene, S. (2014). Characterizing the dynamics of mental representations: the temporal generalization method. Trends in Cognitive Sciences, 18(4), 203-210.
Total running time of the script: (0 minutes 9.148 seconds)
Estimated memory usage: 129 MB