Decoding sensor space data with generalization across time and conditions

This example runs the analysis described in [1]. It illustrates how one can fit a linear classifier to identify a discriminatory topography at a given time instant and subsequently assess whether this linear model can accurately predict all of the time samples of a second set of conditions.
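This fit-at-one-time, test-at-all-times logic is exactly what GeneralizingEstimator (used below) automates: one classifier is fit per training time sample, and each of those classifiers is then scored at every testing time sample of a held-out set of conditions. The following minimal sketch illustrates the idea with a plain nested loop; it assumes hypothetical X_train, y_train, X_test, y_test arrays of shape (n_epochs, n_channels, n_times) and (n_epochs,), and omits standardization for brevity.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def temporal_generalization(X_train, y_train, X_test, y_test):
    # One classifier per training time, evaluated at every testing time.
    n_train_times, n_test_times = X_train.shape[2], X_test.shape[2]
    scores = np.empty((n_train_times, n_test_times))
    for i in range(n_train_times):
        clf = LogisticRegression(solver='liblinear')
        clf.fit(X_train[:, :, i], y_train)
        for j in range(n_test_times):
            proba = clf.predict_proba(X_test[:, :, j])[:, 1]
            scores[i, j] = roc_auc_score(y_test, proba)
    return scores  # scores[i, j]: trained at time i, tested at time j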

# Authors: Jean-Remi King <jeanremi.king@gmail.com>
#          Alexandre Gramfort <alexandre.gramfort@inria.fr>
#          Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator

print(__doc__)

# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
events_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude='bads')  # Pick MEG channels
raw.filter(1., 30., fir_design='firwin')  # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
            'Visual/Left': 3, 'Visual/Right': 4}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
                    proj=True, picks=picks, baseline=None, preload=True,
                    reject=dict(mag=5e-12), decim=decim, verbose='error')
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
    Range : 6450 ... 48149 =     42.956 ...   320.665 secs
Ready.
Reading 0 ... 41699  =      0.000 ...   277.709 secs...
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 1 - 30 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 1.00
- Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
- Upper passband edge: 30.00 Hz
- Upper transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 33.75 Hz)
- Filter length: 497 samples (3.310 sec)

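As an optional sanity check (not part of the original example), decim=2 roughly halves the ~150 Hz sampling rate of this dataset, which is why the epochs end up with 35 time samples each:

print(epochs.info['sfreq'])   # ~75 Hz after decimation
print(len(epochs.times))      # 35 samples between -0.05 s and 0.4 s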

We will train the classifier on all left visual vs auditory trials and test on all right visual vs auditory trials.
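In the code below, the binary target is derived directly from the numeric event codes: events[:, 2] > 2 is True for the visual conditions (codes 3 and 4) and False for the auditory ones (codes 1 and 2). An equivalent, arguably more explicit construction (a sketch; visual_codes and y_left are illustrative names) builds the same labels from the event_id mapping:

import numpy as np

# Codes 3 and 4 correspond to 'Visual/Left' and 'Visual/Right' in event_id.
visual_codes = [code for name, code in event_id.items()
                if name.startswith('Visual')]
y_left = np.isin(epochs['Left'].events[:, 2], visual_codes)  # True = visual trial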

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(solver='liblinear')  # liblinear is faster than lbfgs
)
time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=None,
                                 verbose=True)

# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs['Left'].get_data(),
             y=epochs['Left'].events[:, 2] > 2)
Fitting GeneralizingEstimator: 35/35 [00:00<00:00, 74.08it/s]

Score on the epochs where the stimulus was presented to the right.

scores = time_gen.score(X=epochs['Right'].get_data(),
                        y=epochs['Right'].events[:, 2] > 2)
Scoring GeneralizingEstimator: 1225/1225 [00:02<00:00, 414.40it/s]
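The resulting scores array has one row per training time and one column per testing time. A quick optional check (not part of the original example) confirms the shape and extracts the matched-time scores along the diagonal:

import numpy as np

print(scores.shape)            # (n_times, n_times), here 35 x 35
diag_scores = np.diag(scores)  # training time == testing time
print(diag_scores.max())       # peak cross-condition ROC AUC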

Plot

fig, ax = plt.subplots(1)
im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',
                extent=epochs.times[[0, -1, 0, -1]])
ax.axhline(0., color='k')
ax.axvline(0., color='k')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Generalization across time and condition')
plt.colorbar(im, ax=ax)
plt.show()
[Figure: Generalization across time and condition (training time vs. testing time matrix of ROC AUC scores)]
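As an optional follow-up (a sketch, not part of the original example), the diagonal of the matrix can also be drawn as a time course, showing how well a classifier trained and tested at the same instant generalizes from the left to the right stimuli:

import numpy as np

fig_diag, ax_diag = plt.subplots(1)
ax_diag.plot(epochs.times, np.diag(scores), label='train time == test time')
ax_diag.axhline(0.5, color='k', linestyle='--', label='chance (AUC = 0.5)')
ax_diag.axvline(0., color='k')
ax_diag.set_xlabel('Time (s)')
ax_diag.set_ylabel('ROC AUC')
ax_diag.set_title('Decoding over time (left -> right generalization)')
ax_diag.legend()
plt.show()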

References

[1] King, J.-R., & Dehaene, S. (2014). Characterizing the dynamics of mental representations: the temporal generalization method. Trends in Cognitive Sciences, 18(4), 203-210.

Total running time of the script: (0 minutes 8.033 seconds)

Estimated memory usage: 128 MB
