Decoding sensor space data with Generalization Across Time

This example runs the analysis computed in:

Jean-Remi King, Alexandre Gramfort, Aaron Schurger, Lionel Naccache and Stanislas Dehaene, “Two distinct dynamic modes subtend the detection of unexpected sounds”, PLOS ONE, 2013.

The idea is to train a decoder at one time instant and assess whether it can predict accurately at other time points, revealing how neural codes generalize over time.
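The train-at-one-time / test-at-all-times procedure can be sketched directly with scikit-learn on simulated data (the array shapes and injected signal here are purely illustrative; `GeneralizationAcrossTime` performs the equivalent computation, with cross-validation, on the real epochs below):

```python
# Minimal sketch of generalization across time (GAT) on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
n_epochs, n_channels, n_times = 100, 10, 20
X = rng.randn(n_epochs, n_channels, n_times)  # epochs x channels x times
y = rng.randint(0, 2, n_epochs)               # binary condition labels
# Inject a class-dependent signal in a window of time points
X[y == 1, :, 5:15] += 0.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = np.zeros((n_times, n_times))
for t_train in range(n_times):
    # Train a classifier on the data from a single time point ...
    clf = LogisticRegression().fit(X_train[:, :, t_train], y_train)
    for t_test in range(n_times):
        # ... and test it at every other time point
        scores[t_train, t_test] = clf.score(X_test[:, :, t_test], y_test)

# scores[i, j] is the accuracy of the decoder trained at time i and
# tested at time j; the diagonal is the usual "decoding over time" curve.
```

Plotting `scores` as a matrix gives the generalization-across-time map; the diagonal corresponds to training and testing at the same time point.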

# Authors: Jean-Remi King <>
#          Alexandre Gramfort <>
#          Denis Engemann <>
# License: BSD (3-clause)

import mne
from mne.datasets import spm_face
from mne.decoding import GeneralizationAcrossTime


# Preprocess data
data_path = spm_face.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D_raw.fif'

raw = mne.io.read_raw_fif(raw_fname % 1, preload=True)  # Take first run

picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, 45, method='iir')

events = mne.find_events(raw, stim_channel='UPPT001')
event_id = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.1, 0.5

decim = 4  # decimate to make the example faster to run
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
                    picks=picks, baseline=None, preload=True,
                    reject=dict(mag=1.5e-12), decim=decim, verbose=False)

# Define the decoder. With predict_mode='cross-validation', predictions are
# made on the left-out fold of each cross-validation split.
gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1)

# Fit and score the decoder, then plot the full generalization matrix
gat.fit(epochs)
gat.score(epochs)
gat.plot(vmin=0.1, vmax=0.9,
         title="Generalization Across Time (faces vs. scrambled)")
gat.plot_diagonal()  # plot decoding over time (corresponds to the GAT diagonal)

Total running time of the script: (0 minutes 11.471 seconds)
