mne.decoding.GeneralizationAcrossTime

class mne.decoding.GeneralizationAcrossTime(picks=None, cv=5, clf=None, train_times=None, test_times=None, predict_method='predict', predict_mode='cross-validation', scorer=None, score_mode='mean-fold-wise', n_jobs=1)

Generalize across time and conditions

Creates an estimator object used to 1) fit a series of classifiers on multidimensional time-resolved data, and 2) test the ability of each classifier to generalize across other time samples, as in [R26].

Parameters:

picks : array-like of int | None

The channel indices to include. If None, the data channels in info, except bad channels, are used.

cv : int | object

If an integer is passed, it is the number of folds. Specific cross-validation objects can also be passed; see the scikit-learn cross_validation module for the list of possible objects. If clf is a classifier, defaults to StratifiedKFold(n_folds=5), else defaults to KFold(n_folds=5).

clf : object | None

An estimator compliant with the scikit-learn API (fit & predict). If None the classifier will be a standard pipeline including StandardScaler and LogisticRegression with default parameters.

train_times : dict | None

A dictionary to configure the training times:

  • slices : ndarray, shape (n_clfs,)
    Array of time slices (in indices) used for each classifier. If not given, computed from ‘start’, ‘stop’, ‘length’, ‘step’.

  • start : float
    Time at which to start decoding (in seconds). Defaults to min(epochs.times).

  • stop : float
    Maximal time at which to stop decoding (in seconds). Defaults to max(epochs.times).

  • step : float
    Duration separating the start of subsequent classifiers (in seconds). Defaults to one time sample.

  • length : float
    Duration of each classifier (in seconds). Defaults to one time sample.
If None, empty dict (see the configuration sketch after this parameter list).

test_times : ‘diagonal’ | dict | None, optional

Configures the testing times. If set to ‘diagonal’, predictions are made at the time at which each classifier is trained. If set to None, predictions are made at all time points. If set to a dict, it should contain slices or be constructed in a similar way to train_times:

  • slices : ndarray, shape (n_clfs,)
    Array of time slices (in indices) used for each classifier. If not given, computed from ‘start’, ‘stop’, ‘length’, ‘step’.

If None, empty dict.

predict_method : str

Name of the method used to make predictions from the estimator. For example, both predict_proba and predict are supported for sklearn.linear_model.LogisticRegression. Note that the scorer must be adapted to the prediction outputs of the method. Defaults to ‘predict’.

predict_mode : {‘cross-validation’, ‘mean-prediction’}

Indicates how predictions are achieved with regard to the cross-validation procedure:

  • cross-validation : estimates a single prediction per sample, based on the unique independent classifier fitted in the cross-validation.

  • mean-prediction : estimates k predictions per sample, based on each of the k-fold cross-validation classifiers, and averages these predictions into a single estimate per sample.

Defaults to ‘cross-validation’.

scorer : object | None | str

scikit-learn scorer instance, or a string naming the scorer, such as ‘accuracy’ or ‘roc_auc’. If None, set to accuracy.

score_mode : {‘fold-wise’, ‘mean-fold-wise’, ‘mean-sample-wise’}

Determines how the scorer is estimated:

  • fold-wise : returns the score obtained in each fold.

  • mean-fold-wise : returns the average of the fold-wise scores.

  • mean-sample-wise : returns the score estimated across all y_pred, independently of the cross-validation. This method is faster than mean-fold-wise but less conventional; use at your own risk.

Defaults to ‘mean-fold-wise’.

n_jobs : int

Number of jobs to run in parallel. Defaults to 1.
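As a hedged illustration of how these parameters combine (the window values below are arbitrary, and the scorer must match the output of predict_method, as noted above), a non-default configuration could look like:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    from mne.decoding import GeneralizationAcrossTime

    # Hypothetical configuration: classifiers trained on 50 ms windows stepped
    # every 10 ms between 0 and 400 ms, tested only at their own training time,
    # with probabilistic outputs scored by ROC AUC.
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    gat = GeneralizationAcrossTime(clf=clf,
                                   train_times=dict(start=0., stop=0.4,
                                                    step=0.01, length=0.05),
                                   test_times='diagonal',
                                   predict_method='predict_proba',
                                   scorer='roc_auc',
                                   n_jobs=1)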

See also

TimeDecoding

References

[R26] Jean-Remi King, Alexandre Gramfort, Aaron Schurger, Lionel Naccache and Stanislas Dehaene, “Two distinct dynamic modes subtend the detection of unexpected sounds”, PLoS ONE, 2014. DOI: 10.1371/journal.pone.0085791

New in version 0.9.0.
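A minimal end-to-end sketch, assuming epochs is an existing two-condition mne.Epochs object (not defined here):

    from mne.decoding import GeneralizationAcrossTime

    gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1)
    gat.fit(epochs)                 # one classifier per training time slice
    gat.score(epochs)               # predict and score at every testing time
    fig_matrix = gat.plot()         # full temporal generalization matrix
    fig_diag = gat.plot_diagonal()  # scores where train time == test time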

Attributes

picks_ (array-like of int | None) The channel indices included.
ch_names (list, array-like, shape (n_channels,)) Names of the channels used for training.
y_train_ (list | ndarray, shape (n_samples,)) The categories used for training.
train_times_ (dict) A dictionary that configures the training times: slices : ndarray, shape (n_clfs,), the time slices (in indices) used for each classifier, computed from ‘start’, ‘stop’, ‘length’, ‘step’ if not given; times : ndarray, shape (n_clfs,), the training times (in seconds).
test_times_ (dict) A dictionary that configures the testing times for each training time: slices : ndarray, shape (n_clfs, n_testing_times), the time slices (in indices) used for each classifier; times : ndarray, shape (n_clfs, n_testing_times), the testing times (in seconds) for each training time.
cv_ (CrossValidation object) The actual CrossValidation input depending on y.
estimators_ (list of list of scikit-learn.base.BaseEstimator subclasses.) The estimators for each time point and each fold.
y_pred_ (list of lists of arrays of floats, shape (n_train_times, n_test_times, n_epochs, n_prediction_dims)) The single-trial predictions estimated by self.predict() at each training time and each testing time. Note that the number of testing times per training time need not be the same; when it is, np.shape(y_pred_) = (n_train_times, n_test_times, n_epochs, n_prediction_dims).
y_true_ (list | ndarray, shape (n_samples,)) The categories used for scoring y_pred_.
scorer_ (object) scikit-learn Scorer instance.
scores_ (list of lists of float) The scores estimated by self.scorer_ at each training time and each testing time (e.g. mean accuracy of self.predict(X)). Note that the number of testing times per training time need not be the same; when it is, np.shape(scores_) = (n_train_times, n_test_times).

Methods

__hash__() <==> hash(x)
fit(epochs[, y]) Train a classifier on each specified time slice.
plot([title, vmin, vmax, tlim, ax, cmap, ...]) Plotting function of GeneralizationAcrossTime object
plot_diagonal([title, xmin, xmax, ymin, ...]) Plotting function of GeneralizationAcrossTime object
plot_times(train_time[, title, xmin, xmax, ...]) Plotting function of GeneralizationAcrossTime object
predict(epochs) Classifiers’ predictions on each specified testing time slice.
score([epochs, y]) Score Epochs
fit(epochs, y=None)

Train a classifier on each specified time slice.

Note

This function sets the picks_, ch_names, cv_, y_train_, train_times_ and estimators_ attributes.

Parameters:

epochs : instance of Epochs

The epochs.

y : list or ndarray of int, shape (n_samples,) or None, optional

The target values to fit. If None, y = epochs.events[:, 2].

Returns:

self : GeneralizationAcrossTime

Returns fitted GeneralizationAcrossTime object.

Notes

If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied.

If X is a dense array, then the other methods will not support sparse matrices as input.
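For example (a sketch assuming gat and epochs already exist; the relabelling below is hypothetical), labels can be supplied explicitly instead of relying on epochs.events[:, 2]:

    # Binarize the event codes of an existing `epochs` object before fitting.
    y = (epochs.events[:, 2] == epochs.events[:, 2].max()).astype(int)
    gat.fit(epochs, y=y)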

plot(title=None, vmin=None, vmax=None, tlim=None, ax=None, cmap='RdBu_r', show=True, colorbar=True, xlabel=True, ylabel=True)

Plotting function of GeneralizationAcrossTime object

Plot the score of each classifier at each tested time window.

Parameters:

title : str | None

Figure title.

vmin : float | None

Min color value for scores. If None, sets to min(gat.scores_).

vmax : float | None

Max color value for scores. If None, sets to max(gat.scores_).

tlim : ndarray, (train_min, test_max) | None

The time limits used for plotting.

ax : object | None

Instance of matplotlib.axes.Axes. If None, a new figure is generated.

cmap : str | cmap object

The color map to be used. Defaults to 'RdBu_r'.

show : bool

If True, the figure will be shown. Defaults to True.

colorbar : bool

If True, the colorbar of the figure is displayed. Defaults to True.

xlabel : bool

If True, the xlabel is displayed. Defaults to True.

ylabel : bool

If True, the ylabel is displayed. Defaults to True.

Returns:

fig : instance of matplotlib.figure.Figure

The figure.
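For instance, assuming gat has already been scored, the color range can be fixed symmetrically around chance for a balanced binary problem (limits are illustrative):

    # 0.5 is chance level for balanced binary labels.
    fig = gat.plot(vmin=0.3, vmax=0.7, title='Temporal generalization')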

plot_diagonal(title=None, xmin=None, xmax=None, ymin=None, ymax=None, ax=None, show=True, color=None, xlabel=True, ylabel=True, legend=True, chance=True, label='Classif. score')

Plotting function of GeneralizationAcrossTime object

Plot each classifier score trained and tested at identical time windows.

Parameters:

title : str | None

Figure title.

xmin : float | None, optional

Min time value.

xmax : float | None, optional

Max time value.

ymin : float | None, optional

Min score value. If None, sets to min(scores).

ymax : float | None, optional

Max score value. If None, sets to max(scores).

ax : object | None

Instance of matplotlib.axes.Axes. If None, generate new figure.

show : bool

If True, the figure will be shown. Defaults to True.

color : str

Score line color.

xlabel : bool

If True, the xlabel is displayed. Defaults to True.

ylabel : bool

If True, the ylabel is displayed. Defaults to True.

legend : bool

If True, a legend is displayed. Defaults to True.

chance : bool | float

Plot chance level. If True, chance level is estimated from the type of scorer. Defaults to True.

label : str

Score label used in the legend. Defaults to ‘Classif. score’.

Returns:

fig : instance of matplotlib.figure.Figure

The figure.
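For instance, an explicit chance level can be drawn instead of the scorer-based estimate (0.5 assumes balanced binary classes):

    fig = gat.plot_diagonal(chance=0.5, label='Decoding score')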

plot_times(train_time, title=None, xmin=None, xmax=None, ymin=None, ymax=None, ax=None, show=True, color=None, xlabel=True, ylabel=True, legend=True, chance=True, label='Classif. score')

Plotting function of GeneralizationAcrossTime object

Plot the scores of the classifier trained at specific training time(s).

Parameters:

train_time : float | list or array of float

Plots scores of the classifier trained at train_time.

title : str | None

Figure title.

xmin : float | None, optional

Min time value.

xmax : float | None, optional

Max time value.

ymin : float | None, optional

Min score value. If None, sets to min(scores).

ymax : float | None, optional

Max score value. If None, sets to max(scores).

ax : object | None

Instance of matplotlib.axes.Axes. If None, generate new figure.

show : bool

If True, the figure will be shown. Defaults to True.

color : str or list of str

Score line color(s).

xlabel : bool

If True, the xlabel is displayed. Defaults to True.

ylabel : bool

If True, the ylabel is displayed. Defaults to True.

legend : bool

If True, a legend is displayed. Defaults to True.

chance : bool | float

Plot chance level. If True, chance level is estimated from the type of scorer. Defaults to True.

label : str

Score label used in the legend. Defaults to ‘Classif. score’.

Returns:

fig : instance of matplotlib.figure.Figure

The figure.
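For instance (the training times below are illustrative and must lie within the fitted range):

    # Generalization curves for classifiers trained at 100 ms and 200 ms.
    fig = gat.plot_times([0.1, 0.2], title='Generalization from 0.1 s and 0.2 s')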

predict(epochs)

Classifiers’ predictions on each specified testing time slice.

Note

This function sets the y_pred_ and test_times_ attributes.

Parameters:

epochs : instance of Epochs

The epochs. They can be the same as the fitted epochs or different ones; see the predict_mode parameter.

Returns:

y_pred : list of lists of arrays of floats, shape (n_train_t, n_test_t, n_epochs, n_prediction_dims)

The single-trial predictions at each training time and each testing time. Note that the number of testing times per training time need not be the same; when it is, np.shape(y_pred_) = (n_train_times, n_test_times, n_epochs, n_prediction_dims).
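A sketch of generalization across conditions, assuming epochs_train and epochs_test are existing mne.Epochs objects sharing channels and times:

    from mne.decoding import GeneralizationAcrossTime

    # 'mean-prediction' averages the cross-validated classifiers' predictions,
    # which is appropriate when the test epochs differ from the training epochs.
    gat = GeneralizationAcrossTime(predict_mode='mean-prediction')
    gat.fit(epochs_train)
    y_pred = gat.predict(epochs_test)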

score(epochs=None, y=None)

Score Epochs

Estimate scores across trials by comparing the prediction estimated for each trial to its true value.

Calls predict() if it has not been already.

Note

The function updates the scorer_, scores_, and y_true_ attributes.

Note

If predict_mode is ‘mean-prediction’, score_mode is automatically set to ‘mean-sample-wise’.

Parameters:

epochs : instance of Epochs | None, optional

The epochs. They can be the same as the fitted epochs or different ones. If None, scoring relies on the predictions y_pred_ already generated with predict().

y : list | ndarray, shape (n_epochs,) | None, optional

True values to be compared with the predictions y_pred_ generated with predict() via scorer_. If None and predict_mode == ‘cross-validation’, y is set to y_train_.

Returns:

scores : list of lists of float

The scores estimated by scorer_ at each training time and each testing time (e.g. mean accuracy of predict(X)). Note that the number of testing times per training time need not be the same; when it is, np.shape(scores) = (n_train_times, n_test_times). If score_mode is ‘fold-wise’, np.shape(scores) = (n_train_times, n_test_times, n_folds).
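Continuing the cross-condition sketch above (the labels below are hypothetical), the predictions can then be scored against explicit values:

    # Binary labels for the test epochs, derived from their event codes.
    y_test = (epochs_test.events[:, 2] == epochs_test.events[:, 2].max()).astype(int)
    scores = gat.score(epochs_test, y=y_test)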