Spectro-temporal receptive field (STRF) estimation on continuous data

This demonstrates how an encoding model can be fit with multiple continuous inputs. In this case, we simulate the model behind a spectro-temporal receptive field (or STRF). First, we create a linear filter that maps patterns in spectro-temporal space onto an output, representing neural activity. We then fit a receptive field model that attempts to recover the original linear filter used to create the data.

# Authors: Chris Holdgraf <choldgraf@gmail.com>
#          Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)


import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.decoding import ReceptiveField, TimeDelayingRidge

from scipy.stats import multivariate_normal
from scipy.io import loadmat
from sklearn.preprocessing import scale
rng = np.random.RandomState(1337)  # To make this example reproducible

Load audio data

We’ll read in the audio data from [1] in order to simulate a response.

In addition, we’ll downsample the data along the time dimension to speed up computation. Note that, depending on the input values, this may not be desired: if your input stimulus contains frequencies above half of the sampling rate to which we are downsampling, those components will be aliased.
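One way to sanity-check this before decimating is to inspect the stimulus spectrum. The helper below is a hypothetical illustration (not part of MNE): it estimates the highest frequency with substantial power, which should stay below the post-decimation Nyquist rate.

```python
import numpy as np

def max_active_freq(x, sfreq, thresh=0.01):
    # Highest frequency whose magnitude exceeds `thresh` times the
    # spectral peak -- a rough proxy for the signal's bandwidth.
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1. / sfreq)
    active = freqs[spec > thresh * spec.max()]
    return active.max() if len(active) else 0.

sfreq = 128.
t = np.arange(0, 1, 1. / sfreq)
x = np.sin(2 * np.pi * 10 * t)   # 10 Hz sinusoid
print(max_active_freq(x, sfreq))  # 10.0 -- well below the 32 Hz Nyquist
                                  # rate left after downsampling by 2
```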

# Read in audio that's been recorded in epochs.
path_audio = mne.datasets.mtrf.data_path()
data = loadmat(path_audio + '/speech_data.mat')
audio = data['spectrogram'].T
sfreq = float(data['Fs'][0, 0])
n_decim = 2
audio = mne.filter.resample(audio, down=n_decim, npad='auto')
sfreq /= n_decim

Create a receptive field

We’ll simulate a linear receptive field for a theoretical neural signal. This defines how the signal will respond to power in this receptive field space.

n_freqs = 20
tmin, tmax = -0.1, 0.4

# To simulate the data we'll create explicit delays here
delays_samp = np.arange(np.round(tmin * sfreq),
                        np.round(tmax * sfreq) + 1).astype(int)
delays_sec = delays_samp / sfreq
freqs = np.linspace(50, 5000, n_freqs)
grid = np.array(np.meshgrid(delays_sec, freqs))

# We need data to be shaped as n_epochs, n_features, n_times, so swap axes here
grid = grid.swapaxes(0, -1).swapaxes(0, 1)

# Simulate a temporal receptive field with a Gabor filter
means_high = [.1, 500]
means_low = [.2, 2500]
cov = [[.001, 0], [0, 500000]]
gauss_high = multivariate_normal.pdf(grid, means_high, cov)
gauss_low = -1 * multivariate_normal.pdf(grid, means_low, cov)
weights = gauss_high + gauss_low  # Combine to create the "true" STRF
kwargs = dict(vmax=np.abs(weights).max(), vmin=-np.abs(weights).max(),
              cmap='RdBu_r', shading='gouraud')

fig, ax = plt.subplots()
ax.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax.set(title='Simulated STRF', xlabel='Time Lags (s)', ylabel='Frequency (Hz)')
plt.setp(ax.get_xticklabels(), rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()

Simulate a neural response

Using this receptive field, we’ll create an artificial neural response to a stimulus.

To do this, we’ll create a time-delayed version of the receptive field, and then calculate the dot product between this and the stimulus. Note that this is effectively doing a convolution between the stimulus and the receptive field.
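As a minimal 1-D illustration (toy arrays, not the spectrogram above), building lagged copies of the input and taking a dot product with the weights gives the same result as `np.convolve`:

```python
import numpy as np

# Toy stimulus and filter weights at delays 0, 1, 2 samples
x = np.array([1., 2., 3., 4., 5.])
w = np.array([.5, .3, .2])

# Build a delayed (zero-padded) copy of x for each lag
X_del = np.zeros((len(w), len(x)))
for d in range(len(w)):
    X_del[d, d:] = x[:len(x) - d]

y_dot = w @ X_del                    # dot product across delays
y_conv = np.convolve(x, w)[:len(x)]  # same result via convolution
print(np.allclose(y_dot, y_conv))    # True
```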

# Reshape audio to split into epochs, then make epochs the first dimension.
n_epochs, n_seconds = 16, 5
audio = audio[:, :int(n_seconds * sfreq * n_epochs)]
X = audio.reshape([n_freqs, n_epochs, -1]).swapaxes(0, 1)
n_times = X.shape[-1]

# Delay the spectrogram according to delays so it can be combined w/ the STRF
# Lags will now be in axis 1, then we reshape to vectorize
delays = np.arange(np.round(tmin * sfreq),
                   np.round(tmax * sfreq) + 1).astype(int)

# Iterate through indices and append
X_del = np.zeros((len(delays),) + X.shape)
for ii, ix_delay in enumerate(delays):
    # These arrays will take/put particular indices in the data
    take = [slice(None)] * X.ndim
    put = [slice(None)] * X.ndim
    if ix_delay > 0:
        take[-1] = slice(None, -ix_delay)
        put[-1] = slice(ix_delay, None)
    elif ix_delay < 0:
        take[-1] = slice(-ix_delay, None)
        put[-1] = slice(None, ix_delay)
    X_del[ii][tuple(put)] = X[tuple(take)]

# Now set the delayed axis to the 2nd dimension
X_del = np.rollaxis(X_del, 0, 3)
X_del = X_del.reshape([n_epochs, -1, n_times])
n_features = X_del.shape[1]
weights_sim = weights.ravel()

# Simulate a neural response to the sound, given this STRF
y = np.zeros((n_epochs, n_times))
for ii, iep in enumerate(X_del):
    # Simulate this epoch and add random noise
    noise_amp = .002
    y[ii] = np.dot(weights_sim, iep) + noise_amp * rng.randn(n_times)

# Plot the first 2 trials of audio and the simulated electrode activity
X_plt = scale(np.hstack(X[:2]).T).T
y_plt = scale(np.hstack(y[:2]))
time = np.arange(X_plt.shape[-1]) / sfreq
_, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6), sharex=True)
ax1.pcolormesh(time, freqs, X_plt, vmin=0, vmax=4, cmap='Reds',
               shading='gouraud')
ax1.set_title('Input auditory features')
ax1.set(ylim=[freqs.min(), freqs.max()], ylabel='Frequency (Hz)')
ax2.plot(time, y_plt)
ax2.set(xlim=[time.min(), time.max()], title='Simulated response',
        xlabel='Time (s)', ylabel='Activity (a.u.)')
mne.viz.tight_layout()

Fit a model to recover this receptive field

Finally, we’ll use the mne.decoding.ReceptiveField class to recover the linear receptive field of this signal. Note that properties of the receptive field (e.g. smoothness) will depend on the autocorrelation in the inputs and outputs.
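Below, passing a float as ``estimator`` applies ridge regularization with that alpha. As a reminder of what ridge regression computes, here is a minimal closed-form sketch on hypothetical toy data (not the arrays above):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
w_true = np.array([1., -2., 0., 0.5, 3.])
y = X @ w_true + 0.01 * rng.randn(100)

def ridge(X, y, alpha):
    # Closed-form ridge solution: (X^T X + alpha * I)^{-1} X^T y
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

print(ridge(X, y, 1.0))  # close to w_true, slightly shrunk toward zero
```

Larger alphas shrink the coefficients more aggressively, trading bias for variance; the loop below searches over several alphas and keeps the one that scores best on held-out data.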

# Create training and testing data
train, test = np.arange(n_epochs - 1), n_epochs - 1
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
X_train, X_test, y_train, y_test = [np.rollaxis(ii, -1, 0) for ii in
                                    (X_train, X_test, y_train, y_test)]
# Model the simulated data as a function of the spectrogram input
alphas = np.logspace(-3, 3, 7)
scores = np.zeros_like(alphas)
models = []
for ii, alpha in enumerate(alphas):
    rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=alpha)
    rf.fit(X_train, y_train)

    # Now make predictions about the model output, given input stimuli.
    scores[ii] = rf.score(X_test, y_test)
    models.append(rf)

times = rf.delays_ / float(rf.sfreq)

# Choose the model that performed best on the held out data
ix_best_alpha = np.argmax(scores)
best_mod = models[ix_best_alpha]
coefs = best_mod.coef_[0]
best_pred = best_mod.predict(X_test)[:, 0]

# Plot the original STRF, and the one that we recovered with modeling.
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(6, 3), sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, coefs, **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Reconstructed STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()

# Plot the actual response and the predicted response on a held out stimulus
time_pred = np.arange(best_pred.shape[0]) / sfreq
fig, ax = plt.subplots()
ax.plot(time_pred, y_test, color='k', alpha=.2, lw=4)
ax.plot(time_pred, best_pred, color='r', lw=1)
ax.set(title='Original and predicted activity', xlabel='Time (s)')
ax.legend(['Original', 'Predicted'])
plt.autoscale(tight=True)
mne.viz.tight_layout()

Out:

Fitting 15 epochs, 20 channels
(progress output repeated for each of the 7 ridge parameters)

Visualize the effects of regularization

Above we fit a mne.decoding.ReceptiveField model for one of many values for the ridge regularization parameter. Here we will plot the model score as well as the model coefficients for each value, in order to visualize how coefficients change with different levels of regularization. These issues, as well as the STRF pipeline, are described in detail in [2], [3], and [4].
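By default, ``ReceptiveField.score`` reports the coefficient of determination (R²). A minimal sketch of that computation on toy 1-D arrays:

```python
import numpy as np

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1. - ss_res / ss_tot

y = np.array([1., 2., 3., 4.])
print(r2_score(y, y))        # 1.0 for a perfect prediction
print(r2_score(y, y[::-1]))  # negative for worse-than-mean predictions
```

A score of 0 corresponds to predicting the mean of the held-out response; values can be arbitrarily negative for poor models.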

# Plot model score for each ridge parameter
fig = plt.figure(figsize=(10, 4))
ax = plt.subplot2grid([2, len(alphas)], [1, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores, marker='o', color='r')
ax.annotate('Best parameter', (ix_best_alpha, scores[ix_best_alpha]),
            (ix_best_alpha, scores[ix_best_alpha] - .1),
            arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Ridge regularization value", ylabel="Score ($R^2$)",
       xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()

# Plot the STRF of each ridge parameter
for ii, (rf, i_alpha) in enumerate(zip(models, alphas)):
    ax = plt.subplot2grid([2, len(alphas)], [0, ii], 1, 1)
    ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
    plt.xticks([], [])
    plt.yticks([], [])
    plt.autoscale(tight=True)
fig.suptitle('Model coefficients / scores for many ridge parameters', y=1)
mne.viz.tight_layout()

Using different regularization types

In addition to the standard ridge regularization, the mne.decoding.TimeDelayingRidge class also exposes a Laplacian regularization term:

\[\begin{split}\left[\begin{matrix} 1 & -1 & & & & \\ -1 & 2 & -1 & & & \\ & -1 & 2 & -1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & -1 & 2 & -1 \\ & & & & -1 & 1\end{matrix}\right]\end{split}\]

This imposes a smoothness constraint across nearby time samples and/or features. Quoting [1]:

Tikhonov [identity] regularization (Equation 5) reduces overfitting by smoothing the TRF estimate in a way that is insensitive to the amplitude of the signal of interest. However, the Laplacian approach (Equation 6) reduces off-sample error whilst preserving signal amplitude (Lalor et al., 2006). As a result, this approach usually leads to an improved estimate of the system’s response (as indexed by MSE) compared to Tikhonov regularization.
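The regularization matrix shown above can be constructed directly. A minimal NumPy sketch (the helper name is hypothetical):

```python
import numpy as np

def laplacian_reg(n):
    # Second-difference matrix matching the one shown above:
    # 2 on the diagonal, -1 on the first off-diagonals, and 1 in the
    # two corners so the endpoints are penalized less.
    mat = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    mat[0, 0] = mat[-1, -1] = 1
    return mat

print(laplacian_reg(4))
```

Using alpha times this matrix in place of alpha times the identity penalizes differences between adjacent coefficients, which is what encourages the smoother STRF estimates.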

scores_lap = np.zeros_like(alphas)
models_lap = []
for ii, alpha in enumerate(alphas):
    estimator = TimeDelayingRidge(tmin, tmax, sfreq, reg_type='laplacian',
                                  alpha=alpha)
    rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=estimator)
    rf.fit(X_train, y_train)

    # Now make predictions about the model output, given input stimuli.
    scores_lap[ii] = rf.score(X_test, y_test)
    models_lap.append(rf)

ix_best_alpha_lap = np.argmax(scores_lap)

Out:

Fitting 15 epochs, 20 channels
(progress output repeated for each of the 7 Laplacian parameters)

Compare model performance

Below we visualize the model performance of each regularization method (ridge vs. Laplacian) for different levels of alpha. As you can see, the Laplacian method performs better in general, because it imposes a smoothness constraint along the time and feature dimensions of the coefficients. This matches the “true” receptive field structure and results in a better model fit.

fig = plt.figure(figsize=(10, 6))
ax = plt.subplot2grid([3, len(alphas)], [2, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores_lap, marker='o', color='r')
ax.plot(np.arange(len(alphas)), scores, marker='o', color='0.5', ls=':')
ax.annotate('Best Laplacian', (ix_best_alpha_lap,
                               scores_lap[ix_best_alpha_lap]),
            (ix_best_alpha_lap, scores_lap[ix_best_alpha_lap] - .1),
            arrowprops={'arrowstyle': '->'})
ax.annotate('Best Ridge', (ix_best_alpha, scores[ix_best_alpha]),
            (ix_best_alpha, scores[ix_best_alpha] - .1),
            arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Laplacian regularization value", ylabel="Score ($R^2$)",
       xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()

# Plot the STRF of each ridge parameter
xlim = times[[0, -1]]
for ii, (rf_lap, rf, i_alpha) in enumerate(zip(models_lap, models, alphas)):
    ax = plt.subplot2grid([3, len(alphas)], [0, ii], 1, 1)
    ax.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
    ax.set(xticks=[], yticks=[], xlim=xlim)
    if ii == 0:
        ax.set(ylabel='Laplacian')
    ax = plt.subplot2grid([3, len(alphas)], [1, ii], 1, 1)
    ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
    ax.set(xticks=[], yticks=[], xlim=xlim)
    if ii == 0:
        ax.set(ylabel='Ridge')
fig.suptitle('Model coefficients / scores for laplacian regularization', y=1)
mne.viz.tight_layout()

Plot the original STRF, and the one that we recovered with modeling.

rf = models[ix_best_alpha]
rf_lap = models_lap[ix_best_alpha_lap]
_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(9, 3),
                                  sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
ax3.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Ridge STRF')
ax3.set_title('Best Laplacian STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2, ax3]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()

References

[1] Michael J. Crosse, Giovanni M. Di Liberto, Adam Bednar, and Edmund C. Lalor. The multivariate temporal response function (mTRF) toolbox: a MATLAB toolbox for relating neural signals to continuous stimuli. Frontiers in Human Neuroscience, 2016. doi:10.3389/fnhum.2016.00604.

[2] Frédéric E. Theunissen, Stephen V. David, Nandini C. Singh, Ann Hsu, William E. Vinje, and Jack L. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network: Computation in Neural Systems, 12(3):289–316, 2001. doi:10.1080/net.12.3.289.316.

[3] Ben Willmore and Darragh Smyth. Methods for first-order kernel estimation: simple-cell receptive fields from responses to natural scenes. Network: Computation in Neural Systems, 14(3):553–577, 2003. doi:10.1088/0954-898X_14_3_309.

[4] Christopher R. Holdgraf, Wendy de Heer, Brian Pasley, Jochem Rieger, Nathan Crone, Jack J. Lin, Robert T. Knight, and Frédéric E. Theunissen. Rapid tuning shifts in human auditory cortex enhance speech intelligibility. Nature Communications, 2016. doi:10.1038/ncomms13654.

Total running time of the script: (0 minutes 25.493 seconds)

Estimated memory usage: 9 MB
