Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least one condition using a permutation clustering test. The FieldTrip neighbor templates will be used to determine the adjacency between sensors. This serves as a spatial prior to the clustering. Spatiotemporal clusters will then be visualized using custom matplotlib code.
Here, the unit of observation is epochs from a specific study subject. However, the same logic applies when the unit of observation is a number of study subjects, each of whom contributes their own averaged data (i.e., an average of their epochs). This would then be considered an analysis at the “2nd level”.
See the FieldTrip tutorial for a caveat regarding the possible interpretation of “significant” clusters.
For more information on cluster-based permutation testing in MNE-Python, see also: Non-parametric 1 sample cluster statistic on single trial power.
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
# Alex Rockhill <aprockhill@mailbox.org>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats
from mpl_toolkits.axes_grid1 import make_axes_locatable
import mne
from mne.channels import find_ch_adjacency
from mne.datasets import sample
from mne.stats import combine_adjacency, spatio_temporal_cluster_test
from mne.viz import plot_compare_evokeds
Set parameters
data_path = sample.data_path()
meg_path = data_path / "MEG" / "sample"
raw_fname = meg_path / "sample_audvis_filt-0-40_raw.fif"
event_fname = meg_path / "sample_audvis_filt-0-40_raw-eve.fif"
event_id = {"Aud/L": 1, "Aud/R": 2, "Vis/L": 3, "Vis/R": 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 25)
events = mne.read_events(event_fname)
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
Read a total of 4 projection items:
PCA-v1 (1 x 102) idle
PCA-v2 (1 x 102) idle
PCA-v3 (1 x 102) idle
Average EEG reference (1 x 60) idle
Range : 6450 ... 48149 = 42.956 ... 320.665 secs
Ready.
Reading 0 ... 41699 = 0.000 ... 277.709 secs...
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 1 - 25 Hz
FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 1.00
- Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
- Upper passband edge: 25.00 Hz
- Upper transition bandwidth: 6.25 Hz (-6 dB cutoff frequency: 28.12 Hz)
- Filter length: 497 samples (3.310 s)
Read epochs for the channel of interest
picks = mne.pick_types(raw.info, meg="mag", eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(
raw,
events,
event_id,
tmin,
tmax,
picks=picks,
decim=2, # just for speed!
baseline=None,
reject=reject,
preload=True,
)
epochs.drop_channels(["EOG 061"])
epochs.equalize_event_counts(event_id)
# Obtain the data as a 3D matrix and transpose it such that
# the dimensions are as expected for the cluster permutation test:
# n_epochs × n_times × n_channels
X = [epochs[event_name].get_data(copy=False) for event_name in event_id]
X = [np.transpose(x, (0, 2, 1)) for x in X]
Not setting metadata
288 matching events found
No baseline correction applied
Created an SSP operator (subspace dimension = 3)
3 projection items activated
Using data from preloaded Raw for 288 events and 106 original time points (prior to decimation) ...
Rejecting epoch based on EOG : ['EOG 061']
Rejecting epoch based on MAG : ['MEG 1711']
(rejection messages repeated per epoch: 46 EOG rejections and 2 MAG rejections in total)
48 bad epochs dropped
Dropped 16 epochs: 50, 51, 84, 93, 95, 96, 129, 146, 149, 154, 158, 195, 201, 203, 211, 212
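The transpose step above can be illustrated with a toy array. This is a minimal sketch with made-up dimensions (they only loosely mirror this example), showing how data shaped like the output of `epochs.get_data()` is rearranged for the cluster test:

```python
import numpy as np

# Toy stand-in for epochs.get_data(): shape (n_epochs, n_channels, n_times).
# The sizes here are hypothetical, chosen only for illustration.
n_epochs, n_channels, n_times = 60, 102, 53
x = np.random.default_rng(0).normal(size=(n_epochs, n_channels, n_times))

# The cluster permutation test expects (n_observations, n_times, n_channels),
# so we swap the last two axes.
x_t = np.transpose(x, (0, 2, 1))
print(x_t.shape)  # (60, 53, 102)
```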
Find the FieldTrip neighbor definition to set up sensor adjacency
adjacency, ch_names = find_ch_adjacency(epochs.info, ch_type="mag")
print(type(adjacency)) # it's a sparse matrix!
mne.viz.plot_ch_adjacency(epochs.info, adjacency, ch_names)
Reading adjacency matrix for neuromag306mag.
<class 'scipy.sparse._csr.csr_array'>
Compute permutation statistic
How does it work? We use clustering to “bind” together features which are similar. Our features are the magnetic fields measured over our sensor array at different times. This reduces the multiple comparison problem. To compute the actual test-statistic, we first sum all F-values in all clusters. We end up with one statistic for each cluster. Then we generate a distribution from the data by shuffling our conditions between our samples and recomputing our clusters and the test statistics. We test for the significance of a given cluster by computing the probability of observing a cluster of that size [1][2].
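The logic described above can be sketched in plain NumPy/SciPy for the simplest possible case: two conditions, a single sensor, clustering over time only. This is a hypothetical simulation, not MNE's implementation, but it shows the core steps — threshold the F-values, sum them within contiguous clusters, and build a null distribution of the maximum cluster mass by shuffling condition labels:

```python
import numpy as np
import scipy.stats

# Simulated data: two conditions, 20 "epochs" each, 40 time points.
# Condition 2 gets a bump between samples 15-25 so a real cluster exists.
rng = np.random.default_rng(42)
n_per_cond, n_times = 20, 40
data = [rng.normal(size=(n_per_cond, n_times)) for _ in range(2)]
data[1][:, 15:25] += 1.0

def cluster_masses(groups, thresh):
    """Sum F-values within contiguous supra-threshold runs over time."""
    f_vals = scipy.stats.f_oneway(*groups).statistic  # F per time point
    masses, current = [], 0.0
    for f_val in f_vals:
        if f_val > thresh:
            current += f_val
        elif current:
            masses.append(current)
            current = 0.0
    if current:
        masses.append(current)
    return masses

thresh = scipy.stats.f.ppf(0.95, 1, 2 * n_per_cond - 2)  # cluster-forming F
observed = cluster_masses(data, thresh)

# Null distribution: shuffle condition labels, keep the max cluster mass.
pooled = np.concatenate(data)
null_max = np.array([
    max(cluster_masses(np.split(pooled[rng.permutation(len(pooled))], 2),
                       thresh), default=0.0)
    for _ in range(500)
])

# Cluster p-value: fraction of permutations with at least as large a mass.
p_vals = [(null_max >= m).mean() for m in observed]
```

The simulated effect is strong, so the cluster spanning it should come out with a very small p-value, while any small noise clusters get large ones.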
# We are running an F test, so we look at the upper tail
# see also: https://stats.stackexchange.com/a/73993
tail = 1
# We want to set a critical test statistic (here: F), to determine when
# clusters are being formed. Using Scipy's percent point function of the F
# distribution, we can conveniently select a threshold that corresponds to
# some alpha level that we arbitrarily pick.
alpha_cluster_forming = 0.001
# For an F test we need the degrees of freedom for the numerator
# (number of conditions - 1) and the denominator (number of observations
# - number of conditions):
n_conditions = len(event_id)
n_observations = len(X[0])
dfn = n_conditions - 1
dfd = n_observations - n_conditions
# Note: we calculate 1 - alpha_cluster_forming to get the critical value
# on the right tail
f_thresh = scipy.stats.f.ppf(1 - alpha_cluster_forming, dfn=dfn, dfd=dfd)
# run the cluster based permutation analysis
cluster_stats = spatio_temporal_cluster_test(
X,
n_permutations=1000,
threshold=f_thresh,
tail=tail,
n_jobs=None,
buffer_size=None,
adjacency=adjacency,
)
F_obs, clusters, p_values, _ = cluster_stats
stat_fun(H1): min=0.0033787374718191373 max=207.77535362731717
Running initial clustering …
Found 22 clusters
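As a quick sanity check on the cluster-forming threshold computed above: the survival function of the F distribution evaluated at the critical value should recover the chosen alpha level. The degrees of freedom below are hypothetical round numbers, not the exact values from this dataset:

```python
import scipy.stats

# Hypothetical degrees of freedom, e.g., 4 conditions and 60 observations
# per condition: dfn = 4 - 1, dfd = 60 - 4.
alpha = 0.001
dfn, dfd = 3, 56
crit = scipy.stats.f.ppf(1 - alpha, dfn=dfn, dfd=dfd)
# sf is the complement of the CDF, so it inverts ppf(1 - alpha).
recovered = scipy.stats.f.sf(crit, dfn=dfn, dfd=dfd)
print(round(recovered, 6))  # 0.001
```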
Note
Note how we only specified an adjacency for sensors! However, because we used mne.stats.spatio_temporal_cluster_test(), an adjacency for time points was automatically taken into account. That is, at time point N, the time points N - 1 and N + 1 were considered as adjacent (this is also called “lattice adjacency”). This is only possible because we ran the analysis on 2D data (times × channels) per observation … for 3D data per observation (e.g., times × frequencies × channels), we will need to use mne.stats.combine_adjacency(), as shown further below.
Note also that the same functions work with source estimates. The only differences are the origin of the data, the size, and the adjacency definition. It can be used for single trials or for groups of subjects.
Visualize clusters
# We subselect clusters that we consider significant at an arbitrarily
# picked alpha level: "p_accept".
# NOTE: remember the caveats with respect to "significant" clusters that
# we mentioned in the introduction of this tutorial!
p_accept = 0.01
good_cluster_inds = np.where(p_values < p_accept)[0]
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": "steelblue"}
linestyles = {"L": "-", "R": "--"}
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = F_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3), layout="constrained")
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(
times=0,
mask=mask,
axes=ax_topo,
cmap="Reds",
vlim=(np.min, np.max),
show=False,
colorbar=False,
mask_params=dict(markersize=10),
)
image = ax_topo.images[0]
# remove the title that would otherwise say "0.000 s"
ax_topo.set_title("")
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
"Averaged F-map ({:0.3f} - {:0.3f} s)".format(*sig_times[[0, -1]])
)
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes("right", size="300%", pad=1.2)
title = f"Cluster #{i_clu + 1}, {len(ch_inds)} sensor"
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(
evokeds,
title=title,
picks=ch_inds,
axes=ax_signals,
colors=colors,
linestyles=linestyles,
show=False,
split_legend=True,
truncate_yaxis="auto",
)
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx(
(ymin, ymax), sig_times[0], sig_times[-1], color="orange", alpha=0.3
)
plt.show()
combining channels using RMS (mag channels)
(message repeated 28 times, once per plotted condition and cluster)
Permutation statistic for time-frequencies
Let’s do the same thing with the time-frequency decomposition of the data (see Frequency and time-frequency sensor analysis for a tutorial and Time-frequency on simulated data (Multitaper vs. Morlet vs. Stockwell vs. Hilbert) for a comparison of time-frequency methods) to show how cluster permutations can be done on higher-dimensional data.
decim = 4
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = freqs / freqs[0]
epochs_power = list()
for condition in [epochs[k] for k in ("Aud/L", "Vis/L")]:
this_tfr = condition.compute_tfr(
method="morlet",
freqs=freqs,
n_cycles=n_cycles,
decim=decim,
average=False,
return_itc=False,
)
this_tfr.apply_baseline(mode="ratio", baseline=(None, 0))
epochs_power.append(this_tfr.data)
# transpose again to (epochs, frequencies, times, channels)
X = [np.transpose(x, (0, 2, 3, 1)) for x in epochs_power]
Applying baseline correction (mode: ratio)
Applying baseline correction (mode: ratio)
Remember the note on the adjacency matrix from above: for 3D data, as here, we must use mne.stats.combine_adjacency() to extend the sensor-based adjacency to incorporate the time-frequency plane as well. Here, the integer inputs are converted into a lattice and combined with the sensor adjacency matrix so that data at similar times, with similar frequencies, and at close sensor locations are clustered together.
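The "lattice" combination can be illustrated with scipy.sparse directly: combining adjacencies along independent dimensions amounts to a Cartesian graph product, where two points are neighbors if they differ along exactly one dimension. This is a conceptual sketch with made-up sizes, not MNE's own code:

```python
import numpy as np
import scipy.sparse as sp

def chain(n):
    # 1D lattice adjacency: each point neighbors its predecessor/successor.
    return sp.diags([np.ones(n - 1), np.ones(n - 1)],
                    offsets=[-1, 1], format="csr")

def cartesian(a, b):
    # Cartesian graph product: kron(a, I) links along the first dimension,
    # kron(I, b) along the second.
    return (sp.kron(a, sp.eye(b.shape[0]))
            + sp.kron(sp.eye(a.shape[0]), b)).tocsr()

# Hypothetical sizes; chain(n_sensors) stands in for a real sensor adjacency.
n_freqs, n_times, n_sensors = 3, 4, 5
combined = cartesian(cartesian(chain(n_freqs), chain(n_times)),
                     chain(n_sensors))
print(combined.shape)  # (60, 60), i.e., n_freqs * n_times * n_sensors squared
```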
# our data at each observation is of shape frequencies × times × channels
tfr_adjacency = combine_adjacency(len(freqs), len(this_tfr.times), adjacency)
Now we can run the cluster permutation test, but first we have to set a threshold. This example decimates in time and uses few frequencies so we need to increase the threshold from the default value in order to have differentiated clusters (i.e., so that our algorithm doesn’t just find one large cluster). For a more principled method of setting this parameter, threshold-free cluster enhancement may be used. See Statistical inference for a discussion.
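As a sketch of the threshold-free cluster enhancement (TFCE) alternative mentioned above: in MNE's cluster functions, TFCE is requested by passing a dict with "start" and "step" keys instead of a scalar threshold. The values below are hypothetical and trade accuracy against run time:

```python
# Hypothetical TFCE configuration: smaller steps are more accurate but slower.
tfce_threshold = dict(start=0, step=0.2)

# It would then be passed in place of the scalar threshold, e.g.:
# spatio_temporal_cluster_test(X, threshold=tfce_threshold, tail=1,
#                              adjacency=tfr_adjacency)
```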
# This time we don't calculate a threshold based on the F distribution.
# We might as well select an arbitrary threshold for cluster forming
tfr_threshold = 15.0
# run cluster based permutation analysis
cluster_stats = spatio_temporal_cluster_test(
X,
n_permutations=1000,
threshold=tfr_threshold,
tail=1,
n_jobs=None,
buffer_size=None,
adjacency=tfr_adjacency,
)
stat_fun(H1): min=6.752417666025854e-07 max=38.408365559505015
Running initial clustering …
Found 3 clusters
Finally, we can plot our results. It is difficult to visualize clusters in time-frequency-sensor space: time-frequency spectrograms and topomaps display the time-frequency and sensor dimensions respectively, but they are difficult to combine. We will therefore plot topomaps, with the clustered sensors colored in white, adjacent to spectrograms in order to provide a visualization of the results. This view is dimensionally limited, however: each sensor has its own set of significant time-frequency points, but in order to display a single spectrogram, any time-frequency point that is significant for at least one sensor in the cluster is plotted as significant. This difficulty is inherent to visualizing high-dimensional data and should be kept in mind when interpreting the results.
F_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
freq_inds, time_inds, space_inds = clusters[clu_idx]
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
freq_inds = np.unique(freq_inds)
# get topography for F stat
f_map = F_obs[freq_inds].mean(axis=0)
f_map = f_map[time_inds].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3), layout="constrained")
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(
times=0,
mask=mask,
axes=ax_topo,
cmap="Reds",
vlim=(np.min, np.max),
show=False,
colorbar=False,
mask_params=dict(markersize=10),
)
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
"Averaged F-map ({:0.3f} - {:0.3f} s)".format(*sig_times[[0, -1]])
)
# remove the title that would otherwise say "0.000 s"
ax_topo.set_title("")
# add new axis for spectrogram
ax_spec = divider.append_axes("right", size="300%", pad=1.2)
title = f"Cluster #{i_clu + 1}, {len(ch_inds)} channels"
if len(ch_inds) > 1:
title += " (max over channels)"
F_obs_plot = F_obs[..., ch_inds].max(axis=-1)
F_obs_plot_sig = np.full(F_obs_plot.shape, np.nan)
F_obs_plot_sig[np.ix_(freq_inds, time_inds)] = F_obs_plot[
    np.ix_(freq_inds, time_inds)
]
for f_image, cmap in zip([F_obs_plot, F_obs_plot_sig], ["gray", "autumn"]):
c = ax_spec.imshow(
f_image,
cmap=cmap,
aspect="auto",
origin="lower",
extent=[epochs.times[0], epochs.times[-1], freqs[0], freqs[-1]],
)
ax_spec.set_xlabel("Time (s)")  # extent above is given in seconds (epochs.times)
ax_spec.set_ylabel("Frequency (Hz)")
ax_spec.set_title(title)
# add another colorbar
ax_colorbar2 = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(c, cax=ax_colorbar2)
ax_colorbar2.set_ylabel("F-stat")
# clean up viz
plt.show()
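As an aside, the cross-product indexing used above to build the significance overlay (keeping only cluster cells and leaving the rest as NaN, so the "autumn" colormap paints only the cluster) can be checked on a toy array. This is a minimal sketch with made-up index arrays, not part of the example's data:

```python
import numpy as np

# Toy "F-statistic" map: 4 frequencies x 5 time points
F_toy = np.arange(20, dtype=float).reshape(4, 5)

# Hypothetical cluster membership: frequency bins 1-2, time bins 2-4
freq_inds = np.array([1, 2])
time_inds = np.array([2, 3, 4])

# Start from an all-NaN array and copy in only the cluster cells;
# np.ix_ builds an open mesh selecting the full cross product of indices
F_sig = np.full(F_toy.shape, np.nan)
F_sig[np.ix_(freq_inds, time_inds)] = F_toy[np.ix_(freq_inds, time_inds)]

# Only the 2 x 3 = 6 cluster cells are finite; the rest stay NaN
print(np.count_nonzero(np.isfinite(F_sig)))  # → 6
```

When such a NaN-masked array is passed to `imshow`, the NaN cells are left transparent, so the grayscale map underneath remains visible outside the cluster.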
Exercises#
References#
Total running time of the script: (0 minutes 21.470 seconds)