Statistical inference#

Here we will briefly cover multiple concepts of inferential statistics in an introductory manner, and demonstrate how to use some MNE statistical functions.

# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.
from functools import partial

import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, analysis:ignore
from scipy import stats

import mne
from mne.stats import (
    bonferroni_correction,
    fdr_correction,
    permutation_cluster_1samp_test,
    permutation_t_test,
    ttest_1samp_no_p,
)

Hypothesis testing#

Null hypothesis#

From Wikipedia:

In inferential statistics, a general statement or default position that there is no relationship between two measured phenomena, or no association among groups.

We typically want to reject a null hypothesis with some probability (e.g., p < 0.05). This probability is also called the significance level \(\alpha\). To think about what this means, let’s follow the illustrative example from [1] and construct a toy dataset consisting of a 40 × 40 square with a “signal” present in the center with white noise added and a Gaussian smoothing kernel applied.

width = 40
n_subjects = 10
signal_mean = 100
signal_sd = 100
noise_sd = 0.01
gaussian_sd = 5
sigma = 1e-3  # sigma for the "hat" method
n_permutations = "all"  # run an exact test
n_src = width * width

# For each "subject", make a smoothed noisy signal with a centered peak
rng = np.random.RandomState(2)
X = noise_sd * rng.randn(n_subjects, width, width)
# Add a signal at the center
X[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd
# Spatially smooth with a 2D Gaussian kernel
size = width // 2 - 1
gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd**2)))
for si in range(X.shape[0]):
    for ri in range(X.shape[1]):
        X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, "same")
    for ci in range(X.shape[2]):
        X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, "same")

The data averaged over all subjects looks like this:

fig, ax = plt.subplots(layout="constrained")
ax.imshow(X.mean(0), cmap="inferno")
ax.set(xticks=[], yticks=[], title="Data averaged over subjects")
Data averaged over subjects

In this case, a null hypothesis we could test for each voxel is:

There is no difference between the mean value and zero (\(H_0 \colon \mu = 0\)).

The alternative hypothesis, then, is that the voxel has a non-zero mean (\(H_1 \colon \mu \neq 0\)). This is a two-tailed test because the mean could be less than or greater than zero, whereas a one-tailed test would test only one of these possibilities, i.e. \(H_1 \colon \mu > 0\) or \(H_1 \colon \mu < 0\).
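For instance, under \(H_0\) a two-tailed p-value is obtained by doubling the one-tailed tail probability of the t distribution. A minimal sketch using scipy (the t-value and degrees of freedom here are illustrative, not from our dataset):

from scipy import stats

t_val, df = 2.5, 9  # illustrative t statistic and degrees of freedom
p_one_tailed = stats.t.sf(t_val, df)  # P(T >= t) under the null
p_two_tailed = 2 * stats.t.sf(abs(t_val), df)  # both tails
print(p_one_tailed, p_two_tailed)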

Note

Here we will refer to each spatial location as a “voxel”. In general, though, it could be any sort of data value, including cortical vertex at a specific time, pixel in a time-frequency decomposition, etc.

Parametric tests#

Let’s start with a paired t-test, which is a standard test for differences in paired samples. Mathematically, it is equivalent to a 1-sample t-test on the differences between the samples in each condition. The paired t-test is parametric because it assumes that the underlying sample distribution is Gaussian, and it is only strictly valid when this assumption holds. The assumption happens to be satisfied by our toy dataset, but is not always satisfied for neuroimaging data.

In the context of our toy dataset, which has many voxels (\(40 \cdot 40 = 1600\)), applying the paired t-test is called a mass-univariate approach as it treats each voxel independently.

titles = ["t"]
out = stats.ttest_1samp(X, 0, axis=0)
ts = [out[0]]
ps = [out[1]]
mccs = [False]  # these are not multiple-comparisons corrected


def plot_t_p(t, p, title, mcc, axes=None):
    if axes is None:
        fig = plt.figure(figsize=(6, 3), layout="constrained")
        axes = [fig.add_subplot(121, projection="3d"), fig.add_subplot(122)]
        show = True
    else:
        show = False

    # calculate critical t-value thresholds (2-tailed)
    p_lims = np.array([0.1, 0.001])
    df = n_subjects - 1  # degrees of freedom
    t_lims = stats.distributions.t.ppf(1 - p_lims / 2, df=df)
    p_lims = [-np.log10(p) for p in p_lims]

    # t plot
    x, y = np.mgrid[0:width, 0:width]
    surf = axes[0].plot_surface(
        x,
        y,
        np.reshape(t, (width, width)),
        rstride=1,
        cstride=1,
        linewidth=0,
        vmin=t_lims[0],
        vmax=t_lims[1],
        cmap="viridis",
    )
    axes[0].set(
        xticks=[], yticks=[], zticks=[], xlim=[0, width - 1], ylim=[0, width - 1]
    )
    axes[0].view_init(30, 15)
    cbar = axes[0].figure.colorbar(
        ax=axes[0],
        shrink=0.75,
        orientation="horizontal",
        fraction=0.1,
        pad=0.025,
        mappable=surf,
    )
    cbar.set_ticks(t_lims)
    cbar.set_ticklabels(["%0.1f" % t_lim for t_lim in t_lims])
    cbar.set_label("t-value")
    cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
    if not show:
        axes[0].set(title=title)
        if mcc:
            axes[0].title.set_weight("bold")
    # p plot
    use_p = -np.log10(np.reshape(np.maximum(p, 1e-5), (width, width)))
    img = axes[1].imshow(
        use_p, cmap="inferno", vmin=p_lims[0], vmax=p_lims[1], interpolation="nearest"
    )
    axes[1].set(xticks=[], yticks=[])
    cbar = axes[1].figure.colorbar(
        ax=axes[1],
        shrink=0.75,
        orientation="horizontal",
        fraction=0.1,
        pad=0.025,
        mappable=img,
    )
    cbar.set_ticks(p_lims)
    cbar.set_ticklabels(["%0.1f" % p_lim for p_lim in p_lims])
    cbar.set_label(r"$-\log_{10}(p)$")
    cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
    if show:
        text = fig.suptitle(title)
        if mcc:
            text.set_weight("bold")


plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
t

“Hat” variance adjustment#

The “hat” technique regularizes the variance values used in the t-test calculation [1] to compensate for implausibly small variances.

ts.append(ttest_1samp_no_p(X, sigma=sigma))
ps.append(stats.distributions.t.sf(np.abs(ts[-1]), len(X) - 1) * 2)
titles.append(r"$\mathrm{t_{hat}}$")
mccs.append(False)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
$\mathrm{t_{hat}}$

Non-parametric tests#

Instead of assuming an underlying Gaussian distribution, we could use a non-parametric resampling method. In the case of a paired t-test between two conditions A and B, which is mathematically equivalent to a one-sample t-test on the difference between the conditions A-B, under the null hypothesis we have the principle of exchangeability. This means that, if the null is true, we can exchange conditions without changing the distribution of the test statistic.

When using a paired t-test, exchangeability thus means that we can flip the signs of the difference between A and B. Therefore, we can construct the null distribution values for each voxel by taking random subsets of samples (subjects), flipping the sign of their difference, and recording the absolute value of the resulting statistic (we record the absolute value because we conduct a two-tailed test). The absolute value of the statistic evaluated on the veridical data can then be compared to this distribution, and the p-value is simply the proportion of null distribution values that are at least as large.

Warning

In the case of a true one-sample t-test, i.e. analyzing a single condition rather than the difference between two conditions, it is not clear where/how exchangeability applies; see this FieldTrip discussion.

In the case where n_permutations is large enough (or "all") that the complete set of unique resampling exchanges can be performed (\(2^{N_{samp}}-1\) for a one-tailed test and \(2^{N_{samp}-1}-1\) for a two-tailed test, not counting the veridical, unpermuted data), the null distribution is formed from all possible exchanges instead of from random exchanges of conditions. This is known as a permutation test (or exact test).
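As a toy illustration of such an exact test (not using the MNE function below), the full sign-flipping null distribution for a single voxel could be built by enumerating every sign assignment; the data here are made up for the example:

from itertools import product

import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
diffs = rng.randn(10) + 0.8  # illustrative paired differences for one voxel
t_obs = stats.ttest_1samp(diffs, 0).statistic
# enumerate all 2 ** n sign flips to form the exact two-tailed null distribution
null = [
    stats.ttest_1samp(np.array(signs) * diffs, 0).statistic
    for signs in product([1, -1], repeat=len(diffs))
]
p_exact = np.mean(np.abs(null) >= np.abs(t_obs))
print(p_exact)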

# Here we have to do a bit of gymnastics to get our function to do
# a permutation test without correcting for multiple comparisons:

X.shape = (n_subjects, n_src)  # flatten the array for simplicity
titles.append("Permutation")
ts.append(np.zeros(width * width))
ps.append(np.zeros(width * width))
mccs.append(False)
for ii in range(n_src):
    t, p = permutation_t_test(X[:, [ii]], verbose=False)[:2]
    ts[-1][ii], ps[-1][ii] = t[0], p[0]
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Permutation

Multiple comparisons#

So far, we have done no correction for multiple comparisons. This is potentially problematic for these data because there are \(40 \cdot 40 = 1600\) tests being performed. If we use a threshold p < 0.05 for each individual test, we would expect many voxels to be declared significant even if there were no true effect. In other words, we would make many type I errors (adapted from here):

                   Null hypothesis True        Null hypothesis False
Reject: Yes        Type I error                Correct
                   (false positive)            (true positive)
Reject: No         Correct                     Type II error
                   (true negative)             (false negative)

To see why, consider a standard \(\alpha = 0.05\). For a single test, our probability of making a type I error is 0.05. The probability of making at least one type I error in \(N_{\mathrm{test}}\) independent tests is then given by \(1 - (1 - \alpha)^{N_{\mathrm{test}}}\):

N = np.arange(1, 80)
alpha = 0.05
p_type_I = 1 - (1 - alpha) ** N
fig, ax = plt.subplots(figsize=(4, 3), layout="constrained")
ax.scatter(N, p_type_I, 3)
ax.set(
    xlim=N[[0, -1]],
    ylim=[0, 1],
    xlabel=r"$N_{\mathrm{test}}$",
    ylabel="Probability of at least\none type I error",
)
ax.grid(True)
fig.show()

To combat this problem, several methods exist. Typically these provide control over one of the following two measures:

  1. Familywise error rate (FWER)

    The probability of making one or more type I errors:

    \[\mathrm{P}(N_{\mathrm{type\ I}} \geq 1 \mid H_0)\]
  2. False discovery rate (FDR)

    The expected proportion of rejected null hypotheses that are actually true:

    \[\mathrm{E}(\frac{N_{\mathrm{type\ I}}}{N_{\mathrm{reject}}} \mid N_{\mathrm{reject}} > 0) \cdot \mathrm{P}(N_{\mathrm{reject}} > 0 \mid H_0)\]

We cover some techniques that control FWER and FDR below.

Bonferroni correction#

Perhaps the simplest way to deal with multiple comparisons is Bonferroni correction, which conservatively multiplies the p-values by the number of comparisons to control the FWER.
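Conceptually, the correction amounts to scaling each p-value by the number of tests and capping at 1; a by-hand sketch with made-up p-values (mne.stats.bonferroni_correction() is used below):

import numpy as np

p_raw = np.array([0.001, 0.01, 0.04, 0.2])  # illustrative uncorrected p-values
p_bonf = np.minimum(p_raw * len(p_raw), 1.0)  # scale by number of tests, cap at 1
print(p_bonf, p_bonf < 0.05)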

titles.append("Bonferroni")
ts.append(ts[-1])
ps.append(bonferroni_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Bonferroni

False discovery rate (FDR) correction#

Typically FDR is performed with the Benjamini-Hochberg procedure, which is less restrictive than Bonferroni correction for large numbers of comparisons (fewer type II errors), but provides less strict control of type I errors.
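For intuition, here is a by-hand sketch of the Benjamini-Hochberg step-up procedure on made-up p-values (mne.stats.fdr_correction() below implements this properly):

import numpy as np

p_raw = np.array([0.001, 0.008, 0.039, 0.041, 0.2])  # illustrative p-values
alpha = 0.05
n = len(p_raw)
order = np.argsort(p_raw)
ranked = p_raw[order]
# largest k such that p_(k) <= (k / n) * alpha; reject the k smallest p-values
below = ranked <= (np.arange(1, n + 1) / n) * alpha
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
reject = np.zeros(n, dtype=bool)
reject[order[:k]] = True
print(reject)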

titles.append("FDR")
ts.append(ts[-1])
ps.append(fdr_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
FDR

Non-parametric resampling test with a maximum statistic#

Non-parametric resampling tests can also be used to correct for multiple comparisons. In its simplest form, we again do permutations using exchangeability under the null hypothesis, but this time we take the maximum statistic across all voxels in each permutation to form the null distribution. The p-value for each voxel from the veridical data is then given by the proportion of null distribution values that are greater than or equal to that voxel's statistic.
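Schematically, the maximum-statistic correction could be done by hand like this (a sketch on made-up data; mne.stats.permutation_t_test() below handles the exact permutation scheme for us):

import numpy as np
from mne.stats import ttest_1samp_no_p

rng = np.random.RandomState(0)
X_demo = rng.randn(10, 100) + 0.3  # illustrative data: 10 subjects, 100 voxels
t_obs = ttest_1samp_no_p(X_demo)
max_null = np.empty(1000)
for pi in range(len(max_null)):
    signs = rng.choice([1, -1], size=(X_demo.shape[0], 1))  # random sign flips
    max_null[pi] = np.abs(ttest_1samp_no_p(signs * X_demo)).max()
# p-value per voxel: proportion of max-statistic null values >= that voxel's |t|
p_max = (max_null[np.newaxis] >= np.abs(t_obs)[:, np.newaxis]).mean(axis=1)
print(p_max.min())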

This method has two important features:

  1. It controls FWER.

  2. It is non-parametric. Even though our initial test statistic (here a 1-sample t-test) is parametric, the null distribution used to reject the null hypothesis (that the mean value across subjects is indistinguishable from zero) is obtained by permutation. This means that it makes no assumption of Gaussianity (which happens to hold for this example, but often does not hold for processed neuroimaging data).

titles.append(r"$\mathbf{Perm_{max}}$")
out = permutation_t_test(X, verbose=False)[:2]
ts.append(out[0])
ps.append(out[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
$\mathbf{Perm_{max}}$

Clustering#

Each of the aforementioned multiple comparisons corrections has the disadvantage of not fully incorporating the correlation structure of the data, namely that points close to one another (e.g., in space or time) tend to be correlated. However, by defining the adjacency (or “neighbor”) structure in our data, we can use clustering to compensate.

To use this, we need to rethink our null hypothesis. Instead of testing a null hypothesis about means per voxel (with one independent test per voxel), we consider a null hypothesis about the sizes of clusters in our data, which could be stated as:

The distributions of spatial cluster sizes observed in the two experimental conditions are drawn from the same probability distribution.

Here we only have a single condition and we contrast to zero, which can be thought of as:

The distribution of spatial cluster sizes is independent of the sign of the data.

In this case, we again do permutations with a maximum statistic, but, under each permutation, we:

  1. Compute the test statistic for each voxel individually.

  2. Threshold the test statistic values.

  3. Cluster voxels that exceed this threshold (with the same sign) based on adjacency.

  4. Retain the size of the largest cluster (measured, e.g., by a simple voxel count, or by the sum of voxel t-values within the cluster) to build the null distribution.

After doing these permutations, the cluster sizes in our veridical data are compared to this null distribution. The p-value associated with each cluster is again given by the proportion of null distribution values that are at least as large as the cluster's size. This can then be subjected to a standard p-value threshold (e.g., p < 0.05) to reject the null hypothesis (i.e., find an effect of interest).
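For intuition, one iteration of steps 1-4 could be sketched by hand with scipy.ndimage doing the cluster labeling (made-up data, positive clusters only; the MNE functions below handle adjacency and both signs internally):

import numpy as np
from scipy import ndimage, stats

rng = np.random.RandomState(0)
X_demo = rng.randn(10, 20, 20)  # illustrative: subjects x space x space
t_map = stats.ttest_1samp(X_demo, 0, axis=0).statistic  # step 1: per-voxel statistic
thresh = stats.t.ppf(1 - 0.05 / 2, df=9)  # step 2: two-tailed threshold
labels, n_clusters = ndimage.label(t_map > thresh)  # step 3: label adjacent voxels
sizes = np.bincount(labels.ravel())[1:]  # voxel count per cluster (skip background)
max_cluster_size = sizes.max() if n_clusters else 0  # step 4: largest cluster
print(n_clusters, max_cluster_size)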

This reframing to consider cluster sizes rather than individual means maintains the advantages of the standard non-parametric permutation test – namely controlling FWER and making no assumptions of parametric data distribution. Critically, though, it also accounts for the correlation structure in the data – which in this toy case is spatial but in general can be multidimensional (e.g., spatio-temporal) – because the null distribution will be derived from data in a way that preserves these correlations.

However, there is a drawback. If a cluster significantly deviates from the null, no further inference on the cluster (e.g., peak location) can be made, as the entire cluster as a whole is used to reject the null. Moreover, because the test statistic concerns the full data, the null hypothesis (and our rejection of it) refers to the structure of the full data. For more information, see also the comprehensive FieldTrip tutorial.

Defining the adjacency matrix#

First we need to define our adjacency (sometimes called “neighbors”) matrix. This is a square array (or sparse matrix) of shape (n_src, n_src) that contains zeros and ones to define which spatial points are neighbors, i.e., which voxels are adjacent to each other. In our case this is quite simple, as our data are aligned on a rectangular grid.

Let’s pretend that our data were smaller – a 3 × 3 grid. Thinking about each voxel as being connected to the other voxels it touches, we would need a 9 × 9 adjacency matrix. The first row of this matrix contains the voxels in the flattened data that the first voxel touches. Since it touches the second element in the first row and the first element in the second row (and is also a neighbor to itself), this would be:

[1, 1, 0, 1, 0, 0, 0, 0, 0]

sklearn.feature_extraction provides a convenient function for this:

from sklearn.feature_extraction.image import grid_to_graph  # noqa: E402

mini_adjacency = grid_to_graph(3, 3).toarray()
assert mini_adjacency.shape == (9, 9)
print(mini_adjacency[0])
[1 1 0 1 0 0 0 0 0]

In general the adjacency between voxels can be more complex, such as that between sensors in 3D space, or between time-varying activations at brain vertices on a cortical surface. MNE provides several convenience functions for computing adjacency matrices; see the Statistics API for a full list.

MNE also ships with numerous built-in channel adjacency matrices from the FieldTrip project (called “neighbors” there). You can get an overview of them by using mne.channels.get_builtin_ch_adjacencies():

biosemi16: Biosemi 16-electrode cap
biosemi32: Biosemi 32-electrode cap
biosemi64: Biosemi 64-electrode cap
bti148: BTI 148-channel system
bti248: BTI 248-channel system
bti248grad: BTI 248 gradiometer system
ctf151: CTF 151 axial gradiometer
ctf275: CTF 275 axial gradiometer
ctf64: CTF 64 axial gradiometer
easycap128ch-avg:
easycap32ch-avg:
easycap64ch-avg:
easycapM1: Easycap M1
easycapM11: Easycap M11
easycapM14: Easycap M14
easycapM15: Easycap M15
ecog256: ECOG 256channels, average referenced
ecog256bipolar: ECOG 256channels, bipolar referenced
eeg1010_neighb:
elec1005: Standard 10-05 system
elec1010: Standard 10-10 system
elec1020: Standard 10-20 system
itab153: ITAB 153-channel system
itab28: ITAB 28-channel system
KIT-157:
KIT-208:
KIT-NYU-2019:
KIT-UMD-1:
KIT-UMD-2:
KIT-UMD-3:
KIT-UMD-4:
language29ch-avg: MPI for Psycholinguistic: Averaged 29-channel cap
mpi_59_channels: MPI for Psycholinguistic: 59-channel cap
neuromag122cmb: Neuromag122, only combined planar gradiometers
neuromag306cmb: Neuromag306, only combined planar gradiometers
neuromag306mag: Neuromag306, only magnetometers
neuromag306planar: Neuromag306, only planar gradiometers
yokogawa160:
yokogawa440:

These built-in channel adjacency matrices can be loaded via mne.channels.read_ch_adjacency().
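For example, loading one of the entries listed above by name might look like this (a small usage sketch; the name is taken from the list above):

from mne.channels import read_ch_adjacency

adjacency, ch_names = read_ch_adjacency("biosemi64")
print(adjacency.shape, len(ch_names))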

Standard clustering#

Here, since our data are on a grid, we can use adjacency=None to trigger optimized grid-based code, and run the clustering algorithm.

titles.append("Clustering")

# Reshape data to what is equivalent to (n_samples, n_space, n_time)
X.shape = (n_subjects, width, width)

# Compute threshold from t distribution (this is also the default)
# Here we use a two-tailed test, hence we need to divide alpha by 2.
# Subtracting alpha from 1 guarantees that we get a positive threshold,
# which MNE-Python expects for two-tailed tests.
df = n_subjects - 1  # degrees of freedom
t_thresh = stats.distributions.t.ppf(1 - alpha / 2, df=df)

# run the cluster test
t_clust, clusters, p_values, H0 = permutation_cluster_1samp_test(
    X,
    n_jobs=None,
    threshold=t_thresh,
    adjacency=None,
    n_permutations=n_permutations,
    out_type="mask",
)

# Put the cluster data in a viewable format
p_clust = np.ones((width, width))
for cl, p in zip(clusters, p_values):
    p_clust[cl] = p
ts.append(t_clust)
ps.append(p_clust)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Clustering
stat_fun(H1): min=-3.195526713940576 max=5.1204338596605075
Running initial clustering …
Found 2 clusters

“Hat” variance adjustment#

This method can also be used in this context to correct for small variances [1]:

titles.append(r"$\mathbf{C_{hat}}$")
stat_fun_hat = partial(ttest_1samp_no_p, sigma=sigma)
t_hat, clusters, p_values, H0 = permutation_cluster_1samp_test(
    X,
    n_jobs=None,
    threshold=t_thresh,
    adjacency=None,
    out_type="mask",
    n_permutations=n_permutations,
    stat_fun=stat_fun_hat,
    buffer_size=None,
)
p_hat = np.ones((width, width))
for cl, p in zip(clusters, p_values):
    p_hat[cl] = p
ts.append(t_hat)
ps.append(p_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
$\mathbf{C_{hat}}$
stat_fun(H1): min=-0.04360308801187525 max=3.127369419320333
Running initial clustering …
Found 1 cluster

Threshold-free cluster enhancement (TFCE)#

TFCE eliminates the initial-threshold free parameter (the value that determines which points are included in clustering) by approximating a continuous integration across possible threshold values with a standard Riemann sum [2]. This requires a starting threshold start and a step size step, which in MNE are supplied as a dict. The smaller the step and the closer to 0 the start value, the better the approximation, but the longer it takes.

A significant advantage of TFCE is that, rather than modifying the statistical null hypothesis under test (from one about individual voxels to one about the distribution of clusters in the data), it modifies the data under test while still controlling for multiple comparisons. The statistical test is then done at the level of individual voxels rather than clusters. This allows for evaluation of each point independently for significance rather than only as cluster groups.
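To make the Riemann-sum idea concrete, here is a deliberately simplified 1-D sketch of a TFCE-style score for positive values (the exponents mirror the defaults reported in the output below; this is not MNE's implementation):

import numpy as np
from scipy import ndimage


def tfce_score_1d(stat, start=0.0, step=0.2, h_power=2.0, e_power=0.5):
    """Approximate a TFCE score for a 1-D array of positive statistics."""
    score = np.zeros_like(stat, dtype=float)
    for h in np.arange(start + step, stat.max() + step, step):
        labels, n_clusters = ndimage.label(stat >= h)  # clusters at this height
        for lab in range(1, n_clusters + 1):
            idx = labels == lab
            # Riemann sum of extent ** E * height ** H over thresholds
            score[idx] += (idx.sum() ** e_power) * (h**h_power) * step
    return score


print(tfce_score_1d(np.array([0.5, 2.0, 3.0, 2.5, 0.1])))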

titles.append(r"$\mathbf{C_{TFCE}}$")
threshold_tfce = dict(start=0, step=0.2)
t_tfce, _, p_tfce, H0 = permutation_cluster_1samp_test(
    X,
    n_jobs=None,
    threshold=threshold_tfce,
    adjacency=None,
    n_permutations=n_permutations,
    out_type="mask",
)
ts.append(t_tfce)
ps.append(p_tfce)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
$\mathbf{C_{TFCE}}$
stat_fun(H1): min=-3.195526713940576 max=5.1204338596605075
Running initial clustering …
Using 26 thresholds from 0.00 to 5.00 for TFCE computation (h_power=2.00, e_power=0.50)
Found 1600 clusters

We can also combine TFCE and the “hat” correction:

titles.append(r"$\mathbf{C_{hat,TFCE}}$")
t_tfce_hat, _, p_tfce_hat, H0 = permutation_cluster_1samp_test(
    X,
    n_jobs=None,
    threshold=threshold_tfce,
    adjacency=None,
    out_type="mask",
    n_permutations=n_permutations,
    stat_fun=stat_fun_hat,
    buffer_size=None,
)
ts.append(t_tfce_hat)
ps.append(p_tfce_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
$\mathbf{C_{hat,TFCE}}$
stat_fun(H1): min=-0.04360308801187525 max=3.127369419320333
Running initial clustering …
Using 16 thresholds from 0.00 to 3.00 for TFCE computation (h_power=2.00, e_power=0.50)
Found 1600 clusters

Visualize and compare methods#

Let’s take a look at these statistics. The top row shows each test statistic, and the bottom row shows p-values for the various statistical tests; the tests with proper control over FWER or FDR have bold titles.

fig = plt.figure(facecolor="w", figsize=(14, 3), layout="constrained")
assert len(ts) == len(titles) == len(ps)
for ii in range(len(ts)):
    ax = [
        fig.add_subplot(2, 10, ii + 1, projection="3d"),
        fig.add_subplot(2, 10, 11 + ii),
    ]
    plot_t_p(ts[ii], ps[ii], titles[ii], mccs[ii], ax)
t, $\mathrm{t_{hat}}$, Permutation, Bonferroni, FDR, $\mathbf{Perm_{max}}$, Clustering, $\mathbf{C_{hat}}$, $\mathbf{C_{TFCE}}$, $\mathbf{C_{hat,TFCE}}$

The first three columns show the parametric and non-parametric statistics that are not corrected for multiple comparisons:

  • Mass univariate t-tests result in jagged edges.

  • “Hat” variance correction of the t-tests produces less peaky edges, correcting for sharpness in the statistic driven by low-variance voxels.

  • Non-parametric resampling tests are very similar to t-tests. This is to be expected: the data are drawn from a Gaussian distribution, and thus satisfy parametric assumptions.

The next three columns show multiple comparison corrections of the mass univariate tests (parametric and non-parametric). These corrections are overly conservative because they ignore the fact that neighboring voxels in our data are correlated:

  • Bonferroni correction eliminates any significant activity.

  • FDR correction is less conservative than Bonferroni.

  • A permutation test with a maximum statistic also eliminates any significant activity.

The final four columns show the non-parametric cluster-based permutation tests with a maximum statistic:

  • Standard clustering identifies the correct region. However, the whole area must be declared significant, so no peak analysis can be done. Also, the peak is broad.

  • Clustering with “hat” variance adjustment tightens the estimate of significant activity.

  • Clustering with TFCE allows analyzing each significant point independently, but still has a broadened estimate.

  • Clustering with TFCE and “hat” variance adjustment tightens the area declared significant (again FWER corrected).

Statistical functions in MNE#

The complete listing of statistical functions provided by MNE is in the Statistics API, but we will give a brief overview here.

MNE provides several convenience parametric testing functions that can be used in conjunction with the non-parametric clustering methods. However, the set of functions we provide is not meant to be exhaustive.

If the univariate statistical contrast of interest is not listed here (e.g., an interaction term in an unbalanced ANOVA), consider checking out the statsmodels package. It offers many functions for computing statistical contrasts, e.g., statsmodels.stats.anova.anova_lm(). To use these functions in clustering (a minimal sketch follows these steps):

  1. Determine which test statistic (e.g., t-value, F-value) you would use in a univariate context to compute your contrast of interest. In other words, if there were only a single output such as reaction times, what test statistic might you compute on the data?

  2. Wrap the call to that function within a function that takes an input of the same shape that is expected by your clustering function, and returns an array of the same shape without the “samples” dimension (e.g., mne.stats.permutation_cluster_1samp_test() takes an array of shape (n_samples, p, q) and returns an array of shape (p, q)).

  3. Pass this wrapped function to the stat_fun argument to the clustering function.

  4. Set an appropriate threshold value (float or dict) based on the values your statistical contrast function returns.
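A minimal sketch of these steps (the function name my_stat_fun is illustrative, not part of MNE, and the one-sample t-test stands in for whatever contrast you would compute, e.g., via statsmodels):

from scipy import stats


def my_stat_fun(X):
    """Map data of shape (n_samples, p, q) to a (p, q) array of statistics."""
    # any univariate statistic works, as long as the samples dimension is dropped
    return stats.ttest_1samp(X, 0, axis=0).statistic


# then, e.g.:
# permutation_cluster_1samp_test(X, stat_fun=my_stat_fun, threshold=t_thresh, ...)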

Parametric methods provided by MNE#

  • mne.stats.ttest_1samp_no_p()

    One-sample t-test (equivalent to a paired t-test when applied to condition differences), optionally with "hat" variance adjustment. This is used by default for contrast enhancement in paired cluster tests.

  • mne.stats.f_oneway()

    One-way ANOVA for independent samples. This can be used to compute various F-contrasts. It is used by default for contrast enhancement in non-paired cluster tests.

  • mne.stats.f_mway_rm()

    M-way ANOVA for repeated measures and balanced designs. This returns F-statistics and p-values. The associated helper function mne.stats.f_threshold_mway_rm() can be used to determine the F-threshold at a given significance level.

  • mne.stats.linear_regression()

    Compute ordinary least squares regression on multiple targets (e.g., sensors, time points) across trials (samples). For each regressor it returns the beta value, t-statistic, and uncorrected p-value. While it can be used as a test, it is particularly useful for computing weighted averages or dealing with continuous predictors.

Non-parametric methods#

Warning

In most MNE functions, data has shape (..., n_space, n_time), where the spatial dimension can be e.g. sensors or source vertices. But for our spatio-temporal clustering functions, the spatial dimensions need to be last for computational efficiency reasons. For example, for mne.stats.spatio_temporal_cluster_1samp_test(), X needs to be of shape (n_samples, n_time, n_space). You can use numpy.transpose() to transpose axes if necessary.
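For example, data stored as (n_samples, n_space, n_time) could be reordered with a single transpose (a small sketch with made-up dimensions):

import numpy as np

X_space_time = np.zeros((15, 64, 200))  # illustrative: samples x space x time
X_for_clustering = np.transpose(X_space_time, (0, 2, 1))  # samples x time x space
print(X_for_clustering.shape)  # (15, 200, 64)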

References#

[1] Ridgway, G. R., Litvak, V., Flandin, G., Friston, K. J., & Penny, W. D. (2012). The problem of low variance voxels in statistical parametric mapping; a new hat avoids a haircut. NeuroImage, 59(3), 2131-2141.

[2] Smith, S. M., & Nichols, T. E. (2009). Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage, 44(1), 83-98.
