mne.time_frequency.csd_array_multitaper(X, sfreq, t0=0, fmin=0, fmax=inf, tmin=None, tmax=None, ch_names=None, n_fft=None, bandwidth=None, adaptive=False, low_bias=True, projs=None, n_jobs=1, verbose=None)

Estimate cross-spectral density from an array using a multitaper method.

Parameters

X : array-like, shape (n_epochs, n_channels, n_times)

The time series data consisting of n_epochs separate observations of signals with n_channels time-series of length n_times.

sfreq : float

Sampling frequency of observations.

t0 : float

Time of the first sample relative to the onset of the epoch, in seconds. Defaults to 0.

fmin : float

Minimum frequency of interest, in Hertz.

fmax : float | np.inf

Maximum frequency of interest, in Hertz.

tmin : float | None

Minimum time instant to consider, in seconds. If None, start at the first sample.

tmax : float | None

Maximum time instant to consider, in seconds. If None, end at the last sample.

ch_names : list of str | None

A name for each time series. If None (the default), the series will be named ‘SERIES###’.

n_fft : int | None

Length of the FFT. If None, the exact number of samples between tmin and tmax will be used.

bandwidth : float | None

The bandwidth of the multitaper windowing function in Hz.

adaptive : bool

Use adaptive weights to combine the tapered spectra into the PSD.

low_bias : bool

Only use tapers with more than 90% spectral concentration within bandwidth.
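The tapers in question are discrete prolate spheroidal sequences (DPSS), whose eigenvalues measure spectral concentration within the bandwidth. A minimal sketch of the `low_bias` filtering idea using SciPy's DPSS routine (the window length, sampling rate, and bandwidth here are arbitrary illustration values; the exact taper-count rule inside MNE may differ):

```python
import numpy as np
from scipy.signal.windows import dpss

# Illustrative values: a 1 s window at 200 Hz with a 4 Hz bandwidth.
# The time-half-bandwidth product is NW = n_times * (bandwidth / 2) / sfreq.
n_times, sfreq, bandwidth = 200, 200.0, 4.0
nw = n_times * (bandwidth / 2.0) / sfreq  # = 2.0

# A common choice is to request roughly 2*NW - 1 tapers and then filter.
tapers, eigvals = dpss(n_times, nw, Kmax=int(2 * nw - 1), return_ratios=True)

# low_bias=True keeps only tapers whose spectral concentration
# (eigenvalue) exceeds 0.9; poorly concentrated tapers add bias.
keep = eigvals > 0.9
print(eigvals)
print(keep)
```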

projs : list of Projection | None

List of projectors to store in the CSD object. Defaults to None, which means no projectors are stored.

n_jobs : int

Number of jobs to run in parallel. Defaults to 1.

verbose : bool | str | int | None

If not None, override the default verbose level (see mne.verbose() and the Logging documentation for more).

Returns

csd : instance of CrossSpectralDensity

The computed cross-spectral density.