Estimate cross-spectral density from an array using a multitaper method.
Parameters
----------
X : array, shape (n_epochs, n_channels, n_times)
    The time series data consisting of n_epochs separate observations
    of signals with n_channels time-series of length n_times.
sfreq : float
    Sampling frequency of observations.
t0 : float
    Time of the first sample relative to the onset of the epoch, in
    seconds. Defaults to 0.
fmin : float
    Minimum frequency of interest, in Hertz.
fmax : float | numpy.inf
    Maximum frequency of interest, in Hertz.
tmin : float | None
    Minimum time instant to consider, in seconds. If None, start at the
    first sample.
tmax : float | None
    Maximum time instant to consider, in seconds. If None, end at the
    last sample.
ch_names : list of str | None
    A name for each time series. If None (the default), the series will
    be named 'SERIES###'.
n_fft : int | None
    Length of the FFT. If None, the exact number of samples between
    tmin and tmax will be used.
bandwidth : float | None
    The bandwidth of the multitaper windowing function in Hz.
adaptive : bool
    Use adaptive weights to combine the tapered spectra into PSD.
low_bias : bool
    Only use tapers with more than 90% spectral concentration within
    bandwidth (see the taper sketch after this list).
projs : list of Projection | None
    List of projectors to store in the CSD object. Defaults to None,
    which means no projectors are stored.
n_jobs : int | None
    The number of jobs to run in parallel. If -1, it is set to the
    number of CPU cores. Requires the joblib package. None (default) is
    a marker for 'unset' that will be interpreted as n_jobs=1
    (sequential execution) unless the call is performed under a
    joblib.parallel_backend() context manager that sets another value
    for n_jobs (see the joblib example after this list).
verbose : bool | str | int | None
    Control verbosity of the logging output. If None, use the default
    verbosity level. See the logging documentation and mne.verbose()
    for details. Should only be passed as a keyword argument.
Returns
-------
csd : CrossSpectralDensity
    The computed cross-spectral density.
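
A minimal usage sketch, assuming this page documents
mne.time_frequency.csd_array_multitaper. The simulated data, channel
names, and frequency band below are illustrative choices, not
prescribed values.

    import numpy as np
    from mne.time_frequency import csd_array_multitaper

    # Simulated data: 10 epochs, 3 channels, 1 s of samples at 256 Hz.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10, 3, 256))
    sfreq = 256.0

    # Estimate the CSD in the 8-12 Hz band with a 4 Hz taper bandwidth.
    csd = csd_array_multitaper(
        X, sfreq, fmin=8.0, fmax=12.0, bandwidth=4.0,
        ch_names=["ch1", "ch2", "ch3"],
    )

    # Average over frequencies and extract the complex CSD matrix.
    print(csd.mean().get_data().shape)  # (3, 3)

The returned object stores one matrix per estimated frequency; calling
mean() collapses them to a single bin, so get_data() needs no frequency
argument in this sketch.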
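
As noted under n_jobs, a joblib.parallel_backend() context can override
the default sequential execution. A sketch assuming joblib is
installed, reusing X and sfreq from the previous example:

    import joblib

    # Inside this context, n_jobs=None inherits the backend's n_jobs=4
    # instead of falling back to sequential execution.
    with joblib.parallel_backend("loky", n_jobs=4):
        csd = csd_array_multitaper(X, sfreq, fmin=8.0, fmax=12.0)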
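
The bandwidth and low_bias parameters map onto the standard DPSS taper
construction. The following is a generic multitaper sketch built on
scipy.signal.windows.dpss, not this function's exact internals; the
2*NW - 1 taper count is the common convention, and the 0.9 cutoff
reflects the 90% concentration figure quoted for low_bias above.

    from scipy.signal.windows import dpss

    n_times, sfreq, bandwidth = 256, 256.0, 4.0

    # Standardized half-bandwidth NW and the usual 2*NW - 1 taper count.
    half_nbw = bandwidth * n_times / (2.0 * sfreq)
    n_tapers = max(int(2 * half_nbw) - 1, 1)

    # DPSS tapers and their in-band spectral concentration ratios.
    tapers, ratios = dpss(n_times, half_nbw, Kmax=n_tapers,
                          return_ratios=True)

    # low_bias keeps only tapers with >90% concentration in the band.
    tapers = tapers[ratios > 0.9]
    print(tapers.shape)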