Estimate cross-spectral density from an array using Morlet wavelets.
Parameters
----------
X : ndarray, shape (n_epochs, n_channels, n_times)
    The time series data, consisting of n_epochs separate observations
    of signals with n_channels time series of length n_times.
sfreq : float
    Sampling frequency of the observations.
frequencies : list of float
    The frequencies of interest, in Hertz.
t0 : float
    Time of the first sample relative to the onset of the epoch, in
    seconds. Defaults to 0.
tmin : float | None
    Minimum time instant to consider, in seconds. If None, start at the
    first sample.
tmax : float | None
    Maximum time instant to consider, in seconds. If None, end at the
    last sample.
ch_names : list of str | None
    A name for each time series. If None (the default), the series will
    be named 'SERIES###'.
n_cycles : float | list of float | None
    Number of cycles to use when constructing the Morlet wavelets.
    Either a fixed number, or one per frequency. Defaults to 7.
use_fft : bool
    Whether to use FFT-based convolution to compute the wavelet
    transform. Defaults to True.
decim : int | slice
    Decimation factor to apply during time-frequency decomposition, to
    reduce memory usage. Defaults to 1 (no decimation).
projs : list of Projection | None
    List of projectors to store in the CSD object. Defaults to None,
    which means the projectors defined in the Epochs object will be
    copied.
n_jobs : int | None
    The number of jobs to run in parallel. If -1, it is set to the
    number of CPU cores. Requires the joblib package. None (default)
    is a marker for 'unset' that will be interpreted as n_jobs=1
    (sequential execution) unless the call is performed under a
    joblib.parallel_backend() context manager that sets another value
    for n_jobs (see the joblib sketch below).
verbose : str | int | None
    Control verbosity of the logging output. If None, use the default
    verbosity level. See the logging documentation and mne.verbose()
    for details. Should only be passed as a keyword argument.
Returns
-------
csd : CrossSpectralDensity
    The computed cross-spectral density.
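
Examples
--------
A minimal usage sketch, assuming this reference describes MNE-Python's
mne.time_frequency.csd_array_morlet. The simulated data, channel names,
and frequencies below are illustrative placeholders, not values taken
from this reference:

    import numpy as np
    from mne.time_frequency import csd_array_morlet

    # Simulated data: 10 epochs, 4 channels, 1 s of signal at 200 Hz.
    # (Illustrative placeholders only.)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10, 4, 200))

    csd = csd_array_morlet(
        X,
        sfreq=200.0,                     # sampling frequency of the observations
        frequencies=[10.0, 12.0, 14.0],  # frequencies of interest, in Hertz
        ch_names=["ch1", "ch2", "ch3", "ch4"],  # one name per time series
        n_cycles=7,                      # the default number of wavelet cycles
    )

    # The result is a CrossSpectralDensity; get_data() returns the complex
    # (n_channels, n_channels) CSD matrix at a given frequency.
    print(csd.get_data(frequency=10.0).shape)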
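
As noted for n_jobs, parallel execution can also be requested through a
joblib context manager rather than the n_jobs argument itself. A sketch
under the same assumptions as above; the backend name and worker count
are arbitrary choices, not values from this reference:

    from joblib import parallel_backend

    # n_jobs is left unset in the call, so the value from the surrounding
    # context manager (4 workers on the "loky" backend) is used instead.
    with parallel_backend("loky", n_jobs=4):
        csd = csd_array_morlet(X, sfreq=200.0, frequencies=[10.0, 12.0, 14.0])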