mne.beamformer.trap_music#
- mne.beamformer.trap_music(evoked, forward, noise_cov, n_dipoles=5, return_residual=False, *, verbose=None)[source]#
TRAP-MUSIC source localization method.
Compute Truncated Recursively Applied and Projected MUltiple SIgnal Classification (TRAP-MUSIC) [1] on evoked data.
Note
The goodness of fit (GOF) of all the returned dipoles is the same and corresponds to the GOF of the full set of dipoles.
- Parameters
  - evoked : instance of Evoked
    Evoked data to localize.
  - forward : instance of Forward
    Forward operator.
  - noise_cov : instance of Covariance
    The noise covariance.
  - n_dipoles : int
    The number of dipoles to look for. The default value is 5.
  - return_residual : bool
    If True, the residual is returned as an Evoked instance.
  - verbose : bool | str | int | None
    Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.
- Returns
  - dipoles : list of instance of Dipole
    The dipole fits.
  - residual : instance of Evoked
    The residual, i.e. the data not explained by the dipoles. Only returned if return_residual is True.
See also
Notes
New in v1.4.
References
- [1]
Niko Mäkelä, Matti Stenroos, Jukka Sarvas, and Risto J. Ilmoniemi. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization. NeuroImage, 167:73–83, 2018. doi:10.1016/j.neuroimage.2017.11.013.