mne.beamformer.rap_music

mne.beamformer.rap_music(evoked, forward, noise_cov, n_dipoles=5, return_residual=False, *, verbose=None)

RAP-MUSIC source localization method.

Compute Recursively Applied and Projected MUltiple SIgnal Classification (RAP-MUSIC) [1][2] on evoked data.

Note

The goodness of fit (GOF) of all the returned dipoles is the same and corresponds to the GOF of the full set of dipoles.

Parameters
evoked : instance of Evoked

Evoked data to localize.

forward : instance of Forward

Forward operator.

noise_cov : instance of Covariance

The noise covariance.

n_dipoles : int

The number of dipoles to look for. The default value is 5.

return_residual : bool

If True, the residual is returned as an Evoked instance.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns
dipoles : list of instance of Dipole

The dipole fits.

residual : instance of Evoked

The residual, i.e., the data not explained by the dipoles. Only returned if return_residual is True.
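
A minimal usage sketch tying the parameters and returns together, loosely following the "Compute Rap-Music on evoked data" example linked at the end of this page; the sample-dataset paths, the 'Right Auditory' condition, the crop window, and the choice of two dipoles are illustrative assumptions (a recent MNE version with a pathlib-based data_path() is also assumed).

import mne
from mne.beamformer import rap_music
from mne.datasets import sample

data_path = sample.data_path()
meg_path = data_path / "MEG" / "sample"
evoked_fname = meg_path / "sample_audvis-ave.fif"
fwd_fname = meg_path / "sample_audvis-meg-eeg-oct-6-fwd.fif"
cov_fname = meg_path / "sample_audvis-cov.fif"

# Read the evoked response, then restrict it to the N100 window and MEG channels
evoked = mne.read_evokeds(evoked_fname, condition="Right Auditory", baseline=(None, 0))
evoked.crop(tmin=0.05, tmax=0.15)
evoked.pick("meg")

# Read the forward operator and the noise covariance
forward = mne.read_forward_solution(fwd_fname)
noise_cov = mne.read_cov(cov_fname)

# Fit two dipoles with RAP-MUSIC; also return the unexplained residual
dipoles, residual = rap_music(
    evoked, forward, noise_cov, n_dipoles=2, return_residual=True
)

# Each entry is an mne.Dipole; the GOF is that of the full dipole set,
# so dipoles[0].gof and dipoles[1].gof are identical (see the note above).
for dip in dipoles:
    print(dip)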

Notes

New in v0.9.0.

References

[1] John C. Mosher and Richard M. Leahy. Source localization using recursively applied and projected (RAP) MUSIC. IEEE Transactions on Signal Processing, 47(2):332–340, 1999. doi:10.1109/78.740118.

[2] J.C. Mosher and R.M. Leahy. EEG and MEG source localization using recursively applied (RAP) MUSIC. In Conference Record of The Thirtieth Asilomar Conference on Signals, Systems and Computers, 1201–1207 vol. 2. November 1996. ISSN: 1058-6393. doi:10.1109/ACSSC.1996.599135.

Examples using mne.beamformer.rap_music

Compute Rap-Music on evoked data