mne.beamformer.rap_music

mne.beamformer.rap_music(evoked, forward, noise_cov, n_dipoles=5, return_residual=False, verbose=None)

RAP-MUSIC source localization method.

Compute Recursively Applied and Projected MUltiple SIgnal Classification (RAP-MUSIC) on evoked data.

Note

The goodness of fit (GOF) of all the returned dipoles is the same and corresponds to the GOF of the full set of dipoles.

Parameters:
evoked : instance of Evoked

Evoked data to localize.

forward : instance of Forward

Forward operator.

noise_cov : instance of Covariance

The noise covariance.

n_dipoles : int

The number of dipoles to look for. The default value is 5.

return_residual : bool

If True, the residual is returned as an Evoked instance.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
dipoles : list of instance of Dipole

The dipole fits.

residual : instance of Evoked

The residual, i.e. the part of the data not explained by the dipoles. Only returned if return_residual is True.
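
A minimal usage sketch is given below. It assumes the MNE sample dataset is available locally via mne.datasets.sample.data_path() (which returns a pathlib.Path in recent MNE versions); the file names and the "Right Auditory" condition follow that dataset's layout and would need adjusting for other data.

import mne
from mne.beamformer import rap_music

data_path = mne.datasets.sample.data_path()  # downloads the sample dataset if missing
meg_path = data_path / "MEG" / "sample"

# Load the evoked response, the forward operator, and the noise covariance.
evoked = mne.read_evokeds(
    meg_path / "sample_audvis-ave.fif",
    condition="Right Auditory",
    baseline=(None, 0),
)
evoked.pick_types(meg=True, eeg=False)  # restrict to MEG channels
forward = mne.read_forward_solution(meg_path / "sample_audvis-meg-eeg-oct-6-fwd.fif")
noise_cov = mne.read_cov(meg_path / "sample_audvis-cov.fif")

# Fit two dipoles and also return the part of the data they do not explain.
dipoles, residual = rap_music(
    evoked, forward, noise_cov, n_dipoles=2, return_residual=True
)

# Per the note above, every returned dipole reports the GOF of the full set.
for dip in dipoles:
    print(dip.pos[0], dip.gof.max())

With return_residual=True, the unexplained part of the signal can then be inspected directly, for example by plotting residual next to evoked.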

See also

mne.fit_dipole

Notes

The references are:

J.C. Mosher and R.M. Leahy. Source localization using recursively applied and projected (RAP) MUSIC. IEEE Transactions on Signal Processing, 47(2):332-340, February 1999. https://doi.org/10.1109/78.740118

J.C. Mosher and R.M. Leahy. EEG and MEG source localization using recursively applied (RAP) MUSIC. In Signals, Systems and Computers, 1996, vol. 2, pp. 1201-1207, 3-6 Nov. 1996. https://doi.org/10.1109/ACSSC.1996.599135

New in version 0.9.0.

Examples using mne.beamformer.rap_music

Compute Rap-Music on evoked data