mne.beamformer.rap_music

mne.beamformer.rap_music(evoked, forward, noise_cov, n_dipoles=5, return_residual=False, *, verbose=None)

RAP-MUSIC source localization method.

Compute Recursively Applied and Projected MUltiple SIgnal Classification (RAP-MUSIC) [1][2] on evoked data.

Note

The goodness of fit (GOF) of all the returned dipoles is the same and corresponds to the GOF of the full set of dipoles.

Parameters:
evoked : instance of Evoked

Evoked data to localize.

forward : instance of Forward

Forward operator.

noise_cov : instance of Covariance

The noise covariance.

n_dipoles : int

The number of dipoles to look for. The default value is 5.

return_residual : bool

If True, the residual is returned as an Evoked instance.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
dipoles : list of instance of Dipole

The dipole fits.

residual : instance of Evoked

The residual, i.e. the data not explained by the dipoles. Only returned if return_residual is True.

Notes

New in version 0.9.0.
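
A minimal usage sketch, assuming the MNE "sample" dataset is available locally; the file names and the 'Right Auditory' condition below are illustrative and can be replaced with your own evoked, forward, and noise-covariance data:

import os.path as op

import mne
from mne.beamformer import rap_music
from mne.datasets import sample

# Paths below point into the MNE "sample" dataset; adapt them to your data.
data_path = sample.data_path()
evoked_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fwd_fname = op.join(data_path, 'MEG', 'sample',
                    'sample_audvis-meg-eeg-oct-6-fwd.fif')
cov_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')

# Load the evoked response, the forward operator, and the noise covariance.
evoked = mne.read_evokeds(evoked_fname, condition='Right Auditory',
                          baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)  # restrict to MEG channels
forward = mne.read_forward_solution(fwd_fname)
noise_cov = mne.read_cov(cov_fname)

# Fit two dipoles with RAP-MUSIC and also return the unexplained data.
dipoles, residual = rap_music(evoked, forward, noise_cov, n_dipoles=2,
                              return_residual=True)

# All returned dipoles report the same goodness of fit (see the note above).
print([dip.gof for dip in dipoles])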

References

Examples using mne.beamformer.rap_music

Compute Rap-Music on evoked data
