Class for visualizing a brain.
Warning
The API for this class is not currently complete. We suggest using
mne.viz.plot_source_estimates() with the PyVista backend enabled to obtain a
Brain instance.
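As the warning suggests, a Brain instance is usually obtained from mne.viz.plot_source_estimates() (or its stc.plot() shortcut) rather than constructed directly. A minimal sketch, assuming an existing SourceEstimate `stc` and a valid `subjects_dir`; the plotting call itself is commented out because it needs source data and an open 3D backend:

```python
# Options assembled for mne.viz.plot_source_estimates(); the values here
# are illustrative choices, not required defaults.
plot_kwargs = dict(
    hemi="split",                 # show both hemispheres side by side
    views=["lateral", "medial"],  # multiple views require the PyVista backend
    backend="pyvistaqt",          # ensures a Brain instance is returned
    smoothing_steps=7,            # interpolate sparse data onto the surface
)
# brain = stc.plot(subjects_dir=subjects_dir, **plot_kwargs)  # -> mne.viz.Brain
```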
Parameters
subject : str
    Subject name in FreeSurfer subjects dir.
hemi : str
    Hemisphere id (i.e. 'lh', 'rh', 'both', or 'split'). In the case of
    'both', both hemispheres are shown in the same window. In the case of
    'split', hemispheres are displayed side-by-side in different viewing
    panes.
surf : str
    FreeSurfer surface mesh name (i.e. 'white', 'inflated', etc.).
title : str
    Title for the window.
cortex : str, list, or dict
    Specifies how the cortical surface is rendered. Options:
    - A style name: 'classic' (default), 'high_contrast', 'low_contrast', or
      'bone'.
    - A single color, e.g. 'red' or (0.1, 0.4, 1.).
    - A pair of colors for gyral (first) and sulcal (second) regions, e.g.
      ['red', 'blue'] or [(1, 0, 0), (0, 0, 1)].
    - A dict with keys 'vmin', 'vmax', and 'colormap', with values used to
      render the binarized curvature (where 0 is gyral, 1 is sulcal).
    Changed in version 0.24: Add support for non-string arguments.
alpha : float in [0, 1]
    Alpha level to control opacity of the cortical surface.
size : int | array-like, shape (2,)
    The size of the window, in pixels. Can be one number to specify a square
    window, or a length-2 sequence to specify (width, height).
background : tuple(int, int, int)
    The color definition of the background: (red, green, blue).
foreground : matplotlib color | None
    Color of the foreground (will be used for colorbars and text). None
    (default) will use black or white depending on the value of background.
figure : list of Figure | None
    If None (default), a new window will be created with the appropriate
    views.
subjects_dir : str | None
    If not None, this directory will be used as the subjects directory
    instead of the value set using the SUBJECTS_DIR environment variable.
views : str | list
    View to use. Using multiple views (list) is not supported for the mpl
    backend. See Brain.show_view for valid string options.
offset : bool | str
    If True, shifts the right- or left-most x coordinate of the left and
    right surfaces, respectively, to be at zero. This is useful for viewing
    inflated surfaces where hemispheres typically overlap. Can be 'auto'
    (default) to use True with inflated surfaces and False otherwise. Only
    used when hemi='both'.
    Changed in version 0.23: Default changed to 'auto'.
show_toolbar : bool
    If True, toolbars will be shown for each view.
offscreen : bool
    If True, rendering will be done offscreen (not shown). Useful mostly for
    generating images or screenshots, but can be buggy. Use at your own risk.
interaction : str
    Can be 'trackball' (default) or 'terrain', i.e. a turntable-style camera.
units : str
    Can be 'm' or 'mm' (default).
view_layout : str
    Can be 'vertical' (default) or 'horizontal'. When using 'horizontal'
    mode, the PyVista backend must be used and hemi cannot be 'split'.
silhouette : dict | bool
    As a dict, it contains the color, linewidth, alpha (opacity), and
    decimate (level of decimation between 0 and 1, or None) of the brain's
    silhouette to display. If True, the default values are used; if False, no
    silhouette will be displayed. Defaults to False.
theme : str | path-like
    Can be 'auto', 'light', or 'dark', or a path-like to a custom stylesheet.
    For dark mode and automatic dark-mode detection, qdarkstyle and
    darkdetect, respectively, are required. If None (default), the config
    option MNE_3D_OPTION_THEME will be used, defaulting to 'auto' if it's not
    found.
show : bool
    Display the window as soon as it is ready. Defaults to True.
block : bool
    If True, start the Qt application event loop. Defaults to False.
Notes
This table shows the capabilities of each Brain backend ("✓" for full
support, and "-" for partial support), comparing surfer.Brain with
mne.viz.Brain. [The per-function table is not fully recoverable from this
extraction; surviving row labels include data, foci, labels, TimeViewer,
view_layout, flatmaps, vertex picking, and label picking.]
Methods
- Add an annotation file.
- Display data from a numpy array on the surface or volume.
- Add a quiver to render positions of dipoles.
- Add spherical foci, possibly mapping to displayed surf.
- Add a quiver to render positions of dipoles (from a forward solution).
- Add a mesh to render the outer head surface.
- Add an ROI label to the image.
- Add mesh objects to represent sensor positions.
- Add a mesh to render the skull surface.
- Add text to the visualization.
- Add labels to the rendering from an anatomical segmentation.
- Automatically detect fitting scaling parameters.
- Clear the picking glyphs.
- Close all figures and clean up the data structure.
- Return the vertices of the picked points.
- Get the camera orientation for a given subplot display.
- Display the help window.
- Plot the vertex time course.
- Add the time line to the MPL widget.
- Remove all annotations from the image.
- Remove rendered data from the mesh.
- Remove dipole objects from the rendered scene.
- Remove forward sources from the rendered scene.
- Remove head objects from the rendered scene.
- Remove all the ROI labels from the image.
- Remove sensors from the rendered scene.
- Remove skull objects from the rendered scene.
- Remove text from the rendered scene.
- Remove the volume labels from the rendered scene.
- Reset view and time step.
- Reset the camera.
- Restore original scaling parameters.
- Save view from all panels to disk.
- Save a movie (for data with a time axis).
- Generate a screenshot of current view.
- Set the number of smoothing steps.
- Set the time playback speed.
- Set the time to display (in seconds).
- Set the interpolation mode.
- Set the time point shown (can be a float to interpolate).
- Configure the time viewer parameters.
- Display the window.
- Orient camera to display view.
- Toggle the interface.
- Toggle time playback.
- Update color map.
Add an annotation file.
annot : str | tuple
    Either path to annotation file or annotation name. Alternatively, the
    annotation can be specified as a (labels, ctab) tuple per hemisphere,
    i.e. annot=(labels, ctab) for a single hemisphere or
    annot=((lh_labels, lh_ctab), (rh_labels, rh_ctab)) for both hemispheres.
    labels and ctab should be arrays as returned by
    nibabel.freesurfer.io.read_annot().
borders : bool | int
    Show only label borders. If int, specify the number of steps (away from
    the true border) along the cortical mesh to include as part of the border
    definition.
alpha : float in [0, 1]
    Alpha level to control opacity. Default is 1.
hemi : str | None
    If None, it is assumed to belong to the hemisphere being shown. If two
    hemispheres are being shown, data must exist for both hemispheres.
remove_existing : bool
    If True (default), remove old annotations.
color : matplotlib-style color code
    If used, show all annotations in the same (specified) color. Probably
    useful only when showing annotation borders.
Examples using add_annotation:
Visualize source time courses (stcs)
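A short sketch of typical usage, assuming an existing Brain instance named `brain`; the call itself is commented out because it needs an open 3D window:

```python
# Illustrative arguments for add_annotation(); 'aparc' is the standard
# FreeSurfer Desikan-Killiany parcellation name.
annot_kwargs = dict(
    borders=2,             # draw 2-step-wide borders instead of filled labels
    alpha=0.8,
    remove_existing=True,  # clear any previously added annotation first
)
# brain.add_annotation("aparc", **annot_kwargs)
```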
Display data from a numpy array on the surface or volume.
This provides a similar interface to surfer.Brain.add_overlay(), but it
displays the data with a single colormap. It offers more flexibility over
the colormap, and provides a way to display four-dimensional data (i.e., a
timecourse) or five-dimensional data (i.e., a vector-valued timecourse).
Note
fmin sets the low end of the colormap, and is separate from thresh (this is
a different convention from surfer.Brain.add_overlay()).
array : numpy array, shape (n_vertices[, 3][, n_times])
    Data array. For the data to be understood as vector-valued (3 values per
    vertex corresponding to X/Y/Z surface RAS), array must have all 3
    dimensions. If vectors with no time dimension are desired, consider using
    a singleton (e.g., np.newaxis) to create a "time" dimension and pass
    time_label=None (vector values are not supported).
fmin : float
    Minimum value in colormap (uses real fmin if None).
fmid : float
    Intermediate value in colormap (fmid between fmin and fmax if None).
fmax : float
    Maximum value in colormap (uses real max if None).
thresh : None or float
    Not supported yet. If not None, values below thresh will not be visible.
center : float or None
    If not None, center of a divergent colormap; changes the meaning of fmin,
    fmax and fmid.
transparent : bool | None
    If True: use a linear transparency between fmin and fmid and make values
    below fmin fully transparent (symmetrically for divergent colormaps).
    None will choose automatically based on colormap type.
colormap : str, list of color, or array
    Name of matplotlib colormap to use, a list of matplotlib colors, or a
    custom look-up table (an n x 4 array coded with RGBA values between 0 and
    255). The default "auto" chooses a default divergent colormap if "center"
    is given (currently "icefire"), otherwise a default sequential colormap
    (currently "rocket").
alpha : float in [0, 1]
    Alpha level to control opacity of the overlay.
vertices : numpy array
    Vertices for which the data is defined (needed if len(data) < nvtx).
smoothing_steps : int or None
    Number of smoothing steps (smoothing is used if len(data) < nvtx). The
    value 'nearest' can be used too. None (default) will use as many as
    necessary to fill the surface.
time : numpy array
    Time points in the data array (if data is 2D or 3D).
time_label : str | callable | None
    Format of the time label (a format string, a function that maps floating
    point time values to strings, or None for no label). The default is
    'auto', which will use time=%0.2f ms if there is more than one time
    point.
colorbar : bool | tuple
    Whether to add a colorbar to the figure. Can also be a tuple to give the
    (row, col) index of where to put the colorbar.
hemi : str | None
    If None, it is assumed to belong to the hemisphere being shown. If two
    hemispheres are being shown, an error will be thrown.
remove_existing : bool
    Not supported yet. Remove surface added by previous "add_data" call.
    Useful for conserving memory when displaying different data in a loop.
time_label_size : int
    Font size of the time label (default 14).
initial_time : float | None
    Time initially shown in the plot. None to use the first time sample
    (default).
scale_factor : float | None (default)
    The scale factor to use when displaying glyphs for vector-valued data.
vector_alpha : float | None
    Alpha level to control opacity of the arrows. Only used for
    vector-valued data. If None (default), alpha is used.
clim : dict
    Original clim arguments.
src : instance of SourceSpaces | None
    The source space corresponding to the source estimate. Only necessary if
    the STC is a volume or mixed source estimate.
volume_options : float | dict | None
    Options for volumetric source estimate plotting, with key/value pairs:
    - 'resolution' : float | None
        Resolution (in mm) of volume rendering. Smaller (e.g., 1.) looks
        better at the cost of speed. None (default) uses the volume source
        space resolution, which is often something like 7 or 5 mm, without
        resampling.
    - 'blending' : str
        Can be "mip" (default) for maximum intensity projection or
        "composite" for composite blending using alpha values.
    - 'alpha' : float | None
        Alpha for the volumetric rendering. Defaults are 0.4 for vector
        source estimates and 1.0 for scalar source estimates.
    - 'surface_alpha' : float | None
        Alpha for the surface enclosing the volume(s). None (default) will
        use half the volume alpha. Set to zero to avoid plotting the surface.
    - 'silhouette_alpha' : float | None
        Alpha for a silhouette along the outside of the volume. None
        (default) will use 0.25 * surface_alpha.
    - 'silhouette_linewidth' : float
        The line width to use for the silhouette. Default is 2.
    A float input (default 1.) or None will be used for the 'resolution'
    entry.
colorbar_kwargs : dict | None
    Options to pass to pyvista.Plotter.add_scalar_bar() (e.g.,
    dict(title_font_size=10)).
verbose : str | int | None
    Control verbosity of the logging output. If None, use the default
    verbosity level. See the logging documentation and mne.verbose() for
    details. Should only be passed as a keyword argument.
Notes
If the data is defined for a subset of vertices (specified by the "vertices"
parameter), a smoothing method is used to interpolate the data onto the
high-resolution surface. If the data is defined for a subsampled version of
the surface, smoothing_steps can be set to None, in which case only as many
smoothing steps are applied as needed to fill the whole surface with
non-zeros.
Due to a VTK alpha rendering bug, vector_alpha is clamped to be strictly
< 1.
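The fmin/fmid/fmax convention can be sketched numerically. The percentile scheme below is illustrative only (a plausible "auto" heuristic, not necessarily MNE's exact defaults), and the add_data call is commented out because it needs an existing `brain` and `times`:

```python
import numpy as np

def auto_lims(data, pcts=(96.0, 97.5, 99.95)):
    """Pick fmin/fmid/fmax as percentiles of the absolute data values."""
    return tuple(np.percentile(np.abs(data), pcts))

rng = np.random.default_rng(0)
data = rng.normal(size=(20484, 5))  # toy (n_vertices, n_times) array
fmin, fmid, fmax = auto_lims(data)
# brain.add_data(data, fmin=fmin, fmid=fmid, fmax=fmax, time=times)  # sketch
```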
Examples using add_data:
Add a quiver to render positions of dipoles.
dipole : instance of Dipole
    Dipole object containing position, orientation and amplitude of one or
    more dipoles, or in the forward solution.
trans : str | dict | instance of Transform
    If str, the path to the head<->MRI transform *-trans.fif file produced
    during coregistration. Can also be 'fsaverage' to use the built-in
    fsaverage transformation.
colors : list | matplotlib-style color | None
    A single color or list of anything matplotlib accepts: string, RGB, hex,
    etc. Default red.
alpha : float in [0, 1]
    Alpha level to control opacity. Default 1.
scales : list | float | None
    The size of the arrow representing the dipole in mne.viz.Brain units.
    Default 5mm.
Notes
New in version 1.0.
Examples using add_dipole:
Add spherical foci, possibly mapping to displayed surf.
The foci spheres can be displayed at the coordinates given, or mapped through a surface geometry. In other words, coordinates from a volume-based analysis in MNI space can be displayed on an inflated average surface by finding the closest vertex on the white surface and mapping to that vertex on the inflated mesh.
coords : ndarray, shape (n_coords, 3)
    Coordinates in stereotaxic space (default) or array of vertex ids (with
    coords_as_verts=True).
coords_as_verts : bool
    Whether the coords parameter should be interpreted as vertex ids.
map_surface : str | None
    Surface to project the coordinates to, or None to use raw coords. When
    set to a surface, each focus is positioned at the closest vertex in the
    mesh.
scale_factor : float
    Controls the size of the foci spheres (relative to 1cm).
color : matplotlib color
    A list of anything matplotlib accepts: string, RGB, hex, etc.
alpha : float in [0, 1]
    Alpha level to control opacity. Default is 1.
name : str
    Internal name to use.
hemi : str | None
    If None, it is assumed to belong to the hemisphere being shown. If two
    hemispheres are being shown, an error will be thrown.
resolution : int
    The resolution of the spheres.
Examples using add_foci:
How MNE uses FreeSurfer’s outputs
The SourceEstimate data structure
Source localization with MNE, dSPM, sLORETA, and eLORETA
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Compute cross-talk functions for LCMV beamformers
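A hedged sketch of the volume-to-surface mapping described above, assuming an existing Brain instance `brain`; the coordinate is an arbitrary illustrative MNI location:

```python
import numpy as np

# Illustrative: one MNI-space coordinate (mm) to display as a focus sphere.
coords = np.array([[-40.0, -20.0, 50.0]])  # shape (n_coords, 3)
foci_kwargs = dict(
    map_surface="white",  # snap each focus to the closest white-surface vertex
    scale_factor=0.8,     # sphere size relative to 1 cm
    color="blue",
)
# brain.add_foci(coords, **foci_kwargs)
```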
Add a quiver to render positions of dipoles.
fwd : instance of Forward
    The forward solution. If present, the orientations of the dipoles present
    in the forward solution are displayed.
trans : str | dict | instance of Transform
    If str, the path to the head<->MRI transform *-trans.fif file produced
    during coregistration. Can also be 'fsaverage' to use the built-in
    fsaverage transformation.
alpha : float in [0, 1]
    Alpha level to control opacity. Default 1.
scale : None | float
    The size of the arrow representing the dipoles in mne.viz.Brain units.
    Default 1.5mm.
Notes
New in version 1.0.
Add a mesh to render the outer head surface.
Notes
New in version 0.24.
Examples using add_head:
Importing data from fNIRS devices
Add an ROI label to the image.
label : str | instance of Label
    Label filepath or name. Can also be an instance of an object with
    attributes "hemi", "vertices", "name", and optionally "color" and
    "values" (if scalar_thresh is not None).
color : matplotlib color | None
    Anything matplotlib accepts: string, RGB, hex, etc. (default "crimson").
alpha : float in [0, 1]
    Alpha level to control opacity.
scalar_thresh : None | float
    Threshold the label ids using this value in the label file's scalar
    field (i.e. label only vertices with scalar >= thresh).
borders : bool | int
    Show only label borders. If int, specify the number of steps (away from
    the true border) along the cortical mesh to include as part of the border
    definition.
hemi : str | None
    If None, it is assumed to belong to the hemisphere being shown.
subdir : None | str
    If a label is specified as name, subdir can be used to indicate that the
    label file is in a sub-directory of the subject's label directory rather
    than in the label directory itself (e.g. for
    $SUBJECTS_DIR/$SUBJECT/label/aparc/lh.cuneus.label use
    brain.add_label('cuneus', subdir='aparc')).
reset_camera : bool
    If True, reset the camera view after adding the label. Defaults to True.
Notes
To remove previously added labels, run Brain.remove_labels().
Examples using add_label:
Compute Power Spectral Density of inverse solution from single epochs
Generate a functional label from source estimates
Compute MxNE with time-frequency sparse prior
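A short sketch mirroring the subdir example above ('cuneus' from the 'aparc' folder), assuming an existing Brain instance `brain`:

```python
# Illustrative: show only the border of a label stored in a sub-directory
# of the subject's label directory.
label_kwargs = dict(borders=True, color="crimson", alpha=0.7, subdir="aparc")
# brain.add_label("cuneus", **label_kwargs)
```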
Add mesh objects to represent sensor positions.
info : mne.Info
    The mne.Info object with information about the sensors and methods of
    measurement.
trans : str | dict | instance of Transform
    If str, the path to the head<->MRI transform *-trans.fif file produced
    during coregistration. Can also be 'fsaverage' to use the built-in
    fsaverage transformation.
meg : str | list | bool | None
    Can be "helmet", "sensors" or "ref" to show the MEG helmet, sensors or
    reference sensors respectively, or a combination like
    ('helmet', 'sensors') (same as None, default). True translates to
    ('helmet', 'sensors', 'ref').
eeg : str | list
    String options are:
    - 'original' (default; equivalent to True): shows EEG sensors using their
      digitized locations (after transformation to the chosen coord_frame).
    - 'projected': the EEG locations projected onto the scalp, as is done in
      forward modeling.
    Can also be a list of these options, or an empty list ([], equivalent of
    False).
fnirs : str | list | bool | None
    Can be "channels", "pairs", "detectors", and/or "sources" to show the
    fNIRS channel locations, optode locations, or lines between
    source-detector pairs, or a combination like ('pairs', 'channels'). True
    translates to ('pairs',).
ecog : bool
    If True (default), show ECoG sensors.
seeg : bool
    If True (default), show sEEG electrodes.
dbs : bool
    If True (default), show DBS (deep brain stimulation) electrodes.
verbose : str | int | None
    Control verbosity of the logging output. If None, use the default
    verbosity level. See the logging documentation and mne.verbose() for
    details. Should only be passed as a keyword argument.
Notes
New in version 0.24.
Examples using add_sensors:
Importing data from fNIRS devices
Preprocessing functional near-infrared spectroscopy (fNIRS) data
Locating intracranial electrode contacts
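A hedged sketch of sensor-display options, assuming `brain`, `info`, and `trans` already exist from a recording and its coregistration:

```python
# Illustrative add_sensors() options; each value is one of the documented
# choices above, not a required setting.
sensor_kwargs = dict(
    meg=("helmet", "sensors"),  # same as the default (None)
    eeg="projected",            # project EEG locations onto the scalp
    fnirs=("pairs",),           # same as True
)
# brain.add_sensors(info, trans=trans, **sensor_kwargs)
```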
Add a mesh to render the skull surface.
Notes
New in version 0.24.
Add a text to the visualization.
x : float
    X coordinate.
y : float
    Y coordinate.
text : str
    Text to add.
name : str
    Name of the text (text label can be updated using update_text()).
color : tuple
    Color of the text. Default is the foreground color set during
    initialization (default is black or white depending on the background
    color).
opacity : float
    Opacity of the text (default 1.0).
row : int | None
    Row index of the brain to use. Default is the top row.
col : int | None
    Column index of the brain to use. Default is the left-most column.
font_size : float | None
    The font size to use.
justification : str | None
    The text justification.
Examples using add_text:
The SourceEstimate data structure
Source localization with MNE, dSPM, sLORETA, and eLORETA
Computing various MNE solutions
Display sensitivity maps for EEG and MEG sensors
Visualize source leakage among labels using a circular graph
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Compute cross-talk functions for LCMV beamformers
Compute spatial resolution metrics in source space
Compute spatial resolution metrics to compare MEG with EEG+MEG
Add labels to the rendering from an anatomical segmentation.
aseg : str
    The anatomical segmentation file. Default aparc+aseg. This may be any
    anatomical segmentation file in the mri subdirectory of the FreeSurfer
    subject directory.
labels : list
    Labeled regions of interest to plot. See
    mne.get_montage_volume_labels() for one way to determine regions of
    interest. Regions can also be chosen from the FreeSurfer LUT.
colors : list | matplotlib-style color | None
    A list of anything matplotlib accepts: string, RGB, hex, etc. (default
    FreeSurfer LUT colors).
alpha : float in [0, 1]
    Alpha level to control opacity.
smooth : float in [0, 1)
    The smoothing factor to be applied. Default 0 is no smoothing.
fill_hole_size : int | None
    The size of holes to remove in the mesh in voxels. Default is None, no
    holes are removed. Warning: this dilates the boundaries of the surface by
    fill_hole_size voxels, so use the minimal size.
legend : None | dict
    Add a legend displaying the names of the labels. Default (None) is True
    if the number of labels is 10 or fewer. Can also be a dict of kwargs to
    pass to pyvista.Plotter.add_legend().
Notes
New in version 0.24.
Examples using add_volume_labels:
Visualize source time courses (stcs)
Close all figures and cleanup data structure.
Examples using close:
Make figures more publication ready
Data used by time viewer and color bar widgets.
Get the camera orientation for a given subplot display.
roll : float | None
    The roll of the camera rendering the view in degrees.
distance : float | None
    The distance from the camera rendering the view to the focal point in
    plot units (either m or mm).
azimuth : float
    The azimuthal angle of the camera rendering the view in degrees.
elevation : float
    The zenith angle of the camera rendering the view in degrees.
focalpoint : tuple, shape (3,) | None
    The focal point of the camera rendering the view: (x, y, z) in plot
    units (either m or mm).
The interaction style.
Add the time line to the MPL widget.
Force an update of the plot. Defaults to True.
Save view from all panels to disk.
Examples using save_image:
Repeated measures ANOVA on source data with spatio-temporal clustering
Save a movie (for data with a time axis).
The movie is created through the imageio module. The format is determined by
the extension, and additional options can be specified through keyword
arguments that depend on the format; see imageio's format page.
Warning
This method assumes that time is specified in seconds when adding data. If
time is specified in milliseconds this will result in movies 1000 times
longer than expected.
filename : str
    Path at which to save the movie. The extension determines the format
    (e.g., '*.mov', '*.gif', …; see the imageio documentation for available
    formats).
time_dilation : float
    Factor by which to stretch time (default 4). For example, an epoch from
    -100 to 600 ms lasts 700 ms. With time_dilation=4 this would result in a
    2.8 s long movie.
tmin : float
    First time point to include (default: all data).
tmax : float
    Last time point to include (default: all data).
framerate : float
    Framerate of the movie (frames per second, default 24).
interpolation : str | None
    Interpolation method (scipy.interpolate.interp1d parameter). Must be one
    of 'linear', 'nearest', 'zero', 'slinear', 'quadratic', or 'cubic'. If
    None, it uses the current brain.interpolation, which defaults to
    'nearest'. Defaults to None.
codec : str | None
    The codec to use.
bitrate : float | None
    The bitrate to use.
callback : callable | None
    A function to call on each iteration. Useful for status message updates.
    It will be passed keyword arguments frame and n_frames.
time_viewer : bool
    If True, include time viewer traces. Only used if time_viewer=True and
    separate_canvas=False.
**kwargs : dict
    Specify additional options for imageio.
Generate a screenshot of current view.
array
    Image pixel values.
Examples using screenshot:
Make figures more publication ready
Set the number of smoothing steps.
n : int
    Number of smoothing steps.
Set the time playback speed.
speed : float
    The speed of the playback.
Set the time to display (in seconds).
time : float
    The time to show, in seconds.
Set the interpolation mode.
interpolation : str | None
    Interpolation method (scipy.interpolate.interp1d parameter). Must be one
    of 'linear', 'nearest', 'zero', 'slinear', 'quadratic', or 'cubic'.
Configure the time viewer parameters.
Notes
The keyboard shortcuts are the following:
- '?': Display help window
- 'i': Toggle interface
- 's': Apply auto-scaling
- 'r': Restore original clim
- 'c': Clear all traces
- 'n': Shift the time forward by the playback speed
- 'b': Shift the time backward by the playback speed
- 'Space': Start/Pause playback
- 'Up': Decrease camera elevation angle
- 'Down': Increase camera elevation angle
- 'Left': Decrease camera azimuth angle
- 'Right': Increase camera azimuth angle
Orient camera to display view.
view : str | None
    The name of the view to show (e.g. "lateral"). Other arguments take
    precedence and modify the camera starting from the view. See
    Brain.show_view for valid string shortcut options.
roll : float | None
    The roll of the camera rendering the view in degrees.
distance : float | None
    The distance from the camera rendering the view to the focal point in
    plot units (either m or mm).
row : int | None
    The row to set. Default all rows.
col : int | None
    The column to set. Default all columns.
hemi : str | None
    Which hemi to use for view lookup (when in "both" mode).
align : bool
    If True, consider view arguments relative to canonical MRI directions
    (closest to MNI for the subject) rather than native MRI space. This helps
    when MRIs are not in standard orientation (e.g., have large rotations).
azimuth : float
    The azimuthal angle of the camera rendering the view in degrees.
elevation : float
    The zenith angle of the camera rendering the view in degrees.
focalpoint : tuple, shape (3,) | None
    The focal point of the camera rendering the view: (x, y, z) in plot
    units (either m or mm).
Notes
The built-in string views are the following perspectives, based on the RAS
convention. If not otherwise noted, the view will have the top of the brain
(superior, +Z) in 3D space shown upward in the 2D perspective:
- 'lateral': From the left or right side such that the lateral (outside)
  surface of the given hemisphere is visible.
- 'medial': From the left or right side such that the medial (inside)
  surface of the given hemisphere is visible (at least when in split or
  single-hemi mode).
- 'rostral': From the front.
- 'caudal': From the rear.
- 'dorsal': From above, with the front of the brain pointing up.
- 'ventral': From below, with the front of the brain pointing up.
- 'frontal': From the front and slightly lateral, with the brain slightly
  tilted forward (yielding a view from slightly above).
- 'parietal': From the rear and slightly lateral, with the brain slightly
  tilted backward (yielding a view from slightly above).
- 'axial': From above with the brain pointing up (same as 'dorsal').
- 'sagittal': From the right side.
- 'coronal': From the rear.
Three-letter abbreviations (e.g., 'lat') of all of the above are also
supported.
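The abbreviation rule can be illustrated with a tiny lookup helper (an illustrative sketch, not MNE's implementation):

```python
VIEWS = ("lateral", "medial", "rostral", "caudal", "dorsal", "ventral",
         "frontal", "parietal", "axial", "sagittal", "coronal")

def expand_view(name):
    """Expand a short view name like 'lat' to its full form."""
    matches = [v for v in VIEWS if v.startswith(name)]
    if len(matches) != 1:
        raise ValueError(f"unknown or ambiguous view: {name!r}")
    return matches[0]
```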
Examples using show_view:
Importing data from fNIRS devices
Preprocessing functional near-infrared spectroscopy (fNIRS) data
How MNE uses FreeSurfer’s outputs
Visualize source time courses (stcs)
Repeated measures ANOVA on source data with spatio-temporal clustering
Locating intracranial electrode contacts
Generate a functional label from source estimates
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
The interpolation mode.
Examples using mne.viz.Brain:
Importing data from fNIRS devices
Working with CTF data: the Brainstorm auditory dataset
Preprocessing functional near-infrared spectroscopy (fNIRS) data
How MNE uses FreeSurfer’s outputs
The SourceEstimate data structure
Source localization with MNE, dSPM, sLORETA, and eLORETA
The role of dipole orientations in distributed source localization
Computing various MNE solutions
Source reconstruction using an LCMV beamformer
Visualize source time courses (stcs)
EEG source localization given electrode locations on an MRI
Permutation t-test on source data with spatio-temporal clustering
2 samples permutation test on source data with spatio-temporal clustering
Repeated measures ANOVA on source data with spatio-temporal clustering
Locating intracranial electrode contacts
Corrupt known signal with point spread
Simulate raw data using subject anatomy
Make figures more publication ready
Compute Power Spectral Density of inverse solution from single epochs
Compute source power spectral density (PSD) of VectorView and OPM data
Display sensitivity maps for EEG and MEG sensors
Compute source power using DICS beamformer
Compute evoked ERS source power using DICS, LCMV beamformer, and dSPM
Generate a functional label from source estimates
Compute MNE inverse solution on evoked data with a mixed source space
Compute source power estimate by projecting the covariance with MNE
Visualize source leakage among labels using a circular graph
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Compute cross-talk functions for LCMV beamformers
Compute spatial resolution metrics in source space
Compute spatial resolution metrics to compare MEG with EEG+MEG
Compute MxNE with time-frequency sparse prior
Plotting the full vector-valued MNE solution
Optically pumped magnetometer (OPM) data