Class for visualizing a brain.
Warning
The API for this class is not currently complete. We suggest using
mne.viz.plot_source_estimates() with the PyVista backend
enabled to obtain a Brain instance.
str : Subject name in FreeSurfer subjects dir.
str : Hemisphere id (i.e. 'lh', 'rh', 'both', or 'split'). In the case of 'both', both hemispheres are shown in the same window. In the case of 'split', the hemispheres are displayed side-by-side in different viewing panes.
str : FreeSurfer surface mesh name (e.g. 'white', 'inflated').
str : Title for the window.
str, list, dict : Specifies how the cortical surface is rendered. Options:
- 'classic' (default), 'high_contrast', 'low_contrast', or 'bone'.
- A single color, e.g. 'red' or (0.1, 0.4, 1.).
- A list of two colors for gyral (first) and sulcal (second) regions, e.g. ['red', 'blue'] or [(1, 0, 0), (0, 0, 1)].
- A dict with keys 'vmin', 'vmax', 'colormap', with values used to render the binarized curvature (where 0 is gyral, 1 is sulcal).
Changed in version 0.24: Add support for non-string arguments.
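The binarized-curvature case in the dict option can be sketched as follows. This is an illustrative helper, not MNE's internal code, and it assumes the FreeSurfer convention that positive curvature values mark sulcal vertices:

```python
import numpy as np

def binarize_curvature(curv):
    """Binarize FreeSurfer curvature: 0.0 for gyral, 1.0 for sulcal vertices."""
    # Assumption: positive curvature marks sulci (FreeSurfer convention).
    return (np.asarray(curv) > 0).astype(float)
```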
float in [0, 1] : Alpha level to control opacity of the cortical surface.
int | array-like, shape (2,) : The size of the window, in pixels. Can be a single number to specify a square window, or a length-2 sequence to specify (width, height).
tuple(int, int, int) : The color definition of the background: (red, green, blue).
Color of the foreground (will be used for colorbars and text). None (default) will use black or white depending on the value of background.
list of Figure | None : If None (default), a new window will be created with the appropriate views.
str | None : If not None, this directory will be used as the subjects directory instead of the value set using the SUBJECTS_DIR environment variable.
str | list : View to use. Using multiple views (list) is not supported for the mpl backend. See Brain.show_view for valid string options.
str | bool : If True, shifts the right- or left-most x coordinate of the left and right surfaces, respectively, to be at zero. This is useful for viewing inflated surfaces, where the hemispheres typically overlap. Can be 'auto' (default), which uses True with inflated surfaces and False otherwise. Only used when hemi='both'.
Changed in version 0.23: Default changed to “auto”.
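The effect of offset=True can be illustrated with a small sketch (a hypothetical helper, not MNE's implementation):

```python
import numpy as np

def apply_offset(lh_xyz, rh_xyz):
    """Sketch of offset=True: shift each hemisphere so it ends at x=0.

    lh_xyz and rh_xyz are (n_vertices, 3) arrays of surface coordinates.
    """
    lh = lh_xyz.copy()
    rh = rh_xyz.copy()
    lh[:, 0] -= lh[:, 0].max()  # right-most x of the left hemi -> 0
    rh[:, 0] -= rh[:, 0].min()  # left-most x of the right hemi -> 0
    return lh, rh
```

After the shift the hemispheres touch at the midline instead of overlapping.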
If True, toolbars will be shown for each view.
If True, rendering will be done offscreen (not shown). Useful mostly for generating images or screenshots, but can be buggy. Use at your own risk.
str : Can be "trackball" (default) or "terrain", i.e. a turntable-style camera.
str : Can be 'm' or 'mm' (default).
str : Can be "vertical" (default) or "horizontal". When using "horizontal" mode, the PyVista backend must be used and hemi cannot be "split".
dict | bool : As a dict, it contains the color, linewidth, alpha (opacity) and decimate (level of decimation between 0 and 1, or None) of the brain's silhouette to display. If True, the default values are used; if False, no silhouette will be displayed. Defaults to False.
str | path-like : Can be "auto", "light", or "dark", or a path-like to a custom stylesheet. For dark mode and automatic dark-mode detection, qdarkstyle and darkdetect, respectively, are required. If None (default), the config option MNE_3D_OPTION_THEME will be used, defaulting to "auto" if it's not found.
Display the window as soon as it is ready. Defaults to True.
If True, start the Qt application event loop. Defaults to False.
Notes
This table shows the capabilities of each Brain backend ("✓" for full support, and "-" for partial support):

| 3D function | surfer.Brain | mne.viz.Brain |
|---|---|---|
| data | ✓ | ✓ |
| foci | ✓ | |
| labels | ✓ | ✓ |
| TimeViewer | ✓ | ✓ |
| view_layout | | ✓ |
| flatmaps | | ✓ |
| vertex picking | | ✓ |
| label picking | | ✓ |
Methods
- Add an annotation file.
- Display data from a numpy array on the surface or volume.
- Add a quiver to render positions of dipoles.
- Add spherical foci, possibly mapping to displayed surf.
- Add a quiver to render positions of dipoles (from a forward solution).
- Add a mesh to render the outer head surface.
- Add an ROI label to the image.
- Add mesh objects to represent sensor positions.
- Add a mesh to render the skull surface.
- Add a text to the visualization.
- Add labels to the rendering from an anatomical segmentation.
- Automatically detect fitting scaling parameters.
- Clear the picking glyphs.
- Close all figures and clean up data structures.
- Return the vertices of the picked points.
- Get the camera orientation for a given subplot display.
- Display the help window.
- Plot the vertex time course.
- Add the time line to the MPL widget.
- Remove all annotations from the image.
- Remove rendered data from the mesh.
- Remove dipole objects from the rendered scene.
- Remove forward sources from the rendered scene.
- Remove head objects from the rendered scene.
- Remove all the ROI labels from the image.
- Remove sensors from the rendered scene.
- Remove skull objects from the rendered scene.
- Remove text from the rendered scene.
- Remove the volume labels from the rendered scene.
- Reset view and time step.
- Reset the camera.
- Restore original scaling parameters.
- Save view from all panels to disk.
- Save a movie (for data with a time axis).
- Generate a screenshot of current view.
- Set the number of smoothing steps.
- Set the time playback speed.
- Set the time to display (in seconds).
- Set the interpolation mode.
- Set the time point shown (can be a float to interpolate).
- Configure the time viewer parameters.
- Display the window.
- Orient camera to display view.
- Toggle the interface.
- Toggle time playback.
- Update color map.
Add an annotation file.
str | tuple : Either path to annotation file or annotation name. Alternatively, the annotation can be specified as a (labels, ctab) tuple per hemisphere, i.e. annot=(labels, ctab) for a single hemisphere or annot=((lh_labels, lh_ctab), (rh_labels, rh_ctab)) for both hemispheres. labels and ctab should be arrays as returned by nibabel.freesurfer.io.read_annot().
bool | int : Show only label borders. If int, specify the number of steps (away from the true border) along the cortical mesh to include as part of the border definition.
float in [0, 1] : Alpha level to control opacity. Default is 1.
str | None : If None, it is assumed to belong to the hemisphere being shown. If two hemispheres are being shown, data must exist for both hemispheres.
If True (default), remove old annotations.
matplotlib-style color code : If used, show all annotations in the same (specified) color. Probably useful only when showing annotation borders.
Display data from a numpy array on the surface or volume.
This provides a similar interface to
surfer.Brain.add_overlay(), but it displays
it with a single colormap. It offers more flexibility over the
colormap, and provides a way to display four-dimensional data
(i.e., a timecourse) or five-dimensional data (i.e., a
vector-valued timecourse).
Note
fmin sets the low end of the colormap, and is separate
from thresh (this is a different convention from
surfer.Brain.add_overlay()).
numpy array, shape (n_vertices[, 3][, n_times]) : Data array. For the data to be understood as vector-valued (3 values per vertex corresponding to X/Y/Z surface RAS), the array must have all 3 dimensions. If vectors with no time dimension are desired, consider using a singleton (e.g., np.newaxis) to create a "time" dimension and pass time_label=None (vector values are not supported).
float : Minimum value in colormap (uses real fmin if None).
float : Intermediate value in colormap (fmid between fmin and fmax if None).
float : Maximum value in colormap (uses real max if None).
None or float : Not supported yet. If not None, values below thresh will not be visible.
float or None : If not None, center of a divergent colormap, changes the meaning of fmin, fmax and fmid.
bool | None : If True, use a linear transparency between fmin and fmid and make values below fmin fully transparent (symmetrically for divergent colormaps). None will choose automatically based on colormap type.
str, list of color, or array : Name of matplotlib colormap to use, a list of matplotlib colors, or a custom look-up table (an n x 4 array coded with RGBA values between 0 and 255). The default "auto" chooses a default divergent colormap if "center" is given (currently "icefire"), otherwise a default sequential colormap (currently "rocket").
float in [0, 1] : Alpha level to control opacity of the overlay.
numpy array : Vertices for which the data is defined (needed if len(data) < nvtx).
int or None : Number of smoothing steps (smoothing is used if len(data) < nvtx). The value 'nearest' can be used too. None (default) will use as many as necessary to fill the surface.
numpy array : Time points in the data array (if data is 2D or 3D).
str | callable | None : Format of the time label (a format string, a function that maps floating point time values to strings, or None for no label). The default is 'auto', which will use time=%0.2f ms if there is more than one time point.
Whether to add a colorbar to the figure. Can also be a tuple to give the (row, col) index of where to put the colorbar.
str | None : If None, it is assumed to belong to the hemisphere being shown. If two hemispheres are being shown, an error will be thrown.
Not supported yet. Remove surface added by previous "add_data" call. Useful for conserving memory when displaying different data in a loop.
int : Font size of the time label (default 14).
float | None : Time initially shown in the plot. None to use the first time sample (default).
float | None (default) : The scale factor to use when displaying glyphs for vector-valued data.
float | None : Alpha level to control opacity of the arrows. Only used for vector-valued data. If None (default), alpha is used.
dict : Original clim arguments.
SourceSpaces | None : The source space corresponding to the source estimate. Only necessary if the STC is a volume or mixed source estimate.
float | dict | None : Options for volumetric source estimate plotting, with key/value pairs:
- 'resolution' (float | None): Resolution (in mm) of volume rendering. Smaller (e.g., 1.) looks better at the cost of speed. None (default) uses the volume source space resolution, which is often something like 7 or 5 mm, without resampling.
- 'blending' (str): Can be "mip" (default) for maximum intensity projection or "composite" for composite blending using alpha values.
- 'alpha' (float | None): Alpha for the volumetric rendering. Defaults are 0.4 for vector source estimates and 1.0 for scalar source estimates.
- 'surface_alpha' (float | None): Alpha for the surface enclosing the volume(s). None (default) will use half the volume alpha. Set to zero to avoid plotting the surface.
- 'silhouette_alpha' (float | None): Alpha for a silhouette along the outside of the volume. None (default) will use 0.25 * surface_alpha.
- 'silhouette_linewidth' (float): The line width to use for the silhouette. Default is 2.
A float input (default 1.) or None will be used for the 'resolution' entry.
dict | None : Options to pass to pyvista.Plotter.add_scalar_bar() (e.g., dict(title_font_size=10)).
str | int | None : Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.
Notes
If the data is defined for a subset of vertices (specified by the "vertices" parameter), a smoothing method is used to interpolate the data onto the high-resolution surface. If the data is defined for a subsampled version of the surface, smoothing_steps can be set to None, in which case as many smoothing steps are applied as needed until the whole surface is filled with non-zeros.
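The fill-until-covered behavior of smoothing_steps=None can be sketched as repeated neighbor averaging over the mesh graph. This is an illustrative toy, not MNE's actual smoothing code, and it assumes a connected mesh:

```python
import numpy as np

def fill_by_smoothing(edges, values, defined):
    """Spread values from defined vertices to undefined neighbors by
    averaging, repeating until every vertex carries data.

    edges: list of (i, j) undirected mesh edges
    values: per-vertex data; defined: boolean mask of filled vertices
    """
    values = np.asarray(values, float).copy()
    defined = np.asarray(defined, bool).copy()
    while not defined.all():
        acc = np.zeros_like(values)
        cnt = np.zeros(len(values))
        for i, j in edges:
            if defined[i] and not defined[j]:
                acc[j] += values[i]
                cnt[j] += 1
            if defined[j] and not defined[i]:
                acc[i] += values[j]
                cnt[i] += 1
        grow = cnt > 0
        if not grow.any():  # disconnected component: stop instead of looping
            break
        values[grow] = acc[grow] / cnt[grow]
        defined |= grow
    return values
```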
Due to a VTK alpha rendering bug, vector_alpha is
clamped to be strictly < 1.
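The fmin/fmid/fmax convention can be pictured as a piecewise-linear mapping of data values onto the [0, 1] colormap range. The sketch below illustrates the idea only; it is not MNE's actual implementation:

```python
import numpy as np

def colormap_position(data, fmin, fmid, fmax):
    """Map data piecewise-linearly: fmin -> 0.0, fmid -> 0.5, fmax -> 1.0."""
    data = np.asarray(data, float)
    pos = np.where(
        data <= fmid,
        0.5 * (data - fmin) / (fmid - fmin),
        0.5 + 0.5 * (data - fmid) / (fmax - fmid),
    )
    # Values outside [fmin, fmax] saturate at the colormap ends.
    return np.clip(pos, 0.0, 1.0)
```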
Add a quiver to render positions of dipoles.
Dipole : Dipole object containing position, orientation and amplitude of one or more dipoles, or in the forward solution.
str | dict | instance of Transform : If str, the path to the head<->MRI transform *-trans.fif file produced during coregistration. Can also be 'fsaverage' to use the built-in fsaverage transformation.
list | matplotlib-style color | None : A single color or list of anything matplotlib accepts: string, RGB, hex, etc. Default red.
float in [0, 1] : Alpha level to control opacity. Default 1.
list | float | None : The size of the arrow representing the dipole in mne.viz.Brain units. Default 5mm.
Notes
New in version 1.0.
Add spherical foci, possibly mapping to displayed surf.
The foci spheres can be displayed at the coordinates given, or mapped through a surface geometry. In other words, coordinates from a volume-based analysis in MNI space can be displayed on an inflated average surface by finding the closest vertex on the white surface and mapping to that vertex on the inflated mesh.
ndarray, shape (n_coords, 3) : Coordinates in stereotaxic space (default) or array of vertex ids (with coord_as_verts=True).
Whether the coords parameter should be interpreted as vertex ids.
str | None : Surface to project the coordinates to, or None to use raw coords. When set to a surface, each focus is positioned at the closest vertex in the mesh.
float : Controls the size of the foci spheres (relative to 1cm).
A list of anything matplotlib accepts: string, RGB, hex, etc.
float in [0, 1] : Alpha level to control opacity. Default is 1.
str : Internal name to use.
str | None : If None, it is assumed to belong to the hemisphere being shown. If two hemispheres are being shown, an error will be thrown.
int : The resolution of the spheres.
Examples using add_foci:
Source localization with MNE, dSPM, sLORETA, and eLORETA
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Add a quiver to render positions of dipoles.
Forward : The forward solution. If present, the orientations of the dipoles present in the forward solution are displayed.
str | dict | instance of Transform : If str, the path to the head<->MRI transform *-trans.fif file produced during coregistration. Can also be 'fsaverage' to use the built-in fsaverage transformation.
float in [0, 1] : Alpha level to control opacity. Default 1.
None | float : The size of the arrow representing the dipoles in mne.viz.Brain units. Default 1.5mm.
Notes
New in version 1.0.
Add a mesh to render the outer head surface.
Notes
New in version 0.24.
Add an ROI label to the image.
str | instance of Label : Label filepath or name. Can also be an instance of an object with attributes "hemi", "vertices", "name", and optionally "color" and "values" (if scalar_thresh is not None).
matplotlib-style color | None : Anything matplotlib accepts: string, RGB, hex, etc. (default "crimson").
float in [0, 1] : Alpha level to control opacity.
None | float : Threshold the label ids using this value in the label file's scalar field (i.e. label only vertices with scalar >= thresh).
bool | int : Show only label borders. If int, specify the number of steps (away from the true border) along the cortical mesh to include as part of the border definition.
str | None : If None, it is assumed to belong to the hemisphere being shown.
None | str : If a label is specified as name, subdir can be used to indicate that the label file is in a sub-directory of the subject's label directory rather than in the label directory itself (e.g. for $SUBJECTS_DIR/$SUBJECT/label/aparc/lh.cuneus.label use brain.add_label('cuneus', subdir='aparc')).
If True, reset the camera view after adding the label. Defaults to True.
Notes
To remove previously added labels, run Brain.remove_labels().
Examples using add_label:
Compute Power Spectral Density of inverse solution from single epochs
Add mesh objects to represent sensor positions.
mne.Info : The mne.Info object with information about the sensors and methods of measurement.
str | dict | instance of Transform : If str, the path to the head<->MRI transform *-trans.fif file produced during coregistration. Can also be 'fsaverage' to use the built-in fsaverage transformation.
str | list | bool | None : Can be "helmet", "sensors" or "ref" to show the MEG helmet, sensors or reference sensors respectively, or a combination like ('helmet', 'sensors') (same as None, default). True translates to ('helmet', 'sensors', 'ref').
str | list : String options are:
- 'original' (default; equivalent to True): shows EEG sensors using their digitized locations (after transformation to the chosen coord_frame).
- 'projected': the EEG locations projected onto the scalp, as is done in forward modeling.
Can also be a list of these options, or an empty list ([], equivalent of False).
str | list | bool | None : Can be "channels", "pairs", "detectors", and/or "sources" to show the fNIRS channel locations, optode locations, or line between source-detector pairs, or a combination like ('pairs', 'channels'). True translates to ('pairs',).
If True (default), show ECoG sensors.
If True (default), show sEEG electrodes.
If True (default), show DBS (deep brain stimulation) electrodes.
str | int | None : Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.
Notes
New in version 0.24.
Examples using add_sensors:
Preprocessing functional near-infrared spectroscopy (fNIRS) data
Add a mesh to render the skull surface.
Notes
New in version 0.24.
Add a text to the visualization.
float : X coordinate.
float : Y coordinate.
str : Text to add.
str : Name of the text (text label can be updated using update_text()).
tuple : Color of the text. Default is the foreground color set during initialization (default is black or white depending on the background color).
float : Opacity of the text (default 1.0).
int | None : Row index of which brain to use. Default is the top row.
int | None : Column index of which brain to use. Default is the left-most column.
float | None : The font size to use.
str | None : The text justification.
Examples using add_text:
Source localization with MNE, dSPM, sLORETA, and eLORETA
Visualize source leakage among labels using a circular graph
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Compute spatial resolution metrics in source space
Compute spatial resolution metrics to compare MEG with EEG+MEG
Add labels to the rendering from an anatomical segmentation.
str : The anatomical segmentation file. Default aparc+aseg. This may be any anatomical segmentation file in the mri subdirectory of the FreeSurfer subject directory.
list : Labeled regions of interest to plot. See mne.get_montage_volume_labels() for one way to determine regions of interest. Regions can also be chosen from the FreeSurfer LUT.
list | matplotlib-style color | None : A list of anything matplotlib accepts: string, RGB, hex, etc. (default FreeSurfer LUT colors).
float in [0, 1] : Alpha level to control opacity.
float in [0, 1) : The smoothing factor to be applied. Default 0 is no smoothing.
int | None : The size of holes to remove in the mesh in voxels. Default is None, no holes are removed. Warning: this dilates the boundaries of the surface by fill_hole_size voxels, so use the minimal size.
None | dict : Add a legend displaying the names of the labels. Default (None) is True if the number of labels is 10 or fewer. Can also be a dict of kwargs to pass to pyvista.Plotter.add_legend().
Notes
New in version 0.24.
Data used by time viewer and color bar widgets.
Get the camera orientation for a given subplot display.
float | None : The roll of the camera rendering the view in degrees.
float | None : The distance from the camera rendering the view to the focal point in plot units (either m or mm).
float : The azimuthal angle of the camera rendering the view in degrees.
float : The zenith angle of the camera rendering the view in degrees.
tuple, shape (3,) | None : The focal point of the camera rendering the view: (x, y, z) in plot units (either m or mm).
The interaction style.
Add the time line to the MPL widget.
Force an update of the plot. Defaults to True.
Save view from all panels to disk.
Examples using save_image:
Repeated measures ANOVA on source data with spatio-temporal clustering
Save a movie (for data with a time axis).
The movie is created through the imageio module. The format is
determined by the extension, and additional options can be specified
through keyword arguments that depend on the format, see
imageio’s format page.
Warning
This method assumes that time is specified in seconds when adding data. If time is specified in milliseconds this will result in movies 1000 times longer than expected.
str : Path at which to save the movie. The extension determines the format (e.g., '*.mov', '*.gif', …; see the imageio documentation for available formats).
float : Factor by which to stretch time (default 4). For example, an epoch from -100 to 600 ms lasts 700 ms. With time_dilation=4 this would result in a 2.8 s long movie.
float : First time point to include (default: all data).
float : Last time point to include (default: all data).
float : Framerate of the movie (frames per second, default 24).
str | None : Interpolation method (scipy.interpolate.interp1d parameter). Must be one of 'linear', 'nearest', 'zero', 'slinear', 'quadratic', or 'cubic'. If None, it uses the current brain.interpolation, which defaults to 'nearest'. Defaults to None.
str | None : The codec to use.
float | None : The bitrate to use.
callable | None : A function to call on each iteration. Useful for status message updates. It will be passed keyword arguments frame and n_frames.
If True, include time viewer traces. Only used if time_viewer=True and separate_canvas=False.
dict : Specify additional options for imageio.
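The interplay of time_dilation and framerate reduces to simple arithmetic, sketched here with a hypothetical helper (not part of the MNE API):

```python
def movie_length(tmin, tmax, time_dilation=4.0, framerate=24):
    """Return (movie duration in seconds, number of frames) for a time span."""
    duration = (tmax - tmin) * time_dilation
    return duration, round(duration * framerate)
```

For the example above, an epoch from -0.1 to 0.6 s with time_dilation=4 yields a 2.8 s movie.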
Generate a screenshot of current view.
array : Image pixel values.
Set the number of smoothing steps.
int : Number of smoothing steps.
Set the time playback speed.
float : The speed of the playback.
Set the time to display (in seconds).
float : The time to show, in seconds.
Set the interpolation mode.
str | None : Interpolation method (scipy.interpolate.interp1d parameter). Must be one of 'linear', 'nearest', 'zero', 'slinear', 'quadratic', or 'cubic'.
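The difference between the interpolation modes can be sketched for the two simplest cases; MNE delegates the full set of modes to scipy.interpolate.interp1d, so this helper is illustrative only:

```python
import numpy as np

def sample_at_time(times, data, t, kind="nearest"):
    """Sample a (n_vertices, n_times) array at an arbitrary time t.

    Only 'nearest' and 'linear' are sketched here.
    """
    times = np.asarray(times, float)
    if kind == "nearest":
        return data[:, int(np.argmin(np.abs(times - t)))]
    # 'linear': blend the two neighboring time points
    i = int(np.clip(np.searchsorted(times, t), 1, len(times) - 1))
    w = (t - times[i - 1]) / (times[i] - times[i - 1])
    return (1.0 - w) * data[:, i - 1] + w * data[:, i]
```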
Configure the time viewer parameters.
Notes
The keyboard shortcuts are the following:
- '?': Display help window
- 'i': Toggle interface
- 's': Apply auto-scaling
- 'r': Restore original clim
- 'c': Clear all traces
- 'n': Shift the time forward by the playback speed
- 'b': Shift the time backward by the playback speed
- 'Space': Start/Pause playback
- 'Up': Decrease camera elevation angle
- 'Down': Increase camera elevation angle
- 'Left': Decrease camera azimuth angle
- 'Right': Increase camera azimuth angle
Orient camera to display view.
str | None : The name of the view to show (e.g. "lateral"). Other arguments take precedence and modify the camera starting from the view. See Brain.show_view for valid string shortcut options.
float | None : The roll of the camera rendering the view in degrees.
float | None : The distance from the camera rendering the view to the focal point in plot units (either m or mm).
int | None : The row to set. Default all rows.
int | None : The column to set. Default all columns.
str | None : Which hemi to use for view lookup (when in "both" mode).
If True, consider view arguments relative to canonical MRI directions (closest to MNI for the subject) rather than native MRI space. This helps when MRIs are not in standard orientation (e.g., have large rotations).
float : The azimuthal angle of the camera rendering the view in degrees.
float : The zenith angle of the camera rendering the view in degrees.
tuple, shape (3,) | None : The focal point of the camera rendering the view: (x, y, z) in plot units (either m or mm).
Notes
The builtin string views are the following perspectives, based on the RAS convention. If not otherwise noted, the view will have the top of the brain (superior, +Z) in 3D space shown upward in the 2D perspective:
- 'lateral': From the left or right side such that the lateral (outside) surface of the given hemisphere is visible.
- 'medial': From the left or right side such that the medial (inside) surface of the given hemisphere is visible (at least when in split or single-hemi mode).
- 'rostral': From the front.
- 'caudal': From the rear.
- 'dorsal': From above, with the front of the brain pointing up.
- 'ventral': From below, with the front of the brain pointing up.
- 'frontal': From the front and slightly lateral, with the brain slightly tilted forward (yielding a view from slightly above).
- 'parietal': From the rear and slightly lateral, with the brain slightly tilted backward (yielding a view from slightly above).
- 'axial': From above with the brain pointing up (same as 'dorsal').
- 'sagittal': From the right side.
- 'coronal': From the rear.
Three letter abbreviations (e.g., 'lat') of all of the above are
also supported.
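Resolving the three-letter abbreviations can be pictured as a unique-prefix lookup over the view names above (a hypothetical helper, not MNE's resolver):

```python
# The full set of built-in view names listed above.
VIEWS = ("lateral", "medial", "rostral", "caudal", "dorsal", "ventral",
         "frontal", "parietal", "axial", "sagittal", "coronal")

def expand_view(name):
    """Expand an abbreviation such as 'lat' to its full view name."""
    matches = [v for v in VIEWS if v.startswith(name)]
    if len(matches) != 1:
        raise ValueError(f"unknown or ambiguous view: {name!r}")
    return matches[0]
```

Three letters always suffice because no two view names share a three-letter prefix.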
Examples using show_view:
Preprocessing functional near-infrared spectroscopy (fNIRS) data
Repeated measures ANOVA on source data with spatio-temporal clustering
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
The interpolation mode.
Examples using mne.viz.Brain:
Working with CTF data: the Brainstorm auditory dataset
Preprocessing functional near-infrared spectroscopy (fNIRS) data
Source localization with MNE, dSPM, sLORETA, and eLORETA
The role of dipole orientations in distributed source localization
EEG source localization given electrode locations on an MRI
Permutation t-test on source data with spatio-temporal clustering
2 samples permutation test on source data with spatio-temporal clustering
Repeated measures ANOVA on source data with spatio-temporal clustering
Compute Power Spectral Density of inverse solution from single epochs
Compute source power spectral density (PSD) of VectorView and OPM data
Compute evoked ERS source power using DICS, LCMV beamformer, and dSPM
Compute MNE inverse solution on evoked data with a mixed source space
Compute source power estimate by projecting the covariance with MNE
Visualize source leakage among labels using a circular graph
Plot point-spread functions (PSFs) and cross-talk functions (CTFs)
Compute spatial resolution metrics in source space
Compute spatial resolution metrics to compare MEG with EEG+MEG