data : array of shape (n_dipoles, n_times) | tuple, shape (2,)
The data in source space. When it is a single array, the
left hemisphere is stored in data[:len(vertices[0])] and the right
hemisphere is stored in data[-len(vertices[1]):].
When data is a tuple, it contains two arrays:
“kernel” shape (n_vertices, n_sensors) and
“sens_data” shape (n_sensors, n_times).
In this case, the source space data corresponds to
np.dot(kernel, sens_data).
Vertex numbers corresponding to the data. The first element of the list
contains vertices of left hemisphere and the second element contains
vertices of right hemisphere.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The time interval to consider as “baseline” when applying baseline
correction. If None, do not apply baseline correction.
If a tuple (a, b), the interval is between a and b
(in seconds), including the endpoints.
If a is None, the beginning of the data is used; and if b
is None, it is set to the end of the interval.
If (None, None), the entire time interval is used.
Note
The baseline (a, b) includes both endpoints, i.e. all
time points t such that a <= t <= b.
Correction is applied to each source individually in the following
way:
1. Calculate the mean signal of the baseline period.
2. Subtract this mean from the entire source estimate data.
Note
Baseline correction is appropriate when signal and noise are
approximately additive, and the noise level can be estimated from
the baseline interval. This can be the case for non-normalized
source activities (e.g. signed and unsigned MNE), but it is not
the case for normalized estimates (e.g. signal-to-noise ratios,
dSPM, sLORETA).
Defaults to (None, 0), i.e., the beginning of the data until
time point zero.
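The two correction steps above can be sketched with plain NumPy on a hypothetical data array (rather than an actual SourceEstimate), assuming a (None, 0) baseline:

```python
import numpy as np

# Hypothetical example data: 4 sources, 100 time points from -0.2 s to 0.3 s,
# with a constant offset that baseline correction should remove.
rng = np.random.default_rng(0)
data = rng.standard_normal((4, 100)) + 5.0
times = np.linspace(-0.2, 0.3, 100)

# Baseline (a, b) = (None, 0): from the start of the data to time zero,
# including both endpoints, as described above.
a, b = times[0], 0.0
mask = (times >= a) & (times <= b)

# 1. Mean of the baseline period, per source.
baseline_mean = data[:, mask].mean(axis=1, keepdims=True)
# 2. Subtract it from the entire source estimate data.
corrected = data - baseline_mean
```

After correction, each source has zero mean over the baseline window while the post-baseline signal is shifted, not rescaled.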
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Return a source estimate object with data summarized over time bins.
Time bins of width seconds. This method is intended for
visualization only. No filter is applied to the data before binning,
making the method inappropriate as a tool for downsampling data.
Last possible time point contained in a bin (if the last bin would
be shorter than width it is dropped). The default is the last time
point of the stc.
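The binning behavior described above can be sketched with NumPy on a hypothetical array, averaging samples within consecutive windows of fixed width (names here are illustrative, not the actual implementation):

```python
import numpy as np

# Hypothetical data: 2 sources sampled at 100 Hz for 1 s.
data = np.arange(200, dtype=float).reshape(2, 100)
times = np.arange(100) / 100.0

width = 0.25  # bin width in seconds
edges = np.arange(times[0], times[-1] + width, width)

# Average the data within each bin; a final bin shorter than `width`
# would be dropped, mirroring the behavior described above.
binned = np.column_stack([
    data[:, (times >= lo) & (times < lo + width)].mean(axis=1)
    for lo in edges[:-1]
])
```

With 100 samples and 0.25 s bins this yields 4 averaged columns per source; note that no anti-alias filter is applied, which is why this is unsuitable for downsampling.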
This function computes the spatial center of mass on the surface
as well as the temporal center of mass as in [1].
Note
All activity must occur in a single hemisphere, otherwise
an error is raised. The “mass” of each point in space for
computing the spatial center of mass is computed by summing
across time, and vice-versa for each point in time in
computing the temporal center of mass. This is useful for
quantifying spatio-temporal cluster locations, especially
when combined with mne.vertex_to_mni().
Calculate the center of mass for the left (0) or right (1)
hemisphere. If None, one of the hemispheres must be all zeroes,
and the center of mass will be calculated for the other
hemisphere (useful for getting COM for clusters).
If True, the returned vertex will be one from stc. Otherwise, it could
be any vertex from surf. If an array of int, the returned vertex
will come from that array. If an instance of SourceSpaces (as of
0.13), the returned vertex will be from the given source space.
For the most accurate estimates, do not restrict vertices.
The surface to use for Euclidean distance center of mass
finding. The default here is “sphere”, which finds the center
of mass on the spherical surface to help avoid potential issues
with cortical folding.
Vertex of the spatial center of mass for the inferred hemisphere,
with each vertex weighted by the sum of the stc across time. For a
boolean stc, then, this would be weighted purely by the duration
each vertex was active.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
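The weighting described above (spatial "mass" from summing across time, temporal "mass" from summing across space) can be sketched with NumPy on hypothetical vertex coordinates and time courses:

```python
import numpy as np

# Hypothetical example: 5 vertices with 3D coordinates and their time courses.
rng = np.random.default_rng(1)
coords = rng.standard_normal((5, 3))          # vertex positions on a surface
data = np.abs(rng.standard_normal((5, 50)))   # non-negative activity
times = np.linspace(0.0, 0.5, 50)

# Spatial "mass" per vertex: the sum of its activity across time.
spatial_w = data.sum(axis=1)
com_xyz = (coords * spatial_w[:, None]).sum(axis=0) / spatial_w.sum()

# Temporal "mass" per time point: the sum of activity across vertices,
# giving a weighted-average latency.
temporal_w = data.sum(axis=0)
t_com = (times * temporal_w).sum() / temporal_w.sum()
```

The actual method additionally snaps the spatial result to a vertex on the chosen surface (by default "sphere") and works per hemisphere; this sketch only shows the weighting.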
where \(b_k\) is the signal on sensor \(k\) provided by the
forward model for a source with unit amplitude, \(a\) is the
source amplitude, \(N\) is the number of sensors, and
\(s_k^2\) is the noise variance on sensor \(k\).
If using a surface or mixed source space, this should be the
Label(s) for which to extract the time course.
If working with whole-brain volume source estimates, this must be one of:
a string path to a FreeSurfer atlas for the subject (e.g., their
‘aparc.a2009s+aseg.mgz’) to extract time courses for all volumes in the
atlas
a two-element list or tuple, the first element being a path to an atlas,
and the second being a list or dict of volume_labels to extract
(see mne.setup_volume_source_space() for details).
Changed in version 0.21.0: Support for volume source estimates.
False (default) will emit an error if there are labels that have no
vertices in the source estimate. True and 'ignore' will return
all-zero time courses for labels that do not have any vertices in the
source estimate; True will emit a warning, while 'ignore' will
just log a message.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
'max'
Maximum value across vertices at each time point within each label.
'mean'
Average across vertices at each time point within each label. Ignores
the orientation of sources for standard source estimates, which varies
across the cortical surface and can lead to signal cancellation.
Vector source estimates are always in XYZ / RAS orientation, and are thus
already geometrically aligned.
'mean_flip'
Finds the dominant direction of source space normal vector orientations
within each label, applies a sign-flip to time series at vertices whose
orientation is more than 180° different from the dominant direction, and
then averages across vertices at each time point within each label.
'pca_flip'
Applies singular value decomposition to the time courses within each label,
and uses the first right-singular vector as the representative label time
course. This signal is scaled so that its power matches the average
(per-vertex) power within the label, and sign-flipped by multiplying by
np.sign(u@flip), where u is the first left-singular vector and
flip is the same sign-flip vector used when mode='mean_flip'. This
sign-flip ensures that extracting time courses from the same label in
similar STCs does not result in 180° direction/phase changes.
'auto' (default)
Uses 'mean_flip' when a standard source estimate is applied, and
'mean' when a vector source estimate is supplied.
None
No aggregation is performed, and an array of shape (n_vertices, n_times) is
returned.
New in v0.21: Support for 'auto', vector, and volume source estimates.
The only modes that work for vector and volume source estimates are 'mean',
'max', and 'auto'.
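The 'pca_flip' aggregation described above can be sketched with NumPy; here the flip vector is a hypothetical stand-in for the orientation-derived sign flips, and the scaling follows the description (power matched to the average per-vertex power):

```python
import numpy as np

# Hypothetical label data: 10 vertices x 200 time points, plus a sign-flip
# vector standing in for the dominant-orientation flips (random +/-1 here).
rng = np.random.default_rng(2)
data = rng.standard_normal((10, 200))
flip = rng.choice([-1.0, 1.0], size=10)

# SVD of the time courses within the label.
u, s, vt = np.linalg.svd(data, full_matrices=False)
tc = vt[0]                                   # first right-singular vector

# Scale so its power matches the average per-vertex power in the label.
scale = np.linalg.norm(s) / np.sqrt(len(data))
# Sign-flip for consistency across similar STCs, as described above.
sign = np.sign(u[:, 0] @ flip)
label_tc = sign * scale * tc
```

Because the singular values carry the total power of the label data, the scaled time course has power equal to the per-vertex average.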
The maximum point in time to be considered for peak getting.
mode : {'pos', 'neg', 'abs'}
How to deal with the sign of the data. If 'pos', only positive
values will be considered. If 'neg', only negative values will
be considered. If 'abs', absolute values will be considered.
Defaults to 'abs'.
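The three sign-handling modes can be sketched with NumPy; the helper name is hypothetical, not the actual API:

```python
import numpy as np

# Hypothetical data: 3 sources x 6 time points.
data = np.array([[ 0.1, -0.9,  0.2,  0.0,  0.3, -0.1],
                 [ 0.4,  0.2, -0.3,  0.8, -0.2,  0.1],
                 [-0.5,  0.1,  0.6, -0.7,  0.2,  0.0]])

def peak(data, mode="abs"):
    """Return (vertex_index, time_index) of the peak, per the modes above."""
    if mode == "pos":
        z = np.where(data > 0, data, -np.inf)   # only positive values
    elif mode == "neg":
        z = np.where(data < 0, -data, -np.inf)  # most negative value wins
    else:  # 'abs'
        z = np.abs(data)
    return np.unravel_index(np.argmax(z), z.shape)
```

For this array, 'abs' and 'neg' both find the -0.9 sample, while 'pos' finds the 0.8 sample.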
Hemisphere id (i.e., 'lh', 'rh', 'both', or 'split'). In
the case of 'both', both hemispheres are shown in the same window.
In the case of 'split' hemispheres are displayed side-by-side
in different viewing panes.
Name of colormap to use or a custom look up table. If array, it must
be an (n x 3) or (n x 4) array with RGB or RGBA values between
0 and 255.
The default ('auto') uses 'hot' for one-sided data and
'mne' for two-sided data.
Format of the time label (a format string, a function that maps
floating point time values to strings, or None for no label). The
default is 'auto', which will use time=%0.2f ms if there
is more than one time point.
If True: use a linear transparency between fmin and fmid
and make values below fmin fully transparent (symmetrically for
divergent colormaps). None will choose automatically based on colormap
type.
If None, a new figure will be created. If multiple views or a
split view is requested, this must be a list of the appropriate
length. If int is provided it will be used to identify the PyVista
figure by its id or create a new figure with the given id. If an
instance of matplotlib figure, mpl backend is used for plotting.
View to use. Using multiple views (list) is not supported for mpl
backend. See Brain.show_view for
valid string options.
When plotting a standard SourceEstimate (not volume, mixed, or vector)
and using the PyVista backend, views='flat' is also supported to
plot cortex as a flatmap.
Using multiple views (list) is not supported by the matplotlib backend.
Colorbar properties specification. If ‘auto’, set clim automatically
based on data percentiles. If dict, should contain:
kind : 'value' | 'percent'
Flag to specify type of limits.
lims : list | np.ndarray | tuple of float, 3 elements
Lower, middle, and upper bounds for colormap.
pos_lims : list | np.ndarray | tuple of float, 3 elements
Lower, middle, and upper bound for colormap. Positive values
will be mirrored directly across zero during colormap
construction to obtain negative control points.
Note
Only one of lims or pos_lims should be provided.
Only sequential colormaps should be used with lims, and
only divergent colormaps should be used with pos_lims.
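As a hypothetical illustration of the two mutually exclusive forms of the clim dict described above (values in data units are made up):

```python
# Sequential case: explicit lower/middle/upper control points in data units,
# for use with a sequential colormap via the lims key.
clim_seq = dict(kind="value", lims=[3.0, 6.0, 9.0])

# Divergent case: positive control points only; they are mirrored across
# zero during colormap construction, via the pos_lims key.
clim_div = dict(kind="value", pos_lims=[3.0, 6.0, 9.0])
```

Only one of lims or pos_lims may appear in a given dict, matching the note above.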
Specifies how binarized curvature values are rendered.
Either the name of a preset Brain cortex colorscheme (one of
'classic', 'bone', 'low_contrast', or 'high_contrast'),
or the name of a colormap, or a tuple with values
(colormap, min, max, reverse) to fully specify the curvature
colors. Has no effect with the matplotlib backend.
The size of the window, in pixels. Can be one number to specify
a square window, or the (width, height) of a rectangular window.
Has no effect with mpl backend.
Only affects the matplotlib backend.
The spacing to use for the source space. Can be 'ico#' for a
recursively subdivided icosahedron, 'oct#' for a recursively
subdivided octahedron, or 'all' for all points. In general, you can
speed up the plotting by selecting a sparser source space.
Defaults to ‘oct6’.
If True, enable interactive picking of a point on the surface of the
brain and plot its time course.
This feature is only available with the PyVista 3d backend, and requires
time_viewer=True. Defaults to ‘auto’, which will use True if and
only if time_viewer=True, the backend is PyVista, and there is more
than one time point. If float (between zero and one), it specifies what
proportion of the total window should be devoted to traces (True is
equivalent to 0.25, i.e., it will occupy the bottom 1/4 of the figure).
Options for volumetric source estimate plotting, with key/value pairs:
'resolution' : float | None
Resolution (in mm) of volume rendering. Smaller (e.g., 1.) looks
better at the cost of speed. None (default) uses the volume source
space resolution, which is often something like 7 or 5 mm,
without resampling.
'blending' : str
Can be 'mip' (default) for maximum intensity projection or
'composite' for composite blending using alpha values.
'alpha' : float | None
Alpha for the volumetric rendering. Defaults are 0.4 for vector source
estimates and 1.0 for scalar source estimates.
'surface_alpha' : float | None
Alpha for the surface enclosing the volume(s). None (default) will use
half the volume alpha. Set to zero to avoid plotting the surface.
'silhouette_alpha' : float | None
Alpha for a silhouette along the outside of the volume. None (default)
will use 0.25 * surface_alpha.
'silhouette_linewidth' : float
The line width to use for the silhouette. Default is 2.
A float input (default 1.) or None will be used for the 'resolution'
entry.
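A hypothetical dict combining the entries described above (the values are illustrative, not recommendations):

```python
# Volumetric rendering options: 1 mm resolution, maximum intensity
# projection, and defaults derived from the volume alpha.
volume_options = dict(
    resolution=1.0,          # mm; None keeps the source-space resolution
    blending="mip",          # or "composite" for alpha blending
    alpha=0.4,               # volumetric rendering alpha
    surface_alpha=None,      # None -> half the volume alpha
    silhouette_alpha=None,   # None -> 0.25 * surface_alpha
    silhouette_linewidth=2.0,
)
```

Passing a bare float (or None) instead of a dict sets only the 'resolution' entry, as noted above.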
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Flatmaps are available by default for fsaverage but not for other
subjects reconstructed by FreeSurfer. We recommend using
mne.compute_source_morph() to morph source estimates to fsaverage
for flatmap plotting. If you want to construct your own flatmap for a given
subject, these links might help:
The number of jobs to run in parallel. If -1, it is set
to the number of CPU cores. Requires the joblib package.
None (default) is a marker for ‘unset’ that will be interpreted
as n_jobs=1 (sequential execution) unless the call is performed under
a joblib.parallel_config context manager that sets another
value for n_jobs.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The stem of the file name. The file names used for surface source
spaces are obtained by adding "-lh.stc" and "-rh.stc" (or
"-lh.w" and "-rh.w") to the stem provided, for the left and
the right hemisphere, respectively.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Export data in tabular structure as a pandas DataFrame.
Vertices are converted to columns in the DataFrame. By default,
an additional column “time” is added, unless index='time'
(in which case time values form the DataFrame’s index).
Kind of index to use for the DataFrame. If None, a sequential
integer index (pandas.RangeIndex) will be used. If 'time', a
pandas.Index or pandas.TimedeltaIndex will be used
(depending on the value of time_format).
Defaults to None.
Scaling factor applied to the channels picked. If None, defaults to
dict(eeg=1e6, mag=1e15, grad=1e13), i.e., converts EEG to µV,
magnetometers to fT, and gradiometers to fT/cm.
If True, the DataFrame is returned in long format where each row is one
observation of the signal at a unique combination of time point and vertex.
Defaults to False.
Desired time format. If None, no conversion is applied, and time values
remain as float values in seconds. If 'ms', time values will be rounded
to the nearest millisecond and converted to integers. If 'timedelta',
time values will be converted to pandas.Timedelta values.
Default is None.
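The wide-format layout described above (vertices as columns, plus a "time" column unless index='time') can be sketched with pandas on hypothetical data; this mimics the output shape, it is not the actual implementation:

```python
import numpy as np
import pandas as pd

# Hypothetical source data: 3 vertices x 4 time points.
data = np.arange(12.0).reshape(3, 4)
vertices = [101, 205, 309]            # vertex numbers become column names
times = np.array([0.0, 0.1, 0.2, 0.3])

# Default layout: one row per time point, one column per vertex,
# with an additional "time" column.
df = pd.DataFrame(data.T, columns=vertices)
df.insert(0, "time", times)

# index='time' instead moves the time values into the DataFrame's index.
df_indexed = df.set_index("time")
```

With time_format='ms' the time values would additionally be rounded to integer milliseconds before being placed in the column or index.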
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The transform to be applied, including parameters (see, e.g.,
functools.partial()). The first parameter of the function is
the input data. The first two dimensions of the transformed data
should be (i) vertices and (ii) time. See Notes for details.
The transformed stc or, in the case of transforms which yield
N-dimensional output (where N > 2), a list of stcs. For a list,
copy must be True.
Notes
Transforms which yield 3D
output (e.g. time-frequency transforms) are valid, so long as the
first two dimensions are vertices and time. In this case, the
copy parameter must be True and a list of
SourceEstimates, rather than a single instance of SourceEstimate,
will be returned, one for each index of the 3rd dimension of the
transformed data. In the case of transforms yielding 2D output
(e.g. filtering), the user has the option of modifying the input
inplace (copy = False) or returning a new instance of
SourceEstimate (copy = True) with the transformed data.
Applying transforms can be significantly faster if the
SourceEstimate object was created using a (kernel, sens_data) tuple
for the data parameter, as the transform is applied in sensor space.
Inverse methods, e.g., apply_inverse_epochs or apply_lcmv_epochs,
do this automatically (if possible).
The transform to be applied, including parameters (see, e.g.,
functools.partial()). The first parameter of the function is
the input data. The first return value is the transformed data,
remaining outputs are ignored. The first dimension of the
transformed data has to be the same as the first dimension of the
input data.
Applying transforms can be significantly faster if the
SourceEstimate object was created using a (kernel, sens_data) tuple
for the data parameter, as the transform is applied in sensor space.
Inverse methods, e.g., apply_inverse_epochs or apply_lcmv_epochs,
do this automatically (if possible).
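The speed-up noted above relies on linear transforms over time commuting with the kernel projection: applying the transform to the small sensor-space array and projecting afterwards gives the same result as transforming the full source-space data. A NumPy sketch with hypothetical shapes:

```python
import numpy as np

# Hypothetical shapes: 1000 sources, 60 sensors, 500 time points.
rng = np.random.default_rng(3)
kernel = rng.standard_normal((1000, 60))
sens_data = rng.standard_normal((60, 500))

def transform(x):
    """A linear transform over time (here a two-point moving average)."""
    return 0.5 * (x[..., :-1] + x[..., 1:])

# Fast path: transform 60 sensor channels, then project to 1000 sources.
fast = kernel @ transform(sens_data)
# Slow path: project first, then transform 1000 source time courses.
slow = transform(kernel @ sens_data)
```

Both paths produce the same array (up to floating-point tolerance), but the fast path applies the transform to far fewer time series.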