data : array of shape (n_dipoles, n_times) | tuple, shape (2,)
The data in source space. When it is a single array, the
left hemisphere is stored in data[:len(vertices[0])] and the right
hemisphere is stored in data[-len(vertices[1]):].
When data is a tuple, it contains two arrays:
"kernel" of shape (n_vertices, n_sensors) and
"sens_data" of shape (n_sensors, n_times).
In this case, the source space data corresponds to
np.dot(kernel, sens_data).
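The equivalence between the two storage forms can be sketched with plain NumPy (toy shapes, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_sensors, n_times = 5, 3, 4  # toy sizes
kernel = rng.standard_normal((n_vertices, n_sensors))
sens_data = rng.standard_normal((n_sensors, n_times))

# Expanding the tuple form gives the same source-space data
# as storing the full (n_vertices, n_times) array directly.
full_data = np.dot(kernel, sens_data)
```

Storing the tuple defers this multiplication, which saves memory when n_vertices is much larger than n_sensors.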
Vertex numbers corresponding to the data. The first element of the list
contains vertices of left hemisphere and the second element contains
vertices of right hemisphere.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The time interval to consider as "baseline" when applying baseline
correction. If None, do not apply baseline correction.
If a tuple (a, b), the interval is between a and b
(in seconds), including the endpoints.
If a is None, the beginning of the data is used; and if b
is None, it is set to the end of the data.
If (None, None), the entire time interval is used.
Note
The baseline (a, b) includes both endpoints, i.e. all timepoints t
such that a <= t <= b.
Correction is applied to each source individually in the following
way:
Calculate the mean signal of the baseline period.
Subtract this mean from the entire source estimate data.
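The two-step correction above can be sketched with NumPy on toy data (the baseline mask here mirrors the (None, 0) default; the real method operates on the source estimate in place):

```python
import numpy as np

times = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])    # seconds
data = np.vstack([times + 1.0, times - 2.0])      # (n_sources, n_times)

# baseline=(None, 0): from the start of the data up to and including t=0.
mask = times <= 0.0

# Step 1: mean of the baseline period, per source.
baseline_mean = data[:, mask].mean(axis=1, keepdims=True)
# Step 2: subtract that mean from the entire data.
corrected = data - baseline_mean
```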
Note
Baseline correction is appropriate when signal and noise are
approximately additive, and the noise level can be estimated from
the baseline interval. This can be the case for non-normalized
source activities (e.g. signed and unsigned MNE), but it is not
the case for normalized estimates (e.g. signal-to-noise ratios,
dSPM, sLORETA).
Defaults to (None, 0), i.e. beginning of the data until
time point zero.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The function fun is applied to the vertices defined in picks.
The source estimate object’s data is modified in-place. If the function returns a different
data type (e.g. numpy.complex128) it must be specified
using the dtype parameter, which causes the data type of all the data
to change (even if the function is only applied to vertices in
picks).
Note
If n_jobs > 1, more memory is required as
len(picks)*n_times additional time points need to
be temporarily stored in memory.
Note
If the data type changes (dtype!=None), more memory is
required since the original and the converted data needs
to be stored in memory.
A function to be applied to the channels. The first argument of
fun has to be a timeseries (numpy.ndarray). The function must
operate on an array of shape (n_times,) because it will be applied
vertex-wise. The function must return an ndarray shaped like its input.
Note
If channel_wise=True, one can optionally access the index and/or the
name of the currently processed channel within the applied function.
This can enable tailored computations for different channels.
To use this feature, add ch_idx and/or ch_name as
additional argument(s) to your function definition.
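The vertex-wise application described above can be sketched with NumPy (toy data; the fun shown here is a hypothetical example that simply demeans each picked time series):

```python
import numpy as np

data = np.array([[0.0, 1.0, 2.0],
                 [3.0, 4.0, 5.0]])  # (n_vertices, n_times)

def fun(timeseries):
    """Demean one vertex's time series (shape (n_times,))."""
    return timeseries - timeseries.mean()

# Vertex-wise application over the picked rows, modifying a copy in place.
picks = [0, 1]
out = data.copy()
for idx in picks:
    out[idx] = fun(out[idx])
```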
Channels to include. Slices and lists of integers will be interpreted
as channel indices. In lists, channel type strings (e.g., ['meg', 'eeg'])
will pick channels of those types, and channel name strings (e.g.,
['MEG0111', 'MEG2623']) will pick the given channels. Can also be the
string values 'all' to pick all channels, or 'data' to pick
data channels. None (default) will pick all channels. Note that
channels in info['bads'] will be included if their names or indices
are explicitly provided.
The number of jobs to run in parallel. If -1, it is set
to the number of CPU cores. Requires the joblib package.
None (default) is a marker for ‘unset’ that will be interpreted
as n_jobs=1 (sequential execution) unless the call is performed under
a joblib.parallel_config context manager that sets another
value for n_jobs. Ignored if vertice_wise=False as the workload
is split across vertices.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Channels to include. Slices and lists of integers will be interpreted
as channel indices. In lists, channel type strings (e.g., ['meg', 'eeg'])
will pick channels of those types, and channel name strings (e.g.,
['MEG0111', 'MEG2623']) will pick the given channels. Can also be the
string values 'all' to pick all channels, or 'data' to pick
data channels. None (default) will pick all data channels
(excluding reference MEG channels). Note that channels in info['bads']
will be included if their names or indices are explicitly provided.
The number of jobs to run in parallel. If -1, it is set
to the number of CPU cores. Requires the joblib package.
None (default) is a marker for ‘unset’ that will be interpreted
as n_jobs=1 (sequential execution) unless the call is performed under
a joblib.parallel_config context manager that sets another
value for n_jobs.
Points to use in the FFT for Hilbert transformation. The signal
will be padded with zeros before computing the Hilbert transform, then
cut back to the original length. If None, n == self.n_times. If 'auto',
the next-highest fast FFT length will be used.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
If envelope=False, the analytic signal for the channels/vertices defined in
picks is computed and the data of the Raw object is converted to
a complex representation (the analytic signal is complex valued).
If envelope=True, the absolute value of the analytic signal for the
channels/vertices defined in picks is computed, resulting in the envelope
signal.
If envelope=False, more memory is required since the original raw data
as well as the analytic signal must temporarily be stored in memory.
If n_jobs > 1, more memory is required as len(picks)*n_times
additional time points need to be temporarily stored in memory.
Also note that the n_fft parameter will allow you to pad the signal
with zeros before performing the Hilbert transform. This padding
is cut off, but it may result in a slightly different result
(particularly around the edges). Use at your own risk.
Analytic signal
The analytic signal x_a(t) of x(t) is:
x_a = F^{-1}(F(x) 2U) = x + iy
where F is the Fourier transform, U the unit step function,
and y the Hilbert transform of x. One usage of the analytic
signal is the computation of the envelope signal, which is given by
e(t) = abs(x_a(t)). Due to the linearity of the Hilbert transform and the
MNE inverse solution, the envelope in source space can be obtained
by computing the analytic signal in sensor space, applying the MNE
inverse, and computing the envelope in source space.
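The formula above can be evaluated directly with NumPy's FFT (toy signal; the DC and Nyquist bins are handled explicitly here, which the formula glosses over):

```python
import numpy as np

# A toy cosine with an integer number of cycles; its analytic signal
# should be exp(i*w*t), whose envelope is exactly 1.
t = np.arange(256) / 256.0
x = np.cos(2 * np.pi * 8 * t)

# x_a = F^{-1}(F(x) * 2U): zero the negative frequencies, double the
# positive ones, and keep DC (and Nyquist, for even n) unchanged.
n = len(x)
X = np.fft.fft(x)
U = np.zeros(n)
U[0] = 1.0            # DC
U[1:n // 2] = 2.0     # positive frequencies
U[n // 2] = 1.0       # Nyquist (even n)
x_a = np.fft.ifft(X * U)

envelope = np.abs(x_a)
```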
Return a source estimate object with data summarized over time bins.
Time bins of the given width, in seconds. This method is intended for
visualization only. No filter is applied to the data before binning,
making the method inappropriate as a tool for downsampling data.
Last possible time point contained in a bin (if the last bin would
be shorter than width it is dropped). The default is the last time
point of the stc.
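A NumPy sketch of the binning rule (toy data; as described above, an incomplete last bin is dropped):

```python
import numpy as np

sfreq = 100.0
times = np.arange(0, 1.0, 1 / sfreq)          # 100 time points
data = np.tile(times, (3, 1))                  # (n_sources, n_times)

width = 0.25                                   # bin width in seconds
tstart, tstop = times[0], times[-1]
n_bins = int((tstop - tstart) // width)        # short last bin is dropped

binned = np.empty((data.shape[0], n_bins))
for i in range(n_bins):
    lo = tstart + i * width
    hi = tstart + (i + 1) * width
    mask = (times >= lo) & (times < hi)
    binned[:, i] = data[:, mask].mean(axis=1)  # summarize each bin by its mean
```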
This function computes the spatial center of mass on the surface
as well as the temporal center of mass as in [1].
Note
All activity must occur in a single hemisphere, otherwise
an error is raised. The “mass” of each point in space for
computing the spatial center of mass is computed by summing
across time, and vice-versa for each point in time in
computing the temporal center of mass. This is useful for
quantifying spatio-temporal cluster locations, especially
when combined with mne.vertex_to_mni().
Calculate the center of mass for the left (0) or right (1)
hemisphere. If None, one of the hemispheres must be all zeroes,
and the center of mass will be calculated for the other
hemisphere (useful for getting COM for clusters).
If True, returned vertex will be one from stc. Otherwise, it could
be any vertex from surf. If an array of int, the returned vertex
will come from that array. If instance of SourceSpaces (as of
0.13), the returned vertex will be from the given source space.
For the most accurate estimates, do not restrict vertices.
The surface to use for Euclidean distance center of mass
finding. The default here is “sphere”, which finds the center
of mass on the spherical surface to help avoid potential issues
with cortical folding.
Vertex of the spatial center of mass for the inferred hemisphere,
with each vertex weighted by the sum of the stc across time. For a
boolean stc, then, this would be weighted purely by the duration
each vertex was active.
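A rough NumPy sketch of the two centers of mass (toy data; the real spatial computation uses surface coordinates and mne.vertex_to_mni(), not raw vertex numbers, so the spatial part below is only a crude stand-in):

```python
import numpy as np

# Toy single-hemisphere data: (n_vertices, n_times), all non-negative.
data = np.array([[0.0, 0.0, 1.0],
                 [0.0, 2.0, 1.0],
                 [0.0, 0.0, 0.0]])
vertices = np.array([10, 20, 30])
times = np.array([0.0, 0.1, 0.2])

# "Mass" of each vertex: sum across time; here we just take the
# heaviest vertex rather than a true surface-weighted average.
vert_mass = data.sum(axis=1)                   # per-vertex mass
com_vertex = vertices[np.argmax(vert_mass)]

# Temporal center of mass: times weighted by summed activity.
time_mass = data.sum(axis=0)                   # per-timepoint mass
t_com = (times * time_mass).sum() / time_mass.sum()
```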
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
where \(b_k\) is the signal on sensor \(k\) provided by the
forward model for a source with unit amplitude, \(a\) is the
source amplitude, \(N\) is the number of sensors, and
\(s_k^2\) is the noise variance on sensor \(k\).
If using a surface or mixed source space, this should be the
Label(s) for which to extract the time course.
If working with whole-brain volume source estimates, this must be one of:
a string path to a FreeSurfer atlas for the subject (e.g., their
'aparc.a2009s+aseg.mgz') to extract time courses for all volumes in the
atlas
a two-element list or tuple, the first element being a path to an atlas,
and the second being a list or dict of volume_labels to extract
(see mne.setup_volume_source_space() for details).
Changed in version 0.21.0: Support for volume source estimates.
False (default) will emit an error if there are labels that have no
vertices in the source estimate. True and 'ignore' will return
all-zero time courses for labels that do not have any vertices in the
source estimate; True will emit a warning, while 'ignore' will
just log a message.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Maximum absolute value across vertices at each time point within each label.
'mean'
Average across vertices at each time point within each label. Ignores
orientation of sources for standard source estimates, which varies
across the cortical surface, which can lead to cancellation.
Vector source estimates are always in XYZ / RAS orientation, and are thus
already geometrically aligned.
'mean_flip'
Finds the dominant direction of source space normal vector orientations
within each label, applies a sign-flip to time series at vertices whose
orientation is more than 90° different from the dominant direction, and
then averages across vertices at each time point within each label.
'pca_flip'
Applies singular value decomposition to the time courses within each label,
and uses the first right-singular vector as the representative label time
course. This signal is scaled so that its power matches the average
(per-vertex) power within the label, and sign-flipped by multiplying by
np.sign(u@flip), where u is the first left-singular vector and
flip is the same sign-flip vector used when mode='mean_flip'. This
sign-flip ensures that extracting time courses from the same label in
similar STCs does not result in 180° direction/phase changes.
'auto' (default)
Uses 'mean_flip' when a standard source estimate is supplied, and
'mean' when a vector source estimate is supplied.
None
No aggregation is performed, and an array of shape (n_vertices, n_times)
is returned.
New in v0.21: Support for 'auto', vector, and volume source estimates.
The only modes that work for vector and volume source estimates are 'mean',
'max', and 'auto'.
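As an illustration of the 'pca_flip' scaling and sign conventions described above, here is a NumPy sketch on toy data (the flip vector is a random stand-in; in the real computation it is derived from the source orientations within the label):

```python
import numpy as np

rng = np.random.default_rng(1)
# Time courses of the vertices inside one label: (n_vertices, n_times).
label_tc = rng.standard_normal((6, 50))
flip = np.sign(rng.standard_normal(6))   # stand-in sign-flip vector

# 'pca_flip': take the first right-singular vector, scale it so its
# power matches the average per-vertex power in the label, and fix
# its sign with np.sign(u @ flip).
u_mat, s, vh = np.linalg.svd(label_tc, full_matrices=False)
scale = np.linalg.norm(s) / np.sqrt(len(label_tc))
tc = np.sign(u_mat[:, 0] @ flip) * scale * vh[0]
```

Because vh[0] has unit norm, the squared norm of tc equals the label's total power divided by the number of vertices, i.e. the average per-vertex power.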
Channels to include. Slices and lists of integers will be interpreted
as channel indices. In lists, channel type strings (e.g., ['meg', 'eeg'])
will pick channels of those types, and channel name strings (e.g.,
['MEG0111', 'MEG2623']) will pick the given channels. Can also be the
string values 'all' to pick all channels, or 'data' to pick
data channels. None (default) will pick all data channels. Note
that channels in info['bads'] will be included if their names or
indices are explicitly provided.
'auto' (default): The filter length is chosen based
on the size of the transition regions (6.6 times the reciprocal
of the shortest transition band for fir_window='hamming'
and fir_design='firwin2', and half that for 'firwin').
str: A human-readable time in
units of 's' or 'ms' (e.g., '10s' or '5500ms') will be
converted to that number of samples if phase="zero", or
the shortest power-of-two length at least that duration for
phase="zero-double".
int: Specified length in samples. For fir_design='firwin',
this should not be used.
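A sketch of the 'auto' rule quoted above (6.6 times the reciprocal of the shortest transition band, converted to an odd number of samples; auto_filter_length is a hypothetical helper for illustration, not a library function):

```python
def auto_filter_length(sfreq, shortest_transition_bw, factor=6.6):
    """Filter length in samples: factor / transition_bw seconds of data.

    factor=6.6 corresponds to the fir_window='hamming' / fir_design='firwin2'
    case described above; 'firwin' would use half that.
    """
    n = int(round(factor * sfreq / shortest_transition_bw))
    return n + 1 if n % 2 == 0 else n  # odd length keeps the filter symmetric

# 2 Hz transition band at 1 kHz sampling: 6.6 / 2 s of data = 3300 samples.
length = auto_filter_length(sfreq=1000.0, shortest_transition_bw=2.0)
```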
Width of the transition band at the low cut-off frequency in Hz
(high pass or cutoff 1 in bandpass). Can be “auto”
(default) to use a multiple of l_freq:
Width of the transition band at the high cut-off frequency in Hz
(low pass or cutoff 2 in bandpass). Can be “auto”
(default in 0.14) to use a multiple of h_freq:
Dictionary of parameters to use for IIR filtering.
If iir_params=None and method="iir", 4th order Butterworth will be used.
For more information, see mne.filter.construct_iir_filter().
Phase of the filter.
When method='fir', symmetric linear-phase FIR filters are constructed
with the following behaviors:
"zero" (default)
The delay of this filter is compensated for, making it non-causal.
"minimum"
A minimum-phase filter will be constructed by decomposing the zero-phase filter
into a minimum-phase and all-pass systems, and then retaining only the
minimum-phase system (of the same length as the original zero-phase filter)
via scipy.signal.minimum_phase().
"zero-double"
This is a legacy option for compatibility with MNE <= 0.13.
The filter is applied twice, once forward, and once backward
(also making it non-causal).
"minimum-half"
This is a legacy option for compatibility with MNE <= 1.6.
A minimum-phase filter will be reconstructed from the zero-phase filter with
half the length of the original filter.
When method='iir', phase='zero' (default) or equivalently 'zero-double'
constructs and applies the IIR filter twice, once forward and once
backward (making it non-causal) using filtfilt(); phase='forward' will apply
the filter once in the forward (causal) direction using
lfilter().
New in v0.13.
Changed in version 1.7: The behavior for phase="minimum" was fixed to use a filter of the requested
length and improved suppression.
Can be "firwin" (default) to use scipy.signal.firwin(),
or "firwin2" to use scipy.signal.firwin2(). "firwin" uses
a time-domain design technique that generally gives improved
attenuation using fewer samples than "firwin2".
If a string (or list of str), any annotation segment that begins
with the given string will not be included in filtering, and
segments on either side of the given excluded annotated segment
will be filtered separately (i.e., as independent signals).
The default, ('edge', 'bad_acq_skip'), will separately filter
any segments that were concatenated by mne.concatenate_raws()
or mne.io.Raw.append(), or separated during acquisition.
To disable, provide an empty list. Only used if inst is raw.
The type of padding to use. Supports
all numpy.pad() mode options. Can also be "reflect_limited", which
pads with a reflected version of each vector mirrored on the first and last values
of the vector, followed by zeros.
Only used for method='fir'.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The maximum point in time to be considered for peak getting.
mode : {'pos', 'neg', 'abs'}
How to deal with the sign of the data. If 'pos', only positive
values will be considered. If 'neg', only negative values will
be considered. If 'abs', absolute values will be considered.
Defaults to 'abs'.
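A NumPy sketch of how the three mode options can be interpreted when searching for a peak (peak is a hypothetical helper, not the library method):

```python
import numpy as np

data = np.array([[0.5, -3.0, 1.0],
                 [2.0, 0.25, -1.0]])  # (n_vertices, n_times)

def peak(data, mode="abs"):
    """Return (vertex index, time index) of the peak under `mode`."""
    if mode == "pos":
        search = np.where(data > 0, data, -np.inf)   # positives only
    elif mode == "neg":
        search = np.where(data < 0, -data, -np.inf)  # largest negative magnitude
    else:  # 'abs'
        search = np.abs(data)
    return np.unravel_index(np.argmax(search), data.shape)
```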
Hemisphere id (i.e., 'lh', 'rh', 'both', or 'split'). In
the case of 'both', both hemispheres are shown in the same window.
In the case of 'split' hemispheres are displayed side-by-side
in different viewing panes.
Name of colormap to use or a custom look-up table. If array, must
be an (n x 3) or (n x 4) array with RGB or RGBA values between
0 and 255.
The default ('auto') uses 'hot' for one-sided data and
'mne' for two-sided data.
Format of the time label (a format string, a function that maps
floating point time values to strings, or None for no label). The
default is 'auto', which will use time=%0.2f ms if there
is more than one time point.
If True: use a linear transparency between fmin and fmid
and make values below fmin fully transparent (symmetrically for
divergent colormaps). None will choose automatically based on colormap
type.
If None, a new figure will be created. If multiple views or a
split view is requested, this must be a list of the appropriate
length. If an int is provided, it will be used to identify the PyVista
figure by its id or to create a new figure with the given id. If an
instance of matplotlib figure, mpl backend is used for plotting.
View to use. Using multiple views (list) is not supported for mpl
backend. See Brain.show_view for
valid string options.
When plotting a standard SourceEstimate (not volume, mixed, or vector)
and using the PyVista backend, views='flat' is also supported to
plot cortex as a flatmap.
Using multiple views (list) is not supported by the matplotlib backend.
Colorbar properties specification. If 'auto', set clim automatically
based on data percentiles. If dict, should contain:
kind : 'value' | 'percent'
Flag to specify type of limits.
lims : list | np.ndarray | tuple of float, 3 elements
Lower, middle, and upper bounds for colormap.
pos_lims : list | np.ndarray | tuple of float, 3 elements
Lower, middle, and upper bound for colormap. Positive values
will be mirrored directly across zero during colormap
construction to obtain negative control points.
Note
Only one of lims or pos_lims should be provided.
Only sequential colormaps should be used with lims, and
only divergent colormaps should be used with pos_lims.
Specifies how binarized curvature values are rendered.
Either the name of a preset Brain cortex colorscheme (one of
'classic', 'bone', 'low_contrast', or 'high_contrast'),
or the name of a colormap, or a tuple with values
(colormap, min, max, reverse) to fully specify the curvature
colors. Has no effect with the matplotlib backend.
The size of the window, in pixels. Can be a single number to specify
a square window, or a (width, height) tuple for a rectangular window.
Has no effect with the matplotlib backend.
Only affects the matplotlib backend.
The spacing to use for the source space. Can be 'ico#' for a
recursively subdivided icosahedron, 'oct#' for a recursively
subdivided octahedron, or 'all' for all points. In general, you can
speed up the plotting by selecting a sparser source space.
Defaults to ‘oct6’.
If True, enable interactive picking of a point on the surface of the
brain and plot its time course.
This feature is only available with the PyVista 3d backend, and requires
time_viewer=True. Defaults to ‘auto’, which will use True if and
only if time_viewer=True, the backend is PyVista, and there is more
than one time point. If float (between zero and one), it specifies what
proportion of the total window should be devoted to traces (True is
equivalent to 0.25, i.e., it will occupy the bottom 1/4 of the figure).
Options for volumetric source estimate plotting, with key/value pairs:
'resolution' : float | None
Resolution (in mm) of the volume rendering. Smaller (e.g., 1.) looks
better at the cost of speed. None (default) uses the volume source
space resolution, which is often something like 7 or 5 mm,
without resampling.
'blending' : str
Can be "mip" (default) for maximum intensity projection or
"composite" for composite blending using alpha values.
'alpha' : float | None
Alpha for the volumetric rendering. Defaults are 0.4 for vector source
estimates and 1.0 for scalar source estimates.
'surface_alpha' : float | None
Alpha for the surface enclosing the volume(s). None (default) will use
half the volume alpha. Set to zero to avoid plotting the surface.
'silhouette_alpha' : float | None
Alpha for a silhouette along the outside of the volume. None (default)
will use 0.25 * surface_alpha.
'silhouette_linewidth' : float
The line width to use for the silhouette. Default is 2.
A float input (default 1.) or None will be used for the 'resolution'
entry.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Flatmaps are available by default for fsaverage but not for other
subjects reconstructed by FreeSurfer. We recommend using
mne.compute_source_morph() to morph source estimates to fsaverage
for flatmap plotting. If you want to construct your own flatmap for a given
subject, these links might help:
Resampling method to use. Can be "fft" (default) or "polyphase"
to use FFT-based or polyphase FIR resampling, respectively. These wrap
scipy.signal.resample() and scipy.signal.resample_poly(), respectively.
When method="fft", this is the frequency-domain window to use in resampling,
and should be the same length as the signal; see scipy.signal.resample()
for details. When method="polyphase", this is the time-domain linear-phase
window to use after upsampling the signal; see scipy.signal.resample_poly()
for details. The default "auto" will use "boxcar" for method="fft" and
("kaiser",5.0) for method="polyphase".
The type of padding to use. When method="fft", supports
all numpy.pad() mode options. Can also be "reflect_limited", which
pads with a reflected version of each vector mirrored on the first and last values
of the vector, followed by zeros.
When method="polyphase", supports all modes of scipy.signal.upfirdn().
The default ("auto") means 'reflect_limited' for method='fft' and
'reflect' for method='polyphase'.
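FFT-based resampling as described for method="fft" can be sketched with NumPy alone by truncating or zero-padding the one-sided spectrum and rescaling (fft_resample is a simplified stand-in that ignores Nyquist-bin subtleties and windowing):

```python
import numpy as np

def fft_resample(x, num):
    """Resample x to `num` points by resizing its one-sided spectrum."""
    X = np.fft.rfft(x)
    n_out = num // 2 + 1
    Y = np.zeros(n_out, dtype=complex)
    m = min(len(X), n_out)
    Y[:m] = X[:m]                     # truncate (down) or zero-pad (up)
    # Rescale so amplitudes are preserved after changing the length.
    return np.fft.irfft(Y, num) * (num / len(x))

t = np.arange(64) / 64.0
x = np.sin(2 * np.pi * 4 * t)         # band-limited: 4 cycles per window
y = fft_resample(x, 128)              # upsample by a factor of 2
```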
The number of jobs to run in parallel. If -1, it is set
to the number of CPU cores. Requires the joblib package.
None (default) is a marker for ‘unset’ that will be interpreted
as n_jobs=1 (sequential execution) unless the call is performed under
a joblib.parallel_config context manager that sets another
value for n_jobs.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The stem of the file name. The file names used for surface source
spaces are obtained by adding "-lh.stc" and "-rh.stc" (or
"-lh.w" and "-rh.w") to the stem provided, for the left and
the right hemisphere, respectively.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Filename basename to save files as.
Will write anatomical GIFTI plus time series GIFTI for both lh/rh,
for example "basename" will write "basename.lh.gii",
"basename.lh.time.gii", "basename.rh.gii", and
"basename.rh.time.gii".
Approximate high cut-off frequency in Hz. Note that this
is not an exact cutoff, since Savitzky-Golay filtering
[3] is done using polynomial fits
instead of FIR/IIR filtering. This parameter is thus used to
determine the length of the window over which a 5th-order
polynomial smoothing is used.
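One plausible reading of how such an approximate cutoff maps to a smoothing window, offered only as an illustrative assumption (the library's exact rule may differ): the window spans roughly one period of h_freq, so higher cutoffs give shorter windows and less smoothing.

```python
def savgol_window_length(sfreq, h_freq):
    """Hypothetical window-length rule: about one period of h_freq."""
    n = int(round(sfreq / h_freq))
    return n + 1 if n % 2 == 0 else n  # Savitzky-Golay windows must be odd

# 40 Hz approximate cutoff at 1 kHz sampling -> a 25-sample window.
win = savgol_window_length(sfreq=1000.0, h_freq=40.0)
```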
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Export data in tabular structure as a pandas DataFrame.
Vertices are converted to columns in the DataFrame. By default,
an additional column “time” is added, unless index='time'
(in which case time values form the DataFrame’s index).
Kind of index to use for the DataFrame. If None, a sequential
integer index (pandas.RangeIndex) will be used. If 'time', a
pandas.Index or pandas.TimedeltaIndex will be used
(depending on the value of time_format).
Defaults to None.
Scaling factor applied to the channels picked. If None, defaults to
dict(eeg=1e6, mag=1e15, grad=1e13), i.e., converts EEG to µV,
magnetometers to fT, and gradiometers to fT/cm.
If True, the DataFrame is returned in long format where each row is one
observation of the signal at a unique combination of time point and vertex.
Defaults to False.
Desired time format. If None, no conversion is applied, and time values
remain as float values in seconds. If 'ms', time values will be rounded
to the nearest millisecond and converted to integers. If 'timedelta',
time values will be converted to pandas.Timedelta values.
Default is None.
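The 'ms' conversion described above amounts to the following sketch:

```python
import numpy as np

times = np.array([0.0004, 0.0011, 0.0996])  # seconds

# time_format='ms': round to the nearest millisecond, store as integers.
times_ms = np.round(times * 1000).astype(int)
```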
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The transform to be applied, including parameters (see, e.g.,
functools.partial()). The first parameter of the function is
the input data. The first two dimensions of the transformed data
should be (i) vertices and (ii) time. See Notes for details.
The transformed stc or, in the case of transforms which yield
N-dimensional output (where N > 2), a list of stcs. For a list,
copy must be True.
Notes
Transforms which yield 3D
output (e.g. time-frequency transforms) are valid, so long as the
first two dimensions are vertices and time. In this case, the
copy parameter must be True and a list of
SourceEstimates, rather than a single instance of SourceEstimate,
will be returned, one for each index of the 3rd dimension of the
transformed data. In the case of transforms yielding 2D output
(e.g. filtering), the user has the option of modifying the input
in place (copy=False) or returning a new instance of
SourceEstimate (copy=True) with the transformed data.
Applying transforms can be significantly faster if the
SourceEstimate object was created using (kernel, sens_data) for
the data parameter, as the transform is applied in sensor space.
Inverse methods, e.g., apply_inverse_epochs() or apply_lcmv_epochs(),
do this automatically (if possible).
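The 3D-output case described in the Notes, where one source estimate per index of the third dimension is returned, can be sketched with a toy transform (purely illustrative):

```python
import numpy as np

data = np.ones((4, 10))                      # (n_vertices, n_times)

def tf_transform(x):
    """Toy 'time-frequency' transform: output (n_vertices, n_times, 2)."""
    return np.stack([x, 2 * x], axis=-1)

out = tf_transform(data)
# 3D output: split along the last axis into one 2D array per index,
# mirroring how a list of SourceEstimates is returned when copy=True.
stcs_data = [out[..., k] for k in range(out.shape[-1])]
```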
The transform to be applied, including parameters (see, e.g.,
functools.partial()). The first parameter of the function is
the input data. The first return value is the transformed data,
remaining outputs are ignored. The first dimension of the
transformed data has to be the same as the first dimension of the
input data.
Applying transforms can be significantly faster if the
SourceEstimate object was created using (kernel, sens_data) for
the data parameter, as the transform is applied in sensor space.
Inverse methods, e.g., apply_inverse_epochs() or apply_lcmv_epochs(),
do this automatically (if possible).