mne.events_from_annotations
mne.events_from_annotations(raw, event_id='auto', regexp='^(?![Bb][Aa][Dd]|[Ee][Dd][Gg][Ee]).*$', use_rounding=True, chunk_duration=None, verbose=None)
Get events and event_id from an Annotations object.

Parameters:
- raw : instance of Raw
  The raw data for which Annotations are defined.
- event_id : dict | callable | None | 'auto'
  Can be:
  - dict: map descriptions (keys) to integer event codes (values). Only the descriptions present will be mapped; others will be ignored.
  - callable: must take a string input and return an integer event code, or return None to ignore the event.
  - None: map descriptions to unique integer values based on their sorted order.
  - 'auto' (default): prefer a raw-format-specific parser:
    - Brainvision: map stimulus events to their integer part; response events to integer part + 1000; optic events to integer part + 2000; 'SyncStatus/Sync On' to 99998; 'New Segment/' to 99999; all others like None with an offset of 10000.
    - Other raw formats: behaves like None.
  See the first sketch after the Returns section for the dict form.
  New in version 0.18.
- regexp : str | None
  Regular expression used to filter the annotations, keeping only those whose descriptions match it. The default ignores descriptions beginning with 'bad' or 'edge' (case-insensitive).
  Changed in version 0.18: Default ignores bad and edge descriptions.
- use_rounding : bool
  If True, use rounding (instead of truncation) when converting times to indices. This can help avoid non-unique indices.
- chunk_duration : float | None
  Chunk duration in seconds. If chunk_duration is None (default), generated events correspond to the annotation onsets. Otherwise, mne.events_from_annotations() returns as many events as fit within the annotation duration, spaced according to chunk_duration; annotations with a duration shorter than chunk_duration therefore contribute no events (see the second sketch after the Returns section).
- verbose : bool | str | int | None
  Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.
Returns:

- events : array of int, shape (n_events, 3)
  The events corresponding to the annotations.
- event_id : dict
  The mapping from annotation descriptions to the integer event codes used in events.
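As a rough illustration of the dict form of event_id: the channel name, annotation descriptions, and event codes below are made up for the sketch, not taken from this page.

    import numpy as np
    import mne

    # Build a small Raw object with three illustrative annotations.
    info = mne.create_info(ch_names=["EEG 001"], sfreq=100.0, ch_types="eeg")
    raw = mne.io.RawArray(np.zeros((1, 1000)), info)
    raw.set_annotations(mne.Annotations(onset=[1.0, 3.0, 5.0],
                                        duration=[0.0, 0.0, 0.0],
                                        description=["left", "right", "left"]))

    # Map each description to an explicit integer event code; descriptions not
    # listed in the dict (and, under the default regexp, anything starting with
    # 'bad' or 'edge') are ignored.
    events, event_id = mne.events_from_annotations(
        raw, event_id={"left": 1, "right": 2})
    print(events)    # one row per event: sample index, previous value, event code
    print(event_id)  # {'left': 1, 'right': 2}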
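A second sketch, for chunk_duration, again with made-up data: a single long annotation contributes one event per chunk that fits inside it.

    import numpy as np
    import mne

    info = mne.create_info(ch_names=["EEG 001"], sfreq=100.0, ch_types="eeg")
    raw = mne.io.RawArray(np.zeros((1, 1000)), info)  # 10 s of zeros at 100 Hz

    # One 8-second annotation; with chunk_duration=2.0, four 2-second chunks
    # fit (0-2, 2-4, 4-6, 6-8 s), so four events are generated.
    raw.set_annotations(mne.Annotations(onset=[0.0], duration=[8.0],
                                        description=["rest"]))
    events, event_id = mne.events_from_annotations(raw, chunk_duration=2.0)
    print(events[:, 0])  # samples 0, 200, 400, 600 (i.e. 0, 2, 4, 6 s)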
See also
Notes
For data formats that store integer events as strings (e.g., NeuroScan .cnt files), passing the Python built-in function int as the event_id parameter will do what most users probably want in those circumstances: return an event_id dictionary that maps event '1' to integer event code 1, '2' to 2, etc.
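A minimal sketch of that pattern; the annotation descriptions below are made up, and only the use of int as event_id reflects the note above.

    import numpy as np
    import mne

    info = mne.create_info(ch_names=["EEG 001"], sfreq=100.0, ch_types="eeg")
    raw = mne.io.RawArray(np.zeros((1, 500)), info)
    raw.set_annotations(mne.Annotations(onset=[1.0, 2.0, 3.0],
                                        duration=[0.0, 0.0, 0.0],
                                        description=["1", "2", "1"]))

    # The built-in int parses each numeric description into its event code.
    events, event_id = mne.events_from_annotations(raw, event_id=int)
    print(event_id)  # {'1': 1, '2': 2}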
Examples using mne.events_from_annotations

- Preprocessing functional near-infrared spectroscopy (fNIRS) data
- Frequency-tagging: Basic analysis of an SSVEP/vSSR dataset
- Sleep stage classification from polysomnography (PSG) data
- Plot single trial activity, grouped by ROI and sorted by RT
- Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP)
- Decoding in time-frequency space using Common Spatial Patterns (CSP)