mne.events_from_annotations

mne.events_from_annotations(raw, event_id='auto', regexp='^(?![Bb][Aa][Dd]|[Ee][Dd][Gg][Ee]).*$', use_rounding=True, chunk_duration=None, verbose=None)

Get events and event_id from an Annotations object.

Parameters:
raw : instance of Raw

The raw data for which Annotations are defined.

event_id : dict | callable | None | ‘auto’

Can be:

  • dict: map descriptions (keys) to integer event codes (values). Only the descriptions present will be mapped, others will be ignored.

  • callable: must take a string input and return an integer event code, or return None to ignore the event.

  • None: Map descriptions to unique integer values based on their sorted order.

  • ‘auto’ (default): prefer a raw-format-specific parser:

    • BrainVision: map stimulus events to their integer part; response events to integer part + 1000; optic events to integer part + 2000; ‘SyncStatus/Sync On’ to 99998; ‘New Segment/’ to 99999; all other descriptions are handled as with None, but with an offset of 10000 added to the codes.

    • Other raw formats: behaves like None.

    New in version 0.18.

regexp : str | None

Regular expression used to filter the annotations: only annotations whose descriptions match it are converted to events. The default ignores descriptions beginning with 'bad' or 'edge' (case-insensitive).

Changed in version 0.18: Default ignores bad and edge descriptions.

use_rounding : bool

If True, use rounding (instead of truncation) when converting times to indices. This can help avoid non-unique indices.

chunk_duration : float | None

Chunk duration in seconds. If chunk_duration is None (default), each generated event corresponds to an annotation onset. Otherwise, mne.events_from_annotations() returns as many events as fit within each annotation's duration, spaced chunk_duration seconds apart; annotations with a duration shorter than chunk_duration therefore contribute no events. (A usage sketch of event_id and chunk_duration follows this parameter list.)

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.
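The following is a minimal sketch (not taken from the MNE documentation) of the event_id and chunk_duration parameters described above; the file name and the annotation descriptions are assumptions chosen for illustration.

import mne

# Assumed example file; any Raw object with annotations works the same way.
raw = mne.io.read_raw_fif("sample_audvis_raw.fif", preload=False)
raw.set_annotations(mne.Annotations(onset=[1.0, 3.0, 5.0],
                                    duration=[0.0, 0.0, 2.0],
                                    description=["stim/left", "stim/right", "rest"]))

# Default event_id='auto' (equivalent to None for FIF data): each description
# is mapped to an integer based on its sorted order.
events, event_id = mne.events_from_annotations(raw)

# Explicit dict: only the listed descriptions become events; "rest" is ignored.
events, event_id = mne.events_from_annotations(
    raw, event_id={"stim/left": 1, "stim/right": 2})

# chunk_duration: tile the 2 s "rest" annotation with one event every 0.5 s.
rest_events, rest_id = mne.events_from_annotations(
    raw, event_id={"rest": 3}, chunk_duration=0.5)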

Returns:
events : array of int, shape (n_events, 3)

The array of events. The first column contains the event time in samples, with first_samp included. The third column contains the event id.

event_id : dict

The event_id variable that can be passed to Epochs.
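As a brief illustration of how the return values are typically consumed, the sketch below passes them straight to Epochs; the file name and epoching window are arbitrary assumptions.

import mne

raw = mne.io.read_raw_fif("sample_audvis_raw.fif", preload=False)  # assumed file
events, event_id = mne.events_from_annotations(raw)

# The first column of `events` is already in samples including first_samp,
# which is what Epochs expects alongside the returned event_id mapping.
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.5)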

Notes

For data formats that store integer events as strings (e.g., NeuroScan .cnt files), passing the Python built-in function int as the event_id parameter will do what most users probably want in those circumstances: return an event_id dictionary that maps event '1' to integer event code 1, '2' to 2, etc.
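A hedged sketch of this recipe, assuming a NeuroScan .cnt recording (the file name is a placeholder) whose annotation descriptions are the strings '1', '2', and so on:

import mne

raw = mne.io.read_raw_cnt("recording.cnt")  # placeholder path
# Passing the built-in int maps description '1' -> code 1, '2' -> 2, etc.
events, event_id = mne.events_from_annotations(raw, event_id=int)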

Examples using mne.events_from_annotations

Parsing events from raw data
Preprocessing functional near-infrared spectroscopy (fNIRS) data
Auto-generating Epochs metadata
Frequency-tagging: Basic analysis of an SSVEP/vSSR dataset
Working with sEEG data
Working with ECoG data
Sleep stage classification from polysomnography (PSG) data
Plot single trial activity, grouped by ROI and sorted by RT
Compute and visualize ERDS maps
Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP)
Decoding in time-frequency space using Common Spatial Patterns (CSP)