mne.io.BaseRaw

class mne.io.BaseRaw(info, preload=False, first_samps=(0,), last_samps=None, filenames=(None,), raw_extras=(None,), orig_format='double', dtype=<class 'numpy.float64'>, buffer_size_sec=1.0, orig_units=None, *, verbose=None)

Base class for Raw data.

Parameters:
info : mne.Info

The mne.Info object with information about the sensors and methods of measurement.

preload : bool | str | ndarray

Preload data into memory for data manipulation and faster indexing. If True, the data will be preloaded into memory (fast, but requires a large amount of memory). If preload is a string, it is the file name of a memory-mapped file used to store the data on the hard drive (slower, requires less memory). If preload is an ndarray, the data are taken from that array. If False, data are not read until save. A usage sketch follows the parameter list below.

first_samps : iterable

Iterable of the first sample number from each raw file. For unsplit raw files this should be a length-one list or tuple.

last_samps : iterable | None

Iterable of the last sample number from each raw file. For unsplit raw files this should be a length-one list or tuple. If None, then preload must be an ndarray.

filenames : tuple

Tuple of length one (for unsplit raw files) or length > 1 (for split raw files).

raw_extras : list of dict

The data necessary for on-demand reads for the given reader format. Should be the same length as filenames. Will have the entry raw_extras['orig_nchan'] added to it for convenience.

orig_format : str

The data format of the original raw file (e.g., 'double').

dtype : dtype | None

The dtype of the raw data. If preload is an ndarray, its dtype must match what is passed here.

buffer_size_sec : float

The buffer size in seconds that should be written by default using mne.io.Raw.save().

orig_units : dict | None

Dictionary mapping channel names to their units as specified in the header file. Example: {'FC1': 'nV'}.

New in v0.17.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.
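
In practice these parameters are usually exercised indirectly through one of the reader functions rather than by constructing BaseRaw yourself. A minimal sketch of the preload options, assuming an existing FIF file (the file names here are placeholders, not files shipped with MNE):

    import mne

    # Lazy reading: only metadata is loaded; samples stay on disk until needed.
    raw = mne.io.read_raw_fif("sample_raw.fif", preload=False)

    # Load all samples into memory (fast access, higher memory use).
    raw.load_data()

    # Alternatively, preload into a memory-mapped file on disk
    # (slower access, but a low memory footprint).
    raw_mm = mne.io.read_raw_fif("sample_raw.fif", preload="raw_memmap.dat")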

Attributes:
annotations

Annotations for marking segments of data.

ch_names

Channel names.

compensation_grade

The current gradient compensation grade.

filenames

The filenames used.

first_samp

The first data sample.

first_time

The first time point (including first_samp but not meas_date).

last_samp

The last data sample.

n_times

Number of time points.

proj

Whether or not projections are active.

times

Time points.
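
As a rough illustration of how some of these attributes relate (assuming raw is any existing Raw instance):

    sfreq = raw.info["sfreq"]
    print(raw.first_samp / sfreq, raw.first_time)  # same value: first_time is first_samp in seconds
    print(raw.n_times, len(raw.times))             # same value
    print(raw.times[0])                            # 0.0 -- times are relative to first_samp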

Methods

__contains__(ch_type)

Check channel type membership.

__getitem__(item)

Get raw data and times (see the indexing sketch after this method list).

__len__()

Return the number of time points.

add_channels(add_list[, force_update_info])

Append new channels to the instance.

add_events(events[, stim_channel, replace])

Add events to stim channel.

add_proj(projs[, remove_existing, verbose])

Add SSP projection vectors.

add_reference_channels(ref_channels)

Add reference channels to data that consists of all zeros.

anonymize([daysback, keep_his, verbose])

Anonymize measurement information in place.

append(raws[, preload])

Concatenate raw instances as if they were continuous.

apply_function(fun[, picks, dtype, n_jobs, ...])

Apply a function to a subset of channels.

apply_gradient_compensation(grade[, verbose])

Apply CTF gradient compensation.

apply_hilbert([picks, envelope, n_jobs, ...])

Compute analytic signal or envelope for a subset of channels/vertices.

apply_proj([verbose])

Apply the signal space projection (SSP) operators to the data.

close()

Clean up the object.

compute_psd([method, fmin, fmax, tmin, ...])

Perform spectral analysis on sensor data.

compute_tfr(method, freqs, *[, tmin, tmax, ...])

Compute a time-frequency representation of sensor data.

copy()

Return copy of Raw instance.

crop([tmin, tmax, include_tmax, verbose])

Crop raw data file.

crop_by_annotations([annotations, verbose])

Get crops of raw data file for selected annotations.

del_proj([idx])

Remove SSP projection vector.

describe([data_frame])

Describe channels (name, type, descriptive statistics).

drop_channels(ch_names[, on_missing])

Drop channel(s).

export(fname[, fmt, physical_range, ...])

Export Raw to external formats.

filter(l_freq, h_freq[, picks, ...])

Filter a subset of channels/vertices.

get_channel_types([picks, unique, only_data_chs])

Get a list of channel types, one for each channel.

get_data([picks, start, stop, ...])

Get data in the given range.

get_montage()

Get a DigMontage from instance.

interpolate_bads([reset_bads, mode, origin, ...])

Interpolate bad MEG and EEG channels.

load_bad_channels([bad_file, force, verbose])

Mark channels as bad from a text file.

load_data([verbose])

Load raw data.

notch_filter(freqs[, picks, filter_length, ...])

Notch filter a subset of channels.

pick(picks[, exclude, verbose])

Pick a subset of channels.

pick_channels(ch_names[, ordered, verbose])

Pick some channels.

pick_types([meg, eeg, stim, eog, ecg, emg, ...])

Pick some channels by type and names.

plot([events, duration, start, n_channels, ...])

Plot raw data.

plot_projs_topomap([ch_type, sensors, ...])

Plot SSP vector.

plot_psd([fmin, fmax, tmin, tmax, picks, ...])

plot_psd_topo([tmin, tmax, fmin, fmax, ...])

plot_psd_topomap([bands, tmin, tmax, ...])

plot_sensors([kind, ch_type, title, ...])

Plot sensor positions.

rename_channels(mapping[, allow_duplicates, ...])

Rename channels.

reorder_channels(ch_names)

Reorder channels.

resample(sfreq, *[, npad, window, ...])

Resample all channels.

save(fname[, picks, tmin, tmax, ...])

Save raw data to file.

savgol_filter(h_freq[, verbose])

Filter the data using Savitzky-Golay polynomial method.

set_annotations(annotations[, emit_warning, ...])

Setter for annotations.

set_channel_types(mapping, *[, ...])

Specify the sensor types of channels.

set_eeg_reference([ref_channels, ...])

Specify which reference to use for EEG data.

set_meas_date(meas_date)

Set the measurement start date.

set_montage(montage[, match_case, ...])

Set EEG/sEEG/ECoG/DBS/fNIRS channel positions and digitization points.

time_as_index(times[, use_rounding, origin])

Convert time to indices.

to_data_frame([picks, index, scalings, ...])

Export data in tabular structure as a pandas DataFrame.
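
A short sketch of the __getitem__ / get_data access pattern referenced above (raw is any existing Raw instance; the channel and sample ranges are arbitrary, and the EEG pick assumes the recording contains EEG channels):

    # __getitem__ returns the selected data together with the matching time vector.
    data, times = raw[:10, 0:1000]   # first 10 channels, first 1000 samples
    print(data.shape, times.shape)   # (10, 1000) and (1000,)

    # get_data offers similar access using picks plus start/stop sample indices.
    eeg_data = raw.get_data(picks="eeg", start=0, stop=1000)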

See also

mne.io.Raw

Documentation of attributes and methods.

Notes

This class is public to allow for stable type-checking in user code (i.e., isinstance(my_raw_object, BaseRaw)) but should not be used as a constructor for Raw objects (use instead one of the subclass constructors, or one of the mne.io.read_raw_* functions).
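
For example, a helper that should accept any raw flavor (FIF, EDF, BrainVision, ...) can check against the base class rather than against a specific reader's return type. The function below is an illustrative sketch, not part of the MNE API:

    import mne

    def print_duration(raw):
        """Print the duration of any Raw-like object."""
        if not isinstance(raw, mne.io.BaseRaw):
            raise TypeError(f"Expected a Raw instance, got {type(raw)}")
        print(f"{raw.n_times / raw.info['sfreq']:.1f} s of data")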

Subclasses must provide the following methods:

  • _read_segment_file(self, data, idx, fi, start, stop, cals, mult) (only needed for types that support on-demand disk reads)
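
A minimal sketch of such a subclass, not taken from the MNE codebase; the class name, channel names, and sampling frequency are invented for illustration. Because the samples are handed over as a preloaded ndarray, no on-demand disk reads occur and _read_segment_file is not needed:

    import numpy as np
    import mne
    from mne.io import BaseRaw

    class MyToyRaw(BaseRaw):
        """Hypothetical reader whose samples are already in memory."""

        def __init__(self, data, sfreq, verbose=None):
            info = mne.create_info(
                ch_names=[f"CH{i}" for i in range(data.shape[0])],
                sfreq=sfreq,
                ch_types="eeg",
            )
            # Passing the ndarray as ``preload`` means the data never need to be
            # read from disk, so _read_segment_file can be omitted.
            super().__init__(info, preload=data, verbose=verbose)

    raw = MyToyRaw(np.random.randn(4, 1000), sfreq=100.0)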