General settings

study_name module-attribute

study_name: str = ''

Specify the name of your study. It will be used to populate filenames for saving the analysis results.

study_name = 'my-study'

bids_root module-attribute

bids_root: Optional[PathLike] = None

Specify the BIDS root directory. Pass an empty string or None to use the value specified in the BIDS_ROOT environment variable instead. Raises an exception if the BIDS root has not been specified.

bids_root = '/path/to/your/bids_root'  # Use this to specify a path here.
bids_root = None  # Make use of the ``BIDS_ROOT`` environment variable.

deriv_root module-attribute

deriv_root: Optional[PathLike] = None

The root of the derivatives directory in which the pipeline will store the processing results. If None, this will be derivatives/mne-bids-pipeline inside the BIDS root.

Note: If specified and you wish to run the source analysis steps, you must set subjects_dir as well.
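For example (the path shown is a placeholder; substitute your own location):

```python
# Store the processing results outside the BIDS tree; the path is hypothetical.
deriv_root = '/data/derivatives/mne-bids-pipeline'

# Or keep the default location, derivatives/mne-bids-pipeline inside the BIDS root:
deriv_root = None
```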

subjects_dir module-attribute

subjects_dir: Optional[PathLike] = None

Path to the directory that contains the FreeSurfer reconstructions of all subjects. Specifically, this defines the SUBJECTS_DIR that is used by FreeSurfer.

  • When running the freesurfer processing step to create the reconstructions from anatomical scans in the BIDS dataset, the output will be stored in this directory.
  • When running the source analysis steps, we will look for the surfaces in this directory and also store the BEM surfaces there.

If None, this will default to bids_root/derivatives/freesurfer/subjects.

Note: This setting is required if you specify deriv_root and want to run the source analysis steps.
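For example (the path shown is a placeholder):

```python
# Hypothetical FreeSurfer SUBJECTS_DIR:
subjects_dir = '/data/freesurfer/subjects'

# Or use the default, bids_root/derivatives/freesurfer/subjects:
subjects_dir = None
```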

interactive module-attribute

interactive: bool = False

If True, the scripts will provide some interactive elements, such as figures. If running the scripts from a notebook or Spyder, run %matplotlib qt in the command line to open the figures in a separate window.

Note: Enabling interactive mode deactivates parallel processing.
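For example:

```python
interactive = True   # Show figures interactively; disables parallel processing.
interactive = False  # Run non-interactively (default).
```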

sessions module-attribute

sessions: Union[List, Literal['all']] = 'all'

The sessions to process. If 'all', will process all sessions found in the BIDS dataset.
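For example (the session labels are hypothetical):

```python
sessions = 'all'         # Process every session found in the dataset.
sessions = ['01', '02']  # Only process sessions 01 and 02.
```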

task module-attribute

task: str = ''

The task to process.

task_is_rest module-attribute

task_is_rest: bool = False

Whether the task should be treated as resting-state data.

runs module-attribute

runs: Union[Iterable, Literal['all']] = 'all'

The runs to process. If 'all', will process all runs found in the BIDS dataset.
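For example (the run labels are hypothetical):

```python
runs = 'all'   # Process every run found in the dataset.
runs = ['01']  # Only process run 01.
```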

exclude_runs module-attribute

exclude_runs: Optional[Dict[str, List[str]]] = None

Specify runs to exclude from analysis, for each participant individually.

exclude_runs = None  # Include all runs.
exclude_runs = {'01': ['02']}  # Exclude run 02 of subject 01.
Good Practice / Advice

Keep track of the criteria leading you to exclude a run (e.g. too many movements, missing blocks, aborted experiment, did not understand the instructions, etc.).

crop_runs module-attribute

crop_runs: Optional[Tuple[float, float]] = None

Crop the raw data of each run to the specified time interval [tmin, tmax], in seconds. The runs will be cropped before Maxwell or frequency filtering is applied. If None, do not crop the data.
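For example:

```python
crop_runs = None      # Do not crop; keep the full recordings.
crop_runs = (0, 300)  # Keep only the first 300 seconds of each run.
```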

acq module-attribute

acq: Optional[str] = None

The BIDS acquisition entity.

proc module-attribute

proc: Optional[str] = None

The BIDS processing entity.

rec module-attribute

rec: Optional[str] = None

The BIDS recording entity.

space module-attribute

space: Optional[str] = None

The BIDS space entity.

subjects module-attribute

subjects: Union[Iterable[str], Literal['all']] = 'all'

Subjects to analyze. If 'all', include all subjects. To only include a subset of subjects, pass a list of their identifiers. Even if you plan on analyzing only a single subject, pass their identifier as a list.

Please note that if you intend to EXCLUDE only a few subjects, you should consider setting subjects = 'all' and adding the identifiers of the excluded subjects to exclude_subjects (see next section).

subjects = 'all'  # Include all subjects.
subjects = ['05']  # Only include subject 05.
subjects = ['01', '02']  # Only include subjects 01 and 02.

exclude_subjects module-attribute

exclude_subjects: Iterable[str] = []

Specify subjects to exclude from analysis. The MEG empty-room mock-subject is automatically excluded from regular analysis.

Good Practice / Advice

Keep track of the criteria leading you to exclude a participant (e.g. too many movements, missing blocks, aborted experiment, did not understand the instructions, etc.). The empty-room subject is excluded automatically.

process_er module-attribute

process_er: bool = False

Whether to apply the same pre-processing steps to the empty-room data as to the experimental data (up to and including frequency filtering). This is required if you wish to use the empty-room recording to estimate noise covariance (via noise_cov='emptyroom'). The empty-room recording corresponding to the processed experimental data will be retrieved automatically.
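For example:

```python
process_er = True  # Needed if you set noise_cov = 'emptyroom'.
```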

ch_types module-attribute

ch_types: Iterable[Literal['meg', 'mag', 'grad', 'eeg']] = []

The channel types to consider.


Currently, MEG and EEG data cannot be processed together.

# Use EEG channels:
ch_types = ['eeg']

# Use magnetometer and gradiometer MEG channels:
ch_types = ['mag', 'grad']

# Currently does not work and will raise an error message:
ch_types = ['meg', 'eeg']

data_type module-attribute

data_type: Optional[Literal['meg', 'eeg']] = None

The BIDS data type.

For MEG recordings, this will usually be 'meg'; and for EEG, 'eeg'. However, if your dataset contains simultaneous recordings of MEG and EEG, stored in a single file, you will typically need to set this to 'meg'. If None, we will assume that the data type matches the channel type.


The dataset contains simultaneous recordings of MEG and EEG, and we only wish to process the EEG data, which is stored inside the MEG files:

ch_types = ['eeg']
data_type = 'eeg'

The dataset contains simultaneous recordings of MEG and EEG, and we only wish to process the gradiometer data:

ch_types = ['grad']
data_type = 'meg'  # or data_type = None

The dataset contains only EEG data:

ch_types = ['eeg']
data_type = 'eeg'  # or data_type = None

eog_channels module-attribute

eog_channels: Optional[Iterable[str]] = None

Specify EOG channels to use, or create virtual EOG channels.

Allows the specification of custom channel names that shall be used as (virtual) EOG channels. For example, say you recorded EEG without dedicated EOG electrodes, but with some EEG electrodes placed close to the eyes, e.g. Fp1 and Fp2. These channels can be expected to have captured large quantities of ocular activity, and you might want to use them as "virtual" EOG channels, while also including them in the EEG analysis. By default, MNE won't know that these channels are suitable for recovering EOG, and hence won't be able to perform tasks like automated blink removal, unless a "true" EOG sensor is present in the data as well. Specifying channel names here allows MNE to find the respective EOG signals based on these channels.

You can specify one or multiple channel names. Each will be treated as if it were a dedicated EOG channel, without excluding it from any other analyses.

If None, only actual EOG channels will be used for EOG recovery.

If there are multiple actual EOG channels in your data, and you only specify a subset of them here, only this subset will be used during processing.


Treat Fp1 as virtual EOG channel:

eog_channels = ['Fp1']

Treat Fp1 and Fp2 as virtual EOG channels:

eog_channels = ['Fp1', 'Fp2']

eeg_bipolar_channels module-attribute

eeg_bipolar_channels: Optional[
    Dict[str, Tuple[str, str]]
] = None

Combine two channels into a bipolar channel, whose signal is the difference between the two combined channels, and add it to the data. A typical use case is the combination of two EOG channels – for example, a left and a right horizontal EOG – into a single, bipolar EOG channel. You need to pass a dictionary whose keys are the name of the new bipolar channel you wish to create, and whose values are tuples consisting of two strings: the name of the channel acting as anode and the name of the channel acting as cathode, i.e. {'ch_name': ('anode', 'cathode')}. You can request to construct more than one bipolar channel by specifying multiple key/value pairs. See the examples below.

Can also be None if you do not want to create bipolar channels.

Note: The channels used to create the bipolar channels are not automatically dropped from the data. To drop channels, set drop_channels.


Combine the existing channels HEOG_left and HEOG_right into a new, bipolar channel, HEOG:

eeg_bipolar_channels = {'HEOG': ('HEOG_left', 'HEOG_right')}

Create two bipolar channels, HEOG and VEOG:

eeg_bipolar_channels = {'HEOG': ('HEOG_left', 'HEOG_right'),
                        'VEOG': ('VEOG_lower', 'VEOG_upper')}

eeg_reference module-attribute

eeg_reference: Union[
    Literal['average'], str, Iterable[str]
] = "average"

The EEG reference to use. If average, will use the average reference, i.e. the average across all channels. If a string, must be the name of a single channel. To use multiple channels as reference, set to a list of channel names.


Use the average reference:

eeg_reference = 'average'

Use the P9 channel as reference:

eeg_reference = 'P9'

Use the average of the P9 and P10 channels as reference:

eeg_reference = ['P9', 'P10']

eeg_template_montage module-attribute

eeg_template_montage: Optional[str] = None

In situations where you wish to process EEG data and no individual digitization points (measured channel locations) are available, you can apply a "template" montage. This means we will assume the EEG cap was placed either according to an international system like 10/20, or as suggested by the cap manufacturers in their respective manual.

Please be aware that the actual cap placement most likely deviated somewhat from the template, and, therefore, source reconstruction may be impaired.

If None, do not apply a template montage. If a string, must be the name of a built-in template montage in MNE-Python. You can find an overview of supported template montages in the MNE-Python documentation.


Do not apply template montage:

eeg_template_montage = None

Apply 64-channel Biosemi 10/20 template montage:

eeg_template_montage = 'biosemi64'

drop_channels module-attribute

drop_channels: Iterable[str] = []

Names of channels to remove from the data. This can be useful, for example, if you have added a new bipolar channel via eeg_bipolar_channels and now wish to remove the anode, cathode, or both.


Exclude channels Fp1 and Cz from processing:

drop_channels = ['Fp1', 'Cz']

reader_extra_params module-attribute

reader_extra_params: dict = {}

Parameters to be passed to read_raw_bids() calls when importing raw data.


Enforce units for EDF files:

reader_extra_params = {"units": "uV"}

analyze_channels module-attribute

analyze_channels: Union[
    Literal['all'], Literal['ch_types'], Iterable[str]
] = "ch_types"

The names of the channels to analyze during ERP/ERF and time-frequency analysis steps. For certain paradigms, e.g. EEG ERP research, it is common to constrain sensor-space analysis to only a few specific sensors. If 'all', do not exclude any channels (except for those selected for removal via the drop_channels setting; use with caution as this can include things like STIM channels during the decoding step). If 'ch_types' (default), restrict to the channels listed in the ch_types parameter. The constraint will be applied to all sensor-level analyses after the preprocessing stage, but not to the preprocessing stage itself, nor to the source analysis stage.


Only use channel Pz for ERP, evoked contrasts, time-by-time decoding, and time-frequency analysis:

analyze_channels = ['Pz']

plot_psd_for_runs module-attribute

plot_psd_for_runs: Union[
    Literal['all'], Iterable[str]
] = "all"

For which runs to add a power spectral density (PSD) plot to the generated report. This can take a considerable amount of time if you have many long runs. In this case, specify the runs, or pass an empty list to disable raw PSD plotting.
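For example (the run labels are hypothetical):

```python
plot_psd_for_runs = 'all'         # PSD plots for every run (default).
plot_psd_for_runs = ['01', '02']  # Only plot PSDs for runs 01 and 02.
plot_psd_for_runs = []            # Disable raw PSD plotting entirely.
```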

N_JOBS module-attribute

N_JOBS: int = 1

Specifies how many subjects you want to process in parallel. If 1, disables parallel processing.
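For example:

```python
N_JOBS = 1  # Disable parallel processing.
N_JOBS = 4  # Process up to 4 subjects in parallel.
```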

parallel_backend module-attribute

parallel_backend: Literal['loky', 'dask'] = 'loky'

Specifies which backend to use for parallel job execution. loky is the default backend used by joblib. dask requires Dask to be installed. Ignored if N_JOBS is set to 1.
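For example:

```python
parallel_backend = 'loky'  # Default joblib backend.
parallel_backend = 'dask'  # Requires Dask to be installed.
```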

dask_open_dashboard module-attribute

dask_open_dashboard: bool = False

Whether to open the Dask dashboard in the default web browser automatically. Ignored if parallel_backend is not 'dask'.

dask_temp_dir module-attribute

dask_temp_dir: Optional[PathLike] = None

The temporary directory to use by Dask. Dask places lock-files in this directory, and also uses it to "spill" RAM contents to disk if the amount of free memory in the system hits a critical low. It is recommended to point this to a location on a fast, local disk (i.e., not a network-attached storage) to ensure good performance. The directory needs to be writable and will be created if it does not exist.

If None, will use a .dask-worker-space directory inside deriv_root.

dask_worker_memory_limit module-attribute

dask_worker_memory_limit: str = '10G'

The maximum amount of RAM per Dask worker.

random_state module-attribute

random_state: Optional[int] = 42

You can specify the seed of the random number generator (RNG). This setting is passed to the ICA algorithm and to the decoding function, ensuring reproducible results. Set to None to avoid setting the RNG to a defined state.
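For example:

```python
random_state = 42    # Seed the RNG for reproducible ICA and decoding results (default).
random_state = None  # Do not set the RNG to a defined state.
```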

shortest_event module-attribute

shortest_event: int = 1

Minimum number of samples an event must last. If the duration is less than this, an exception will be raised.

memory_location module-attribute

memory_location: Optional[Union[PathLike, bool]] = True

If not None (or False), caching will be enabled and the cache files will be stored in the given directory. The default (True) will use a 'joblib' subdirectory in the BIDS derivative root of the dataset.
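For example (the custom path is a placeholder):

```python
memory_location = True               # Cache in a 'joblib' subdirectory of the derivatives root (default).
memory_location = '/tmp/mne-cache'   # Hypothetical custom cache directory.
memory_location = False              # Disable caching.
```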

memory_file_method module-attribute

memory_file_method: MemoryFileMethodT = 'mtime'

The method to use for cache invalidation (i.e., detecting changes). Using the "modified time" reported by the filesystem ('mtime', default) is very fast but requires that the filesystem supports proper mtime reporting. Using file hashes ('hash') is slower and requires reading all input files but should work on any filesystem.

memory_verbose module-attribute

memory_verbose: int = 0

The verbosity to use when using memory. The default (0) does not print, while 1 will print the function calls that will be cached. See the documentation for the joblib.Memory class for more information.

log_level module-attribute

log_level: Literal['info', 'error'] = 'info'

Set the pipeline logging verbosity.

mne_log_level module-attribute

mne_log_level: Literal['info', 'error'] = 'error'

Set the MNE-Python logging verbosity.

on_error module-attribute

on_error: OnErrorT = 'abort'

Whether to abort processing as soon as an error occurs, continue with all other processing steps for as long as possible, or drop you into a debugger in case of an error.
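Based on the description above, the accepted values appear to be 'abort', 'continue', and 'debug'; verify against the OnErrorT type in your installed version:

```python
on_error = 'abort'     # Stop at the first error (default).
on_error = 'continue'  # Keep running the remaining steps where possible.
on_error = 'debug'     # Drop into a debugger when an error occurs.
```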