Datasets Overview#

All dataset fetchers are available in mne.datasets. To download a dataset, use its data_path function (fetches the full dataset) or its load_data function (fetches only part of the dataset).

All fetchers will check the default download location first to see if the dataset is already on your computer, and only download it if necessary. The default download location is also configurable; see the documentation of any of the data_path functions for more information.
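For example, a minimal sketch (using the sample dataset named below) that downloads the data only if needed and returns the local path:

from mne.datasets import sample

# data_path() checks the default download location first and only
# downloads the dataset if it is not already present; it returns the
# local path to the dataset root.
path = sample.data_path()
print(path)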

Sample#

mne.datasets.sample.data_path()

These data were acquired with the Neuromag Vectorview system at the MGH/HMS/MIT Athinoula A. Martinos Center for Biomedical Imaging. EEG data from a 60-channel electrode cap were acquired simultaneously with the MEG. The original MRI data set was acquired with a Siemens 1.5 T Sonata scanner using an MPRAGE sequence.

Note

These data are provided solely for the purpose of getting familiar with the MNE software. The data should not be used to evaluate the performance of the MEG or MRI system employed.

In this experiment, checkerboard patterns were presented to the subject in the left and right visual field, interspersed with tones to the left or right ear. The interval between stimuli was 750 ms. Occasionally a smiley face was presented at the center of the visual field. The subject was asked to press a key with the right index finger as soon as possible after the appearance of the face.

Trigger codes for the sample data set.#

Name     Code   Contents
LA       1      Response to left-ear auditory stimulus
RA       2      Response to right-ear auditory stimulus
LV       3      Response to left visual field stimulus
RV       4      Response to right visual field stimulus
smiley   5      Response to the smiley face
button   32     Response triggered by the button press
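As a sketch of how these trigger codes are typically used (assuming a recent MNE version where data_path() returns a pathlib.Path, and the standard file layout MEG/sample/sample_audvis_raw.fif):

import mne

data_path = mne.datasets.sample.data_path()
raw = mne.io.read_raw_fif(data_path / "MEG" / "sample" / "sample_audvis_raw.fif")

# Map the trigger codes from the table above to condition names
event_id = {"LA": 1, "RA": 2, "LV": 3, "RV": 4, "smiley": 5, "button": 32}
events = mne.find_events(raw, stim_channel="STI 014")  # trigger channel in this file
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.5)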

Contents of the data set#

The sample data set contains two main directories: MEG/sample (the MEG/EEG data) and subjects/sample (the MRI reconstructions). In addition to subject sample, the MRI surface reconstructions from another subject, morph, are provided to demonstrate morphing capabilities.

Contents of the MEG/sample directory.#

File                    Contents
sample/audvis_raw.fif   The raw MEG/EEG data
audvis.ave              A template script for off-line averaging
audvis.cov              A template script for the computation of a noise-covariance matrix

Overview of the contents of the subjects/sample directory.#

File / directory        Contents
bem                     Directory for the forward modelling data
bem/watershed           BEM surface segmentation data computed with the watershed algorithm
bem/inner_skull.surf    Inner skull surface for BEM
bem/outer_skull.surf    Outer skull surface for BEM
bem/outer_skin.surf     Skin surface for BEM
sample-head.fif         Skin surface in fif format for mne_analyze visualizations
surf                    Surface reconstructions
mri/T1                  The T1-weighted MRI data employed in visualizations

The following preprocessing steps have already been performed on the sample data set:

  • The MRI surface reconstructions have been computed using the FreeSurfer software.

  • The BEM surfaces have been created with the watershed algorithm (see Using the watershed algorithm and the sketch after this list).
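As a sketch of how the provided BEM surfaces can be turned into a BEM model and solution (a single-layer model, sufficient for MEG, is assumed here):

import mne

data_path = mne.datasets.sample.data_path()
subjects_dir = data_path / "subjects"

# Build a single-layer (inner skull) BEM from the surfaces in bem/
model = mne.make_bem_model(subject="sample", conductivity=(0.3,),
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)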

The sample dataset is distributed with fsaverage for convenience.

Brainstorm#

Dataset fetchers for three Brainstorm tutorials are available. Users must agree to the license terms of these datasets before downloading them. These data were recorded with a CTF 275-channel system and are provided in native CTF format (.ds files).

Auditory#

mne.datasets.brainstorm.bst_auditory.data_path()

Details about the data can be found at the Brainstorm auditory dataset tutorial.
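A minimal sketch of loading the auditory recording with MNE's CTF reader (the .ds directory name is an assumption based on the Brainstorm auditory tutorial layout):

import mne
from mne.datasets.brainstorm import bst_auditory

data_path = bst_auditory.data_path()
# Assumed location of the first run; adjust to the folder actually on disk
ds_dir = data_path / "MEG" / "bst_auditory" / "S01_AEF_20131218_01.ds"
raw = mne.io.read_raw_ctf(ds_dir, preload=False)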


Resting state#

mne.datasets.brainstorm.bst_resting.data_path()

Details can be found at the Brainstorm resting state dataset tutorial.

Median nerve#

mne.datasets.brainstorm.bst_raw.data_path()

Details can be found at the Brainstorm median nerve dataset tutorial.

SPM faces#

mne.datasets.spm_face.data_path()

The SPM faces dataset contains EEG, MEG and fMRI recordings on face perception.


EEGBCI motor imagery#

mne.datasets.eegbci.load_data()

The EEGBCI dataset is documented in 1 and is available from PhysioNet 2. It contains 64-channel EEG recordings from 109 subjects, with 14 runs per subject, in EDF+ format. The recordings were made using the BCI2000 system. To load data for one subject, do:

from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci

subject = 1  # subject number (1-109)
runs = [6, 10, 14]  # runs 6, 10, and 14 are the hands-vs-feet motor imagery runs
raw_fnames = eegbci.load_data(subject, runs)  # download (if needed) and return local file names
raws = [read_raw_edf(f, preload=True) for f in raw_fnames]
raw = concatenate_raws(raws)

Somatosensory#

mne.datasets.somato.data_path()

This dataset contains somatosensory data with event-related synchronizations (ERS) and desynchronizations (ERD).

Multimodal#

mne.datasets.multimodal.data_path()

This dataset contains a single subject recorded at Otaniemi (Aalto University) with auditory, visual, and somatosensory stimuli.

fNIRS motor#

mne.datasets.fnirs_motor.data_path()

This dataset contains a single subject recorded at Macquarie University. It has optodes placed over the motor cortex. There are three conditions:

  • tapping the left thumb to fingers

  • tapping the right thumb to fingers

  • a control where nothing happens

The tapping lasts 5 seconds, and there are 30 trials of each condition.
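A minimal sketch of reading this recording with the NIRx reader (the Participant-1 folder name is an assumption about the dataset layout):

import mne

data_path = mne.datasets.fnirs_motor.data_path()
# "Participant-1" is the assumed per-subject folder name in this dataset
raw_intensity = mne.io.read_raw_nirx(data_path / "Participant-1", preload=True)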

High frequency SEF#

mne.datasets.hf_sef.data_path()

This dataset contains somatosensory evoked fields (median nerve stimulation) with thousands of epochs. It was recorded with an Elekta TRIUX MEG device at a sampling frequency of 3 kHz. The dataset is suitable for investigating high-frequency somatosensory responses. Data from two subjects are included with MRI images in DICOM format and FreeSurfer reconstructions.

Visual 92 object categories#

mne.datasets.visual_92_categories.data_path()

This dataset was recorded using a 306-channel Neuromag Vectorview system.

The experiment consisted of the visual presentation of 92 images of human, animal, and inanimate objects, either natural or artificial 3. Given the high number of conditions, this dataset is well suited to an approach based on representational similarity analysis (RSA).


mTRF Dataset#

mne.datasets.mtrf.data_path()

This dataset contains 128-channel EEG data as well as natural speech stimulus features.

The experiment consisted of subjects listening to natural speech. The dataset contains several feature representations of the speech stimulus, suitable for fitting continuous regression models of neural activity. More details and a description of the package can be found in 4.


Kiloword dataset#

mne.datasets.kiloword.data_path()

This dataset consists of averaged EEG data from 75 subjects performing a lexical decision task on 960 English words 5. The words are richly annotated and can be used, for example, for multiple regression estimation of EEG correlates of printed word processing.
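As a sketch, the averaged data can be loaded with the generic evoked reader (the file name below is an assumption; use the -ave.fif file actually present in the dataset folder):

import mne

data_path = mne.datasets.kiloword.data_path()
# Assumed file name of the averaged data; adjust if it differs on disk
evokeds = mne.read_evokeds(data_path / "kword_metadata-ave.fif")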

4D Neuroimaging / BTi dataset#

mne.datasets.phantom_4dbti.data_path()

This dataset was obtained with a phantom on a 4D Neuroimaging / BTi system at the MEG center in La Timone hospital in Marseille.

OPM#

mne.datasets.opm.data_path()

These OPM data were acquired using an Elekta DACQ, simply piping the data into Elekta magnetometer channels. The FIF files thus appear to come from a TRIUX system that is only acquiring a small number of magnetometer channels instead of the whole array.

The OPM sensors use a custom coil_type (9999), which requires a custom coil_def.dat file.

OPM co-registration differs a bit from the typical SQUID-MEG workflow. No -trans.fif file is needed for the OPMs: the FIF files include proper sensor locations in MRI coordinates and no digitization of the LPA, RPA, or nasion. Thus the MEG<->head coordinate transform is taken to be an identity matrix (i.e., everything is in MRI coordinates), even though this mis-identifies the head coordinate frame (which is defined by the relationship of the LPA, RPA, and nasion).
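As a sketch of that identity transform (mne.Transform with no transformation matrix defaults to identity):

import mne

# With no matrix given, mne.Transform defaults to an identity transform,
# i.e. MEG device and head coordinates are treated as the same frame.
trans = mne.Transform("meg", "head")
print(trans)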

Triggers include:

  • Median nerve stimulation: trigger value 257.

  • Magnetic trigger (in the OPM measurement only): trigger value 260. One second before the median nerve stimulation, a magnetic trigger is piped into the MSR (magnetically shielded room). This allowed the synchronization between OPMs to be checked retrospectively, since each sensor runs on an independent clock. Synchronization turned out to be satisfactory.

The Sleep PolySomnoGraphic Database#

mne.datasets.sleep_physionet.age.fetch_data()

mne.datasets.sleep_physionet.temazepam.fetch_data()

The sleep PhysioNet database contains 197 whole-night polysomnographic sleep recordings, containing EEG, EOG, chin EMG, and event markers. Some records also contain respiration and body temperature. Corresponding hypnograms (sleep patterns) were manually scored by well-trained technicians according to the Rechtschaffen and Kales manual, and are also available. If you use these data, please cite 6 and 2.
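A minimal sketch of fetching and reading one recording from the age cohort (the subject and recording numbers are illustrative):

import mne
from mne.datasets.sleep_physionet.age import fetch_data

# Download (if needed) night 1 for subject 0; each entry returned is a
# [PSG file, hypnogram file] pair.
[files] = fetch_data(subjects=[0], recording=[1])
psg_file, hypnogram_file = files
raw = mne.io.read_raw_edf(psg_file, preload=False)
annotations = mne.read_annotations(hypnogram_file)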

Reference channel noise MEG data set#

mne.datasets.refmeg_noise.data_path()

This dataset was obtained with a 4D Neuroimaging / BTi system at the University Clinic - Erlangen, Germany. There are powerful bursts of external magnetic noise throughout the recording, which make it a good example for automatic noise removal techniques.

Miscellaneous Datasets#

These datasets are used for specific purposes in the documentation and in general are not useful for separate analyses.

fsaverage#

mne.datasets.fetch_fsaverage()

For convenience, we provide a function to separately download and extract (or update an existing copy of) the fsaverage subject.
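A minimal usage sketch (in recent MNE versions the function returns the path to the fsaverage directory, whose parent can serve as subjects_dir):

from mne.datasets import fetch_fsaverage

# Download (or update) fsaverage and get the path to the subject directory
fs_dir = fetch_fsaverage()
subjects_dir = fs_dir.parent  # usable as the FreeSurfer subjects_dir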

Infant template MRIs#

mne.datasets.fetch_infant_template()

This function will download an infant template MRI from 7 along with MNE-specific files.

ECoG Dataset#

mne.datasets.misc.data_path(); the data are located in the ecog/ subdirectory.

This dataset contains a sample electrocorticography (ECoG) dataset. It includes two grids of electrodes and ten shaft electrodes with simulated motor data (actual data pending availability).


sEEG Dataset#

mne.datasets.misc.data_path(); the data are located in the seeg/ subdirectory.

This dataset contains a sample stereoelectroencephalography (sEEG) dataset. It includes 21 shaft electrodes during a two-choice movement task on a keyboard.


LIMO Dataset#

mne.datasets.limo.load_data()

In the original LIMO experiment (see 8), participants performed a two-alternative forced choice task, discriminating between two face stimuli. Subjects discriminated the same two faces during the whole experiment. The critical manipulation was the level of noise added to the face stimuli during the task, making the faces more or less discernible to the observer.

The presented faces varied along a noise-signal (or phase-coherence) continuum spanning from 0 to 100% in steps of 10%. In other words, faces with high phase-coherence (e.g., 90%) were easy to identify, while faces with low phase-coherence (e.g., 10%) were hard to identify and by extension hard to discriminate.
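A minimal sketch of loading one subject's epoched data (the subject number is illustrative):

from mne.datasets.limo import load_data

# Download (if needed) and load the epoched data for subject 1
limo_epochs = load_data(subject=1)
print(limo_epochs)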


ERP CORE Dataset#

mne.datasets.erp_core.data_path()

The original ERP CORE dataset 9 contains data from 40 participants who completed 6 EEG experiments, carefully crafted to evoke 7 well-known event-related potential (ERP) components.

Currently, the MNE-Python ERP CORE dataset only provides data from one participant (subject 001) of the Flankers paradigm, which elicits the lateralized readiness potential (LRP) and error-related negativity (ERN). The data provided are not the original data from the ERP CORE dataset, but rather a slightly modified version designed to demonstrate the Epochs metadata functionality. For example, we have already set the references and montage correctly, and stored the events as Annotations. Data are provided in FIFF format.
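As a sketch, the events stored as Annotations can be converted back into an events array (the file name below is hypothetical; use the .fif file actually shipped with the dataset):

import mne

data_path = mne.datasets.erp_core.data_path()
# Hypothetical file name for subject 001 of the Flankers paradigm
raw = mne.io.read_raw_fif(data_path / "ERP-CORE_Subject-001_Task-Flankers_eeg.fif")
events, event_id = mne.events_from_annotations(raw)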


SSVEP#

mne.datasets.ssvep.data_path()

This is a simple example dataset with frequency-tagged visual stimulation: N=2 participants observed checkerboard patterns inverting at a constant frequency of either 12.0 Hz or 15.0 Hz. There were 10 trials of 20.0 s each, and 32-channel wet EEG was recorded.

Data format: BrainVision .eeg/.vhdr/.vmrk files organized according to the BIDS standard.
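A minimal sketch of reading one of these BrainVision recordings (the BIDS-style path below is an assumption; adjust the subject/session/task entities to what is actually on disk):

import mne

data_path = mne.datasets.ssvep.data_path()
# Assumed BIDS-style location of one participant's header file
vhdr = data_path / "sub-02" / "ses-01" / "eeg" / "sub-02_ses-01_task-ssvep_eeg.vhdr"
raw = mne.io.read_raw_brainvision(vhdr, preload=True)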

References#

1

Gerwin Schalk, Dennis J. McFarland, Thilo Hinterberger, Niels Birbaumer, and Jonathan R. Wolpaw. BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Transactions on Biomedical Engineering, 51(6):1034–1043, 2004. doi:10.1109/TBME.2004.827072.

2

Ary L. Goldberger, Luis A. N. Amaral, Leon Glass, Jeffrey M. Hausdorff, Plamen Ch. Ivanov, Roger G. Mark, Joseph E. Mietus, George B. Moody, Chung-Kang Peng, and H. Eugene Stanley. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 2000. doi:10.1161/01.CIR.101.23.e215.

3

Radoslaw Martin Cichy, Dimitrios Pantazis, and Aude Oliva. Resolving human object recognition in space and time. Nature Neuroscience, 17(3):455–462, 2014. doi:10.1038/nn.3635.

4

Michael J. Crosse, Giovanni M. Di Liberto, Adam Bednar, and Edmund C. Lalor. The multivariate temporal response function (mTRF) toolbox: a MATLAB toolbox for relating neural signals to continuous stimuli. Frontiers in Human Neuroscience, 2016. doi:10.3389/fnhum.2016.00604.

5

Stéphane Dufau, Jonathan Grainger, Katherine J. Midgley, and Phillip J. Holcomb. A thousand words are worth a picture: snapshots of printed-word processing in an event-related potential megastudy. Psychological Science, 26(12):1887–1897, 2015. doi:10.1177/0956797615603934.

6

B. Kemp, A. H. Zwinderman, B. Tuk, H. A. C. Kamphuisen, and J. J. L. Oberyé. Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the EEG. IEEE Transactions on Biomedical Engineering, 47(9):1185–1194, 2000. doi:10.1109/10.867928.

7

Christian O’Reilly, Eric Larson, John E. Richards, and Mayada Elsabbagh. Structural templates for imaging EEG cortical sources in infants. NeuroImage, 227:117682, 2021. doi:10.1016/j.neuroimage.2020.117682.

8

Guillaume A. Rousselet, Carl M. Gaspar, Cyril R. Pernet, Jesse S. Husk, Patrick J. Bennett, and Allison B. Sekuler. Healthy aging delays scalp EEG sensitivity to noise in a face discrimination task. Frontiers in Psychology, 1(19):1–14, 2010. doi:10.3389/fpsyg.2010.00019.

9

Emily S. Kappenman, Jaclyn L. Farrens, Wendy Zhang, Andrew X. Stewart, and Steven J. Luck. ERP CORE: an open resource for human event-related potential research. NeuroImage, 225:117465, 2021. doi:10.1016/j.neuroimage.2020.117465.