Getting started with mne.Report

mne.Report is a way to create interactive HTML summaries of your data. These reports can show many different visualizations of one subject’s data. A common use case is creating diagnostic summaries to check data quality at different stages in the processing pipeline. The report can show things like plots of data before and after each preprocessing step, epoch rejection statistics, and MRI slices with overlaid BEM shells, all the way up to plots of estimated cortical activity.

Compared to a Jupyter notebook, mne.Report is easier to deploy (the HTML pages it generates are self-contained and do not require a running Python environment) but less flexible (you can’t change code and re-run something directly within the browser). This tutorial covers the basics of building a Report. As usual we’ll start by importing the modules we need:

import os
import matplotlib.pyplot as plt
import mne

Before getting started with mne.Report, make sure the files you want to render follow the filename conventions defined by MNE:

Data object       Filename convention (ends with)
----------------  ------------------------------------------------------------
raw               -raw.fif(.gz), -raw_sss.fif(.gz), -raw_tsss.fif(.gz),
                  _meg.fif(.gz), _eeg.fif(.gz), _ieeg.fif(.gz)
events            -eve.fif(.gz)
epochs            -epo.fif(.gz)
evoked            -ave.fif(.gz)
covariance        -cov.fif(.gz)
SSP projectors    -proj.fif(.gz)
trans             -trans.fif(.gz)
forward           -fwd.fif(.gz)
inverse           -inv.fif(.gz)

Alternatively, the dash - in the filename may be replaced with an underscore _.
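
Because parse_folder() selects files using glob-style patterns, it can be handy to check which filenames a given pattern would pick up before building a report. Here is a minimal sketch using Python’s fnmatch module and some hypothetical filenames (not part of the sample dataset):

import fnmatch

# hypothetical filenames, purely to illustrate the conventions above
candidate_files = ['sub01-raw.fif', 'sub01-epo.fif', 'sub01-ave.fif',
                   'sub01-cov.fif', 'sub01-trans.fif', 'notes.txt']

# preview which files a pattern like '*-ave.fif' would select
print(fnmatch.filter(candidate_files, '*-ave.fif'))  # ['sub01-ave.fif']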

Basic reports

The basic process for creating an HTML report is to instantiate the Report class, then use the parse_folder() method to select particular files to include in the report. Which files are included depends on both the pattern parameter passed to parse_folder() and also the subject and subjects_dir parameters provided to the Report constructor.

For our first example, we’ll generate a barebones report for all the .fif files containing raw data in the sample dataset, by passing the pattern *raw.fif to parse_folder(). We’ll omit the subject and subjects_dir parameters from the Report constructor, but we’ll also pass render_bem=False to the parse_folder() method — otherwise we would get a warning about not being able to render MRI and trans files without knowing the subject.

path = mne.datasets.sample.data_path(verbose=False)
report = mne.Report(verbose=True)
report.parse_folder(path, pattern='*raw.fif', render_bem=False)
report.save('report_basic.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/ernoise_raw.fif...
Isotrak not found
    Read a total of 3 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
    Range : 19800 ... 85867 =     32.966 ...   142.965 secs
Ready.
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
    Range : 6450 ... 48149 =     42.956 ...   320.665 secs
Ready.
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_raw.fif...
    Read a total of 3 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
    Range : 25800 ... 192599 =     42.956 ...   320.670 secs
Ready.
Iterating over 3 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/ernoise_raw.fif
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/ernoise_raw.fif...
Isotrak not found
    Read a total of 3 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
    Range : 19800 ... 85867 =     32.966 ...   142.965 secs
Ready.
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
    Range : 6450 ... 48149 =     42.956 ...   320.665 secs
Ready.
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_raw.fif
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_raw.fif...
    Read a total of 3 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
    Range : 25800 ... 192599 =     42.956 ...   320.670 secs
Ready.
Saving report to location /home/circleci/project/tutorials/intro/report_basic.html
Rendering : Table of Contents
raw
 ... ernoise_raw.fif
 ... sample_audvis_filt-0-40_raw.fif
 ... sample_audvis_raw.fif

This report yields a textual summary of the Raw files selected by the pattern. For a slightly more useful report, we’ll ask for the power spectral density of the Raw files, by passing raw_psd=True to the Report constructor. We’ll also visualize the SSP projectors stored in the raw data’s Info dictionary by setting projs=True. Lastly, let’s also refine our pattern to select only the filtered raw recording (omitting the unfiltered data and the empty-room noise recordings):

pattern = 'sample_audvis_filt-0-40_raw.fif'
report = mne.Report(raw_psd=True, projs=True, verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_raw_psd.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
    Range : 6450 ... 48149 =     42.956 ...   320.665 secs
Ready.
Iterating over 1 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
    Range : 6450 ... 48149 =     42.956 ...   320.665 secs
Ready.
Effective window size : 13.639 (s)
Effective window size : 13.639 (s)
Effective window size : 13.639 (s)
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
Saving report to location /home/circleci/project/tutorials/intro/report_raw_psd.html
Rendering : Table of Contents
raw
 ... sample_audvis_filt-0-40_raw.fif

The sample dataset also contains SSP projectors stored as individual files. To add them to a report, we also have to provide the path to a file containing an Info dictionary, from which the channel locations can be read.

info_fname = os.path.join(path, 'MEG', 'sample',
                          'sample_audvis_filt-0-40_raw.fif')
pattern = 'sample_audvis_*proj.fif'
report = mne.Report(info_fname=info_fname, verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_proj.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Iterating over 2 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_ecg-proj.fif
    Read a total of 6 projection items:
        ECG-planar-999--0.200-0.400-PCA-01 (1 x 203)  idle
        ECG-planar-999--0.200-0.400-PCA-02 (1 x 203)  idle
        ECG-axial-999--0.200-0.400-PCA-01 (1 x 102)  idle
        ECG-axial-999--0.200-0.400-PCA-02 (1 x 102)  idle
        ECG-eeg-999--0.200-0.400-PCA-01 (1 x 59)  idle
        ECG-eeg-999--0.200-0.400-PCA-02 (1 x 59)  idle
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_eog-proj.fif
    Read a total of 6 projection items:
        EOG-planar-998--0.200-0.200-PCA-01 (1 x 203)  idle
        EOG-planar-998--0.200-0.200-PCA-02 (1 x 203)  idle
        EOG-axial-998--0.200-0.200-PCA-01 (1 x 102)  idle
        EOG-axial-998--0.200-0.200-PCA-02 (1 x 102)  idle
        EOG-eeg-998--0.200-0.200-PCA-01 (1 x 59)  idle
        EOG-eeg-998--0.200-0.200-PCA-02 (1 x 59)  idle
Saving report to location /home/circleci/project/tutorials/intro/report_proj.html
Rendering : Table of Contents
ssp
 ... sample_audvis_ecg-proj.fif
 ... sample_audvis_eog-proj.fif

This time we’ll pass a specific subject and subjects_dir (even though there’s only one subject in the sample dataset) and remove our render_bem=False parameter so we can see the MRI slices, with BEM contours overlaid on top if available. Since this is computationally expensive, we’ll also pass the mri_decim parameter for the benefit of our documentation servers, and skip processing the .fif files:

subjects_dir = os.path.join(path, 'subjects')
report = mne.Report(subject='sample', subjects_dir=subjects_dir, verbose=True)
report.parse_folder(path, pattern='', mri_decim=25)
report.save('report_mri_bem.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Iterating over 0 potential files (this may take some time)
Rendering BEM
Using surface: /home/circleci/mne_data/MNE-sample-data/subjects/sample/bem/inner_skull.surf
Using surface: /home/circleci/mne_data/MNE-sample-data/subjects/sample/bem/outer_skull.surf
Using surface: /home/circleci/mne_data/MNE-sample-data/subjects/sample/bem/outer_skin.surf
Saving report to location /home/circleci/project/tutorials/intro/report_mri_bem.html
Rendering : Table of Contents
bem
 ... bem

Now let’s look at how Report handles Evoked data (we will skip the MRIs to save computation time). The following code will produce butterfly plots, topomaps, and comparisons of the global field power (GFP) for different experimental conditions.

pattern = 'sample_audvis-no-filter-ave.fif'
report = mne.Report(verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_evoked.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Iterating over 1 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis-no-filter-ave.fif
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Multiple channel types selected, returning one figure per type.
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
Saving report to location /home/circleci/project/tutorials/intro/report_evoked.html
Rendering : Table of Contents
evoked
 ... sample_audvis-no-filter-ave.fif

You have probably noticed that the EEG recordings look particularly odd. This is because, by default, Report does not apply baseline correction before rendering evoked data. So if the dataset you wish to add to the report has not been baseline-corrected already, you can request baseline correction here. The MNE sample dataset we’re using in this example has not been baseline-corrected, so let’s do this now for the report!

To request baseline correction, pass a baseline argument to Report, which should be a tuple with the starting and ending time of the baseline period. For more details, see the documentation on apply_baseline. Here, we will apply baseline correction for a baseline period from the beginning of the time interval to time point zero.

baseline = (None, 0)
pattern = 'sample_audvis-no-filter-ave.fif'
report = mne.Report(baseline=baseline, verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_evoked_baseline.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Iterating over 1 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis-no-filter-ave.fif
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Multiple channel types selected, returning one figure per type.
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
Saving report to location /home/circleci/project/tutorials/intro/report_evoked_baseline.html
Rendering : Table of Contents
evoked
 ... sample_audvis-no-filter-ave.fif

To render whitened Evoked files with baseline correction, pass the baseline argument we just used, and add the noise covariance file. This will display ERP/ERF plots for both the original and whitened Evoked objects, but scalp topomaps only for the original.

cov_fname = os.path.join(path, 'MEG', 'sample', 'sample_audvis-cov.fif')
baseline = (None, 0)
report = mne.Report(cov_fname=cov_fname, baseline=baseline, verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_evoked_whitened.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
    366 x 366 full covariance (kind = 1) found.
    Read a total of 4 projection items:
        PCA-v1 (1 x 102) active
        PCA-v2 (1 x 102) active
        PCA-v3 (1 x 102) active
        Average EEG reference (1 x 60) active
Iterating over 1 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis-no-filter-ave.fif
Computing rank from covariance with rank=None
    Using tolerance 4.7e-14 (2.2e-16 eps * 59 dim * 3.6  max singular value)
    Estimated rank (eeg): 58
    EEG: rank 58 computed from 59 data channels with 1 projector
Computing rank from covariance with rank=None
    Using tolerance 1.8e-13 (2.2e-16 eps * 203 dim * 3.9  max singular value)
    Estimated rank (grad): 203
    GRAD: rank 203 computed from 203 data channels with 0 projectors
Computing rank from covariance with rank=None
    Using tolerance 2.5e-14 (2.2e-16 eps * 102 dim * 1.1  max singular value)
    Estimated rank (mag): 99
    MAG: rank 99 computed from 102 data channels with 3 projectors
    Created an SSP operator (subspace dimension = 4)
Computing rank from covariance with rank={'eeg': 58, 'grad': 203, 'mag': 99, 'meg': 302}
    Setting small MEG eigenvalues to zero (without PCA)
    Setting small EEG eigenvalues to zero (without PCA)
    Created the whitener using a noise covariance matrix with rank 360 (4 small eigenvalues omitted)
Computing rank from covariance with rank=None
    Using tolerance 4.7e-14 (2.2e-16 eps * 59 dim * 3.6  max singular value)
    Estimated rank (eeg): 58
    EEG: rank 58 computed from 59 data channels with 1 projector
Computing rank from covariance with rank=None
    Using tolerance 1.8e-13 (2.2e-16 eps * 203 dim * 3.9  max singular value)
    Estimated rank (grad): 203
    GRAD: rank 203 computed from 203 data channels with 0 projectors
Computing rank from covariance with rank=None
    Using tolerance 2.5e-14 (2.2e-16 eps * 102 dim * 1.1  max singular value)
    Estimated rank (mag): 99
    MAG: rank 99 computed from 102 data channels with 3 projectors
    Created an SSP operator (subspace dimension = 4)
Computing rank from covariance with rank={'eeg': 58, 'grad': 203, 'mag': 99, 'meg': 302}
    Setting small MEG eigenvalues to zero (without PCA)
    Setting small EEG eigenvalues to zero (without PCA)
    Created the whitener using a noise covariance matrix with rank 360 (4 small eigenvalues omitted)
Computing rank from covariance with rank=None
    Using tolerance 4.7e-14 (2.2e-16 eps * 59 dim * 3.6  max singular value)
    Estimated rank (eeg): 58
    EEG: rank 58 computed from 59 data channels with 1 projector
Computing rank from covariance with rank=None
    Using tolerance 1.8e-13 (2.2e-16 eps * 203 dim * 3.9  max singular value)
    Estimated rank (grad): 203
    GRAD: rank 203 computed from 203 data channels with 0 projectors
Computing rank from covariance with rank=None
    Using tolerance 2.5e-14 (2.2e-16 eps * 102 dim * 1.1  max singular value)
    Estimated rank (mag): 99
    MAG: rank 99 computed from 102 data channels with 3 projectors
    Created an SSP operator (subspace dimension = 4)
Computing rank from covariance with rank={'eeg': 58, 'grad': 203, 'mag': 99, 'meg': 302}
    Setting small MEG eigenvalues to zero (without PCA)
    Setting small EEG eigenvalues to zero (without PCA)
    Created the whitener using a noise covariance matrix with rank 360 (4 small eigenvalues omitted)
Computing rank from covariance with rank=None
    Using tolerance 4.7e-14 (2.2e-16 eps * 59 dim * 3.6  max singular value)
    Estimated rank (eeg): 58
    EEG: rank 58 computed from 59 data channels with 1 projector
Computing rank from covariance with rank=None
    Using tolerance 1.8e-13 (2.2e-16 eps * 203 dim * 3.9  max singular value)
    Estimated rank (grad): 203
    GRAD: rank 203 computed from 203 data channels with 0 projectors
Computing rank from covariance with rank=None
    Using tolerance 2.5e-14 (2.2e-16 eps * 102 dim * 1.1  max singular value)
    Estimated rank (mag): 99
    MAG: rank 99 computed from 102 data channels with 3 projectors
    Created an SSP operator (subspace dimension = 4)
Computing rank from covariance with rank={'eeg': 58, 'grad': 203, 'mag': 99, 'meg': 302}
    Setting small MEG eigenvalues to zero (without PCA)
    Setting small EEG eigenvalues to zero (without PCA)
    Created the whitener using a noise covariance matrix with rank 360 (4 small eigenvalues omitted)
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | PCA-v1, active : True, n_channels : 102>
Removing projector <Projection | PCA-v2, active : True, n_channels : 102>
Removing projector <Projection | PCA-v3, active : True, n_channels : 102>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Removing projector <Projection | Average EEG reference, active : True, n_channels : 60>
Multiple channel types selected, returning one figure per type.
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
combining channels using "gfp"
Saving report to location /home/circleci/project/tutorials/intro/report_evoked_whitened.html
Rendering : Table of Contents
evoked
 ... sample_audvis-no-filter-ave.fif (whitened)
 ... sample_audvis-no-filter-ave.fif

If you want to actually view the noise covariance in the report, make sure it is captured by the pattern passed to parse_folder(), and also include a source for an Info object (any of the Raw, Epochs or Evoked .fif files that contain subject data also contain the measurement information and should work):

pattern = 'sample_audvis-cov.fif'
info_fname = os.path.join(path, 'MEG', 'sample', 'sample_audvis-ave.fif')
report = mne.Report(info_fname=info_fname, verbose=True)
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_cov.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Iterating over 1 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis-cov.fif
    366 x 366 full covariance (kind = 1) found.
    Read a total of 4 projection items:
        PCA-v1 (1 x 102) active
        PCA-v2 (1 x 102) active
        PCA-v3 (1 x 102) active
        Average EEG reference (1 x 60) active
Computing rank from covariance with rank=None
    Using tolerance 2.5e-14 (2.2e-16 eps * 102 dim * 1.1  max singular value)
    Estimated rank (mag): 102
    MAG: rank 102 computed from 102 data channels with 0 projectors
Computing rank from covariance with rank=None
    Using tolerance 1.8e-13 (2.2e-16 eps * 203 dim * 3.9  max singular value)
    Estimated rank (grad): 203
    GRAD: rank 203 computed from 203 data channels with 0 projectors
Computing rank from covariance with rank=None
    Using tolerance 4.7e-14 (2.2e-16 eps * 59 dim * 3.6  max singular value)
    Estimated rank (eeg): 59
    EEG: rank 59 computed from 59 data channels with 0 projectors
Saving report to location /home/circleci/project/tutorials/intro/report_cov.html
Rendering : Table of Contents
covariance
 ... sample_audvis-cov.fif

Adding custom plots to a report

The Python interface has greater flexibility compared to the command line interface. For example, custom plots can be added via the add_figs_to_section() method:

report = mne.Report(verbose=True)

fname_raw = os.path.join(path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname_raw, verbose=False).crop(tmax=60)
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
            'visual/right': 4, 'face': 5, 'buttonpress': 32}

# create some epochs and ensure we drop a few, so we can then plot the drop log
reject = dict(eeg=150e-6)
epochs = mne.Epochs(raw=raw, events=events, event_id=event_id,
                    tmin=-0.2, tmax=0.7, reject=reject, preload=True)
fig_drop_log = epochs.plot_drop_log(subject='sample', show=False)

# now also plot an evoked response
evoked_aud_left = epochs['auditory/left'].average()
fig_evoked = evoked_aud_left.plot(spatial_colors=True, show=False)

# add the custom plots to the report:
report.add_figs_to_section([fig_drop_log, fig_evoked],
                           captions=['Dropped Epochs',
                                     'Evoked: Left Auditory'],
                           section='drop-and-evoked')
report.save('report_custom.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
86 events found
Event IDs: [ 1  2  3  4  5 32]
Not setting metadata
Not setting metadata
86 matching events found
Setting baseline interval to [-0.19979521315838786, 0.0] sec
Applying baseline correction (mode: mean)
Created an SSP operator (subspace dimension = 3)
3 projection items activated
Loading data for 86 events and 541 original time points ...
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 007']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006', 'EEG 007', 'EEG 008', 'EEG 009', 'EEG 010', 'EEG 011', 'EEG 012', 'EEG 013', 'EEG 014', 'EEG 015', 'EEG 016', 'EEG 019', 'EEG 022', 'EEG 023', 'EEG 034']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006', 'EEG 007', 'EEG 008', 'EEG 009', 'EEG 010', 'EEG 011', 'EEG 012', 'EEG 013', 'EEG 014', 'EEG 015', 'EEG 016', 'EEG 019', 'EEG 022', 'EEG 023']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006', 'EEG 007', 'EEG 008', 'EEG 009', 'EEG 015']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006', 'EEG 007', 'EEG 008', 'EEG 010', 'EEG 012', 'EEG 013', 'EEG 014', 'EEG 015', 'EEG 016', 'EEG 023']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 003', 'EEG 007']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 003', 'EEG 007']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 005', 'EEG 006', 'EEG 007']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 005', 'EEG 006', 'EEG 007']
    Rejecting  epoch based on EEG : ['EEG 007']
    Rejecting  epoch based on EEG : ['EEG 007']
    Rejecting  epoch based on EEG : ['EEG 007']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 007']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 007']
    Rejecting  epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 007']
16 bad epochs dropped
Saving report to location /home/circleci/project/tutorials/intro/report_custom.html
Rendering : Table of Contents
drop-and-evoked
 ... Dropped Epochs
 ... Evoked: Left Auditory

Adding a slider

Sliders provide an intuitive way for users to interactively browse a predefined set of images. You can add sliders via add_slider_to_section():

report = mne.Report(verbose=True)

figs = list()
times = evoked_aud_left.times[::30]
for t in times:
    figs.append(evoked_aud_left.plot_topomap(t, vmin=-300, vmax=300, res=100,
                show=False))
    plt.close(figs[-1])
report.add_slider_to_section(figs, times, 'Evoked Response',
                             image_format='png')  # can also use 'svg'

report.save('report_slider.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Saving report to location /home/circleci/project/tutorials/intro/report_slider.html
Rendering : Table of Contents
Evoked Response
 ... Slider

Adding coregistration plot to a report

Now we see how Report can plot coregistration results. This is very useful for checking the quality of the trans coregistration file, which is used to align the anatomy and the MEG sensors.

report = mne.Report(info_fname=info_fname, subject='sample',
                    subjects_dir=subjects_dir, verbose=True)
pattern = "sample_audvis_raw-trans.fif"
report.parse_folder(path, pattern=pattern, render_bem=False)
report.save('report_coreg.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Iterating over 1 potential files (this may take some time)
Rendering : /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_raw-trans.fif
Using lh.seghead for head surface.
Getting helmet for system 306m
Using surface from /home/circleci/mne_data/MNE-sample-data/subjects/sample/bem/sample-head.fif.
Saving report to location /home/circleci/project/tutorials/intro/report_coreg.html
Rendering : Table of Contents
trans
 ... sample_audvis_raw-trans.fif

Adding SourceEstimate (STC) plot to a report

Now we see how Report handles SourceEstimate data. The following code will produce an STC plot with vertex time courses. In this scenario, we also demonstrate how to use the mne.viz.Brain.screenshot() method to save the figures for display in a slider.

report = mne.Report(verbose=True)
fname_stc = os.path.join(path, 'MEG', 'sample', 'sample_audvis-meg')
stc = mne.read_source_estimate(fname_stc, subject='sample')
figs = list()
kwargs = dict(subjects_dir=subjects_dir, initial_time=0.13,
              clim=dict(kind='value', lims=[3, 6, 9]))
for hemi in ('lh', 'rh'):
    brain = stc.plot(hemi=hemi, **kwargs)
    brain.toggle_interface(False)
    figs.append(brain.screenshot(time_viewer=True))
    brain.close()

# add the stc plot to the report:
report.add_slider_to_section(figs)

report.save('report_stc.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Saving report to location /home/circleci/project/tutorials/intro/report_stc.html
Rendering : Table of Contents
custom
 ... Slider

Managing report sections

The MNE report command internally manages the sections so that plots belonging to the same section are rendered consecutively. Within a section, the plots are ordered in the same order that they were added using the add_figs_to_section() command. Each section is identified by a toggle button in the top navigation bar of the report which can be used to show or hide the contents of the section. To toggle the show/hide state of all sections in the HTML report, press t, or press the toggle-all button in the upper right.
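
As a minimal sketch (using two throwaway matplotlib figures, not part of the tutorial data), figures passed to add_figs_to_section() with the same section argument are grouped under a single toggle button and rendered in the order they were added:

import matplotlib.pyplot as plt
import mne

report = mne.Report(verbose=True)

# two simple figures, purely for illustration
fig1, ax1 = plt.subplots()
ax1.plot([0, 1], [0, 1])
fig2, ax2 = plt.subplots()
ax2.plot([0, 1], [1, 0])

# both figures share the section 'demo', so they are rendered consecutively
report.add_figs_to_section([fig1, fig2],
                           captions=['First figure', 'Second figure'],
                           section='demo')
report.save('report_sections.html', overwrite=True)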

Editing a saved report

Saving to HTML is a write-only operation, meaning that we cannot read an .html file back as a Report object. In order to be able to edit a report once it’s no longer in memory in an active Python session, save it as an HDF5 file instead of HTML:

report.save('report.h5', overwrite=True)
report_from_disk = mne.open_report('report.h5')
print(report_from_disk)

Out:

Saving report to location /home/circleci/project/tutorials/intro/report.h5
Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
<Report | 1 items
 ... Slider
>

This makes it possible for multiple scripts to add figures to the same report. To make this even easier, mne.Report can be used as a context manager:

with mne.open_report('report.h5') as report:
    report.add_figs_to_section(fig_evoked,
                               captions='Left Auditory',
                               section='evoked',
                               replace=True)
    report.save('report_final.html', overwrite=True)

Out:

Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Saving report to location /home/circleci/project/tutorials/intro/report_final.html
Rendering : Table of Contents
custom
 ... Slider
evoked
 ... Left Auditory
Saving report to location /home/circleci/project/tutorials/intro/report.h5

With the context manager, the updated report is also automatically saved back to report.h5 upon leaving the block.
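
To convince yourself of this (a quick sketch reusing the report.h5 file from above), you can reopen the HDF5 file and check that the figure added inside the with block is now listed:

report_check = mne.open_report('report.h5')
print(report_check)  # the 'Left Auditory' figure added above should be listed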
