This page describes the application programming interface (API) of MNE-NIRS. The functions and classes of the package are described below, and each entry links to the examples that use that code.

This library extends the fNIRS functionality available within MNE-Python. When analysing fNIRS data with these tools you are likely to use functions from both MNE-Python and MNE-NIRS, so documentation is provided below for the relevant functions and classes from both packages. General neuroimaging functionality provided by MNE-Python, such as filtering, epoching, and visualisation, is not included here; it can be found in the MNE-Python API page.



IO module for reading raw data.

read_raw_hitachi(fname[, preload, verbose])

Reader for a Hitachi fNIRS recording.

read_raw_nirx(fname[, saturated, preload, ...])

Reader for a NIRX fNIRS recording.

read_raw_snirf(fname[, optode_frame, ...])

Reader for continuous wave SNIRF data.

read_raw_boxy(fname[, preload, verbose])

Reader for an optical imaging recording.


write_raw_snirf(raw, fname[, add_montage])

Write continuous wave data to disk in SNIRF format.

read_snirf_aux_data(fname, raw)

Read auxiliary data from SNIRF file.

fold_landmark_specificity(raw, landmark[, ...])

Return the specificity of each channel to a specified brain landmark.

fold_channel_specificity(raw[, fold_files, ...])

Return the landmarks and specificity a channel is sensitive to.



NIRS-specific preprocessing functions.

optical_density(raw, *[, verbose])

Convert NIRS raw data to optical density.

beer_lambert_law(raw[, ppf])

Convert NIRS optical density data to haemoglobin concentration.

source_detector_distances(info[, picks])

Determine the distances between NIRS sources and detectors.

short_channels(info[, threshold])

Determine which NIRS channels are short.

scalp_coupling_index(raw[, l_freq, h_freq, ...])

Calculate scalp coupling index.

temporal_derivative_distribution_repair(raw, *)

Apply temporal derivative distribution repair to data.


Signal Enhancement


enhance_negative_correlation(raw)

Apply negative correlation enhancement algorithm.

short_channel_regression(raw[, max_dist])

Systemic correction regression based on nearest short channel.

Data quality evaluation.

peak_power(raw[, time_window, threshold, ...])

Compute peak spectral power metric for each channel and time window.

scalp_coupling_index_windowed(raw[, ...])

Compute scalp coupling index for each channel and time window.

quantify_mayer_fooof(raw[, ...])

Quantify Mayer wave properties using FOOOF analysis.

Experimental Design

make_first_level_design_matrix(raw[, ...])

Generate a design matrix based on annotations and model HRF.

create_boxcar(raw[, event_id, stim_dur])

Generate boxcar representation of the experimental paradigm.


longest_inter_annotation_interval(raw)

Compute longest ISI per annotation.


drift_high_pass(raw)

Compute the cosine drift regressor high-pass cut-off.


First level analysis

Individual (first) level analysis functions.

run_glm(raw, design_matrix[, noise_model, ...])

GLM fit for an MNE structure containing fNIRS data.

Individual (first) level result classes.

RegressionResults(info, data, design)

Class containing GLM regression results.

ContrastResults(info, data, design)

Class containing GLM contrast results.


read_glm(fname)

Read GLM results from disk.

Individual (first) level result class methods. View the class documentation above for a detailed list of methods.


RegressionResults.compute_contrast(l_contrast[, ...])

Compute contrasts on regression results.


RegressionResults.to_dataframe([order])

Return a tidy dataframe representing the GLM results.


RegressionResults.to_dataframe_region_of_interest(group_by, condition[, ...])

Region of interest results as a dataframe.

RegressionResults.scatter([conditions, ...])

Scatter plot of the GLM results.

RegressionResults.plot_topo([conditions, ...])

Plot 2D topography of GLM data.


RegressionResults.surface_projection([condition, ...])

Project GLM results on to the surface of the brain.

RegressionResults.save(fname[, overwrite])

Save GLM results to disk.


ContrastResults.to_dataframe()

Return a tidy dataframe representing the GLM results.

ContrastResults.plot_topo([figsize, sphere])

Plot topomap of GLM contrast data.

ContrastResults.scatter([conditions, ...])

Scatter plot of the GLM results.

ContrastResults.save(fname[, overwrite])

Save GLM results to disk.

Second level analysis

Group (second) level analysis functions.

statsmodels_to_results(model[, order])

Convert statsmodels summary to a dataframe.


fNIRS-specific data visualisation.

plot_3d_montage(info, view_map, *[, ...])

Plot a 3D sensor montage.

plot_nirs_source_detector(data[, info, ...])

3D visualisation of fNIRS response magnitude.

GLM result visualisation.

plot_glm_group_topo(inst, statsmodel_df[, ...])

Plot topomap of NIRS group level GLM results.

plot_glm_surface_projection(inst, statsmodel_df)

Project GLM results on to the surface of the brain.

Data quality visualisation.

plot_timechannel_quality_metric(raw, scores, ...)

Plot time-by-channel quality metrics.


simulate_nirs_raw([sfreq, amplitude, ...])

Create simulated fNIRS data.


Functions to help with handling channel information.


list_sources(raw)

List all the sources in the fNIRS montage.


list_detectors(raw)

List all the detectors in the fNIRS montage.

drop_sources(raw, sources)

Drop sources.

drop_detectors(raw, detectors)

Drop detectors.

pick_sources(raw, sources)

Pick sources.

pick_detectors(raw, detectors)

Pick detectors.

get_short_channels(raw[, max_dist])

Return channels with a short source-detector separation.

get_long_channels(raw[, min_dist, max_dist])

Return channels with a long source-detector separation.

picks_pair_to_idx(raw, sd_pairs[, on_missing])

Return a list of picks for specified source detector pairs.


The following datasets are accessible using MNE-NIRS. The software will automatically download and extract the data, then provide the path to the data.

fNIRS dataset with finger tapping task

data_path([path, force_update, update_path, ...])

Motor task experiment data with 5 participants.

fNIRS dataset with auditory speech and noise

data_path([path, force_update, update_path, ...])

Audio speech and noise dataset with 18 participants.

fNIRS dataset with audio or visual speech

data_path([path, force_update, update_path, ...])

Audio and visual speech dataset with 8 participants.