This page describes some of the major medium- to long-term goals for MNE-Python. These are goals that require substantial effort and/or API design considerations. Some of these may be suitable for Google Summer of Code projects, while others require more extensive work.
The current clustering statistics code has limited functionality. It should be reworked into a new ``cluster_based_statistic`` (or similarly named) function. The new API will likely be along the lines of:
cluster_stat(obs, design, *, alpha=0.05, cluster_alpha=0.05, ...)
The design specification will likely use Wilkinson notation to mirror ``patsy.dmatrices()`` (e.g., this is used by ``statsmodels.regression.linear_model.OLS``). Getting from the formula string to the design matrix could be done via Patsy or, more likely, Formulaic.
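As a concrete illustration, the design matrix that a formula library like Patsy or Formulaic would build from a Wilkinson formula such as ``~ 1 + condition`` can be sketched with plain NumPy (the hand-built coding below is illustrative only, not the planned API):

```python
import numpy as np

# Sketch of the design matrix a formula library would produce for
# "~ 1 + condition" with 4 subjects x 2 conditions (treatment coding).
# Hand-built here for illustration; Patsy/Formulaic would infer this
# from the formula string and a data frame.
n_subjects = 4
condition = np.tile([0, 1], n_subjects)  # within-subject factor, coded 0/1
design = np.column_stack([np.ones_like(condition), condition])
print(design.shape)  # (8, 2): intercept column + condition column
```

The proposed ``cluster_stat`` function would accept such a matrix (or the formula that generates it) via its ``design`` argument.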
This generic API will support mixed within- and between-subjects designs, different statistical functions/tests, etc. This should be achievable without introducing any significant speed penalty (e.g., < 10% slower) compared to the existing more specialized/limited functions, since most computation cost is in clustering rather than statistical testing.
The clustering function will return a user-friendly ``ClusterStat`` object or similar that retains information about dimensionality, significance, etc., and facilitates plotting and interpretation of results.
Clear tutorials will be needed to:

- Show how different contrasts can be done (toy data).
- Show some common analyses on real data (time-frequency, sensor space, source space, etc.)
Regression tests will be written to ensure equivalent outputs when compared to FieldTrip for cases that FieldTrip also supports.
More details are in #4859.
LSL has become the de facto standard for streaming data from EEG/MEG systems. We should deprecate MNE-Realtime in favor of the newly minted MNE-LSL. We should then fully support MNE-LSL using modern coding best practices such as CI integration.
Core components of commonly used real-time processing pipelines should be implemented in MNE-LSL, including but not limited to realtime IIR filtering, artifact rejection, montage and reference setting, and online averaging. Integration with standard MNE-Python plotting routines (evoked joint plots, topomaps, etc.) should be supported with continuous updating.
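For instance, the online-averaging component mentioned above amounts to a running-mean update over incoming epochs. A minimal NumPy sketch (class and method names are purely illustrative, not the MNE-LSL API):

```python
import numpy as np

# Minimal sketch of online (incremental) averaging: each incoming epoch
# updates a running evoked estimate without storing all previous epochs.
# Names are illustrative, not the actual MNE-LSL interface.
class OnlineAverage:
    def __init__(self, n_channels, n_times):
        self.evoked = np.zeros((n_channels, n_times))
        self.n = 0

    def update(self, epoch):
        # running mean: mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n
        self.n += 1
        self.evoked += (epoch - self.evoked) / self.n
        return self.evoked

rng = np.random.default_rng(0)
epochs = [rng.standard_normal((2, 5)) for _ in range(10)]
avg = OnlineAverage(n_channels=2, n_times=5)
for epoch in epochs:
    avg.update(epoch)
```

The incremental form keeps memory usage constant regardless of how long the stream runs, which matters for real-time use.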
MNE-Python is committed to recruiting and retaining a diverse pool of contributors, see #8221.
MNE-Python has support for reading some OPM data formats such as FIF and FIL/QuSpin. Support should be added for other manufacturers, and standard preprocessing routines should be added to deal with coregistration adjustment and OPM-specific artifacts. See for example #11275, #11276, #11579, #12179.
Existing source modeling and inverse routines are not explicitly designed to deal with deep sources. Advanced algorithms exist from MGH for enhancing deep source localization, and these should be implemented and vetted in MNE-Python. See #6784.
Our current codebase implements classes related to TFRs that remain incomplete. We should implement new classes from the ground up that can hold frequency data (``Spectrum``), cross-spectral data (``CrossSpectrum``), multitaper estimates, and time-varying estimates (``Spectrogram``). These should work for continuous, epoched, and averaged sensor data, as well as source-space brain data.
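As a rough illustration of what such a container could hold, here is a hypothetical ``Spectrum``-like class built on NumPy. The class name echoes the proposal above, but the constructor, attributes, and the simple periodogram estimator are invented for this sketch:

```python
import numpy as np

# Hypothetical sketch of a minimal Spectrum-style container: it pairs
# power estimates with the metadata (frequencies, channel names) needed
# for plotting and interpretation. Attribute names are illustrative.
class Spectrum:
    def __init__(self, data, sfreq, ch_names):
        n_times = data.shape[-1]
        self.ch_names = ch_names
        self.freqs = np.fft.rfftfreq(n_times, d=1.0 / sfreq)
        # naive periodogram PSD estimate along the time axis
        self.psd = np.abs(np.fft.rfft(data, axis=-1)) ** 2 / n_times

rng = np.random.default_rng(0)
spec = Spectrum(rng.standard_normal((2, 1000)), sfreq=250.0,
                ch_names=["EEG 001", "EEG 002"])
```

Epoched and source-space variants would carry the same frequency metadata but different leading dimensions (epochs, vertices), which is what makes a shared base class attractive.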
Historically we have used Mayavi for 3D visualization, but have faced limitations and challenges with it. We should work to use some other backend (e.g., PyVista) to get major improvements, such as:
- Proper notebook support (complete; initially through ``ipyvtklink``, since updated to use a newer backend)
- Better interactivity with surface plots (complete)
- Time-frequency plotting (complementary to volume-based time-frequency visualization)
- Integration of multiple functions as done in ``mne_analyze``, e.g., simultaneous source estimate viewing, field map viewing, head surface display, etc. These are all currently available in separate functions, but we should be able to combine them in a single plot as well.
The meta-issue for tracking to-do lists for surface plotting is #7162.
Our documentation has many minor issues, which can be found under the DOC label on GitHub.
iEEG-specific pipeline steps such as electrode localization and visualizations are now available in MNE-gui-addons.
Open EEG/MEG databases are now more easily accessible via standardized tools such as openneuro-py.
We had a GSoC student funded to improve support for eye-tracking data, see the GSoC proposal for details. An EyeLink data reader and analysis/plotting functions are now available.
MNE-Python provides automated analysis of BIDS-compliant datasets via MNE-BIDS-Pipeline. Functionality from the mnefun pipeline, which has been used extensively for pediatric data analysis at I-LABS, now provides better support for pediatric and clinical data processing. Multiple processing steps (e.g., eSSS), sanity checks (e.g., cHPI quality), and reporting (e.g., SSP joint plots, SNR plots) have been added.
OpenMEEG is a state-of-the-art solver for forward modeling in the field of brain imaging with MEG/EEG. It numerically solves partial differential equations (PDEs). It is written in C++ with Python bindings generated by SWIG. The ambition of the project is to integrate OpenMEEG into MNE, giving MNE the ability to solve more forward problems (cortical mapping, intracranial recordings, etc.). Tasks that have been completed:
- Clean up the Python bindings (remove useless functions, check memory management, etc.)
- Understand how MNE encodes info about sensors (location, orientation, integration points, etc.) and allow OpenMEEG to be used.
- Modernize CI systems.
- Automated deployment on PyPI and conda-forge.
We implemented a viewer for interactive visualization of volumetric source-time-frequency (5-D) maps on MRI slices (orthogonal 2D viewer). NutmegTrip (written by Sarang Dalal) provides similar functionality in MATLAB in conjunction with FieldTrip; a video demonstrating NutmegTrip's source-time-frequency mode is available on YouTube.
MNE-BIDS-Pipeline has been enhanced with support for cloud computing via Dask and joblib. After configuring Dask to use local or remote distributed computing resources, MNE-BIDS-Pipeline can readily make use of remote workers to parallelize processing across subjects.
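A sketch of what enabling this could look like in an MNE-BIDS-Pipeline configuration file. The option names below reflect the pipeline's parallelization settings as I understand them, but they are not guaranteed here; verify them against the documentation for your installed version:

```python
# Fragment of an MNE-BIDS-Pipeline config file (illustrative; option
# names should be checked against your installed pipeline version).
parallel_backend = "dask"   # dispatch joblib work to a Dask cluster
n_jobs = 4                  # number of parallel workers
```

With the default joblib backend instead, the same ``n_jobs`` setting parallelizes on the local machine only.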