Per-`trial_type` event counts for task `AudioCueWalkingStudy`:

| subject | run | AdvanceTempo | DelayTempo | PreferredCadence | UncuedWalking |
| --- | --- | --- | --- | --- | --- |
| 001 | 01 | 378 | 380 | 660 | 575 |
"""Mobile brain body imaging (MoBI) gait adaptation experiment.
See ds001971 on OpenNeuro: https://github.com/OpenNeuroDatasets/ds001971
"""
bids_root = "~/mne_data/ds001971"
deriv_root = "~/mne_data/derivatives/mne-bids-pipeline/ds001971"
task = "AudioCueWalkingStudy"
interactive = False
ch_types = ["eeg"]
reject = {"eeg": 150e-6}
conditions = ["AdvanceTempo", "DelayTempo"]
contrasts = [("AdvanceTempo", "DelayTempo")]
subjects = ["001"]
runs = ["01"]
epochs_decim = 5 # to 100 Hz
# This is mostly for testing purposes!
decode = True
decoding_time_generalization = True
decoding_time_generalization_decim = 2
decoding_csp = True
decoding_csp_freqs = {
"beta": [13, 20, 30],
}
decoding_csp_times = [-0.2, 0.0, 0.2, 0.4]
# Just to test that MD5 works
memory_file_method = "hash"
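A note on the `decoding_csp_freqs` entry above: the list is a set of bin *edges*, so each consecutive pair of values delimits one frequency band for CSP decoding. A minimal sketch of that convention (the helper `csp_freq_bins` is hypothetical, not part of the pipeline API):

```python
# Assumption: decoding_csp_freqs lists bin edges; consecutive pairs form bands.
decoding_csp_freqs = {
    "beta": [13, 20, 30],
}


def csp_freq_bins(edges):
    """Pair consecutive edges into (fmin, fmax) frequency bins."""
    return list(zip(edges[:-1], edges[1:]))


for name, edges in decoding_csp_freqs.items():
    print(name, csp_freq_bins(edges))  # → beta [(13, 20), (20, 30)]
```

Under this reading, the single `"beta"` entry yields two CSP decoding bands, 13–20 Hz and 20–30 Hz.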
```
Platform             Linux-5.15.0-1053-aws-x86_64-with-glibc2.35
Python               3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Executable           /home/circleci/python_env/bin/python3.10
CPU                  x86_64 (36 cores)
Memory               68.6 GB

Core
├☑ mne               1.7.0.dev156+g415e7f68e (devel, latest release is 1.6.1)
├☑ numpy             1.26.4 (OpenBLAS 0.3.23.dev with 2 threads)
├☑ scipy             1.12.0
└☑ matplotlib        3.8.3 (backend=agg)

Numerical (optional)
├☑ sklearn           1.4.1.post1
├☑ numba             0.59.1
├☑ nibabel           5.2.1
├☑ pandas            2.2.1
└☐ unavailable       nilearn, dipy, openmeeg, cupy

Visualization (optional)
├☑ pyvista           0.43.4 (OpenGL 4.5 (Core Profile) Mesa 23.2.1-1ubuntu3.1~22.04.2 via llvmpipe (LLVM 15.0.7, 256 bits))
├☑ pyvistaqt         0.11.0
├☑ vtk               9.3.0
├☑ qtpy              2.4.1 (PyQt6=6.6.0)
└☐ unavailable       ipympl, pyqtgraph, mne-qt-browser, ipywidgets, trame_client, trame_server, trame_vtk, trame_vuetify

Ecosystem (optional)
├☑ mne-bids          0.15.0.dev43+g17d20c132
├☑ mne-bids-pipeline 1.8.0
└☐ unavailable       mne-nirs, mne-features, mne-connectivity, mne-icalabel, neo
```