Data and Code Sharing

Philosophy

The Bioelectronics Neurophysiology and Engineering Lab is committed to sharing data and code in support of reproducible research.

Projects

Sleep-related Projects

Sleep Staging

An automated, unsupervised classifier that scores iEEG data into AWAKE, N2, and N3 sleep stages.

A full description of how to use the classifier is in the help section of the MATLAB m-file. Also available are features extracted from day-night recordings of two patients; these matrices can be used as input and provide an example of how to use the classifier.

Kremen V, Brinkmann BH, Van Gompel JJ, Stead SMM, St Louis EK, Worrell GA. Automated unsupervised behavioral state classification using intracranial electrophysiology. Journal of Neural Engineering. 2019; https://doi.org/10.1088/1741-2552/aae5ab.

Objective

Automated behavioral state classification in intracranial EEG (iEEG) recordings may be beneficial for iEEG interpretation and for quantifying sleep patterns to enable behavioral-state-dependent neuromodulation therapy in next-generation implantable brain stimulation devices. Here, we introduce a fully automated unsupervised framework to differentiate between awake (AW), sleep (N2) and slow-wave sleep (N3) using iEEG alone, validated against expert-scored polysomnography.

Approach

Data from eight patients undergoing evaluation for epilepsy surgery (age 40 ± 11 years, 3 female) with intracranial depth electrodes for iEEG monitoring were included. Spectral power features (0.1–235 Hz) spanning several frequency bands from a single electrode were used to classify the behavioral states of patients into AW, N2 and N3.
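
The approach can be sketched in Python; the published classifier itself is a MATLAB implementation, and everything below (the synthetic signals, band edges, and the Gaussian-mixture clustering step) is an illustrative stand-in rather than the paper's exact method:

```python
import numpy as np
from scipy.signal import welch
from sklearn.mixture import GaussianMixture

def band_power_features(ieeg, fs, bands):
    """Log spectral power per band for consecutive 30 s epochs of one channel."""
    epoch_len = 30 * fs
    n_epochs = len(ieeg) // epoch_len
    feats = np.empty((n_epochs, len(bands)))
    for i in range(n_epochs):
        f, pxx = welch(ieeg[i * epoch_len:(i + 1) * epoch_len], fs=fs, nperseg=4 * fs)
        for j, (lo, hi) in enumerate(bands):
            feats[i, j] = np.log(pxx[(f >= lo) & (f < hi)].sum())
    return feats

# Synthetic single-channel recording: 10 epochs each of beta-dominated "wake",
# spindle-band "N2", and slow-wave "N3" activity
rng = np.random.default_rng(0)
fs = 250
t = np.arange(30 * fs) / fs
wake = [np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size) for _ in range(10)]
n2 = [2 * np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size) for _ in range(10)]
n3 = [5 * np.sin(2 * np.pi * 1.5 * t) + 0.5 * rng.standard_normal(t.size) for _ in range(10)]
X = band_power_features(np.concatenate(wake + n2 + n3), fs,
                        bands=[(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 55)])

# Unsupervised grouping of epochs into three putative behavioral states
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
```

Because the three synthetic states have clearly different spectral profiles, the mixture model recovers three distinct groups without any labels, which is the core idea of the unsupervised framework.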

Results

Overall classification accuracy of 94% (94% sensitivity, 93% specificity) was achieved across eight subjects using multiple spectral power features from a single electrode. Classification performance was significantly better for N3 sleep (accuracy 95%, sensitivity 95%, specificity 93%) than for N2 sleep (accuracy 87%, sensitivity 78%, specificity 96%).
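
For reference, per-state sensitivity and specificity in a multi-class setting are conventionally computed one-vs-rest from the confusion matrix; a minimal sketch with a made-up 3-state matrix (not the paper's data):

```python
import numpy as np

def class_sens_spec(conf):
    """One-vs-rest sensitivity and specificity per class from a square
    confusion matrix (rows = true class, columns = predicted class)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fn = conf.sum(axis=1) - tp
    fp = conf.sum(axis=0) - tp
    tn = conf.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# Toy 3-state (AW, N2, N3) confusion matrix, 100 epochs per true state
conf = [[90, 8, 2],
        [10, 78, 12],
        [1, 4, 95]]
sens, spec = class_sens_spec(conf)
```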

Significance

Automated, unsupervised and robust classification of behavioral states based on iEEG data is possible, and it is feasible to incorporate these algorithms into future implantable devices with limited computational power, memory and number of electrodes for brain monitoring and stimulation.

Continuous behavioral state tracking in ambulatory humans

Objective

Electrical deep brain stimulation (DBS) is an established treatment for patients with drug-resistant epilepsy. Sleep disorders are common in people with epilepsy, and DBS may further disturb normal sleep patterns and sleep quality. Novel implantable devices capable of DBS and streaming of continuous intracranial electroencephalography (iEEG) signals enable detailed assessments of therapy efficacy and tracking of sleep-related comorbidities. Here, we investigate the feasibility of automated sleep classification using continuous iEEG data recorded from the circuit of Papez in four patients with drug-resistant mesial temporal lobe epilepsy, using an investigational implantable sensing and stimulation device with electrodes implanted in the bilateral hippocampus (HPC) and anterior nucleus of the thalamus (ANT).

Approach

The iEEG recorded from HPC is used to classify sleep during concurrent DBS targeting ANT. Simultaneous polysomnography (PSG) and sensing from HPC were used to train, validate and test an automated classifier for a range of ANT DBS frequencies: no stimulation, 2 Hz, 7 Hz, and high frequency (>100 Hz).

Results

We show that it is possible to build a patient-specific automated sleep-staging classifier using power-in-band features extracted from a single HPC iEEG sensing channel. The patient-specific classifiers performed well under all thalamic DBS frequencies, with an average F1-score of 0.894, and provided viable classification into awake and the major sleep categories, rapid eye movement (REM) and non-REM. We retrospectively analyzed classification performance against gold-standard PSG annotations, and then prospectively deployed the classifier on chronic continuous iEEG data spanning multiple months to characterize sleep patterns in ambulatory patients living in their home environment.
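
The reported average F1-score aggregates per-stage performance; one common way to do this is the macro-averaged F1 over hypnogram labels, sketched here on a toy hypnogram (the paper's exact averaging scheme may differ):

```python
import numpy as np

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: one-vs-rest F1 per sleep stage, then the unweighted mean."""
    f1s = []
    for lab in labels:
        t = np.asarray(y_true) == lab
        p = np.asarray(y_pred) == lab
        tp = np.sum(t & p)
        prec = tp / max(p.sum(), 1)
        rec = tp / max(t.sum(), 1)
        f1s.append(0.0 if tp == 0 else 2 * prec * rec / (prec + rec))
    return float(np.mean(f1s))

# Toy hypnogram: awake (W), non-REM (NREM), REM
y_true = ["W", "W", "NREM", "NREM", "NREM", "REM", "REM", "W"]
y_pred = ["W", "W", "NREM", "NREM", "REM", "REM", "REM", "W"]
score = macro_f1(y_true, y_pred, ["W", "NREM", "REM"])
```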

Significance

The ability to continuously track behavioral state and fully characterize sleep should prove useful for optimizing DBS for epilepsy and associated sleep, cognitive and mood comorbidities.

Reference

Mivalt F, Kremen V, Sladky V, Balzekas I, Nejedly P, Gregg NM, Lundstrom BN, Lepkova K, Pridalova T, Brinkmann BH, et al. Electrical brain stimulation and continuous behavioral state tracking in ambulatory humans. Journal of Neural Engineering. 2022; https://doi.org/10.1088/1741-2552/ac4bfd.

Code available at:

https://github.com/mselair/best_toolbox

Artificial Intelligence

Varatharajah, et al. Integrating artificial intelligence with real-time intracranial EEG monitoring to automate interictal identification of seizure onset zones in focal epilepsy. Journal of Neural Engineering. 2018; https://doi.org/10.1088/1741-2552/aab385.

This open-source software performs interictal SOZ classification using iEEG data. A typical workflow comprises:

  • Feature extraction
  • Feature preprocessing
  • Classification
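
As a rough illustration of that workflow (not the software's actual interface; the feature matrix here is synthetic and the classifier choice is arbitrary), the three stages can be chained in a few lines:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic per-electrode interictal feature matrix; the real software's
# feature set (described in the paper) is stood in for by random numbers
rng = np.random.default_rng(1)
X_soz = rng.normal(loc=2.0, size=(15, 4))   # electrodes inside the SOZ
X_non = rng.normal(loc=0.0, size=(45, 4))   # electrodes outside the SOZ
X = np.vstack([X_soz, X_non])
y = np.array([1] * 15 + [0] * 45)           # 1 = seizure onset zone

# Feature preprocessing and classification chained as one pipeline
clf = Pipeline([("scale", StandardScaler()),
                ("logreg", LogisticRegression(max_iter=1000))])
clf.fit(X, y)
soz_prob = clf.predict_proba(X)[:, 1]       # per-electrode SOZ probability
```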

See Instructions file to get started.

High-frequency oscillations

High-frequency oscillations (HFOs) are brief discrete events seen in EEG that are promising biomarkers of both epileptic neural tissue and cognitive processing.
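
A minimal amplitude-threshold HFO detector (band-pass filter, Hilbert envelope, duration criterion) can be sketched as follows; the band edges and thresholds are illustrative defaults, not those of the CS algorithm or the other cited detectors:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_hfos(x, fs, band=(80, 250), thresh_sd=3, min_cycles=4):
    """Band-pass, take the Hilbert envelope, and keep supra-threshold runs
    lasting at least `min_cycles` cycles of the band's low cutoff."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    above = env > env.mean() + thresh_sd * env.std()
    min_len = int(min_cycles * fs / band[0])
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    return events

# Synthetic channel: background noise with one 140 Hz burst injected at 1.0 s
rng = np.random.default_rng(0)
fs = 1000
x = rng.standard_normal(2 * fs)
t = np.arange(int(0.1 * fs)) / fs
x[1000:1000 + t.size] += 6 * np.sin(2 * np.pi * 140 * t)
events = detect_hfos(x, fs)
```

Real detectors add artifact rejection and per-channel baseline normalization; this sketch only shows the thresholding core.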

Matsumoto A, et al. Physiological and pathological high frequency oscillations in focal human epilepsy. Journal of Neurophysiology. 2013; https://doi.org/10.1152/jn.00341.2013.

Guragain, et al. Spatial variation in high-frequency oscillation rates and amplitudes in intracranial EEG. Neurology. 2018; https://doi.org/10.1212/WNL.0000000000004998.

Cimbalnik, et al. The CS algorithm: A novel method for high frequency oscillation detection in EEG. Journal of Neuroscience Methods. 2018; https://doi.org/10.1016/j.jneumeth.2017.08.023.

References: Worrell GA 2008 (PDF), Kucewicz MT 2014 (PDF), Stead 2016 (PDF)

Multiscale Electrophysiology Format

The Multiscale Electrophysiology File (MEF) Format version 3.0 is an open-source file format for storing electrophysiology and other time-series data employing data compression and encryption. The PDF file below gives a comprehensive description of the file format.

Source code libraries

The library provides functions that handle MEF header operations, data compression and decompression, encryption, CRC checksum calculation, and byte-order adjustments. Also included in this distribution are sample programs for decimating and filtering MEF 3.0 data, converting EDF format data to MEF 3.0, and reading MEF 3.0 files.

Example program source code

The example programs below illustrate the use of the MEF library functions:

  • Decimate MEF3. This script takes an MEF 3 channel and decimates (downsamples) to a lower sampling rate. It uses the FFTW library and a predefined filter file.

    decimate_mef3.zip

  • EDF to MEF 3 converter. This script takes an EDF file as input and converts it to an MEF3 channel. Level 1 and level 2 passwords can be specified for the MEF 3 output channel.

    edf2mef3.zip

  • Filter MEF 3. This script filters an MEF3 channel based on high- and low-frequency cutoffs. It uses the FFTW library and a predefined filter file.

    filter_mef3.zip

  • MEF 3 to Raw. This script converts an MEF3 channel to an output file of 4-byte signed integer samples, readable in MATLAB.

    mef3_to_raw.zip

  • Test Read MEF 3 Session. This script calls the read_mef_session() routine of the library, which outputs basic information about the MEF 3 session as well as the contained channels.

    test_read_MEF_session.zip

  • Test MEF 3 Records. This script creates a simple records file with dummy records, and then reads the file and displays the records.

    testrecs.zip
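
Two of the operations above (writing and reading raw 4-byte signed integer samples, and decimating to a lower sampling rate) can be sketched generically in Python; this stand-in does not read MEF files or reproduce the C programs' predefined filter files:

```python
import numpy as np
from scipy.signal import decimate

# Write and read back raw 4-byte signed little-endian integer samples,
# the output format produced by "MEF 3 to Raw"
samples = (1000 * np.sin(2 * np.pi * 5 * np.arange(5000) / 5000)).astype("<i4")
samples.tofile("raw_channel.dat")
raw = np.fromfile("raw_channel.dat", dtype="<i4")

# Decimate from 5000 Hz to 250 Hz (factor 20) using scipy's built-in
# anti-aliasing FIR filter, analogous to what "Decimate MEF3" does
decimated = decimate(raw.astype(float), 20, ftype="fir")
```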

Example data

This distribution contains four EEG time-series data channels in MEF 3.0 format. The files are anonymized and encrypted with technical (level 1) and subject (level 2) encryption; the level 1 password is password1 and the level 2 password is password2. The dataset is a two-hour segment sampled at 256 Hz.

References: Brinkmann, et al. 2009, Stead 2016 (PDF)

Seizure Forecasting

Accurate seizure forecasting could transform epilepsy care, allowing patients to modify activities to avoid risk or to take an additional antiepileptic drug (AED) to stop seizures before they develop. Our laboratory is engaged in efforts to develop and validate robust algorithms for seizure forecasting.

Kaggle.com seizure prediction competition

Seizure detection and forecasting competitions were run on Kaggle.com using open access chronic ambulatory intracranial EEG (iEEG) from five canines with naturally occurring epilepsy and two humans undergoing prolonged wide-bandwidth iEEG monitoring. The competitions were sponsored by the National Institutes of Health, the American Epilepsy Society and the Epilepsy Foundation.

The seizure detection contest ran from May to August 2014. Ambulatory iEEG data clips one second in duration were provided from seizures and from interictal, seizure-free epochs. Data clips were extracted from chronic ambulatory recordings from four canines with naturally occurring epilepsy, and eight patients undergoing intracranial monitoring as part of their pre-surgical evaluation for epilepsy. Contestants provided two classifications for the clips: seizure vs. interictal, and early seizure vs. interictal or late seizure. Submissions were ranked based on the area under the ROC curve.
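
Submissions were ranked by area under the ROC curve, which equals the probability that a randomly chosen seizure clip is scored above a randomly chosen interictal clip (the Mann-Whitney statistic); a minimal sketch with made-up scores:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the fraction of (positive, negative)
    pairs where the positive clip outscores the negative one, ties counting half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float(((pos > neg).sum() + 0.5 * (pos == neg).sum()) / (pos.size * neg.size))

seizure_scores = [0.9, 0.8, 0.7, 0.4]
interictal_scores = [0.5, 0.3, 0.2, 0.1]
auc = roc_auc(seizure_scores, interictal_scores)
```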

The seizure forecasting contest ran from August to November 2014. Data were provided to participants as 10-minute interictal and preictal clips, with approximately half of the 60 GB data bundle labeled (interictal/preictal) for algorithm training and half unlabeled for evaluation. In total, 654 participants submitted 17,856 classifications of the unlabeled test data. Contestants developed custom algorithms and uploaded their classifications (interictal/preictal) for the unknown testing data; a randomly selected 40% of data segments were scored and the results posted on a public leaderboard. After the competition ended, the top-placing contestants were invited to run their algorithms on unlabeled held-out data from four of the canines in order to assess the robustness and broader applicability of these algorithms.

Detection Contest
Prediction Contest

References: Brinkmann 2016