A Knowledge Distillation Framework for Enhancing Ear-EEG Based Sleep Staging with Scalp-EEG Data (Conference Paper)

Anandakumar, M., Pradeepkumar, J., Kappel, S.L. et al. (2023). A Knowledge Distillation Framework for Enhancing Ear-EEG Based Sleep Staging with Scalp-EEG Data. 514-519. doi:10.1109/SMC53992.2023.10394011

cited authors

  • Anandakumar, M.; Pradeepkumar, J.; Kappel, S.L.; Edussooriya, C.U.S.; De Silva, A.C.

abstract

  • Sleep plays a crucial role in the well-being of human lives. Traditional sleep studies using Polysomnography are associated with discomfort and often lower sleep quality caused by the acquisition setup. Previous works have focused on developing less obtrusive methods to conduct high-quality sleep studies, and ear-EEG is among popular alternatives. However, the performance of sleep staging based on ear-EEG is still inferior to scalp-EEG based sleep staging. In order to address the performance gap between scalp-EEG and ear-EEG based sleep staging, we propose a cross-modal knowledge distillation strategy (code available at https://github.com/Mithunjha/EarEEG-KnowledgeDistillation), which is a domain adaptation approach. We employ model architectures from the transformer and convolutional neural network families to demonstrate the model-agnostic nature of the method. Our experiments and analysis validate the effectiveness of the proposed approach with existing architectures, where it enhances the accuracy of the ear-EEG based sleep staging by 3.46% and Cohen's kappa coefficient by a margin of 0.038. Furthermore, our findings indicate that our approach is not limited to a specific model architecture and can be applied to a wide range of deep learning models.
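The abstract describes a cross-modal knowledge distillation setup in which a scalp-EEG "teacher" model guides an ear-EEG "student" model. The paper's exact loss is not given here, so the following is only a minimal sketch of the standard distillation objective such a framework typically builds on: a temperature-softened KL term between teacher and student predictions combined with ordinary cross-entropy on the sleep-stage labels. All function names, the temperature `T`, and the mixing weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hypothetical combined loss for cross-modal distillation.

    student_logits: ear-EEG model outputs, shape (batch, n_stages)
    teacher_logits: scalp-EEG model outputs, same shape
    labels: integer sleep-stage labels, shape (batch,)
    """
    eps = 1e-12
    # Soft-target term: KL(teacher || student) at temperature T,
    # scaled by T^2 as in standard knowledge distillation.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = np.mean(np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=-1)) * T**2
    # Hard-target term: cross-entropy with the ground-truth stages.
    p = softmax(student_logits)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + eps))
    return alpha * kd + (1 - alpha) * ce

# Toy usage: 4 epochs of EEG scored into 5 sleep stages (W, N1, N2, N3, REM).
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 5))
teacher = rng.normal(size=(4, 5))
labels = np.array([0, 2, 2, 4])
loss = distillation_loss(student, teacher, labels)
```

In this formulation the student is trained on ear-EEG inputs while the frozen teacher, trained on scalp-EEG, supplies the soft targets; since the KL term vanishes when the two output distributions agree, the loss directly penalizes the performance gap the paper targets.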

publication date

  • January 1, 2023

Digital Object Identifier (DOI)

  • 10.1109/SMC53992.2023.10394011

International Standard Book Number (ISBN) 13

start page

  • 514

end page

  • 519