Authors: Shahnawaz Qureshi, Seppo Karilla, Sirirut Vanichayobon
Abstract: This study presents a method in which a genetic algorithm designs, without manual intervention, the feature-learning architecture of a convolutional neural network, called GACNN SleepTuneNet, for classification of sleep stages from a single EEG channel. Two EEG electrode positions were selected, namely FP2-F4 and FPz-Cz, from two available datasets. The architecture for classifying sleep stages according to the AASM standard was learned over twenty-five generations, without hand-crafted features. Based on the results, our model not only achieved the highest classification accuracy but also distinguished the sleep stages from either of the two EEG electrode signals, in both datasets. The results show that our model performed best, with the highest overall accuracy and kappa statistic (CAP sleep: 95.61% and 0.94; Sleep-EDF: 92.51% and 0.90), among state-of-the-art methods that require no manual intervention. Our model can automatically learn the features for sleep stage classification from raw EEG at different electrode positions in different datasets, without user-assisted feature extraction.
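The abstract describes a genetic algorithm searching, over twenty-five generations, for the CNN architecture that learns features from raw single-channel EEG. The Python sketch below is an illustrative assumption only, not the authors' published implementation: the SEARCH_SPACE values, the selection/crossover/mutation operators, and the placeholder fitness function are hypothetical, and the fitness stand-in would be replaced by training the encoded CNN on EEG epochs and scoring validation accuracy.

```python
import random

# Hypothetical genome: each gene selects one CNN hyperparameter value.
# These ranges are assumptions for illustration, not the paper's search space.
SEARCH_SPACE = {
    "num_conv_layers": [2, 3, 4, 5],
    "filters":         [16, 32, 64, 128],
    "kernel_size":     [3, 5, 7, 11],
    "pool_size":       [2, 4],
    "dense_units":     [64, 128, 256],
}

def random_genome():
    """Sample one candidate architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(genome):
    """Stand-in for training the encoded CNN on single-channel EEG epochs
    and returning validation accuracy; a toy random score is used here."""
    return random.random()

def crossover(parent_a, parent_b):
    """Uniform crossover: each hyperparameter is inherited from either parent."""
    return {k: random.choice([parent_a[k], parent_b[k]]) for k in SEARCH_SPACE}

def mutate(genome, rate=0.1):
    """With probability `rate`, resample a hyperparameter from its range."""
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in genome.items()}

def evolve(pop_size=20, generations=25):
    """Evolve CNN architecture genomes; 25 generations mirrors the abstract."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print("Best candidate architecture:", evolve())
```

In such a scheme, the only per-dataset input is the raw EEG channel used for fitness evaluation, which is consistent with the abstract's claim that no user-assisted feature extraction is needed.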
Keywords: Without hand-crafted feature extraction, electroencephalogram (EEG), genetic algorithm, convolutional neural network