In recent years, emotion recognition from electroencephalogram (EEG) signals has attracted significant interest owing to the non-invasive nature and high temporal resolution of EEG. We introduce a method that bypasses traditional manual feature engineering: it emphasizes data preprocessing and exploits the topological relationships between channels to transform EEG signals from two-dimensional time sequences into three-dimensional spatio-temporal representations. By fully exploiting deep learning, the approach provides a data-driven and robust means of identifying emotional states. Combining a Convolutional Neural Network (CNN) with attention mechanisms enables automatic feature extraction and dynamic learning of inter-channel dependencies. In emotion recognition tasks, our method achieves average accuracies of 98.62% for arousal and 98.47% for valence, surpassing the previous state-of-the-art results of 95.76% and 95.15%, respectively. Furthermore, we conducted a series of additional experiments that broaden the scope of emotion recognition research and explore further possibilities in the field.
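To make the described pipeline concrete, the following is a minimal sketch, not the authors' released code, of the two ideas summarized above: mapping a two-dimensional (channels x time) EEG trial onto a three-dimensional (time x height x width) representation using the scalp topology of the electrodes, and feeding it to a small CNN with an attention block that weights inter-channel (spatial) dependencies. The 9x9 grid, the example electrode positions, the squeeze-and-excitation style attention, and all layer sizes are illustrative assumptions rather than the paper's exact configuration.

import numpy as np
import torch
import torch.nn as nn

# Hypothetical mapping of a few 10-20 electrodes to cells of a 9x9 scalp grid.
ELECTRODE_GRID = {"Fp1": (0, 3), "Fp2": (0, 5), "F3": (2, 2), "F4": (2, 6),
                  "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6),
                  "P3": (6, 2), "P4": (6, 6), "O1": (8, 3), "O2": (8, 5)}

def to_spatiotemporal(eeg, channel_names, grid_size=9):
    """Convert a (channels, time) EEG array into a (time, grid, grid) tensor."""
    frames = np.zeros((eeg.shape[1], grid_size, grid_size), dtype=np.float32)
    for ch, name in enumerate(channel_names):
        row, col = ELECTRODE_GRID[name]
        frames[:, row, col] = eeg[ch]          # place each channel at its scalp position
    return torch.from_numpy(frames)

class SpatialAttention(nn.Module):
    """Squeeze-and-excitation style attention over feature maps (one possible design choice)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):                      # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> per-channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)

class EmotionCNN(nn.Module):
    """CNN + attention over stacked spatio-temporal frames; two outputs, e.g. a binary arousal or valence label."""
    def __init__(self, time_steps, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(time_steps, 32, kernel_size=3, padding=1), nn.ReLU(),
            SpatialAttention(32),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)
    def forward(self, x):                      # x: (batch, time_steps, 9, 9)
        return self.classifier(self.features(x).flatten(1))

# Usage with random data standing in for one preprocessed 1-second trial.
eeg = np.random.randn(len(ELECTRODE_GRID), 128).astype(np.float32)   # 11 channels, 128 samples
frames = to_spatiotemporal(eeg, list(ELECTRODE_GRID.keys()))          # (128, 9, 9)
logits = EmotionCNN(time_steps=128)(frames.unsqueeze(0))              # (1, 2)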
Keywords: Attention Mechanisms; CNN; Data Preprocessing; EEG; Emotion Recognition.