Publication Details
Issue: Vol 3, No 4 (2026)
ISSN: 2997-9382

Abstract

Facial emotion recognition (FER) remains algorithmically challenging due to variations in facial expression, occlusion, lighting, and class imbalance. This paper presents AMACNN, an attention-enhanced deep learning (DL) model designed for emotion pattern mining from static facial images. The proposed model integrates multi-scale convolutional streams with spatial attention to extract both local and global features and to focus on emotion-relevant facial regions. To counter data imbalance, we employ a hybrid loss function that combines weighted cross-entropy with focal loss, making the model more sensitive to under-represented classes. Evaluated on three large-scale facial emotion databases (more than 100,000 images), AMACNN achieves a peak accuracy of 95.4% with balanced precision, recall, and F1-scores across all emotion classes. These findings support the generalizability and robustness of the algorithm and make it a competitive FER solution for real-world applications in computer vision and affective computing.
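The hybrid loss described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the mixing coefficient `alpha` and the focal-loss focusing parameter `gamma` are assumptions, and how the two terms are combined in AMACNN is not specified in the abstract.

```python
import numpy as np

def hybrid_loss(probs, labels, class_weights, alpha=0.5, gamma=2.0):
    """Sketch of a weighted cross-entropy + focal loss blend.

    probs:         (N, C) predicted class probabilities (rows sum to 1)
    labels:        (N,) integer ground-truth class indices
    class_weights: (C,) per-class weights, larger for rarer classes
    alpha:         mixing coefficient between the two terms (assumed)
    gamma:         focal-loss focusing parameter (assumed)
    """
    eps = 1e-12
    # Probability assigned to the true class of each sample.
    p_t = probs[np.arange(len(labels)), labels]
    # Weighted cross-entropy: rare classes contribute more to the loss.
    wce = -class_weights[labels] * np.log(p_t + eps)
    # Focal loss: down-weights easy (high-confidence) examples.
    focal = -((1.0 - p_t) ** gamma) * np.log(p_t + eps)
    return float(np.mean(alpha * wce + (1.0 - alpha) * focal))
```

Raising the weight of a minority class increases that class's contribution to the gradient, while the focal term keeps well-classified majority examples from dominating training.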

Keywords
Deep Learning; Facial Emotion Recognition; Computer Vision; Emotion Analysis; Machine Learning; Data Mining; Adaptive Algorithms; Image Processing; Artificial Intelligence