A Systematic Literature Review of Multimodal Emotion Recognition

Yeni Dwi Rahayu(1), Lutfi Ali Muharrom(2), Ika Safitri Windiarti(3), Auratania Hadisah Sugianto(4)


(1) Department of Informatics Engineering, Universitas Muhammadiyah Jember, Indonesia
(2) Department of Informatics Engineering, Universitas Muhammadiyah Jember, Indonesia
(3) Information Technology Study Programme, Universiti Muhammadiyah Malaysia, Malaysia
(4) Department of Informatics Engineering, Universitas Muhammadiyah Jember, Indonesia

Abstract

Purpose: This literature review aims to map Multimodal Emotion Recognition (MER) research in depth and breadth by analysing the topics, trends, modalities, and other supporting sources discussed in studies published between 2010 and 2022. After screening, a total of 14,533 articles were analysed to achieve this goal.

Methods: This research was conducted in three phases: Planning, Conducting, and Reporting. The first phase defined the research objectives by searching for systematic reviews on topics similar to this study and reviewing them to develop the research questions and the systematic review protocol. The second phase collected articles according to the pre-determined protocol, selected the retrieved articles, and analysed the filtered articles to answer the research questions. The final phase summarised the results of the analysis so that the new findings of this research could be reported.
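To make the screening step of the second phase concrete, the sketch below shows one minimal way such a filter could be implemented in Python. It is purely illustrative: the record fields, year range, and inclusion keywords are assumptions for the example, not the study's actual protocol.

```python
# Minimal sketch of an SLR screening step (illustrative; not the study's exact protocol).
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str
    year: int

def screen(records, keywords, year_range=(2010, 2022)):
    """Keep records inside the review period whose title or abstract
    mentions at least one inclusion keyword."""
    lo, hi = year_range
    kept = []
    for r in records:
        text = f"{r.title} {r.abstract}".lower()
        if lo <= r.year <= hi and any(k in text for k in keywords):
            kept.append(r)
    return kept

# Usage: filter a toy corpus with assumed inclusion keywords.
corpus = [
    Record("Multimodal emotion recognition from EEG", "...", 2020),
    Record("Crop yield forecasting with remote sensing", "...", 2019),
]
included = screen(corpus, keywords=["emotion recognition", "multimodal"])
print(len(included))  # -> 1
```

In an actual review, an automated keyword filter like this would only be a first pass, followed by manual inclusion/exclusion checks before the filtered articles are analysed against the research questions.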

Result: In general, the focus of MER research can be categorised into two issues: the background of the research object and the source, or modality, of emotion recognition. By object background, 55% of studies address emotion recognition in a health context, especially declining brain function; 34% are based on age, 10% on gender, 1% on the data-collection situation, and less than 1% on ethnicity and culture. By source of emotion recognition, research is divided into electromagnetic signals, voice signals, text, photo/video, and the development of wearable devices. Based on these results, at least seven scientific fields discuss MER research: health, psychology, electronics, linguistics, communication, socio-culture, and computer science.

Novelty: MER research has the potential to develop further. Many areas have received little attention, even as the ecosystem that uses MER has grown massively. Emotion recognition modalities are numerous and diverse, but research still focuses on validating the emotions captured by each modality rather than exploiting the strengths of each modality to improve the quality of recognition results.

Keywords

Emotion recognition; Modalities; Research topics






