Social:Emotion recognition

From HandWiki

Emotion recognition is the process of identifying human emotion, most typically from facial expressions but also from verbal expressions. Humans perform this recognition automatically; computational methodologies for the task have also been developed.

Human

Main page: Philosophy:Emotion perception

Humans show universal consistency in recognising emotions but also show a great deal of variability between individuals in their abilities. This has been a major topic of study in psychology.

Automatic

This process leverages techniques from multiple areas, such as signal processing, machine learning, and computer vision. Different methodologies and techniques may be employed to interpret emotion, such as Bayesian networks,[1] Gaussian mixture models,[2] and hidden Markov models.[3]
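One way to picture the generative classifiers cited above is a Gaussian-mixture approach: fit one mixture model per emotion on labeled feature vectors, then assign a new sample to the emotion whose mixture gives it the highest likelihood. The sketch below uses synthetic 2-D "acoustic" features and illustrative emotion labels; the data, clusters, and scikit-learn choice are all assumptions for demonstration, not a real system.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Pretend 2-D acoustic features (e.g. pitch, energy) sampled per emotion.
# Real systems would extract such features from annotated speech corpora.
train = {
    "happy": rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2)),
    "sad": rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(100, 2)),
}

# One Gaussian mixture per emotion, fit only on that emotion's samples.
models = {
    emotion: GaussianMixture(n_components=2, random_state=0).fit(feats)
    for emotion, feats in train.items()
}

def classify(sample):
    """Return the emotion whose mixture assigns the sample the highest log-likelihood."""
    scores = {e: m.score(sample.reshape(1, -1)) for e, m in models.items()}
    return max(scores, key=scores.get)

label = classify(np.array([1.8, 2.1]))  # a point near the "happy" cluster
```

The same likelihood-comparison structure underlies the GMM-UBM and HMM systems cited above, with the per-emotion model swapped for the appropriate sequence model.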

Approaches

The task of emotion recognition often involves the analysis of human expressions in multimodal forms such as texts, audio, or video.[4] Different emotion types are detected through the integration of information from facial expressions, body movement and gestures, and speech.[5] The existing approaches in emotion recognition to classify certain emotion types can be generally classified into three main categories: knowledge-based techniques, statistical methods, and hybrid approaches.[6]

Knowledge-based Techniques

Knowledge-based techniques (sometimes referred to as lexicon-based techniques) utilize domain knowledge and the semantic and syntactic characteristics of language in order to detect certain emotion types.[7] In this approach, it is common to use knowledge-based resources during the emotion classification process, such as WordNet, SenticNet,[8] ConceptNet, and EmotiNet,[9] to name a few.[10] One of the advantages of this approach is the accessibility and economy brought about by the large availability of such knowledge-based resources.[6] A limitation of this technique, on the other hand, is its inability to handle concept nuances and complex linguistic rules.[6]

Knowledge-based techniques can be mainly classified into two categories: dictionary-based and corpus-based approaches.[7] Dictionary-based approaches find opinion or emotion seed words in a dictionary and search for their synonyms and antonyms to expand the initial list of opinions or emotions.[11] Corpus-based approaches, on the other hand, start with a seed list of opinion or emotion words and expand the database by finding other words with context-specific characteristics in a large corpus.[11] While corpus-based approaches take context into account, their performance still varies across domains, since a word that carries one orientation in one domain can carry a different orientation in another.[12]
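The dictionary-based expansion step described above can be sketched in a few lines: start from seed words per emotion and grow the lexicon through a synonym dictionary. The tiny synonym map below is hypothetical, standing in for a real resource such as WordNet; a real system would also handle antonyms and multiple expansion hops.

```python
# Hypothetical stand-in for a dictionary resource such as WordNet.
SYNONYMS = {
    "happy": ["joyful", "glad", "cheerful"],
    "sad": ["unhappy", "sorrowful", "gloomy"],
    "angry": ["furious", "irate"],
}

def expand_lexicon(seeds):
    """Expand a seed word list with dictionary synonyms (single hop)."""
    lexicon = set(seeds)
    for word in seeds:
        lexicon.update(SYNONYMS.get(word, []))
    return lexicon

joy_lexicon = expand_lexicon(["happy"])
```

A corpus-based variant would instead grow the seed list by mining co-occurrence patterns from a large domain corpus, which is what gives it its context sensitivity and its domain dependence.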

Statistical Methods

Statistical methods commonly involve the use of different supervised machine learning algorithms in which a large set of annotated data is fed into the algorithms for the system to learn and predict the appropriate emotion types.[6] This approach normally involves two sets of data: the training set and the testing set, where the former is used to learn the attributes of the data, while the latter is used to validate the performance of the machine learning algorithm.[13] Machine learning algorithms generally provide more reasonable classification accuracy compared to other approaches, but one of the challenges in achieving good results in the classification process is the need for a sufficiently large training set.[6][13]

Some of the most commonly used machine learning algorithms include Support Vector Machines (SVM), Naive Bayes, and Maximum Entropy.[14] Deep learning, which belongs to the unsupervised family of machine learning, is also widely employed in emotion recognition.[15][16][17] Well-known deep learning algorithms include different architectures of Artificial Neural Network (ANN) such as Convolutional Neural Network (CNN), Long Short-term Memory (LSTM), and Extreme Learning Machine (ELM).[14] The popularity of deep learning approaches in the domain of emotion recognition may be mainly attributed to their success in related applications such as computer vision, speech recognition, and Natural Language Processing (NLP).[14]
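The supervised setup described above (annotated training data, a learned model, held-out evaluation) can be sketched with a bag-of-words text classifier. The toy sentences, labels, and scikit-learn pipeline below are illustrative assumptions; real systems train on far larger annotated corpora and richer features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# A tiny illustrative annotated training set; real corpora are much larger.
texts = [
    "I am so happy today", "what a wonderful surprise",
    "this makes me furious", "I am really angry about this",
    "I feel so sad and alone", "such a gloomy miserable day",
]
labels = ["joy", "joy", "anger", "anger", "sadness", "sadness"]

def train_classifier(clf):
    """Fit a bag-of-words pipeline on the annotated training texts."""
    return make_pipeline(CountVectorizer(), clf).fit(texts, labels)

svm_model = train_classifier(LinearSVC())
nb_model = train_classifier(MultinomialNB())

prediction = nb_model.predict(["wonderful happy surprise"])
```

Unseen sentences play the role of the testing set; measuring accuracy on such held-out data is how the performance claims in the cited surveys are validated.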

Hybrid Approaches

Hybrid approaches in emotion recognition are essentially a combination of knowledge-based techniques and statistical methods, which exploit complementary characteristics from both techniques.[6] Some of the works that have applied an ensemble of knowledge-driven linguistic elements and statistical methods include sentic computing and iFeel, both of which have adopted the concept-level knowledge-based resource SenticNet.[18][19] The role of such knowledge-based resources in the implementation of hybrid approaches is highly important in the emotion classification process.[10] Since hybrid techniques gain from the benefits offered by both knowledge-based and statistical approaches, they tend to have better classification performance than knowledge-based or statistical methods employed independently.[7] A downside of hybrid techniques, however, is their computational complexity during the classification process.[10]
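One possible shape of such a hybrid is a weighted blend of a lexicon score and a statistical classifier score. The sketch below uses hypothetical word lists, weights, and a stubbed-out classifier score; it shows the combination structure, not the sentic computing or iFeel systems cited above.

```python
# Hypothetical knowledge-based word lists (a real system would use a
# resource such as SenticNet or WordNet).
JOY_WORDS = {"happy", "joyful", "wonderful"}
SAD_WORDS = {"sad", "gloomy", "miserable"}

def lexicon_score(text):
    """Knowledge-based component: net joy-vs-sadness word count, scaled to [-1, 1]."""
    tokens = text.lower().split()
    joy = sum(t in JOY_WORDS for t in tokens)
    sad = sum(t in SAD_WORDS for t in tokens)
    total = joy + sad
    return 0.0 if total == 0 else (joy - sad) / total

def statistical_score(text):
    """Stand-in for a trained classifier's P(joy) - P(sadness); stubbed for illustration."""
    return 0.1

def hybrid_label(text, w_lex=0.7, w_stat=0.3):
    """Combine both components with illustrative weights and threshold at zero."""
    score = w_lex * lexicon_score(text) + w_stat * statistical_score(text)
    return "joy" if score >= 0 else "sadness"
```

The extra scoring pass over each input is also where the computational overhead of hybrid techniques, noted above, comes from: every classification runs both a lexicon lookup and a statistical model.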

Datasets

Data is an integral part of the existing approaches in emotion recognition, and in most cases it is a challenge to obtain the annotated data necessary to train machine learning algorithms.[11] While most publicly available data are not annotated, there are existing annotated datasets available for emotion recognition research.[13] For the task of classifying different emotion types from multimodal sources in the form of texts, audio, videos, or physiological signals, the following datasets are available:

  1. HUMAINE: provides natural clips with emotion words and context labels in multiple modalities[20]
  2. Belfast database: provides clips with a wide range of emotions from TV programs and interview recordings[21]
  3. SEMAINE: provides audiovisual recordings between a person and a virtual agent and contains emotion annotations such as angry, happy, fear, disgust, sadness, contempt, and amusement[22]
  4. IEMOCAP: provides recordings of dyadic sessions between actors and contains emotion annotations such as happiness, anger, sadness, frustration, and neutral state[23]
  5. eNTERFACE: provides audiovisual recordings of subjects from seven nationalities and contains emotion annotations such as happiness, anger, sadness, surprise, disgust, and fear[24]
  6. DEAP: provides electroencephalography (EEG), electrocardiography (ECG), and face video recordings, as well as emotion annotations in terms of valence, arousal, and dominance of people watching film clips[25]
  7. DREAMER: provides electroencephalography (EEG) and electrocardiography (ECG) recordings, as well as emotion annotations in terms of valence, arousal, and dominance of people watching film clips[26]

Applications

Computer programmers often use Paul Ekman's Facial Action Coding System as a guide.

Emotion recognition is used for a variety of reasons. Affectiva uses it to help advertisers and content creators sell their products more effectively.[27] Affectiva also makes a Q-sensor that gauges the emotions of autistic children. Emotient was a startup company which utilized artificial intelligence to predict "attitudes and actions based on facial expressions".[28] Apple indicated its intention to buy Emotient in January 2016.[28] nViso provides real-time emotion recognition for web and mobile applications through a real-time API.[29] Visage Technologies AB offers emotion estimation as a part of their Visage SDK for marketing, scientific research, and similar purposes.[30] Eyeris is an emotion recognition company that works with embedded system manufacturers, including car makers and social robotics companies, on integrating its face analytics and emotion recognition software, as well as with video content creators to help them measure the perceived effectiveness of their short- and long-form video creative.[31][32] Emotion recognition and emotion analysis are being studied by companies and universities around the world.

References

  1. Miyakoshi, Yoshihiro, and Shohei Kato. "Facial Emotion Detection Considering Partial Occlusion Of Face Using Baysian Network". Computers and Informatics (2011): 96–101.
  2. Hari Krishna Vydana, P. Phani Kumar, K. Sri Rama Krishna and Anil Kumar Vuppala. "Improved emotion recognition using GMM-UBMs". 2015 International Conference on Signal Processing and Communication Engineering Systems
  3. B. Schuller, G. Rigoll, M. Lang. "Hidden Markov model-based speech emotion recognition". ICME '03. Proceedings. 2003 International Conference on Multimedia and Expo, 2003.
  4. Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion 37: 98–125. doi:10.1016/j.inffus.2017.02.003. 
  5. Caridakis, George; Castellano, Ginevra; Kessous, Loic; Raouzaiou, Amaryllis; Malatesta, Lori; Asteriadis, Stelios; Karpouzis, Kostas (19 September 2007). "Multimodal emotion recognition from expressive faces, body gestures and speech" (in en). IFIP The International Federation for Information Processing (Springer US): 375–388. doi:10.1007/978-0-387-74161-1_41. https://doi.org/10.1007/978-0-387-74161-1_41. 
  6. 6.0 6.1 6.2 6.3 6.4 6.5 Cambria, Erik (March 2016). "Affective Computing and Sentiment Analysis". IEEE Intelligent Systems 31 (2): 102–107. doi:10.1109/MIS.2016.31. 
  7. 7.0 7.1 7.2 Rani, Meesala Shobha; S, Sumathy (26 September 2017). "Perspectives of the performance metrics in lexicon and hybrid based approaches: a review". International Journal of Engineering & Technology 6 (4): 108. doi:10.14419/ijet.v6i4.8295. 
  8. Cambria, Erik; Poria, Soujanya; Bajpai, Rajiv; Schuller, Bjoern (2016). "SenticNet 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives" (in en). Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. https://aclanthology.info/papers/C16-1251/c16-1251. 
  9. Balahur, Alexandra; Hermida, JesúS M.; Montoyo, AndréS (1 November 2012). "Detecting implicit expressions of emotion in text: A comparative analysis". Decision Support Systems 53 (4): 742–753. doi:10.1016/j.dss.2012.05.024. ISSN 0167-9236. https://dl.acm.org/citation.cfm?id=2364904. 
  10. 10.0 10.1 10.2 Medhat, Walaa; Hassan, Ahmed; Korashy, Hoda (December 2014). "Sentiment analysis algorithms and applications: A survey". Ain Shams Engineering Journal 5 (4): 1093–1113. doi:10.1016/j.asej.2014.04.011. 
  11. 11.0 11.1 11.2 Madhoushi, Zohreh; Hamdan, Abdul Razak; Zainudin, Suhaila (2015). "Sentiment analysis techniques in recent works - IEEE Conference Publication". ieeexplore.ieee.org. doi:10.1109/SAI.2015.7237157. https://ieeexplore.ieee.org/document/7237157/. 
  12. Hemmatian, Fatemeh; Sohrabi, Mohammad Karim (18 December 2017). "A survey on classification techniques for opinion mining and sentiment analysis". Artificial Intelligence Review. doi:10.1007/s10462-017-9599-6. 
  13. 13.0 13.1 13.2 Sharef, Nurfadhlina Mohd; Zin, Harnani Mat; Nadali, Samaneh (1 March 2016). "Overview and Future Opportunities of Sentiment Analysis Approaches for Big Data". Journal of Computer Science 12 (3): 153–168. doi:10.3844/jcssp.2016.153.168. 
  14. 14.0 14.1 14.2 Sun, Shiliang; Luo, Chen; Chen, Junyu (July 2017). "A review of natural language processing techniques for opinion mining systems". Information Fusion 36: 10–25. doi:10.1016/j.inffus.2016.10.004. 
  15. Majumder, Navonil; Poria, Soujanya; Gelbukh, Alexander; Cambria, Erik (March 2017). "Deep Learning-Based Document Modeling for Personality Detection from Text". IEEE Intelligent Systems 32 (2): 74–79. doi:10.1109/MIS.2017.23. 
  16. Mahendhiran, P. D.; Kannimuthu, S. (May 2018). "Deep Learning Techniques for Polarity Classification in Multimodal Sentiment Analysis". International Journal of Information Technology & Decision Making 17 (03): 883–910. doi:10.1142/S0219622018500128. 
  17. Yu, Hongliang; Gui, Liangke; Madaio, Michael; Ogan, Amy; Cassell, Justine; Morency, Louis-Philippe (23 October 2017). Temporally Selective Attention Model for Social and Affective State Recognition in Multimedia Content. ACM. pp. 1743–1751. doi:10.1145/3123266.3123413. https://dl.acm.org/citation.cfm?id=3123413. 
  18. Cambria, Erik; Hussain, Amir (2015). Sentic Computing: A Common-Sense-Based Framework for Concept-Level Sentiment Analysis. Springer Publishing Company, Incorporated. ISBN 3319236539. https://dl.acm.org/citation.cfm?id=2878632. 
  19. Araújo, Matheus; Gonçalves, Pollyanna; Cha, Meeyoung; Benevenuto, Fabrício (7 April 2014). iFeel: a system that compares and combines sentiment analysis methods. ACM. pp. 75–78. doi:10.1145/2567948.2577013. https://dl.acm.org/citation.cfm?id=2577013. 
  20. Petta, Paolo; Pelachaud, Catherine; Cowie, Roddy, eds. (2011). Emotion-oriented systems: the humaine handbook. Berlin: Springer. ISBN 978-3-642-15184-2. 
  21. Douglas-Cowie, Ellen; Campbell, Nick; Cowie, Roddy; Roach, Peter (1 April 2003). "Emotional speech: towards a new generation of databases". Speech Communication 40 (1-2): 33–60. doi:10.1016/S0167-6393(02)00070-5. ISSN 0167-6393. https://dl.acm.org/citation.cfm?id=772595. 
  22. McKeown, G.; Valstar, M.; Cowie, R.; Pantic, M.; Schroder, M. (January 2012). "The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent". IEEE Transactions on Affective Computing 3 (1): 5–17. doi:10.1109/T-AFFC.2011.20. 
  23. Busso, Carlos; Bulut, Murtaza; Lee, Chi-Chun; Kazemzadeh, Abe; Mower, Emily; Kim, Samuel; Chang, Jeannette N.; Lee, Sungbok et al. (5 November 2008). "IEMOCAP: interactive emotional dyadic motion capture database" (in en). Language Resources and Evaluation 42 (4): 335–359. doi:10.1007/s10579-008-9076-6. ISSN 1574-020X. https://link.springer.com/article/10.1007/s10579-008-9076-6. 
  24. Martin, O.; Kotsia, I.; Macq, B.; Pitas, I. (3 April 2006). The eNTERFACE'05 Audio-Visual Emotion Database. IEEE Computer Society. pp. 8. doi:10.1109/ICDEW.2006.145. https://dl.acm.org/citation.cfm?id=1130193. 
  25. Koelstra, Sander; Muhl, Christian; Soleymani, Mohammad; Lee, Jong-Seok; Yazdani, Ashkan; Ebrahimi, Touradj; Pun, Thierry; Nijholt, Anton et al. (January 2012). "DEAP: A Database for Emotion Analysis Using Physiological Signals". IEEE Transactions on Affective Computing 3 (1): 18-31. doi:10.1109/T-AFFC.2011.15. ISSN 1949-3045. https://ieeexplore.ieee.org/document/5871728. 
  26. Katsigiannis, Stamos; Ramzan, Naeem (January 2018). "DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals From Wireless Low-cost Off-the-Shelf Devices". IEEE Journal of Biomedical and Health Informatics 22 (1): 98-107. doi:10.1109/JBHI.2017.2688239. ISSN 2168-2194. https://ieeexplore.ieee.org/document/7887697. 
  27. "Affectiva". http://www.affectiva.com. 
  28. 28.0 28.1 DeMuth Jr., Chris (8 January 2016). "Apple Reads Your Mind". M&A Daily (Seeking Alpha). http://seekingalpha.com/article/3798766-apple-reads-your-mind. 
  29. "nViso". http://www.nviso.ch. 
  30. "Visage Technologies". https://visagetechnologies.com/products-and-services/visagesdk/faceanalysis/. 
  31. "Feeling sad, angry? Your future car will know". http://www.cnet.com/roadshow/news/eyeris-emovu-detects-driver-emotions/. 
  32. "Cars May Soon Warn Drivers Before They Nod Off". http://www.huffingtonpost.com/entry/drowsy-driving-warning-system_us_56eadd1be4b09bf44a9c96aa.