Multimodal sentiment analysis
Multimodal sentiment analysis extends traditional text-based sentiment analysis to additional modalities, such as audio and visual data.[1] It can be bimodal, combining two modalities, or trimodal, incorporating three modalities.[2] With the extensive amount of social media data available online in different forms such as videos and images, conventional text-based sentiment analysis has evolved into more complex models of multimodal sentiment analysis,[3] which can be applied in the development of virtual assistants,[4] the analysis of YouTube movie reviews,[5] the analysis of news videos,[6] and emotion recognition (sometimes known as emotion detection), for example in depression monitoring,[7] among other applications.
Similar to traditional sentiment analysis, one of the most basic tasks in multimodal sentiment analysis is sentiment classification, which classifies different sentiments into categories such as positive, negative, or neutral.[8] The complexity of analyzing text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion.[3] The performance of these fusion techniques and of the classification algorithms applied is influenced by the type of textual, audio, and visual features employed in the analysis.[9]
Features
Feature engineering, which involves the selection of features that are fed into machine learning algorithms, plays a key role in sentiment classification performance.[9] In multimodal sentiment analysis, a combination of different textual, audio, and visual features is employed.[3]
Textual features
Similar to conventional text-based sentiment analysis, some of the most commonly used textual features in multimodal sentiment analysis are unigrams and n-grams, which are sequences of words in a given textual document.[10] These features are applied using bag-of-words or bag-of-concepts representations, in which words or concepts are represented as vectors in a suitable space.[11][12]
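As a minimal illustration (the sample sentence and vocabulary below are invented for the example), n-gram extraction and a bag-of-words count vector can be sketched in plain Python:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams (length-n word sequences) in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_words_vector(text, vocabulary):
    """Represent a text as a vector of per-word counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

tokens = "the movie was not good".split()
print(ngrams(tokens, 2))   # the four bigrams of the sentence

vocab = ["good", "bad", "movie"]   # hypothetical toy vocabulary
print(bag_of_words_vector("The movie was good good", vocab))  # [2, 0, 1]
```

In practice the vocabulary is built from a training corpus, and the raw counts are often replaced by weighted variants (e.g. tf–idf) before classification.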
Audio features
Sentiment and emotion characteristics are prominent in the different phonetic and prosodic properties contained in audio features.[13] Some of the most important audio features employed in multimodal sentiment analysis are mel-frequency cepstral coefficients (MFCC), spectral centroid, spectral flux, beat histogram, beat sum, strongest beat, pause duration, and pitch.[3] OpenSMILE[14] and Praat are popular open-source toolkits for extracting such audio features.[15]
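To make one of these features concrete, the spectral centroid (the magnitude-weighted mean frequency of a frame's spectrum) can be sketched with a naive DFT in pure Python. This is only an illustrative toy; real systems use toolkits such as OpenSMILE, which compute these features efficiently and robustly:

```python
import cmath
import math

def magnitude_spectrum(frame):
    """Naive DFT magnitude spectrum of one audio frame (first half of the bins)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectral_centroid(mags, sample_rate, n_fft):
    """Magnitude-weighted mean frequency: the spectrum's 'centre of mass'."""
    freqs = [k * sample_rate / n_fft for k in range(len(mags))]
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

# A pure 1 kHz tone at an 8 kHz sample rate: the centroid should sit at 1000 Hz.
sr, n = 8000, 64
frame = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(n)]
mags = magnitude_spectrum(frame)
print(round(spectral_centroid(mags, sr, n)))  # ≈ 1000
```

A brighter, noisier sound spreads energy toward higher bins and raises the centroid, which is why the feature correlates with perceived vocal tension.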
Visual features
One of the main advantages of analyzing videos over text alone is the presence of rich sentiment cues in visual data.[16] Visual features include facial expressions, which are of paramount importance in capturing sentiments and emotions, as they are a main channel for conveying a person's present state of mind.[3] In particular, the smile is considered one of the most predictive visual cues in multimodal sentiment analysis.[11] OpenFace is an open-source facial analysis toolkit for extracting and understanding such visual features.[17]
Fusion techniques
Unlike traditional text-based sentiment analysis, multimodal sentiment analysis involves a fusion process in which data from the different modalities (text, audio, or visual) are combined and analyzed together.[3] Existing approaches to multimodal data fusion can be grouped into three main categories: feature-level, decision-level, and hybrid fusion; the performance of the sentiment classification depends on which fusion technique is employed.[3]
Feature-level fusion
Feature-level fusion (sometimes known as early fusion) gathers all the features from each modality (text, audio, or visual) and joins them together into a single feature vector, which is eventually fed into a classification algorithm.[18] One of the difficulties in implementing this technique is the integration of the heterogeneous features.[3]
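A minimal sketch of early fusion (the feature values below are hypothetical, standing in for already-extracted modality features):

```python
def feature_level_fusion(text_feats, audio_feats, visual_feats):
    """Early fusion: concatenate the per-modality feature vectors into one
    vector, which is then fed to a single classifier."""
    return text_feats + audio_feats + visual_feats

# hypothetical, already-extracted feature values for one utterance
text_feats = [0.2, 0.8]        # e.g. n-gram scores
audio_feats = [0.5, 0.1, 0.4]  # e.g. pitch and energy statistics
visual_feats = [0.9]           # e.g. smile intensity
fused = feature_level_fusion(text_feats, audio_feats, visual_feats)
print(fused)  # one 6-dimensional vector for a single downstream classifier
```

The heterogeneity problem mentioned above shows up here directly: the concatenated dimensions have different scales and semantics, so normalization across modalities is usually required before training.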
Decision-level fusion
Decision-level fusion (sometimes known as late fusion) feeds data from each modality (text, audio, or visual) independently into its own classification algorithm, and obtains the final sentiment classification by fusing each result into a single decision vector.[18] One advantage of this fusion technique is that it eliminates the need to fuse heterogeneous data, and each modality can utilize its most appropriate classification algorithm.[3]
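Late fusion can be sketched as follows, assuming each modality's classifier has already produced class probabilities (the probability values below are invented for the example; weighted averaging is only one of several possible fusion rules):

```python
def decision_level_fusion(per_modality_probs, weights=None):
    """Late fusion: combine each modality's class-probability output by
    (optionally weighted) averaging, then pick the highest-scoring class."""
    modalities = list(per_modality_probs)
    weights = weights or {m: 1.0 for m in modalities}
    classes = per_modality_probs[modalities[0]].keys()
    fused = {
        c: sum(weights[m] * per_modality_probs[m][c] for m in modalities)
           / sum(weights.values())
        for c in classes
    }
    return max(fused, key=fused.get), fused

# hypothetical outputs of three independently trained classifiers
probs = {
    "text":   {"positive": 0.7, "negative": 0.2, "neutral": 0.1},
    "audio":  {"positive": 0.4, "negative": 0.5, "neutral": 0.1},
    "visual": {"positive": 0.6, "negative": 0.3, "neutral": 0.1},
}
label, fused = decision_level_fusion(probs)
print(label)  # "positive": text and visual outvote the audio classifier
```

Because each modality keeps its own classifier, a weaker modality (here, audio) can be down-weighted without retraining the others.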
Hybrid fusion
Hybrid fusion is a combination of feature-level and decision-level fusion, which exploits complementary information from both methods during the classification process.[5] It usually involves a two-step procedure: feature-level fusion is first performed between two modalities, and decision-level fusion is then applied as a second step to combine the initial results with the remaining modality.[19][20]
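The two-step procedure can be sketched as below. The `toy_classifier` is a stand-in for a trained text+audio model, and all feature and probability values are invented; the 50/50 weighting between the two steps is likewise an arbitrary choice for the example:

```python
def hybrid_fusion(text_feats, audio_feats, visual_probs,
                  early_classifier, weight_early=0.5):
    """Two-step hybrid fusion: (1) early-fuse text+audio features and classify
    them jointly; (2) late-fuse that result with the visual classifier's
    output by weighted averaging."""
    early_probs = early_classifier(text_feats + audio_feats)   # step 1
    fused = {c: weight_early * early_probs[c]                  # step 2
                + (1 - weight_early) * visual_probs[c]
             for c in early_probs}
    return max(fused, key=fused.get), fused

# toy stand-in for a trained text+audio classifier
def toy_classifier(vec):
    score = sum(vec) / len(vec)  # pretend a higher mean means more positive
    return {"positive": score, "negative": 1 - score}

visual_probs = {"positive": 0.8, "negative": 0.2}  # hypothetical visual output
label, fused = hybrid_fusion([0.6, 0.7], [0.5], visual_probs, toy_classifier)
print(label)  # "positive"
```

Which pair of modalities is fused early and which is held back for the late step is itself a design decision, typically driven by which features are easiest to integrate.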
Applications
Similar to text-based sentiment analysis, multimodal sentiment analysis can be applied in the development of different forms of recommender systems, such as in the analysis of user-generated videos of movie reviews[5] and general product reviews,[21] to predict the sentiments of customers and subsequently create product or service recommendations.[22] Multimodal sentiment analysis also plays an important role in the advancement of virtual assistants through the application of natural language processing (NLP) and machine learning techniques.[4] In the healthcare domain, multimodal sentiment analysis can be utilized to detect certain medical conditions such as stress, anxiety, or depression.[7] It can also be applied in understanding the sentiments contained in video news programs, which is considered a complicated and challenging domain, as sentiments expressed by reporters tend to be less obvious or neutral.[23]
References
1. Soleymani, Mohammad; Garcia, David; Jou, Brendan; Schuller, Björn; Chang, Shih-Fu; Pantic, Maja (September 2017). "A survey of multimodal sentiment analysis". Image and Vision Computing 65: 3–14. doi:10.1016/j.imavis.2017.08.003. https://zenodo.org/record/3449163.
2. Karray, Fakhreddine; Alemzadeh, Milad; Saleh, Jamil Abou; Arab, Mo Nours (2008). "Human-Computer Interaction: Overview on State of the Art". International Journal on Smart Sensing and Intelligent Systems 1: 137–159. doi:10.21307/ijssis-2017-283. http://s2is.org/Issues/v1/n1/papers/paper9.pdf.
3. Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion 37: 98–125. doi:10.1016/j.inffus.2017.02.003. http://researchrepository.napier.ac.uk/Output/1792429.
4. "Google AI to make phone calls for you". BBC News. 8 May 2018. https://www.bbc.com/news/technology-44045424.
5. Wollmer, Martin; Weninger, Felix; Knaup, Tobias; Schuller, Bjorn; Sun, Congkai; Sagae, Kenji; Morency, Louis-Philippe (May 2013). "YouTube Movie Reviews: Sentiment Analysis in an Audio-Visual Context". IEEE Intelligent Systems 28 (3): 46–53. doi:10.1109/MIS.2013.34. https://opus.bibliothek.uni-augsburg.de/opus4/files/72633/72633.pdf.
6. Pereira, Moisés H. R.; Pádua, Flávio L. C.; Pereira, Adriano C. M.; Benevenuto, Fabrício; Dalip, Daniel H. (9 April 2016). "Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos". arXiv:1604.02612 [cs.CL].
7. Zucco, Chiara; Calabrese, Barbara; Cannataro, Mario (November 2017). "Sentiment analysis and affective computing for depression monitoring". 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE. pp. 1988–1995. doi:10.1109/bibm.2017.8217966. ISBN 978-1-5090-3050-7.
8. Pang, Bo; Lee, Lillian (2008). Opinion Mining and Sentiment Analysis. Hanover, MA: Now Publishers. ISBN 978-1601981509.
9. Sun, Shiliang; Luo, Chen; Chen, Junyu (July 2017). "A review of natural language processing techniques for opinion mining systems". Information Fusion 36: 10–25. doi:10.1016/j.inffus.2016.10.004.
10. Yadollahi, Ali; Shahraki, Ameneh Gholipour; Zaiane, Osmar R. (25 May 2017). "Current State of Text Sentiment Analysis from Opinion to Emotion Mining". ACM Computing Surveys 50 (2): 1–33. doi:10.1145/3057270.
11. Pérez-Rosas, Verónica; Mihalcea, Rada; Morency, Louis-Philippe (May 2013). "Multimodal Sentiment Analysis of Spanish Online Videos". IEEE Intelligent Systems 28 (3): 38–45. doi:10.1109/MIS.2013.9.
12. Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin (March 2015). "Towards an intelligent framework for multimodal affective data analysis". Neural Networks 63: 104–116. doi:10.1016/j.neunet.2014.10.005. PMID 25523041.
13. Wu, Chung-Hsien; Liang, Wei-Bin (January 2011). "Emotion Recognition of Affective Speech Based on Multiple Classifiers Using Acoustic-Prosodic Information and Semantic Labels". IEEE Transactions on Affective Computing 2 (1): 10–21. doi:10.1109/T-AFFC.2010.16.
14. Eyben, Florian; Wöllmer, Martin; Schuller, Björn (2009). "OpenEAR — Introducing the Munich open-source emotion and affect recognition toolkit". doi:10.1109/ACII.2009.5349350. ISBN 978-1-4244-4800-5. https://nbn-resolving.org/urn:nbn:de:bvb:384-opus4-766112.
15. Morency, Louis-Philippe; Mihalcea, Rada; Doshi, Payal (14 November 2011). "Towards multimodal sentiment analysis: harvesting opinions from the web". ACM. pp. 169–176. doi:10.1145/2070481.2070509. ISBN 9781450306416.
16. Poria, Soujanya; Cambria, Erik; Hazarika, Devamanyu; Majumder, Navonil; Zadeh, Amir; Morency, Louis-Philippe (2017). "Context-Dependent Sentiment Analysis in User-Generated Videos". Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers): 873–883. doi:10.18653/v1/p17-1081.
17. Baltrušaitis, Tadas; Robinson, Peter; Morency, Louis-Philippe (2016). "OpenFace: An open source facial behavior analysis toolkit". doi:10.1109/WACV.2016.7477553. https://www.repository.cam.ac.uk/handle/1810/280724.
18. Poria, Soujanya; Cambria, Erik; Howard, Newton; Huang, Guang-Bin; Hussain, Amir (January 2016). "Fusing audio, visual and textual clues for sentiment analysis from multimodal content". Neurocomputing 174: 50–59. doi:10.1016/j.neucom.2015.01.095.
19. Shahla, Shahla; Naghsh-Nilchi, Ahmad Reza (2017). "Exploiting evidential theory in the fusion of textual, audio, and visual modalities for affective music video retrieval". doi:10.1109/PRIA.2017.7983051.
20. Poria, Soujanya; Peng, Haiyun; Hussain, Amir; Howard, Newton; Cambria, Erik (October 2017). "Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis". Neurocomputing 261: 217–230. doi:10.1016/j.neucom.2016.09.117.
21. Pérez-Rosas, Verónica; Mihalcea, Rada; Morency, Louis-Philippe (1 January 2013). "Utterance-level multimodal sentiment analysis". Association for Computational Linguistics (Long Papers). https://experts.umich.edu/en/publications/utterance-level-multimodal-sentiment-analysis.
22. Chui, Michael; Manyika, James; Miremadi, Mehdi; Henke, Nicolaus; Chung, Rita; Nel, Pieter; Malhotra, Sankalp. "Notes from the AI frontier. Insights from hundreds of use cases". McKinsey Global Institute. https://www.mckinsey.com/mgi/.
23. Ellis, Joseph G.; Jou, Brendan; Chang, Shih-Fu (12 November 2014). "Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News". ACM. pp. 104–111. doi:10.1145/2663204.2663237. ISBN 9781450328852.
Original source: https://en.wikipedia.org/wiki/Multimodal sentiment analysis.