Show simple item record

dc.creator: Chaparro V.
dc.creator: Gomez A.
dc.creator: Salgado A.
dc.creator: Quintero O.L.
dc.creator: Lopez N.
dc.creator: Villa L.F.
dc.date: 2018
dc.date.accessioned: 2021-02-05T14:59:55Z
dc.date.available: 2021-02-05T14:59:55Z
dc.identifier.isbn: 9781538636466
dc.identifier.issn: 1557170X
dc.identifier.uri: http://hdl.handle.net/11407/6135
dc.description: The understanding of a psychological phenomenon such as emotion is of paramount importance for psychologists, since it allows them to recognize a pathology and prescribe due treatment for a patient. In approaching this problem, mathematicians and computational science engineers have proposed different unimodal techniques for emotion recognition from voice, electroencephalography, facial expression, and physiological data. It is also well known that identifying emotions is a multimodal process. The main goal of this work is to train a computer to do so. In this paper we present our first approach to multimodal emotion recognition via data fusion of electroencephalography and facial expressions. The selected strategy was a feature-level fusion of both electroencephalography and facial microexpressions, and the classification schemes used were a neural network model and a random forest classifier. The experimental setup was carried out with the balanced multimodal database MAHNOB-HCI. Results are promising compared to results from other authors, with 97% accuracy. The feature-level fusion approach used in this work improves our unimodal techniques by up to 12% per emotion. Therefore, we may conclude that our simple but effective approach improves the overall accuracy. © 2018 IEEE.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.relation.isversionof: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85056655879&doi=10.1109%2fEMBC.2018.8512407&partnerID=40&md5=fb06ef7c87c0ab62281ad17f00d1ece4
dc.source: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
dc.title: Emotion Recognition from EEG and Facial Expressions: A Multimodal Approach
dc.type: Conference Paper [eng]
dc.rights.accessrights: info:eu-repo/semantics/restrictedAccess
dc.publisher.program: Ingeniería de Sistemas [spa]
dc.identifier.doi: 10.1109/EMBC.2018.8512407
dc.relation.citationvolume: 2018-July
dc.relation.citationstartpage: 530
dc.relation.citationendpage: 533
dc.publisher.faculty: Facultad de Ingenierías [spa]
dc.affiliation: Chaparro, V., Mathematical Modelling Research Group, Universidad EAFIT, Colombia
dc.affiliation: Gomez, A., Mathematical Modelling Research Group, Universidad EAFIT, Colombia
dc.affiliation: Salgado, A., Mathematical Modelling Research Group, Universidad EAFIT, Colombia
dc.affiliation: Quintero, O.L., Mathematical Modelling Research Group, Universidad EAFIT, Colombia
dc.affiliation: Lopez, N., Universidad Nacional de San Juan, Argentina
dc.affiliation: Villa, L.F., System Engineering Research Group, ARKADIUS, Universidad de Medellin, Colombia
dc.relation.references: Scherer, K.R., What are emotions and how can they be measured (2005) Social Science Information, 44 (4), pp. 695-729, http://journals.sagepub.com/doi/10.1177/0539018405058216, dec
dc.relation.references: Ohman, A., Soares, J.J., Unconscious anxiety: Phobic responses to masked stimuli (1994) Journal of Abnormal Psychology, 103 (2), pp. 231-240, http://www.ncbi.nlm.nih.gov/pubmed/8040492, may
dc.relation.references: Bunce, S.C., Bernat, E., Wong, P.S., Shevrin, H., Further evidence for unconscious learning: Preliminary support for the conditioning of facial EMG to subliminal stimuli (1999) Journal of Psychiatric Research, 33 (4), pp. 341-347, http://www.ncbi.nlm.nih.gov/pubmed/10404472
dc.relation.references: Wong, P.S., Shevrin, H., Williams, W.J., Conscious and nonconscious processes: An ERP index of an anticipatory response in a conditioning paradigm using visually masked stimuli (1994) Psychophysiology, 31 (1), pp. 87-101, http://www.ncbi.nlm.nih.gov/pubmed/8146258, jan
dc.relation.references: Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.M., Kazemzadeh, A., Lee, S., Narayanan, S., Analysis of emotion recognition using facial expressions, speech and multimodal information (2004) Proceedings of the 6th International Conference on Multimodal Interfaces - ICMI '04, p. 205, http://portal.acm.org/citation.cfm?doid=1027933.1027968, New York, New York, USA: ACM Press
dc.relation.references: Bahreini, K., Nadolski, R., Westera, W., Data fusion for real-time multimodal emotion recognition through webcams and microphones in e-learning (2016) International Journal of Human-Computer Interaction, 32 (5), pp. 415-430, https://www.tandfonline.com/doi/full/10.1080/10447318.2016.1159799, may
dc.relation.references: Chen, L., Huang, T., Miyasato, T., Nakatsu, R., Multimodal human emotion/expression recognition (1998) Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, IEEE Comput. Soc, pp. 366-371, http://ieeexplore.ieee.org/document/670976/
dc.relation.references: De Silva, L., Chi Ng, P., Bimodal emotion recognition (2000) Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition, IEEE Comput. Soc, pp. 332-335, http://ieeexplore.ieee.org/document/840655/
dc.relation.references: Chang, C.-Y., Tsai, J.-S., Wang, C.-J., Chung, P.-C., Emotion recognition with consideration of facial expression and physiological signals (2009) 2009 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, IEEE, pp. 278-283, http://ieeexplore.ieee.org/document/4925739/, mar
dc.relation.references: Verma, G.K., Tiwary, U.S., Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals (2014) NeuroImage, 102 (P1), pp. 162-172, http://dx.doi.org/10.1016/j.neuroimage.2013.11.007, nov
dc.relation.references: Barros, P., Jirak, D., Weber, C., Wermter, S., Multimodal emotional state recognition using sequence-dependent deep hierarchical features (2015) Neural Networks, 72, pp. 140-151, http://dx.doi.org/10.1016/j.neunet.2015.09.009, dec
dc.relation.references: Soleymani, M., Lichtenauer, J., Pun, T., Pantic, M., A multimodal database for affect recognition and implicit tagging (2012) IEEE Transactions on Affective Computing, 3 (1), pp. 42-55, http://ieeexplore.ieee.org/document/5975141/, jan
dc.relation.references: Huang, X., Kortelainen, J., Zhao, G., Li, X., Moilanen, A., Seppänen, T., Pietikäinen, M., Multi-modal emotion analysis from facial expressions and electroencephalogram (2016) Computer Vision and Image Understanding, 147, pp. 114-124, http://linkinghub.elsevier.com/retrieve/pii/S1077314215002106, jun
dc.relation.references: Huang, Y., Yang, J., Liao, P., Pan, J., Fusion of facial expressions and EEG for multimodal emotion recognition (2017) Computational Intelligence and Neuroscience, 2017, pp. 1-8, https://www.hindawi.com/journals/cin/2017/2107451/
dc.relation.references: Gómez, A., Quintero, L., López, N., Castro, J., Villa, L., Mejía, G., An approach to emotion recognition in single-channel EEG signals using stationary wavelet transform (2017) IFMBE Proceedings, pp. 654-657, http://link.springer.com/10.1007/978-981-10-4086-3164, CLAIB 2016
dc.relation.references: Gómez, A., Quintero, L., López, N., Castro, J., An approach to emotion recognition in single-channel EEG signals: A mother child interaction (2016) Journal of Physics: Conference Series, 705 (1), http://stacks.iop.org/1742-6596/705/i=1/a=012051, apr
dc.relation.references: Gomez, A., Quintero, L., Lopez, M., Castro, J., Villa, L., An approach to emotion recognition in single-channel EEG signals using discrete wavelet transform (2016) The 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 6, p. 6045
dc.relation.references: Restrepo, D., Gomez, A., Short research advanced project: Development of strategies for automatic facial feature extraction and emotion recognition (2017) 2017 IEEE 3rd Colombian Conference on Automatic Control (CCAC), IEEE, pp. 1-6, http://ieeexplore.ieee.org/document/8276413/, oct
dc.relation.references: Ekman, P., Rosenberg, E., (2005) What the Face Reveals
dc.relation.references: Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I., The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression (2010) 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, IEEE, pp. 94-101, http://ieeexplore.ieee.org/document/5543262/, jun
dc.relation.references: Simon, T., Joo, H., Matthews, I., Sheikh, Y., Hand keypoint detection in single images using multiview bootstrapping (2017) 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 4645-4653, http://ieeexplore.ieee.org/document/8099977/, jul
dc.relation.references: Cao, Z., Simon, T., Wei, S.-E., Sheikh, Y., Realtime multi-person 2D pose estimation using part affinity fields (2017) 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1302-1310
dc.relation.references: Wei, S.-E., Ramakrishna, V., Kanade, T., Sheikh, Y., Convolutional pose machines (2016) 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 4724-4732, http://arxiv.org/abs/1602.00134, jan
dc.relation.references: Tracy, J.L., Randles, D., Four models of basic emotions: A review of Ekman and Cordaro, Izard, Levenson, and Panksepp and Watt (2011) Emotion Review, 3 (4), pp. 397-405, http://journals.sagepub.com/doi/10.1177/1754073911410747, oct
dc.relation.references: Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Duchesnay, E., Scikit-learn: Machine learning in Python (2012) Journal of Machine Learning Research, 12, pp. 2825-2830, http://arxiv.org/abs/1201.0490
dc.type.version: info:eu-repo/semantics/publishedVersion
dc.type.driver: info:eu-repo/semantics/other
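
The abstract above describes feature-level fusion of EEG and facial-expression features followed by a random forest classifier (among others). The snippet below is a minimal illustrative sketch of that fusion-plus-classification step using scikit-learn, which the record cites in its references; the feature dimensions, the synthetic stand-in data, the number of emotion classes, and the plain concatenation are assumptions for demonstration only, not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in features: in the paper these would come from EEG processing and
# facial micro-expression extraction on MAHNOB-HCI recordings (assumed sizes).
n_samples = 200
eeg_features = rng.normal(size=(n_samples, 32))    # hypothetical EEG feature vectors
face_features = rng.normal(size=(n_samples, 17))   # hypothetical facial feature vectors
labels = rng.integers(0, 4, size=n_samples)        # assumed 4 emotion classes

# Feature-level fusion: concatenate both modalities into one feature vector per sample.
fused = np.hstack([eeg_features, face_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels
)

# Random forest classifier on the fused features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With real EEG and facial features in place of the random stand-ins, the same concatenate-then-classify pattern reproduces the feature-level fusion strategy the abstract refers to.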


Files in this item

No files associated with this item.
