Emotion Recognition from EEG and Facial Expressions: A Multimodal Approach

Authors
Chaparro V.
Gomez A.
Salgado A.
Quintero O.L.
Lopez N.
Villa L.F.
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
Understanding a psychological phenomenon such as emotion is of paramount importance for psychologists, since it allows them to recognize a pathology and prescribe appropriate treatment for a patient. Approaching this problem, mathematicians and computational science engineers have proposed different unimodal techniques for emotion recognition from voice, electroencephalography, facial expressions, and physiological data. It is also well known that identifying emotions is a multimodal process, and the main goal of this work is to train a computer to do the same. In this paper we present our first approach to multimodal emotion recognition via data fusion of electroencephalography and facial expressions. The selected strategy was feature-level fusion of electroencephalography and facial microexpressions, and the classification schemes used were a neural network model and a random forest classifier. The experimental setup was carried out with the balanced multimodal database MAHNOB-HCI. Results are promising compared with those of other authors, with an accuracy of 97%. The feature-level fusion approach used in this work improves on our unimodal techniques by up to 12% per emotion. We therefore conclude that our simple but effective approach improves overall accuracy. © 2018 IEEE.
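As an illustration of the feature-level fusion strategy described in the abstract, the sketch below concatenates pre-extracted EEG and facial-expression feature vectors and trains a random forest classifier with scikit-learn. It is a minimal, hypothetical example: the feature dimensions, number of emotion classes, and synthetic data are placeholders and do not reflect the actual features, labels, or protocol used with MAHNOB-HCI in the paper.

```python
# Minimal sketch of feature-level (early) fusion for multimodal emotion
# recognition. Assumes EEG and facial-expression features were already
# extracted per trial; shapes, class count, and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_trials = 200                                     # hypothetical number of labelled trials
eeg_features = rng.normal(size=(n_trials, 32))     # e.g. band-power features per EEG channel
face_features = rng.normal(size=(n_trials, 20))    # e.g. facial action-unit intensities
labels = rng.integers(0, 3, size=n_trials)         # e.g. three emotion classes

# Feature-level fusion: concatenate the unimodal feature vectors per trial
# so a single classifier sees both modalities at once.
fused = np.hstack([eeg_features, face_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0, stratify=labels
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The same fused feature matrix could equally be fed to a neural network classifier, the other scheme mentioned in the abstract; the fusion step itself is independent of the classifier choice.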
URI
http://hdl.handle.net/11407/6135
Collections
  • Indexados Scopus [1069]