Conference Paper · Year: 2011

Robot Emotional State through Bayesian Visuo-Auditory Perception

José Augusto Prado (Author)
Jorge Dias (Author)

Abstract

In this paper we focus on auditory analysis as the sensory stimulus and on vocalization synthesis as the output signal. Our scenario is one robot interacting with one human through the vocalization channel. Note that vocalization goes far beyond speech: while speech analysis tells us what was said, vocalization analysis tells us how it was said. A social robot shall be able to perform actions in different manners according to its emotional state. We therefore propose a novel Bayesian approach to determine the emotional state the robot shall assume according to how the interlocutor is talking to it. Results show that the classification behaves as expected, converging to the correct decision after two iterations.
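The paper's actual Bayesian model is not reproduced on this page, but the abstract's core idea (a recursive Bayesian update over discrete emotional states, driven by vocalization cues, converging after two iterations) can be illustrated with a minimal sketch. Everything below is a hypothetical placeholder: the state set, cue alphabet, and likelihood values are invented for illustration and are not the authors' model.

```python
# Hypothetical sketch of recursive Bayesian classification of a robot's
# emotional state from vocalization cues. States, cues, and likelihoods
# are illustrative placeholders, not the model from the paper.

STATES = ["happy", "neutral", "angry"]   # assumed emotional states
CUES = ["soft", "flat", "harsh"]         # assumed vocalization cues

# P(cue | state): each row sums to 1 (placeholder values).
LIKELIHOOD = {
    "happy":   {"soft": 0.7, "flat": 0.2, "harsh": 0.1},
    "neutral": {"soft": 0.2, "flat": 0.6, "harsh": 0.2},
    "angry":   {"soft": 0.1, "flat": 0.2, "harsh": 0.7},
}

def update(prior, cue):
    """One Bayesian iteration: posterior(s) is proportional to
    P(cue | s) * prior(s), normalized over all states."""
    unnormalized = {s: LIKELIHOOD[s][cue] * prior[s] for s in STATES}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Start from a uniform prior and fuse two consecutive observations,
# mirroring the abstract's claim of convergence after two iterations.
belief = {s: 1.0 / len(STATES) for s in STATES}
for cue in ["harsh", "harsh"]:
    belief = update(belief, cue)
    print(cue, {s: round(p, 3) for s, p in belief.items()})

print("robot emotional state:", max(belief, key=belief.get))
```

Running this with two "harsh" cues pushes the posterior for "angry" from 0.70 after the first update to about 0.91 after the second, which is the qualitative behavior the abstract describes.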
Main file
978-3-642-19170-1_18_Chapter.pdf (286.67 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01566595, version 1 (21-07-2017)

Licence

Creative Commons Attribution (CC BY)

Identifiers

  • HAL Id: hal-01566595
  • DOI: 10.1007/978-3-642-19170-1_18
Cite

José Augusto Prado, Carlos Simplício, Jorge Dias. Robot Emotional State through Bayesian Visuo-Auditory Perception. 2nd Doctoral Conference on Computing, Electrical and Industrial Systems (DoCEIS), Feb 2011, Costa de Caparica, Portugal. pp.165-172, ⟨10.1007/978-3-642-19170-1_18⟩. ⟨hal-01566595⟩
33 views · 52 downloads
