MAUI: a Multimodal Affective User Interface (Conference Paper)

Lisetti, C. L., & Nasoz, F. (2002). MAUI: a Multimodal Affective User Interface. In Proceedings of the Tenth ACM International Conference on Multimedia (MULTIMEDIA '02), 161-170. doi:10.1145/641007.641038

cited authors

  • Lisetti, CL; Nasoz, F

abstract

  • Human intelligence is being increasingly redefined to include the all-encompassing effect of emotions upon what used to be considered 'pure reason'. With the recent progress of research in computer vision, speech/prosody recognition, and bio-feedback, real-time recognition of affect will enhance human-computer interaction considerably, as well as assist further progress in the development of new emotion theories. In this article, we describe how affect, moods, and emotions closely interact with cognition and how affect and emotion are the quintessential multimodal processes in humans. We then propose an adaptive system architecture designed to sense the user's emotional and affective states via three multimodal subsystems (V, K, A): namely (1) the Visual (from facial images and videos), (2) the Kinesthetic (from autonomic nervous system (ANS) signals), and (3) the Auditory (from speech). The results of the system sensing are then integrated into a multimodal perception of the user's current emotional state. The multimodal anthropomorphic interface agent then adapts its interface by responding most appropriately to the current emotional states of its user, and provides intelligent multimodal feedback to the user.
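As a rough illustration of the (V, K, A) integration the abstract describes, the sketch below fuses per-modality emotion estimates into one perceived user state and maps that state to an agent response. This is a minimal sketch, not the paper's implementation: the emotion labels, the reliability-weighted fusion rule, and all names (ModalityReading, fuse_vka, adapt_interface) are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical label set; the paper does not enumerate its emotion categories here.
EMOTIONS = ["neutral", "joy", "anger", "fear", "sadness", "frustration"]

@dataclass
class ModalityReading:
    """One subsystem's belief over the user's emotional state."""
    scores: Dict[str, float]  # emotion label -> confidence in [0, 1]
    reliability: float        # weight given to this subsystem during fusion

def fuse_vka(visual: ModalityReading,
             kinesthetic: ModalityReading,
             auditory: ModalityReading) -> str:
    """Integrate the Visual, Kinesthetic, and Auditory subsystem outputs
    into a single perceived emotional state. A reliability-weighted linear
    fusion is used purely for illustration; the paper does not commit to
    this particular integration rule."""
    combined = {label: 0.0 for label in EMOTIONS}
    for reading in (visual, kinesthetic, auditory):
        for label, score in reading.scores.items():
            combined[label] = combined.get(label, 0.0) + reading.reliability * score
    return max(combined, key=combined.get)

def adapt_interface(perceived: str) -> str:
    """Map the perceived state to an illustrative agent adaptation."""
    responses = {
        "frustration": "simplify the dialogue and offer assistance",
        "joy": "maintain the current interaction style",
    }
    return responses.get(perceived, "give neutral multimodal feedback")

# Example: facial expression suggests frustration, ANS signals agree,
# and speech prosody is ambiguous.
v = ModalityReading({"frustration": 0.7, "neutral": 0.3}, reliability=0.9)
k = ModalityReading({"frustration": 0.6, "fear": 0.4}, reliability=0.7)
a = ModalityReading({"neutral": 0.5, "frustration": 0.5}, reliability=0.5)
state = fuse_vka(v, k, a)
print(state, "->", adapt_interface(state))  # frustration -> simplify the dialogue ...
```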

publication date

  • December 1, 2002

Digital Object Identifier (DOI)

  • 10.1145/641007.641038

start page

  • 161

end page

  • 170