Journal article in Signal Processing, 2006

Visual perception, language and gesture: A model for their understanding in multimodal dialogue systems

Abstract

The way we see the objects around us determines the speech and gestures we use to refer to them. Conversely, the gestures we produce structure our visual perception, and the words we use influence the way we see. Visual perception, language and gesture thus interact with one another in multiple ways. The problem is global and has to be tackled as a whole in order to understand the complexity of reference phenomena and to derive a formal model. Such a model can be useful for any human-machine dialogue system aimed at deep comprehension. We show how a referring act takes place within a contextual subset of objects. This subset, called a `reference domain', is implicit and can be inferred from a variety of clues, including those that come from the visual context and those that come from the multimodal utterance. We present the `multimodal reference domain' model, which takes these clues into account and can be exploited by a multimodal dialogue system during interpretation.
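The abstract's central idea, that interpretation selects an implicit `reference domain' by combining visual and utterance clues, can be illustrated with a minimal sketch. All names here (`Obj`, `ReferenceDomain`, `score_domain`, the salience values and the toy scene) are illustrative assumptions, not definitions from the paper:

```python
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    category: str
    salience: float  # assumed visual-salience clue in [0, 1]

@dataclass
class ReferenceDomain:
    # an implicit contextual subset of the visible objects
    objects: tuple

def score_domain(domain, category=None, pointed=frozenset()):
    """Rate how well a candidate reference domain fits the clues of a
    multimodal utterance: a category word (linguistic clue) and the
    objects covered by a pointing gesture (gestural clue)."""
    score = 0.0
    for obj in domain.objects:
        if category is None or obj.category == category:
            score += obj.salience      # linguistic clue matches
        if obj.name in pointed:
            score += 1.0               # gesture covers this object
    return score / len(domain.objects)

# Toy scene: two perceptual groups serve as candidate reference domains
group_a = ReferenceDomain((Obj("t1", "triangle", 0.9), Obj("c1", "circle", 0.4)))
group_b = ReferenceDomain((Obj("t2", "triangle", 0.5), Obj("t3", "triangle", 0.5)))

# Utterance "this triangle" accompanied by a gesture toward t2:
# the domain containing the pointed-at triangle should win
best = max((group_a, group_b), key=lambda d: score_domain(d, "triangle", {"t2"}))
print([o.name for o in best.objects])  # → ['t2', 't3']
```

The scoring function is deliberately simplistic; the point is only that the referent is resolved against a whole candidate subset of objects, not against each object in isolation.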

Dates and versions

halshs-00137947 , version 1 (22-03-2007)

Identifiers

  • HAL Id : halshs-00137947 , version 1

Cite

Frédéric Landragin. Visual perception, language and gesture: A model for their understanding in multimodal dialogue systems. Signal Processing, 2006, 86 (12), pp.3578-3595. ⟨halshs-00137947⟩