Modeling Context for Referring in Multimodal Dialogue Systems
Conference paper, Year: 2005


Abstract

The way we see the objects around us determines the speech and gestures we use to refer to them; the gestures we produce structure our visual perception; and the words we use influence the way we see. Visual perception, language, and gesture thus interact with one another in multiple ways. The problem is global and must be tackled as a whole in order to understand the complexity of reference phenomena and to derive a formal model. Such a model may be useful for any human-machine dialogue system that aims at deep comprehension. We show how a referring act takes place within a contextual subset of objects, called a 'reference domain,' and we present the 'multimodal reference domain' model, which a dialogue system can exploit during interpretation.
File not deposited

Dates and versions

halshs-00137695, version 1 (21-03-2007)

Identifiers

  • HAL Id : halshs-00137695 , version 1

Cite

Frédéric Landragin. Modeling Context for Referring in Multimodal Dialogue Systems. 2005, pp.240-253. ⟨halshs-00137695⟩
