Abstract of publication no. 15103

With ongoing efforts to improve human–computer interaction, there has been growing interest in integrating human gestures into human–computer interfaces. This paper presents a model of a sign language recognition system, framed as a dialogue between deaf people and a signing avatar. With this model, the system becomes configurable: the general model is kept unchanged while only the scenario and the vocabulary are replaced. We have added two important elements to this model, context and prediction, to improve the reliability of the sign language recognition system compared with classic systems that do not use semantic concepts.
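
The idea of a fixed general model with a swappable scenario and vocabulary, where dialogue context re-ranks recognition hypotheses, can be sketched as follows. All names here (the `scenario` table, `rerank`, the example signs, the weighting `alpha`) are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: a configurable recognizer where the scenario maps
# each dialogue state to the signs expected next, and context-based
# prediction re-ranks the recognizer's raw (context-free) scores.

scenario = {
    "greeting": {"HELLO": 0.6, "GOODBYE": 0.1, "THANKS": 0.3},
    "question": {"WHERE": 0.5, "WHEN": 0.3, "PRICE": 0.2},
}

def rerank(raw_scores, state, scenario, alpha=0.5):
    """Combine raw recognition scores with context-based predictions.

    raw_scores: sign -> score from the context-free recognizer.
    state: current dialogue state; signs not expected in this state
    of the scenario are down-weighted via a small floor prior.
    """
    expected = scenario.get(state, {})
    combined = {}
    for sign, score in raw_scores.items():
        prior = expected.get(sign, 0.01)  # small floor for unexpected signs
        combined[sign] = alpha * score + (1 - alpha) * prior
    return max(combined, key=combined.get)

# The raw recognizer slightly prefers GOODBYE, but in a greeting
# context the prediction step corrects the output to HELLO.
raw = {"HELLO": 0.45, "GOODBYE": 0.5, "THANKS": 0.05}
print(rerank(raw, "greeting", scenario))  # → HELLO
```

Changing the application only means supplying a new `scenario` table and vocabulary; the recognition and re-ranking logic stays the same, which mirrors the configurability claimed in the abstract.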