Abstract of publication no. 10377
This article addresses posture reconstruction from a single-view video of a signed utterance. Our method uses no additional sensors or visual markers. The head and the two hands are tracked with a particle filter, and the elbows are detected as local maxima of a convolution. A nonlinear filter first removes outliers; then criteria drawn from French Sign Language phonology are applied to disambiguate the hands. Posture reconstruction is achieved through inverse kinematics, using Kalman smoothing and the correlation between strong-hand and weak-hand depth that can be observed in signed utterances. The article ends with a quantitative and qualitative evaluation of the reconstruction, and we show how the results could be used in the framework of automatic Sign Language video processing.
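As a minimal sketch of the kind of Kalman smoothing the abstract mentions (not the paper's actual implementation, whose model and parameters are not given here), the following applies a 1D constant-velocity Kalman filter followed by a Rauch-Tung-Striebel backward pass to a noisy depth trajectory. The model, noise parameters, and trajectory are illustrative assumptions.

```python
import numpy as np

def kalman_rts_smooth(z, dt=1.0, q=1e-3, r=1e-2):
    """Filter noisy 1D positions z, then smooth backward (RTS).

    Illustrative constant-velocity model; q and r are assumed
    process/measurement noise levels, not values from the paper.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance

    n = len(z)
    x_pred = np.zeros((n, 2)); P_pred = np.zeros((n, 2, 2))
    x_filt = np.zeros((n, 2)); P_filt = np.zeros((n, 2, 2))

    x = np.array([z[0], 0.0]); P = np.eye(2)
    for k in range(n):
        # predict step (skipped at k=0: the prior is the initial state)
        if k > 0:
            x = F @ x
            P = F @ P @ F.T + Q
        x_pred[k], P_pred[k] = x, P
        # update step with measurement z[k]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z[k]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        x_filt[k], P_filt[k] = x, P

    # backward Rauch-Tung-Striebel pass
    x_smooth = x_filt.copy()
    for k in range(n - 2, -1, -1):
        G = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])
        x_smooth[k] = x_filt[k] + G @ (x_smooth[k + 1] - x_pred[k + 1])
    return x_smooth[:, 0]

# Usage: smooth a synthetic noisy linear depth trajectory.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 50)
noisy = truth + rng.normal(0.0, 0.05, size=50)
smoothed = kalman_rts_smooth(noisy)
```

Because the smoother uses both past and future measurements, it recovers the underlying trajectory more accurately than the raw per-frame estimates, which is why it is a natural post-processing step for per-frame posture estimates.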