Automatic 2D Hand Tracking in Video Sequences


Author(s): Yuan, Quan; Sclaroff, Stan; Athitsos, Vassilis
Date(s)

20/10/2011

02/11/2004

Abstract

In gesture and sign language video sequences, hand motion tends to be rapid, and hands frequently appear in front of each other or in front of the face. Thus, hand location is often ambiguous, and naive color-based hand tracking is insufficient. To improve tracking accuracy, some methods employ a prediction-update framework, but such methods require careful initialization of model parameters and tend to drift and lose track in extended sequences. In this paper, a temporal filtering framework for hand tracking is proposed that can initialize and reset itself without human intervention. In each frame, simple features such as color and motion residue are exploited to identify multiple candidate hand locations. The temporal filter then uses the Viterbi algorithm to select among the candidates from frame to frame. The resulting tracking system can automatically identify video trajectories of unambiguous hand motion and detect frames where tracking becomes ambiguous because of occlusions or overlaps. Experiments on video sequences several hundred frames in length demonstrate the system's ability to track hands robustly, to detect and handle tracking ambiguities, and to extract the trajectories of unambiguous hand motion.
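
As context for the candidate-selection step in the abstract, the sketch below shows how a Viterbi pass can choose one candidate hand location per frame. It is a minimal illustration under stated assumptions, not the report's implementation: the Gaussian motion prior, the log-likelihood candidate scores, and the names viterbi_track and sigma are all introduced here for illustration.

    import numpy as np

    def viterbi_track(candidates, scores, sigma=20.0):
        """Pick one candidate hand location per frame via the Viterbi algorithm.

        candidates: list over frames; entry t is an (n_t, 2) array of (x, y)
                    candidate locations (e.g. from color and motion-residue cues).
        scores:     list over frames; entry t is an (n_t,) array of
                    log-likelihood scores for those candidates.
        sigma:      assumed motion-smoothness scale in pixels (a hypothetical
                    Gaussian prior on inter-frame displacement).
        Returns one chosen candidate index per frame.
        """
        delta = scores[0].astype(float)  # best path score ending at each candidate
        backptr = []
        for t in range(1, len(candidates)):
            # Squared-distance transition cost: penalize large jumps between
            # consecutive hand locations (the assumed Gaussian motion prior).
            d = np.linalg.norm(
                candidates[t][:, None, :] - candidates[t - 1][None, :, :], axis=2)
            total = -(d ** 2) / (2.0 * sigma ** 2) + delta[None, :]
            backptr.append(total.argmax(axis=1))  # best predecessor per candidate
            delta = total.max(axis=1) + scores[t]
        # Backtrack from the best final candidate to recover the trajectory.
        path = [int(delta.argmax())]
        for bp in reversed(backptr):
            path.append(int(bp[path[-1]]))
        return path[::-1]

Calling viterbi_track(candidates, scores) returns the single highest-scoring trajectory through the per-frame candidates; frames where competing paths score nearly the same are natural points to flag as ambiguous, in the spirit of the occlusion handling the abstract describes.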

National Science Foundation (CNS-0202067, IIS-0208876, IIS-0308213, IIS-0329009); Office of Naval Research (N00014-03-1-0108)

Identifier

http://hdl.handle.net/2144/1561

Language(s)

en_US

Publisher

Boston University Computer Science Department

Relation

BUCS Technical Reports; BUCS-TR-2004-035

Type

Technical Report