Understanding and analyzing a large collection of archived swimming videos
Date(s) | 2014
Abstract |
In elite sports, nearly all performances are captured on video. Despite the massive amounts of video that have been captured in this domain over the last 10-15 years, most of it remains in an 'unstructured' or 'raw' form, meaning it can only be viewed or manually annotated/tagged with higher-level event labels, which is time-consuming and subjective. As such, depending on the detail or depth of annotation, the value of the collected repositories of archived data is minimal, as it does not lend itself to large-scale analysis and retrieval. One such example is swimming, where each race of a swimmer is captured on a camcorder and, in addition to the split times (i.e., the time it takes for each lap), stroke rates and stroke lengths are manually annotated. In this paper, we propose a vision-based system which effectively 'digitizes' a large collection of archived swimming races by estimating the location of the swimmer in each frame, as well as detecting the stroke rate. As the videos are captured from moving hand-held cameras located at different positions and angles, we show that our hierarchical approach to tracking the swimmer and their body parts is robust to these issues and allows us to accurately estimate swimmer locations and stroke rates. |
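The abstract does not describe how stroke rate is detected. As a hedged illustration only (not the paper's method), one generic way to recover a stroke rate from a tracked body-part trajectory is to find the dominant frequency of its vertical oscillation; the function, signal, and frame rate below are hypothetical:

```python
import numpy as np

def estimate_stroke_rate(y, fps):
    """Estimate strokes per minute from a periodic 1-D signal.

    y   : per-frame vertical position of a tracked body part (hypothetical input)
    fps : video frame rate in frames per second
    """
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                           # remove DC offset before the FFT
    spec = np.abs(np.fft.rfft(y))              # magnitude spectrum
    freqs = np.fft.rfftfreq(y.size, d=1.0 / fps)
    peak_hz = freqs[1:][np.argmax(spec[1:])]   # skip the zero-frequency bin
    return 60.0 * peak_hz                      # cycles/s -> strokes/min

# Synthetic check: a 0.8 Hz oscillation (48 strokes/min) sampled at 30 fps,
# with a small high-frequency component standing in for tracking jitter.
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
signal = np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.cos(2 * np.pi * 3.1 * t)
print(round(estimate_stroke_rate(signal, fps), 1))  # -> 48.0
```

In practice the tracked signal would be noisier and non-stationary, so a windowed spectrum or autocorrelation over short segments would be more robust; this sketch only shows the basic frequency-to-rate conversion.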
Identifier | |
Publisher |
IEEE |
Relation |
DOI: 10.1109/WACV.2014.6836037. Sha, Long, Lucey, Patrick, Sridharan, Sridha, Morgan, Stuart, & Pease, Dave (2014). Understanding and analyzing a large collection of archived swimming videos. In Proceedings of the 2014 IEEE Winter Conference on Applications of Computer Vision (WACV 2014), Steamboat Springs, CO, USA: IEEE, pp. 674-681. |
Source |
School of Electrical Engineering & Computer Science; Science & Engineering Faculty |
Keywords | Archived data; Hand-held cameras; Large-scale analysis; Stroke rates; Vision-based system |
Type |
Conference Paper |