84 results for ALEPH training sessions
Abstract:
We introduce a new algorithm to automatically identify the time and pixel location of foot contact events in high-speed video of sprinters. We use this information to autonomously synchronise and overlay multiple recorded performances to provide feedback to athletes and coaches during their training sessions. The algorithm exploits the variation in speed of different parts of the body during sprinting. We use an array of foreground accumulators to identify short-term static pixels and a temporal analysis of the associated static regions to identify foot contacts. We evaluated the technique using 13 videos of three sprinters. It successfully identified 55 of the 56 contacts, with a mean localisation error of 1.39±1.05 pixels. Some videos also produced additional, spurious contacts; we present heuristics to help identify the true contacts. © 2011 Springer-Verlag Berlin Heidelberg.
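The abstract's central idea is that a sprinter's foot is briefly stationary while everything else moves, so a per-pixel counter that grows while a foreground pixel stays unchanged will highlight contact regions. The sketch below is an illustration of that "foreground accumulator" idea under assumed details (the difference threshold, minimum static duration, and the provided foreground mask are not taken from the paper):

```python
# Minimal sketch of a foreground-accumulator update (assumed details, not the
# authors' implementation): counters grow while a foreground pixel is unchanged
# between consecutive frames and reset as soon as it moves, so a briefly static
# foot appears as a compact region of high counts.
import numpy as np

def update_accumulators(acc, prev_frame, frame, fg_mask, diff_thresh=10):
    """Increment accumulators for foreground pixels that did not change
    between consecutive frames; reset the rest to zero."""
    unchanged = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) < diff_thresh
    static = unchanged & fg_mask                 # only count foreground pixels
    return np.where(static, acc + 1, 0)          # grow while static, reset otherwise

def candidate_contact_pixels(acc, min_frames=5):
    """Pixels static for at least `min_frames` frames are candidate
    foot-contact locations (threshold is illustrative)."""
    return acc >= min_frames
```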
Abstract:
On-body sensor systems for sport are challenging: the sensors must be lightweight and small to avoid discomfort, yet robust and highly accurate to withstand and capture the fast movements involved. In this work, we detail our experience of building such an on-body system for track athletes. The paper describes the design, implementation and deployment of an on-body sensor system for sprint training sessions. We autonomously profile sprints to derive quantitative metrics to improve training sessions. Inexpensive Force Sensitive Resistors (FSRs) are used to capture foot events that are subsequently analysed and presented back to the coach. We show how to identify periods of sprinting from the FSR data and how to compute metrics such as ground contact time. We evaluate our system using force plates and show that millisecond-level accuracy is achievable when estimating contact times. © 2012 Elsevier B.V. All rights reserved.
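One way to see how ground contact time falls out of an FSR signal is simple thresholding: a contact starts when the force signal rises above a threshold and ends when it falls back below it. The sketch below assumes a single-channel FSR trace sampled at a known rate; the threshold and signal layout are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of contact-time estimation from a thresholded FSR signal
# (illustrative assumptions, not the paper's processing pipeline).
import numpy as np

def contact_times(fsr, fs_hz, threshold):
    """Return a list of (start_s, duration_s) ground contacts."""
    on = np.asarray(fsr) > threshold              # True while the foot loads the sensor
    edges = np.diff(on.astype(np.int8))
    starts = np.where(edges == 1)[0] + 1          # rising edges: foot strike
    ends = np.where(edges == -1)[0] + 1           # falling edges: toe off
    if on[0]:                                     # trace begins mid-contact
        starts = np.insert(starts, 0, 0)
    if on[-1]:                                    # trace ends mid-contact
        ends = np.append(ends, len(on))
    return [(s / fs_hz, (e - s) / fs_hz) for s, e in zip(starts, ends)]
```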
Abstract:
A significant cost in obtaining acoustic training data is the generation of accurate transcriptions. For some sources closed-caption data is available, which allows the use of lightly-supervised training techniques. However, for other sources and languages closed captions are not available; in these cases unsupervised training techniques must be used. This paper examines the use of unsupervised techniques for discriminative training. In unsupervised training, automatic transcriptions from a recognition system are used for training. As these transcriptions may be errorful, data selection may be useful. Two forms of selection are described: one to remove non-target-language shows, the other to remove segments with low confidence. Experiments were carried out on a Mandarin transcription task. Two types of test data were considered, Broadcast News (BN) and Broadcast Conversations (BC). Results show that the gains from unsupervised discriminative training are highly dependent on the accuracy of the automatic transcriptions. © 2007 IEEE.
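The two selection steps named in the abstract (dropping non-target-language shows and dropping low-confidence segments) can be pictured as a simple filter over automatically transcribed segments before they are fed to discriminative training. The sketch below is an illustration only; the segment fields, language tag and thresholds are assumptions, not the paper's pipeline:

```python
# Minimal sketch of confidence- and language-based data selection for
# unsupervised training (field names and thresholds are assumed).
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    show_id: str
    text: str            # automatic (hypothesis) transcription
    confidence: float    # recogniser confidence for the segment
    language: str        # output of a language-ID step, e.g. "cmn"

def select_training_data(segments: List[Segment],
                         target_lang: str = "cmn",
                         min_confidence: float = 0.7) -> List[Segment]:
    """Keep only segments from target-language shows whose confidence is
    above the threshold; everything else is excluded from training."""
    return [s for s in segments
            if s.language == target_lang and s.confidence >= min_confidence]
```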