854 results for Football video annotation


Relevance: 100.00%

Abstract:

This paper presents a semi-parametric algorithm for parsing football video structures. The approach is based on two interleaved processes that collaborate closely towards a common goal. The core of the proposed method performs fast automatic football video annotation by examining the enhanced entropy variance within a series of shot frames. The entropy is extracted from the Hue channel of the HSV color space, not as a global feature but in the spatial domain, in order to identify regions within a shot that characterize a certain activity during the shot period. The second part of the algorithm identifies dominant color regions that could represent players and the playfield for further activity recognition. Experimental results show that the proposed football video segmentation algorithm performs with high accuracy.
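The per-region entropy idea can be sketched roughly as follows. This is a minimal illustration only: the function names, block size, bin count, and variance threshold are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def hue_block_entropy(hue, block=32, bins=16):
    """Entropy of the Hue channel computed per spatial block, not globally.

    `hue` is a 2-D array of hue values normalised to [0, 1]."""
    h, w = hue.shape
    ent = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = hue[i * block:(i + 1) * block, j * block:(j + 1) * block]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

def active_regions(frames, thresh=0.5):
    """Blocks whose hue entropy varies strongly across a shot's frames
    are flagged as candidate activity regions."""
    ent = np.stack([hue_block_entropy(f) for f in frames])
    return ent.var(axis=0) > thresh
```

A block whose content stays uniform across the shot has near-zero entropy variance, while a block that players repeatedly cross alternates between low- and high-entropy states and is flagged.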

Relevance: 100.00%

Abstract:

A video annotation system includes clip organization, feature description and pattern determination. This paper presents a system for basketball zone-defence detection. In particular, a character-angle based descriptor for feature description is proposed. Strong experimental results in basketball zone-defence detection demonstrate that the descriptor is robust in both simulations and real-life cases, and is less sensitive to the distortion caused by local translation of subprime defenders. The framework can easily be applied to other team sports.

Relevance: 100.00%

Abstract:

Automatic video segmentation plays a vital role in sports video annotation. This paper presents a fully automatic and computationally efficient algorithm for the analysis of sports videos. Various methods of automatic shot boundary detection have been proposed to perform automatic video segmentation. These investigations mainly concentrate on detecting fades and dissolves for fast processing of the entire video scene, without providing any additional feedback on object relativity within the shots. The goal of the proposed method is to identify regions that perform certain activities in a scene. The model uses low-level video processing algorithms to extract the shot boundaries from a video scene and to identify dominant colours within these boundaries. An object classification method is used for clustering the seed distributions of the dominant colours into homogeneous regions. Using a simple tracking method, these regions are classified as active or static. The efficiency of the proposed framework is demonstrated on a standard video benchmark containing numerous types of sports events, and the experimental results show that our algorithm can be used with high accuracy for automatic annotation of active regions in sports videos.
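The two building blocks named here, dominant-colour extraction and the active/static classification of tracked regions, can be sketched as below. The histogram-peak approach and the centroid-displacement threshold are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def dominant_color_mask(hue, bins=16):
    """Mask of pixels falling in the dominant hue bin (e.g. the playfield).

    `hue` is a 2-D array of hue values normalised to [0, 1]."""
    hist, edges = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    k = hist.argmax()
    return (hue >= edges[k]) & (hue < edges[k + 1])

def classify_region(centroids, motion_thresh=2.0):
    """Label a tracked region 'active' if its centroid moves more than
    `motion_thresh` pixels per frame on average, else 'static'."""
    c = np.asarray(centroids, dtype=float)
    step = np.linalg.norm(np.diff(c, axis=0), axis=1).mean()
    return "active" if step > motion_thresh else "static"
```

A region whose centroid barely drifts across the shot (an advertising board, the pitch itself) is labelled static; a region that moves consistently (a player) is labelled active.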

Relevance: 90.00%

Abstract:

Clustering identities in broadcast video is a useful aid to video annotation and retrieval. Quality-based frame selection is a crucial step in video face clustering, both to improve clustering performance and to reduce computational cost. We present a framework that selects the highest-quality frames available in a video for face clustering. The frame selection technique is based on low-level and high-level features (face symmetry, sharpness, contrast and brightness) to select the highest-quality facial images available in a face sequence. We also consider the temporal distribution of the faces to ensure that the selected faces are taken at times distributed throughout the sequence. Normalized feature scores are fused, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face clustering system. We present a news video database to evaluate the clustering system's performance. Experiments on the newly created news database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved clustering performance.
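The score-fusion step can be sketched as follows. Equal-weight averaging of min-max-normalized cues is an assumption for illustration, and the temporal-distribution constraint described in the abstract is omitted here for brevity.

```python
import numpy as np

def normalize(scores):
    """Min-max normalise one quality cue over all frames in the sequence."""
    s = np.asarray(scores, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)

def select_frames(cues, k=3):
    """Fuse normalised cue scores (e.g. symmetry, sharpness, contrast,
    brightness), one list per cue, and return indices of the k best frames."""
    fused = np.mean([normalize(c) for c in cues], axis=0)
    return np.argsort(fused)[::-1][:k]
```

Normalizing each cue before fusing prevents a cue with a large numeric range (e.g. raw sharpness) from dominating cues on a [0, 1] scale.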

Relevance: 90.00%

Abstract:

A novel, fast automatic motion segmentation approach is presented. It differs from conventional pixel- or edge-based motion segmentation approaches in that the proposed method uses labelled regions (facets) to segment video objects from the background. Facets are clustered into objects based on their motion and proximity using Bayesian logic. Because the number of facets is usually much lower than the number of edges and points, using facets can greatly reduce the computational complexity of motion segmentation. The proposed method efficiently tackles the complexity of video object motion tracking, and offers potential for real-time content-based video annotation.

Relevance: 90.00%

Abstract:

Individual Video Training (iVT) and Annotating Academic Videos (AAV): two complementary technologies.

1. Recording communication skills training sessions and reviewing them by oneself, with peers, and with tutors has become standard in medical education. Increasing numbers of students, paired with restrictions on financial and human resources, create a big obstacle to this important teaching method.

2. Everybody who wants to increase the efficiency and effectiveness of communication training can get new ideas from our technical solution.

3. Our goal was to increase the effectiveness of communication skills training by supporting self, peer and tutor assessment over the Internet. Two technologies of SWITCH, the national foundation that supports IT solutions for Swiss universities, came in handy for our project. The first is the authentication and authorization infrastructure providing all Swiss students with a nationwide single login. The second is SWITCHcast, which allows automated recording, upload and publication of videos on the Internet. Students start the recording system by entering their single login, which automatically links the video with their password. Within a few hours, they find their video, password protected, on the Internet. They can then give access to peers and tutors. Additionally, an annotation interface was developed. This software has free-text as well as checklist annotation capabilities. Tutors as well as students can create checklists; tutors' checklists are not editable by students. Annotations are linked to tracks, which can be private or public ("public" meaning visible to all who have access to the video). Annotation data can be exported for statistical evaluation.

4. The system was well received by students and tutors. Large numbers of videos were processed simultaneously without any problems.

5. iVT: http://www.switch.ch/aaa/projects/detail/UNIBE.7 AAV: http://www.switch.ch/aaa/projects/detail/ETHZ.9

Relevance: 80.00%

Abstract:

Dance videos are interesting and semantics-intensive. At the same time, they are the most complex type of video compared to other types such as sports, news and movie videos. In fact, dance video is among the genres least explored by researchers across the globe. Dance videos exhibit rich semantics, such as macro features and micro features, and can be classified into several types. Hence, the conceptual modeling of the expressive semantics of dance videos is crucial and complex. This paper presents a generic Dance Video Semantics Model (DVSM) to represent the semantics of dance videos at different granularity levels, identified by the components of the accompanying song. The model incorporates both syntactic and semantic features of the videos and introduces a new entity type, called Agent, to specify the micro features of dance videos. Instantiations of the model are expressed as graphs. The model is implemented as a tool using J2SE and JMF to annotate the macro and micro features of dance videos. Finally, examples and evaluation results are provided to demonstrate the effectiveness of the proposed dance video model.

Relevance: 80.00%

Abstract:

In this seminar, I will share my experience of the early process of becoming an entrepreneur from a research background. Since 2008, I have been working with Prof. Mike Wald on an innovative video annotation tool called Synote. After about eight years of research around Synote, I applied for the Royal Academy of Engineering Enterprise Fellowship in order to focus on developing Synote for real clients and making it sustainable and profitable. Now, eight months into the fellowship, it has totally changed my life. It is very exciting, but at the same time I'm struggling all the time. The seminar will briefly go through my experience so far of commercializing Synote from a research background. I will also discuss the valuable resources you can get from the RAEng Enterprise Hub and Future Worlds, a Southampton-based organization that helps startups. If you are a Ph.D. student or research fellow at the University and you want to start your own business, this is the seminar you want to attend.

Relevance: 30.00%

Abstract:

A pressing concern within the literature on anticipatory perceptual-motor behaviour is the lack of clarity on the applicability of data, observed under video-simulation task constraints, to actual performance in which actions are coupled to perception, as captured under in-situ experimental conditions. We developed an in-situ experimental paradigm that manipulated the duration of anticipatory visual information from a penalty taker's actions to examine experienced goalkeepers' vulnerability to deception in the association football penalty kick. Irrespective of the penalty taker's kick strategy, goalkeepers initiated movement responses earlier across consecutively earlier presentation points. Overall goalkeeping performance was better in non-deception trials than in deception trials. In deception trials, the kinematic information presented up until the penalty taker initiated his or her kicking action had a negative effect on goalkeepers' performance. It is concluded that goalkeepers are likely to benefit from not anticipating a penalty taker's performance outcome based on information from the run-up, in preference to later information that emerges just before the initiation of the penalty taker's kicking action.

Relevance: 30.00%

Abstract:

Gaze and movement behaviors of association football goalkeepers were compared under two video simulation conditions (i.e., verbal and joystick movement responses) and three in situ conditions (i.e., verbal, simplified body movement, and interceptive response). The results showed that the goalkeepers spent more time fixating on information from the penalty kick taker’s movements than ball location for all perceptual judgment conditions involving limited movement (i.e., verbal responses, joystick movement, and simplified body movement). In contrast, an equivalent amount of time was spent fixating on the penalty taker’s relative motions and the ball location for the in situ interception condition, which required the goalkeepers to attempt to make penalty saves. The data suggest that gaze and movement behaviors function differently, depending on the experimental task constraints selected for empirical investigations. These findings highlight the need for research on perceptual-motor behaviors to be conducted in representative experimental conditions to allow appropriate generalization of conclusions to performance environments.

Relevance: 30.00%

Abstract:

Viewer interest evoked by video content can potentially identify the highlights of a video. This paper explores the use of viewers' facial expressions (FE) and heart rate (HR), captured using a camera and a non-strapped sensor, to identify interesting video segments. Data from ten subjects watching three videos showed that these signals are viewer-dependent and not synchronized with the video contents. To address this issue, new algorithms are proposed to effectively combine FE and HR signals to identify the times when viewer interest is potentially high. The results show that, compared with subjective annotation and match-report highlights, 'non-neutral' FE and 'relatively higher and faster' HR are able to capture 60%-80% of goal, foul, and shot-on-goal soccer video events. FE is found to be more indicative of viewer interest than HR, but the fusion of the two modalities outperforms each of them individually.
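One crude way to combine the two cues described here is an AND-fusion: flag a time step only when the expression is non-neutral and the heart rate is both relatively high (z-score) and rising. The thresholds and the z-score/gradient formulation are assumptions for this sketch, not the paper's actual algorithm.

```python
import numpy as np

def interest_segments(fe_neutral_prob, hr, fe_thresh=0.5, z_thresh=0.5):
    """Flag time steps where the viewer's facial expression is non-neutral
    AND the heart rate is both 'relatively higher' and 'faster' (rising).

    `fe_neutral_prob`: per-step probability that the expression is neutral.
    `hr`: per-step heart rate samples."""
    hr = np.asarray(hr, dtype=float)
    non_neutral = np.asarray(fe_neutral_prob) < fe_thresh
    z = (hr - hr.mean()) / hr.std()   # 'relatively higher' than the viewer's baseline
    rising = np.gradient(hr) > 0      # 'faster': heart rate increasing
    return non_neutral & (z > z_thresh) & rising
```

Because the abstract notes that the signals are viewer-dependent, the HR cue is normalised against each viewer's own mean and standard deviation rather than an absolute threshold.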