789 results for Video annotation
Abstract:
A video annotation system comprises clip organization, feature description and pattern determination. This paper presents a system for basketball zone-defence detection; in particular, a character-angle-based descriptor for feature description is proposed. Experimental results on basketball zone-defence detection demonstrate that the descriptor is robust in both simulated and real-life cases, with low sensitivity to the disturbance caused by local translation of subprime defenders. The framework can easily be applied to other team sports.
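The abstract does not define the character-angle descriptor itself, so the following is only a rough illustrative sketch of an angle-based positional descriptor: a histogram of pairwise orientations between defender positions on the court. The function name, the example 2-3 zone coordinates and the use of numpy are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: an angle-based positional descriptor for a defensive
# formation, assuming each defender is reduced to a 2D court coordinate.
# The paper's actual "character-angle" definition may differ.
import numpy as np

def angle_descriptor(positions, bins=8):
    """Histogram of pairwise orientations between defender positions (radians)."""
    positions = np.asarray(positions, dtype=float)
    angles = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx, dy = positions[j] - positions[i]
            angles.append(np.arctan2(dy, dx) % np.pi)  # orientation, not direction
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi), density=True)
    return hist

# Example: five defenders in a 2-3 zone-like layout (court coordinates in metres)
zone_2_3 = [(2, 4), (6, 4), (1, 1), (4, 0.5), (7, 1)]
print(angle_descriptor(zone_2_3))
```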
Abstract:
Automatic video segmentation plays a vital role in sports video annotation. This paper presents a fully automatic and computationally efficient algorithm for the analysis of sports videos. Various methods of automatic shot boundary detection have been proposed to perform automatic video segmentation. These investigations mainly concentrate on detecting fades and dissolves for fast processing of the entire video scene, without providing any additional feedback on object relativity within the shots. The goal of the proposed method is to identify regions that perform certain activities in a scene. The model uses low-level video processing algorithms to extract the shot boundaries from a video scene and to identify dominant colours within these boundaries. An object classification method is used for clustering the seed distributions of the dominant colours into homogeneous regions. Using a simple tracking method, these regions are classified as active or static. The efficiency of the proposed framework is demonstrated on a standard video benchmark with numerous types of sport events, and the experimental results show that the algorithm can be used with high accuracy for automatic annotation of active regions in sports videos.
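As a minimal sketch of the kind of low-level processing described above, the snippet below detects hard cuts from frame-to-frame Hue-histogram differences using OpenCV. The threshold, histogram size and Bhattacharyya comparison are illustrative assumptions rather than the paper's parameters, and the dominant-colour clustering and tracking stages are omitted.

```python
# Illustrative shot boundary detection via colour-histogram differences.
import cv2
import numpy as np

def shot_boundaries(path, threshold=0.5):
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
        hist = cv2.normalize(hist, None, alpha=1.0, norm_type=cv2.NORM_L1).flatten()
        if prev_hist is not None:
            # a large histogram change between consecutive frames -> likely hard cut
            diff = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if diff > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```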
Abstract:
This paper presents a semi-parametric algorithm for parsing football video structures. The approach works as two interleaved processes that closely collaborate towards a common goal. The core part of the proposed method performs fast automatic football video annotation by examining the entropy variance within a series of shot frames. The entropy is extracted from the Hue component of the HSV color system, not as a global feature but in the spatial domain, to identify regions within a shot that characterize a certain activity during the shot period. The second part of the algorithm works towards the identification of dominant color regions that could represent players and the playfield for further activity recognition. Experimental results show that the proposed football video segmentation algorithm performs with high accuracy.
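A rough sketch of the spatial Hue-entropy feature described above might look as follows: the frame is divided into a grid and the Shannon entropy of the Hue channel is computed per block, so blocks with mixed or changing content stand out. The grid size and bin count are assumptions, not values from the paper.

```python
# Block-wise Shannon entropy of the Hue channel (illustrative parameters).
import cv2
import numpy as np

def block_hue_entropy(frame_bgr, grid=(8, 8), bins=32):
    hue = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    h, w = hue.shape
    bh, bw = h // grid[0], w // grid[1]
    ent = np.zeros(grid)
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = hue[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            counts, _ = np.histogram(block, bins=bins, range=(0, 180))
            p = counts / max(counts.sum(), 1)
            p = p[p > 0]
            ent[r, c] = -np.sum(p * np.log2(p))   # Shannon entropy per block
    return ent
```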
Abstract:
Clustering identities in a broadcast video is a useful task to aid in video annotation and retrieval. Quality-based frame selection is a crucial task in video face clustering, both to improve clustering performance and to reduce computational cost. We present a framework that selects the highest-quality frames available in a video for face clustering. This frame selection technique is based on low-level and high-level features (face symmetry, sharpness, contrast and brightness) to select the highest-quality facial images available in a face sequence for clustering. We also consider the temporal distribution of the faces to ensure that selected faces are taken at times distributed throughout the sequence. Normalized feature scores are fused, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face clustering system. We present a news video database to evaluate the clustering system's performance. Experiments on the newly created news database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved clustering performance.
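The quality-based selection idea can be illustrated with a short sketch: per-frame sharpness, contrast and brightness scores are min-max normalised and fused by simple averaging, and the top-k frames are kept. The face-symmetry measure, the actual fusion weights and the clustering stage used in the paper are not reproduced; function names and parameters here are purely illustrative.

```python
# Fuse normalised frame-quality scores and keep the top-k frames.
import cv2
import numpy as np

def quality_scores(gray_frames):
    sharp = [cv2.Laplacian(f, cv2.CV_64F).var() for f in gray_frames]  # focus measure
    contrast = [f.std() for f in gray_frames]
    bright = [f.mean() for f in gray_frames]
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-9)   # min-max normalisation
    return (norm(sharp) + norm(contrast) + norm(bright)) / 3.0

def select_frames(gray_frames, k=5):
    scores = quality_scores(gray_frames)
    return np.argsort(scores)[::-1][:k]   # indices of the k highest-quality frames
```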
Abstract:
A novel, fast automatic motion segmentation approach is presented. It differs from conventional pixel- or edge-based motion segmentation approaches in that the proposed method uses labelled regions (facets) to segment various video objects from the background. Facets are clustered into objects based on their motion and proximity using Bayesian logic. Because the number of facets is usually much lower than the number of edges and points, using facets can greatly reduce the computational complexity of motion segmentation. The proposed method can efficiently tackle the complexity of video object motion tracking, and offers potential for real-time content-based video annotation.
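Below is a toy sketch of the facet-grouping idea, with the paper's Bayesian formulation replaced by plain distance thresholds on motion similarity and spatial proximity; both thresholds and the union-find merging are arbitrary assumptions for illustration.

```python
# Group region "facets" into objects by motion similarity and spatial proximity.
import numpy as np

def cluster_facets(centroids, motions, pos_thresh=50.0, mot_thresh=2.0):
    """centroids, motions: (N, 2) arrays. Returns one object label per facet."""
    centroids, motions = np.asarray(centroids, float), np.asarray(motions, float)
    n = len(centroids)
    parent = list(range(n))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(centroids[i] - centroids[j]) < pos_thresh
            similar = np.linalg.norm(motions[i] - motions[j]) < mot_thresh
            if close and similar:
                parent[find(j)] = find(i)   # merge the two facets' objects

    roots = [find(i) for i in range(n)]
    return [sorted(set(roots)).index(r) for r in roots]   # compact labels
```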
Abstract:
Individual Video Training (iVT) and Annotating Academic Videos (AAV): two complementary technologies. 1. Recording communication skills training sessions and reviewing them by oneself, with peers, and with tutors has become standard in medical education. Increasing numbers of students paired with restrictions on financial and human resources create a big obstacle to this important teaching method. 2. Anyone who wants to increase the efficiency and effectiveness of communication training can get new ideas from our technical solution. 3. Our goal was to increase the effectiveness of communication skills training by supporting self, peer and tutor assessment over the Internet. Two technologies of SWITCH, the national foundation that supports IT solutions for Swiss universities, came in handy for our project. The first is the authentication and authorization infrastructure providing all Swiss students with a nationwide single login. The second is SWITCHcast, which allows automated recording, upload and publication of videos on the Internet. Students start the recording system by entering their single login, which automatically links the video to their account. Within a few hours, they find their video, password protected, on the Internet. They can then give access to peers and tutors. Additionally, an annotation interface was developed. This software has free-text as well as checklist annotation capabilities. Tutors as well as students can create checklists; tutors' checklists are not editable by students. Annotations are linked to tracks, which can be private or public. Public means visible to all who have access to the video. Annotation data can be exported for statistical evaluation. 4. The system was well received by students and tutors. Large numbers of videos were processed simultaneously without any problems. 5. iVT http://www.switch.ch/aaa/projects/detail/UNIBE.7 AAV http://www.switch.ch/aaa/projects/detail/ETHZ.9
Abstract:
Dance videos are interesting and semantics-intensive. At the same time, they are among the most complex types of video compared to other types such as sports, news and movie videos. In fact, dance video is one of the video types least explored by researchers across the globe. Dance videos exhibit rich semantics, such as macro features and micro features, and can be classified into several types. Hence, the conceptual modeling of the expressive semantics of dance videos is both crucial and complex. This paper presents a generic Dance Video Semantics Model (DVSM) to represent the semantics of dance videos at different granularity levels, identified by the components of the accompanying song. The model incorporates both syntactic and semantic features of the videos and introduces a new entity type called Agent to specify the micro features of dance videos. Instantiations of the model are expressed as graphs. The model is implemented as a tool using J2SE and JMF to annotate the macro and micro features of dance videos. Finally, examples and evaluation results are provided to demonstrate the effectiveness of the proposed dance video model.
Abstract:
In this seminar, I will share my experience in the early process of becoming an entrepreneur from a research background. Since 2008, I have been working with Prof. Mike Wald on an innovative video annotation tool called Synote. After about eight years of research around Synote, I applied for the Royal Academy of Engineering Enterprise Fellowship in order to focus on developing Synote for real clients and making Synote sustainable and profitable. Now, eight months into the fellowship, my life has totally changed. It is very exciting, but at the same time I am struggling all the time. The seminar will briefly go through my experience so far on the way to commercializing Synote from a research background. I will also discuss the valuable resources you can get from the RAEng Enterprise Hub and Future Worlds, a Southampton-based organization that helps startups. If you are a Ph.D. student or research fellow at the University and you want to start your own business, this is the seminar you want to attend.
Abstract:
Viewer interests, evoked by video content, can potentially identify the highlights of the video. This paper explores the use of facial expressions (FE) and heart rate (HR) of viewers, captured using a camera and a non-strapped sensor, for identifying interesting video segments. Data from ten subjects watching three videos showed that these signals are viewer-dependent and not synchronized with the video contents. To address this issue, new algorithms are proposed to effectively combine FE and HR signals for identifying the times when viewer interest is potentially high. The results show that, compared with subjective annotation and match report highlights, 'non-neutral' FE and 'relatively higher and faster' HR are able to capture 60%-80% of goal, foul, and shot-on-goal soccer video events. FE is found to be more indicative of viewers' interests than HR, but the fusion of the two modalities outperforms each of them.
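A hedged sketch of one possible fusion rule follows: a time window is flagged as potentially interesting when the facial expression is mostly non-neutral and the heart rate is above the viewer's own baseline. The window length, the median baseline and the AND-style fusion are illustrative assumptions, not the paper's exact algorithm.

```python
# Flag windows where FE is non-neutral and HR exceeds the viewer's baseline.
import numpy as np

def interesting_windows(fe_labels, hr_bpm, window=30, hr_factor=1.05):
    """fe_labels: per-frame expression labels; hr_bpm: per-frame heart rate."""
    fe_labels, hr_bpm = np.asarray(fe_labels), np.asarray(hr_bpm, dtype=float)
    baseline = np.median(hr_bpm)                     # viewer-dependent baseline
    flags = []
    for start in range(0, len(fe_labels) - window + 1, window):
        fe_win = fe_labels[start:start + window]
        hr_win = hr_bpm[start:start + window]
        non_neutral = np.mean(fe_win != "neutral") > 0.5
        elevated = hr_win.mean() > hr_factor * baseline
        if non_neutral and elevated:                 # simple AND-fusion of FE and HR
            flags.append((start, start + window))
    return flags
```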
Abstract:
Clustering identities in a video is a useful task to aid in video search, annotation and retrieval, and cast identification. However, reliably clustering faces across multiple videos is a challenging task due to variations in the appearance of the faces, as videos are captured in an uncontrolled environment. A person's appearance may vary due to session variations including lighting and background changes, occlusions, and changes in expression and make-up. In this paper we propose the novel Local Total Variability Modelling (Local TVM) approach to cluster faces across a news video corpus, and incorporate it into a novel two-stage video clustering system. We first cluster faces within a single video using colour, spatial and temporal cues, after which we use face track modelling and hierarchical agglomerative clustering to cluster faces across the entire corpus. We compare different face recognition approaches within this framework. Experiments on a news video database show that the Local TVM technique is able to effectively model the session variation observed in the data, resulting in improved clustering performance with much greater computational efficiency than other methods.
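The second (cross-corpus) stage can be sketched as hierarchical agglomerative clustering over fixed-length face-track vectors. The Local TVM step that would produce those vectors is not reproduced here; the cosine metric, average linkage and distance threshold are assumptions, and random descriptors stand in for real face-track features.

```python
# Hierarchical agglomerative clustering of face-track feature vectors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_face_tracks(track_features, distance_threshold=0.5):
    """track_features: (num_tracks, dim) array, one vector per face track."""
    dists = pdist(track_features, metric="cosine")        # pairwise track distances
    tree = linkage(dists, method="average")               # agglomerative merging
    return fcluster(tree, t=distance_threshold, criterion="distance")

# Example with random stand-in descriptors for 10 face tracks
labels = cluster_face_tracks(np.random.rand(10, 128))
print(labels)
```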
Abstract:
For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, due to the lack of a writing system and a written culture in these communities, and the very recent advent of digital video. Recent improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable are still elusive. This is due to two challenges: 1. inconsistencies that arise when signs are annotated by means of spoken/written language; and 2. the fact that many parts of signed interaction are not necessarily fully composed of lexical signs (the equivalent of words), instead consisting of constructions that are less conventionalised. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge, but before this project there were no attempts to standardise these practices across corpora, which is required to be able to compare data cross-linguistically. This project thus had the following aims: 1. to develop annotation standards for glosses (lexical/word level); 2. to test their reliability and validity; 3. to improve current software tools that facilitate a reliable workflow. Overall the project aimed not only to set a standard for the whole field of sign language studies throughout the world but also to make significant advances toward two of the world's largest machine-readable datasets for sign languages, specifically the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).
Abstract:
General simulated scenes. These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during irradiation experiments, where a material sample is placed inside the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples in these locations were not reproduced; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of Argonauta to verify some operational condition. - Scene 1 (Note: Scene 1 of the "General simulated scenes" class): comprises one of the scenes with two persons, both wearing light-coloured clothes. Both persons remain still in front of "J9"; one goes to the computer and then comes back, and both go out. Video file labels: "20140326145315_IPCAM": recorded by the right camera,