4 results for Video Processing

in University of Queensland eSpace - Australia

Relevance:

70.00%

Publisher:

Abstract:

With rapid advances in video processing technologies and ever-increasing network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of user interest. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, where each frame is typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses the following major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurement, and (c) efficient indexing on the compact representations. In this paper, we propose a number of methods to achieve fast similarity search over very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). A ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by the two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences, greatly reducing the time complexity of the video similarity measure. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique that rotates and shifts the original axis system using PCA so that the original inter-distance between two high-dimensional vectors is maximally retained after mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions.
Such a transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on large real video datasets demonstrate the effectiveness of our proposals, which significantly outperform existing methods.
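The core of the ViTri similarity above is "intersection volume times minimal density". A minimal sketch of that idea, assuming clusters are given directly as (center, radius, density) triples and using a Monte Carlo estimate of the hypersphere intersection volume (the paper's exact computation is not reproduced here):

```python
import math
import random

def hypersphere_intersection_volume(c1, r1, c2, r2, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the intersection volume of two d-dimensional
    hyperspheres: sample uniformly in the bounding box of sphere 1 and count
    the fraction of points falling inside both spheres."""
    d = len(c1)
    rng = random.Random(seed)
    box_volume = (2.0 * r1) ** d
    hits = 0
    for _ in range(n_samples):
        p = [c1[i] + rng.uniform(-r1, r1) for i in range(d)]
        in_s1 = sum((p[i] - c1[i]) ** 2 for i in range(d)) <= r1 * r1
        in_s2 = sum((p[i] - c2[i]) ** 2 for i in range(d)) <= r2 * r2
        if in_s1 and in_s2:
            hits += 1
    return box_volume * hits / n_samples

def vitri_similarity(c1, r1, dens1, c2, r2, dens2):
    """Estimated number of similar frames shared by two clusters:
    intersection volume multiplied by the minimal density, per the ViTri model."""
    return hypersphere_intersection_volume(c1, r1, c2, r2) * min(dens1, dens2)
```

Summing this cluster-pair estimate over all ViTri pairs of two sequences would give the overall video similarity described in the abstract.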

Relevance:

30.00%

Publisher:

Abstract:

Background: Flexible video bronchoscopes, in particular the Olympus BF Type 3C160, are commonly used in pediatric respiratory medicine. There are no data on the magnification and distortion effects of these bronchoscopes, yet important clinical decisions are made from their images. The aim of this study was to systematically describe the magnification and distortion of flexible bronchoscope images taken at various distances from the object. Methods: Using images of known objects, processed by digital video and computer programs, both magnification and distortion scales were derived. Results: Magnification changes as a linear function between 100 mm (×1) and 10 mm (×9.55), and then as an exponential function between 10 mm and 3 mm (×40) from the object. Magnification depends on the orientation of the object relative to the optic (geometric) axis of the bronchoscope. Magnification also varies across the field of view, with the central magnification being 39% greater than at the periphery of the field of view at 15 mm from the object. However, in the pediatric setting the diameter of the orifices is usually less than 10 mm, which limits exposure to these peripheral reductions in magnification. Intraclass correlations for measurements and repeatability studies between instruments are very high (r = 0.96). Distortion occurs as both barrel and geometric types, but both are heterogeneous across the field of view. Geometric distortion ranges up to 30% at 3 mm from the object but may be as low as 5%, depending on the position of the object relative to the optic axis. Conclusion: We conclude that the optimal working distance is between 40 and 10 mm from the object. However, the clinician should be cognisant of both variations in magnification and distortion when making clinical judgements.
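The reported magnification curve can be illustrated numerically. This is an interpolation assuming only the endpoints quoted in the abstract (×1 at 100 mm, ×9.55 at 10 mm, ×40 at 3 mm); the authors' actual fitted coefficients are not given here, so the intermediate values are approximations, not the study's fit:

```python
import math

def magnification(d_mm):
    """Illustrative piecewise magnification model vs. working distance (mm):
    linear from (100 mm, x1) to (10 mm, x9.55), then exponential from
    (10 mm, x9.55) to (3 mm, x40). Endpoints only are taken from the study."""
    if 10.0 <= d_mm <= 100.0:
        # Linear interpolation between the two reported endpoints.
        return 1.0 + (9.55 - 1.0) * (100.0 - d_mm) / 90.0
    if 3.0 <= d_mm < 10.0:
        # Exponential growth rate chosen so the curve passes through x40 at 3 mm.
        k = math.log(40.0 / 9.55) / (10.0 - 3.0)
        return 9.55 * math.exp(k * (10.0 - d_mm))
    raise ValueError("model only covers working distances of 3-100 mm")
```

At the recommended 40-10 mm working range this model stays within the linear regime, consistent with the conclusion that magnification there is more predictable than at very short distances.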

Relevance:

30.00%

Publisher:

Abstract:

With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of the frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. The OVA-File is a hierarchical structure with two novel features: 1) it partitions the whole file into slices such that only a small number of slices need to be accessed and checked during k-nearest-neighbor (kNN) search, and 2) it handles insertions of new vectors efficiently, such that the average distance between a new vector and the approximations near its position is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW), based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta; by adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Query by video clip, consisting of multiple frames, is also discussed. Extensive experimental studies on real video data sets show that our methods yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieve even more efficient performance.
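The slice-ranking idea behind OVA-LOW can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: slices are assumed to be plain (center, vectors) pairs holding raw vectors rather than the VA-file's bit-string approximations, and `delta` plays the same cost/quality role described above:

```python
import heapq
import math

def euclid(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ova_low_knn(slices, query, k, delta):
    """Simplified OVA-LOW-style search: rank slices by the distance from
    their center to the query, keep only the delta nearest slices, then
    scan their contents for the k nearest vectors. Larger delta raises
    query cost but improves result quality."""
    ranked = sorted(slices, key=lambda s: euclid(s[0], query))
    candidates = [v for _, vectors in ranked[:delta] for v in vectors]
    return heapq.nsmallest(k, candidates, key=lambda v: euclid(v, query))
```

With `delta` equal to the total number of slices the scan degenerates to exact kNN over all vectors; small `delta` prunes distant slices at the risk of missing true neighbors near slice boundaries.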

Relevance:

30.00%

Publisher: