Abstract:
Nonindigenous species (NIS) are a major threat to marine ecosystems, with potentially dramatic effects on biodiversity, biological productivity, habitat structure and fisheries. The Papahānaumokuākea Marine National Monument (PMNM) has taken active steps to mitigate the threats of NIS in the Northwestern Hawaiian Islands (NWHI). Of particular concern are the 13 NIS already detected in the NWHI and two invasive species found among the main Hawaiian Islands, snowflake coral (Carijoa riisei) and a red alga (Hypnea musciformis). Much of the information regarding NIS in the NWHI has been collected or informed by surveys using conventional SCUBA or fishing gear. These technologies have significant drawbacks: SCUBA is generally constrained to depths shallower than 40 m, yet several NIS of concern have been detected well below this limit (e.g., L. kasmira at 256 m), and fishing gear is highly selective. Consequently, not all habitats or species can be properly represented. Effective management of NIS requires knowledge of their spatial distribution and abundance over their entire range. Surveys that provide this requisite information can be expensive, especially in the marine environment and even more so in deep water. Technologies that minimize costs, increase the probability of detection and are capable of satisfying multiple objectives simultaneously are desired. This report examines survey technologies, with a focus on towed camera systems (TCSs), and modeling techniques that can increase NIS detection and sampling efficiency in deepwater habitats of the NWHI, thus filling a critical gap in present datasets. A pilot study conducted in 2008 at French Frigate Shoals and Brooks Banks investigated the application of TCSs for surveying NIS in habitats deeper than 40 m. Cost and data quality were assessed. Over 100 hours of video were collected, in which 124 sightings of NIS were made among benthic habitats from 20 to 250 m.
Most sightings were of a single cosmopolitan species, Lutjanus kasmira, but Cephalopholis argus and Lutjanus fulvus were also detected. The data expand the spatial distributions of observed NIS into deepwater habitats, identify algal plain as an important habitat, and complement existing data collected using SCUBA and fishing gear. The technology's principal drawback was its inability to identify organisms of particular concern, such as Carijoa riisei and Hypnea musciformis, owing to inadequate camera resolution and the inability to thoroughly inspect sites. To address this issue we recommend incorporating high-resolution cameras into TCSs, or using alternative technologies, such as technical SCUBA diving or remotely operated vehicles, in place of TCSs. We compared several survey technologies by cost and by their ability to detect NIS; these results are summarized in Table 3.
Abstract:
In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion patterns vary widely and face images are of low resolution. In particular there are three areas of novelty: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation, learnt offline, to generalize in the presence of extreme illumination changes; (ii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve invariance to unseen head poses; and (iii) we introduce an accurate video sequence "reillumination" algorithm to achieve robustness to face motion patterns in video. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our system consistently demonstrated a nearly perfect recognition rate (over 99.7%), significantly outperforming state-of-the-art commercial software and methods from the literature. © Springer-Verlag Berlin Heidelberg 2006.
Abstract:
In this paper, we describe a video tracking application using the dual-tree polar matching algorithm. The models are specified in a probabilistic setting, and a particle filter is used to perform the sequential inference. Computer simulations demonstrate the ability of the algorithm to track a simulated moving target in video in an urban environment with complete and partial occlusions. © The Institution of Engineering and Technology.
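The dual-tree polar matching likelihood itself is beyond a short sketch, but the sequential-inference backbone the abstract refers to, a bootstrap particle filter, can be illustrated on a toy 1-D tracking problem. Everything below (the Gaussian likelihood, the random-walk motion model, all parameter values) is an illustrative stand-in, not the authors' model:

```python
import math
import random

random.seed(0)

def gaussian_likelihood(obs, pred, sigma):
    """Likelihood of an observation given a predicted position."""
    return math.exp(-0.5 * ((obs - pred) / sigma) ** 2)

def particle_filter_step(particles, weights, obs, motion_sigma=1.0, obs_sigma=2.0):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + random.gauss(0.0, motion_sigma) for p in particles]
    # Update: reweight particles by the observation likelihood.
    weights = [w * gaussian_likelihood(obs, p, obs_sigma)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights,
    # then reset the weights to uniform.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

# Track a target moving at constant velocity through noisy observations.
n = 500
particles = [random.gauss(0.0, 5.0) for _ in range(n)]
weights = [1.0 / n] * n
true_pos = 0.0
for t in range(20):
    true_pos += 1.0                          # ground-truth motion
    obs = true_pos + random.gauss(0.0, 2.0)  # noisy measurement
    particles, weights = particle_filter_step(particles, weights, obs)

estimate = sum(p * w for p, w in zip(particles, weights))
print(round(estimate, 1))
```

In a video application the Gaussian likelihood would be replaced by an image-based one (here, the dual-tree polar matching score) and the state would include position, scale and velocity rather than a single coordinate.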
Abstract:
We propose a system that can reliably track multiple cars in congested traffic environments. The core of our system is a sequential Monte Carlo algorithm, which provides robustness against the problems that arise when vehicles are in close proximity. By directly modelling occlusions and collisions between cars we obtain promising results on an urban traffic dataset. Extensions to this initial framework are also suggested. © 2010 IEEE.
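One common way to "directly model collisions" in a sequential Monte Carlo tracker is an exclusion term in the weight of each joint multi-car hypothesis, so that hypotheses placing two cars on top of each other are down-weighted. The sketch below shows that idea under our own simplifying assumptions; the `iou` penalty and the `collision_strength` value are illustrative, not the authors' formulation:

```python
import math
from itertools import combinations

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def joint_weight(likelihoods, boxes, collision_strength=5.0):
    """Weight of one joint hypothesis: the product of per-car likelihoods
    times an exclusion term penalising pairwise box overlap."""
    w = math.prod(likelihoods)
    for a, b in combinations(boxes, 2):
        w *= math.exp(-collision_strength * iou(a, b))
    return w

# Two hypotheses with equal per-car likelihoods: separated vs. overlapping.
separated = joint_weight([0.9, 0.9], [(0, 0, 2, 1), (3, 0, 5, 1)])
overlapping = joint_weight([0.9, 0.9], [(0, 0, 2, 1), (1, 0, 3, 1)])
print(separated > overlapping)  # the exclusion term favours separation
```

After resampling, such an exclusion term keeps particles from "coalescing" onto the better-scoring car when two vehicles pass close to each other, which is exactly the failure mode the abstract targets.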
Abstract:
We present a novel, implementation-friendly and occlusion-aware semi-supervised video segmentation algorithm using tree-structured graphical models, which delivers pixel labels along with their uncertainty estimates. Our motivation to employ supervision is to tackle a task-specific segmentation problem where the semantic objects are pre-defined by the user. The video model we propose for this problem is based on a tree-structured approximation of a patch-based undirected mixture model, which includes a novel time-series model and a soft-label Random Forest classifier participating in a feedback mechanism. We demonstrate the efficacy of our model in cutting out foreground objects and in multi-class segmentation problems in lengthy and complex road scene sequences. Our results have wide applicability, including harvesting labelled video data for training discriminative models, shape/pose/articulation learning and large-scale statistical analysis to develop priors for video segmentation. © 2011 IEEE.
Abstract:
Spread Transform (ST) is a quantization watermarking algorithm in which vectors of the wavelet coefficients of a host work are quantized, using one of two dithered quantizers, to embed hidden information bits; Loo had some success in applying such a scheme to still images. We extend ST to the video watermarking problem. Visibility considerations require that each spreading vector refer to corresponding pixels in each of several frames, that is, a multi-frame embedding approach. Use of the hierarchical complex wavelet transform (CWT) for a visual mask reduces computation and improves robustness to jitter and valumetric scaling. We present a method of recovering temporal synchronization at the detector, and give initial results demonstrating the robustness and capacity of the scheme.
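The Spread Transform scheme above builds on dithered quantization of the host's projection onto a spreading vector (ST-DM). A minimal single-bit sketch of that principle is given below; the multi-frame spreading vectors, the CWT visual mask and the temporal synchronization recovery of the paper are omitted, and the `embed_bit`/`detect_bit` names, the example vectors and the step size `delta` are all illustrative:

```python
import math

def embed_bit(host, spread, bit, delta=4.0):
    """Embed one bit by quantising the host's projection onto the spreading
    vector with one of two dithered quantisers (ST-DM principle)."""
    norm = math.sqrt(sum(v * v for v in spread))
    s = [v / norm for v in spread]                 # unit spreading vector
    p = sum(h * v for h, v in zip(host, s))        # project host onto s
    dither = 0.0 if bit == 0 else delta / 2.0      # per-bit dither offset
    q = delta * round((p - dither) / delta) + dither
    # Move the host along s so its projection lands on the bit's lattice.
    return [h + (q - p) * v for h, v in zip(host, s)]

def detect_bit(marked, spread, delta=4.0):
    """Recover the bit: which dithered lattice is the projection nearest to?"""
    norm = math.sqrt(sum(v * v for v in spread))
    s = [v / norm for v in spread]
    p = sum(m * v for m, v in zip(marked, s))
    errs = []
    for bit, dither in ((0, 0.0), (1, delta / 2.0)):
        q = delta * round((p - dither) / delta) + dither
        errs.append((abs(p - q), bit))
    return min(errs)[1]

host = [1.3, -0.7, 2.1, 0.4]      # stand-in for wavelet coefficients
spread = [1.0, 1.0, 1.0, 1.0]     # stand-in for a multi-frame spreading vector
for bit in (0, 1):
    marked = embed_bit(host, spread, bit)
    assert detect_bit(marked, spread) == bit
print("bits recovered")
```

In the noiseless case detection is exact; additive distortion of the projection smaller than `delta / 4` still decodes correctly, which is where the step size trades robustness against visibility.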
Abstract:
Models capturing the connectivity between different domains of a design, e.g. between components and functions, can provide a tool for tracing and analysing aspects of that design. In this paper, video experiments are used to explore the role of cross-domain modelling in building up information about a design. The experiments highlight that cross-domain modelling can be a useful tool to create and structure design information. Findings suggest that consideration of multiple domains encourages discussion during modelling, helps identify design aspects that might otherwise be overlooked, and can help promote consideration of alternative design options. Copyright © 2002-2012 The Design Society. All rights reserved.