970 results for Projector-Camera system


Relevance: 100.00%

Abstract:

A stereo-video baited camera system (BotCam) has been developed as a fishery-independent tool to monitor and study deepwater fish species and their habitat. During testing, BotCam was deployed primarily in water depths between 100 and 300 m for an assessment of its use in monitoring and studying Hawaiian bottomfish species. Details of the video analyses and data from the pilot study with BotCam in Hawai'i are presented. Multibeam bathymetry and backscatter data were used to delineate bottomfish habitat strata, and a stratified random sampling design was used for BotCam deployment locations. Video data were analyzed to assess relative fish abundance and to measure fish size composition. Results corroborate published depth ranges and zones of the target species, as well as their habitat preferences. The results indicate that BotCam is a promising tool for monitoring and studying demersal fish populations associated with deepwater habitats to a depth of 300 m, at mesohabitat scales. BotCam is a flexible, nonextractive, and economical means to better understand deepwater ecosystems and improve science-based ecosystem approaches to management.

Relevance: 100.00%

Abstract:

With the use of a baited stereo-video camera system, this study semiquantitatively defined the habitat associations of 4 species of Lutjanidae: Opakapaka (Pristipomoides filamentosus), Kalekale (P. sieboldii), Onaga (Etelis coruscans), and Ehu (E. carbunculus). Fish abundance and length data from 6 locations in the main Hawaiian Islands were evaluated for species-specific and size-specific differences between regions and habitat types. Multibeam bathymetry and backscatter were used to classify habitats into 4 types on the basis of substrate (hard or soft) and slope (high or low). Depth was a major influence on bottomfish distributions. Opakapaka occurred at depths shallower than the depths at which other species were observed, and this species showed an ontogenetic shift to deeper water with increasing size. Opakapaka and Ehu had an overall preference for hard substrate with low slope (hard-low), and Onaga was found over both hard-low and hard-high habitats. No significant habitat preferences were recorded for Kalekale. Opakapaka, Kalekale, and Onaga exhibited size-related shifts with habitat type. A move into hard-high environments with increasing size was evident for Opakapaka and Kalekale. Onaga was seen predominantly in hard-low habitats at smaller sizes and in either hard-low or hard-high at larger sizes. These ontogenetic habitat shifts could be driven by reproductive triggers because they roughly coincided with the length at sexual maturity of each species. However, further studies are required to determine causality. No ontogenetic shifts were seen for Ehu, but only a limited number of juveniles were observed. Regional variations in abundance and length were also found and could be related to fishing pressure or large-scale habitat features.

Relevance: 100.00%

Abstract:

Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily be changed by external factors such as wind, vibrations, or an unintentional push or touch from personnel on site. In such cases manual calibration must be repeated. In order to address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, and the Kruppa equations, and variations of these, to achieve calibration. However, most of these methods do not consider all constraints of a camera system such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally meant for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints. Intrinsic and extrinsic properties of the cameras are acquired from this calculation. Test results are presented in this paper with recommendations for further improvement.
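The fundamental-matrix computation at the heart of the method above can be sketched with the classic normalized 8-point algorithm. This is an illustrative stand-in, not the paper's image-alignment procedure, and the helper name is hypothetical:

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Estimate the fundamental matrix F from >= 8 point correspondences
    (Nx2 arrays of pixel coordinates) via the normalized 8-point algorithm.
    Illustrative sketch only, not the authors' implementation."""
    def normalize(pts):
        # Translate to centroid and scale so mean distance is sqrt(2).
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0.0, -scale * mean[0]],
                      [0.0, scale, -scale * mean[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return (T @ ph.T).T, T

    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a fundamental matrix has a zero singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization
```

Given F, each point x1 in one view constrains its match x2 to the epipolar line F x1, which is what the self-calibration step exploits.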

Relevance: 100.00%

Abstract:

Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
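The triangulation step named above can be sketched as standard linear (DLT) triangulation from two calibrated views: each view contributes two rows of a homogeneous system whose null vector is the 3D point. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single point seen in two views.
    P1, P2: 3x4 camera projection matrices (from calibration and pose
    estimation); x1, x2: 2D pixel coordinates of the tracked entity.
    Returns the Euclidean 3D point. Illustrative sketch only."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point (homogeneous) spans the null space of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With a long, fixed baseline as described above, this linear estimate is typically followed by outlier checks or refinement, which are omitted here for brevity.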

Relevance: 100.00%

Abstract:

A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less to be comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to discover the infinity focal length at two lens settings (24mm and 35mm) with a precision of 0.1mm. Known focal length is required for creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at distances on a test range allowed estimates of focal lengths of 25.1mm and 35.4mm for the 24mm and 35mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24mm setting and virtually no distortion found at the 35mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints were created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location.
Post-mission processing (rectification) of imagery for removal of lens distortion effects, correction of imagery for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates were performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
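The per-pixel ground resolution quoted above follows from similar triangles between the sensor and the ground. A minimal sketch; the pixel pitch used in the example is an assumed value, not one from the thesis:

```python
def ground_resolution(pixel_pitch_m, flying_height_m, focal_length_m):
    """Ground sample distance (metres of ground covered per pixel)
    for a vertical aerial photograph, from similar triangles:
        GSD = pixel_pitch * flying_height / focal_length
    All inputs in metres."""
    return pixel_pitch_m * flying_height_m / focal_length_m

# Example with an assumed 7.8 micrometre pixel pitch, a 1000 m flying
# height, and the calibrated 35.4 mm focal length from the text:
gsd = ground_resolution(7.8e-6, 1000.0, 0.0354)
```

For these example values the GSD comes out around 0.22 m per pixel, within the 0.07-0.67 m range reported above.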

Relevance: 100.00%

Abstract:

We compared particle data from a moored video camera system with sediment trap derived fluxes at ~1100 m depth in the highly dynamic coastal upwelling system off Cape Blanc, Mauritania. Between spring 2008 and winter 2010 the trap collected settling particles in 9-day intervals, while the camera recorded in-situ particle abundance and size-distribution every third day. Particle fluxes were highly variable (40-1200 mg m⁻² d⁻¹) and followed distinct seasonal patterns with peaks during spring, summer, and fall. The particle flux patterns from the sediment traps correlated with the total particle volume captured by the video camera, which ranged from 1 to 22 mm³ l⁻¹. The measured increase in total particle volume during periods of high mass flux appeared to be better related to increases in particle concentration than to increased average particle size. We observed events that had similar particle fluxes but showed clear differences in particle abundance and size-distribution, and vice versa. Such observations can only be explained by shifts in the composition of the settling material, with changes both in particle density and chemical composition. For example, the input of wind-blown dust from the Sahara during September 2009 led to the formation of high numbers of comparably small particles in the water column. This suggests that, besides seasonal changes, the composition of marine particles in one region is subject to episodic changes. The time between the appearance of high dust concentrations in the atmosphere and the increase in lithogenic flux in the 1100 m deep trap suggested an average settling rate of 200 m d⁻¹, indicating a close and fast coupling between dust input and sedimentation of the material.
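The settling rate quoted above follows directly from dividing the trap depth by the lag between the atmospheric dust event and the flux peak. A minimal sketch; the 5.5-day lag is back-calculated from the quoted depth and rate, not stated in the abstract:

```python
def settling_rate(trap_depth_m, lag_days):
    """Average particle settling rate (m per day) inferred from the
    delay between a surface input event (e.g. a dust outbreak) and
    the corresponding flux peak at the trap depth."""
    return trap_depth_m / lag_days

# 1100 m trap depth with an implied ~5.5-day lag gives ~200 m/d.
rate = settling_rate(1100.0, 5.5)
```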

Relevance: 100.00%

Abstract:

In this paper we present an adaptive multi-camera system for real-time object detection able to efficiently adjust the computational requirements of video processing blocks to the available processing power and the activity of the scene. The system is based on a two-level adaptation strategy that works at the local and at the global level. Object detection is based on a Gaussian mixture model background subtraction algorithm. Results show that the system can efficiently adapt the algorithm parameters without a significant loss in detection accuracy.
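The background subtraction named above can be illustrated with a per-pixel running-Gaussian model: a single-component simplification of the Gaussian mixture model the paper uses, with all parameter values being assumptions for the sketch:

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel running-Gaussian background model. A one-component
    stand-in for the Gaussian mixture model named in the abstract,
    simplified for illustration; parameter values are assumptions."""

    def __init__(self, first_frame, alpha=0.05, k=2.5, min_var=25.0):
        self.mean = first_frame.astype(float)     # background mean
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha = alpha        # learning rate (adaptable at runtime)
        self.k = k                # foreground threshold in std devs
        self.min_var = min_var    # variance floor to avoid collapse

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var
        # Running update of the per-pixel background statistics.
        self.mean += self.alpha * (frame - self.mean)
        self.var = np.maximum(
            (1 - self.alpha) * self.var + self.alpha * d2, self.min_var)
        return fg
```

Parameters such as `alpha` are exactly the kind of knob the paper's two-level strategy could tune against available processing power and scene activity.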

Relevance: 100.00%

Abstract:

The Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) is the scientific camera system onboard the Rosetta spacecraft (Figure 1). The advanced high performance imaging system will be pivotal for the success of the Rosetta mission. OSIRIS will detect 67P/Churyumov-Gerasimenko from a distance of more than 10⁶ km, characterise the comet's shape and volume and its rotational state, and find a suitable landing spot for Philae, the Rosetta lander. OSIRIS will observe the nucleus, its activity, and its surroundings down to a scale of ~2 cm px⁻¹. The observations will begin well before the onset of cometary activity and will extend over months until the comet reaches perihelion. During the rendezvous episode of the Rosetta mission, OSIRIS will provide key information about the nature of cometary nuclei and reveal the physics of cometary activity that leads to the gas and dust coma. OSIRIS comprises a high resolution Narrow Angle Camera (NAC) unit and a Wide Angle Camera (WAC) unit accompanied by three electronics boxes. The NAC is designed to obtain high resolution images of the surface of comet 67P/Churyumov-Gerasimenko through 12 discrete filters over the wavelength range 250-1000 nm at an angular resolution of 18.6 μrad px⁻¹. The WAC is optimised to provide images of the near-nucleus environment in 14 discrete filters at an angular resolution of 101 μrad px⁻¹. The two units use identical shutter, filter wheel, front door, and detector systems. They are operated by a common Data Processing Unit. The OSIRIS instrument has a total mass of 35 kg and is provided by institutes from six European countries.
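The quoted surface scale of ~2 cm px⁻¹ follows from the NAC's angular resolution under the small-angle approximation. A minimal sketch; the ~1.1 km observing distance is an assumed example value, not a figure from the abstract:

```python
def ground_scale_m_per_px(angular_res_rad_per_px, distance_m):
    """Spatial scale on the target (metres per pixel) from the camera's
    angular resolution, using the small-angle approximation:
        scale ≈ angular_resolution [rad/px] × distance [m]."""
    return angular_res_rad_per_px * distance_m

# NAC at 18.6 microradians per pixel, observing from an assumed ~1.1 km:
scale = ground_scale_m_per_px(18.6e-6, 1100.0)
```

For these values the scale is about 0.02 m per pixel, consistent with the ~2 cm px⁻¹ figure quoted above.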

Relevance: 100.00%

Abstract:

This thesis was completed under a joint-supervision (cotutelle) agreement with the Institut National Polytechnique de Grenoble (France). The research was carried out in the 3D vision laboratory (DIRO, UdM) and at PERCEPTION-INRIA (Grenoble).

Relevance: 90.00%

Abstract:

For structured-light scanners, the projective geometry between a projector-camera pair is identical to that of a camera-camera pair. Consequently, in conjunction with calibration, a variety of geometric relations are available for three-dimensional Euclidean reconstruction. In this paper, we use projector-camera epipolar properties and the projective invariance of the cross-ratio to solve for 3D geometry. A key contribution of our approach is the use of homographies induced by reference planes, along with a calibrated camera, resulting in a simple parametric representation for projector and system calibration. Compared to existing solutions that require an elaborate calibration process, our method is simple while ensuring geometric consistency. Our formulation using the invariance of the cross-ratio is also extensible to multiple estimates of 3D geometry that can be analysed in a statistical sense. The performance of our system is demonstrated on some cultural artifacts and geometric surfaces.
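The projective invariance of the cross-ratio that the method above relies on can be illustrated directly for four collinear points; the fractional linear map below is an arbitrary example, not one from the paper:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (A, B; C, D) of four collinear points given by scalar
    coordinates along the line: (AC * BD) / (BC * AD). This quantity is
    invariant under any projective transformation of the line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def homography_1d(x):
    """An arbitrary projective (fractional linear) map of the line,
    chosen only to demonstrate the invariance."""
    return (2.0 * x + 1.0) / (x + 3.0)
```

Mapping the points 0, 1, 2, 4 through `homography_1d` moves every point, yet the cross-ratio of the images equals that of the originals, which is what lets reference-plane homographies transfer metric information in the reconstruction above.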