930 results for "camera trapping"


Relevance:

20.00%

Publisher:

Abstract:

With the use of a baited stereo-video camera system, this study semiquantitatively defined the habitat associations of 4 species of Lutjanidae: Opakapaka (Pristipomoides filamentosus), Kalekale (P. sieboldii), Onaga (Etelis coruscans), and Ehu (E. carbunculus). Fish abundance and length data from 6 locations in the main Hawaiian Islands were evaluated for species-specific and size-specific differences between regions and habitat types. Multibeam bathymetry and backscatter were used to classify habitats into 4 types on the basis of substrate (hard or soft) and slope (high or low). Depth was a major influence on bottomfish distributions. Opakapaka occurred at shallower depths than the other species and showed an ontogenetic shift to deeper water with increasing size. Opakapaka and Ehu had an overall preference for hard substrate with low slope (hard-low), and Onaga was found over both hard-low and hard-high habitats. No significant habitat preferences were recorded for Kalekale. Opakapaka, Kalekale, and Onaga exhibited size-related shifts with habitat type. A move into hard-high environments with increasing size was evident for Opakapaka and Kalekale. Onaga was seen predominantly in hard-low habitats at smaller sizes and in either hard-low or hard-high at larger sizes. These ontogenetic habitat shifts could be driven by reproductive triggers because they roughly coincided with the length at sexual maturity of each species. However, further studies are required to determine causality. No ontogenetic shifts were seen for Ehu, but only a limited number of juveniles were observed. Regional variations in abundance and length were also found and could be related to fishing pressure or large-scale habitat features.
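The fourfold habitat classification described above (substrate crossed with slope) can be sketched as a simple lookup. This is only an illustrative sketch: the 20-degree slope threshold below is a hypothetical placeholder, not a value taken from the study, which derived its classes from multibeam bathymetry and backscatter.

```python
def classify_habitat(substrate_hard: bool, slope_deg: float,
                     slope_threshold: float = 20.0) -> str:
    """Classify seafloor habitat into one of four types by
    substrate (hard/soft) and slope (high/low).

    The slope_threshold default is an assumed placeholder value,
    not a cutoff reported by the study.
    """
    substrate = "hard" if substrate_hard else "soft"
    slope = "high" if slope_deg >= slope_threshold else "low"
    return f"{substrate}-{slope}"

print(classify_habitat(True, 5.0))    # hard-low
print(classify_habitat(False, 45.0))  # soft-high
```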

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it with a hand-held camera. The technique uses only the silhouettes of the sculpture for both motion estimation and model reconstruction; no corner detection or matching is necessary. This is important because most sculptures are composed of smooth, textureless surfaces, so their silhouettes are very often the only information available from their images. Moreover, unlike previous work, the proposed technique does not require the camera motion to be perfectly circular (e.g., a turntable sequence). It employs an image-rectification step before motion estimation to obtain a rough estimate of the camera motion, which is only approximately circular. A refinement process is then applied to obtain the true general motion of the camera. This allows the technique to handle large outdoor sculptures that cannot be rotated on a turntable, making it much more practical and flexible.

Relevance:

20.00%

Publisher:

Abstract:

Super-resolution imaging techniques such as Fluorescent Photo-Activation Localisation Microscopy (FPALM) have created a powerful new toolkit for investigating living cells; however, a simple platform for growing, trapping, holding, and controlling the cells is needed before the approach can become truly widespread. We present a microfluidic device, formed in polydimethylsiloxane (PDMS), with a fluidic design that traps cells in a high-density array of wells and holds them very still throughout the life cycle using hydrodynamic forces only. The device meets or exceeds all the necessary criteria for FPALM imaging of Schizosaccharomyces pombe and is designed to remain flexible, robust, and easy to use. © 2011 IEEE.

Relevance:

20.00%

Publisher:

Abstract:

We study optical trapping of nanotubes and graphene. We extract the distribution of both centre-of-mass and angular fluctuations from three-dimensional tracking of these optically trapped carbon nanostructures. The optical force and torque constants are measured from auto- and cross-correlation of the tracking signals. We demonstrate that nanotubes enable nanometre spatial and femtonewton force resolution in photonic force microscopy by accurately measuring the radiation pressure in a double-frequency optical tweezers. Finally, we integrate optical trapping with Raman and photoluminescence spectroscopy, demonstrating a Raman and photoluminescence tweezers by investigating the spectroscopy of nanotubes and graphene flakes in solution. Experimental results are compared with calculations based on electromagnetic scattering theory. © 2011 by the Author(s); licensee Accademia Peloritana dei Pericolanti, Messina, Italy.

Relevance:

20.00%

Publisher:

Abstract:

Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil-engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil-engineering operations camera systems are exposed to open, busy sites. In these conditions the position of presumably stationary cameras can easily be changed by external factors such as wind or vibrations, or by an unintentional push or touch from personnel on site. In such cases manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image-alignment algorithm originally intended for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints; the intrinsic and extrinsic properties of the cameras are then recovered from this calculation. Test results are presented in this paper with recommendations for further improvement.
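The fundamental matrix mentioned above encodes the epipolar constraint x₂ᵀ F x₁ = 0 between corresponding homogeneous points in two frames. As a minimal illustration (not the paper's image-alignment method): for a camera translating purely along the image x-axis, the epipole lies at infinity along x and F reduces to the skew-symmetric matrix of e = (1, 0, 0), so the constraint can be verified directly — corresponding points share the same image row.

```python
def epipolar_residual(F, x1, x2):
    """Residual of the epipolar constraint x2^T F x1 = 0
    for homogeneous image points x1, x2 (3-vectors)."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))


# Pure horizontal translation: F = [e]_x, the skew-symmetric
# matrix of the epipole at infinity e = (1, 0, 0).
F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

# Corresponding points: same row v, column u shifted by disparity d.
u, v, d = 120.0, 85.0, 14.0
x1 = (u, v, 1.0)
x2 = (u + d, v, 1.0)

print(epipolar_residual(F, x1, x2))  # 0.0 — points satisfy the constraint
```

A nonzero residual flags a pair of points that cannot correspond under this motion, which is exactly the property self-calibration methods exploit when estimating F from tracked frames.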