945 results for Camera Obscura
Abstract:
With the use of a baited stereo-video camera system, this study semiquantitatively defined the habitat associations of 4 species of Lutjanidae: Opakapaka (Pristipomoides filamentosus), Kalekale (P. sieboldii), Onaga (Etelis coruscans), and Ehu (E. carbunculus). Fish abundance and length data from 6 locations in the main Hawaiian Islands were evaluated for species-specific and size-specific differences between regions and habitat types. Multibeam bathymetry and backscatter were used to classify habitats into 4 types on the basis of substrate (hard or soft) and slope (high or low). Depth was a major influence on bottomfish distributions. Opakapaka occurred at depths shallower than the depths at which other species were observed, and this species showed an ontogenetic shift to deeper water with increasing size. Opakapaka and Ehu had an overall preference for hard substrate with low slope (hard-low), and Onaga was found over both hard-low and hard-high habitats. No significant habitat preferences were recorded for Kalekale. Opakapaka, Kalekale, and Onaga exhibited size-related shifts with habitat type. A move into hard-high environments with increasing size was evident for Opakapaka and Kalekale. Onaga was seen predominantly in hard-low habitats at smaller sizes and in either hard-low or hard-high at larger sizes. These ontogenetic habitat shifts could be driven by reproductive triggers because they roughly coincided with the length at sexual maturity of each species. However, further studies are required to determine causality. No ontogenetic shifts were seen for Ehu, but only a limited number of juveniles were observed. Regional variations in abundance and length were also found and could be related to fishing pressure or large-scale habitat features.
Abstract:
This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it with a hand-held camera. The technique uses only the silhouettes of the sculpture for both motion estimation and model reconstruction; no corner detection or matching is necessary. This is important because most sculptures are composed of smooth, textureless surfaces, so their silhouettes are often the only information available in their images. Moreover, unlike previous work, the proposed technique does not require the camera motion to be perfectly circular (e.g., a turntable sequence). It employs an image rectification step before motion estimation to obtain a rough estimate of the camera motion, which is only approximately circular. A refinement process is then applied to recover the true general motion of the camera. This allows the technique to handle large outdoor sculptures that cannot be rotated on a turntable, making it much more practical and flexible.
Abstract:
The mitotic metaphase karyotypes of D. luguensis, D. dianensis, and D. limingi, species of the Drosophila obscura species group recently discovered in Yunnan, China, were examined, and the karyotype of each species was compared with those of its close relatives. D. luguensis has 2n = 12 chromosomes: three pairs of metacentric (V-shaped) chromosomes, two pairs of acrocentric (rod-shaped) chromosomes, and one pair of microchromosomes (dot-like); both the X and Y chromosomes are metacentric. D. dianensis and D. limingi have 2n = 10 chromosomes: one pair of large V-shaped autosomes, one pair of small V-shaped autosomes, two pairs of J-shaped (submetacentric) autosomes, and one pair of dot-like chromosomes; the X chromosome is J-shaped and the Y chromosome is a short rod. Based on the karyotype comparison, data on the geographic distribution of the D. sinobscura subgroup, and the phylogenetic relationships among the species, D. luguensis appears to retain the karyotype of the ancestral species of the subgroup. The karyotype of D. sinobscura (2n = 12: 2V, 1J, 2R, 1D) may have evolved from one branch of a pre-"sinobscura-hubeiensis" lineage through pericentric inversion, whereas that of D. hubeiensis (2n = 10: 4V, 1D) may have arisen from the other branch through centric fusion (fusion of two pairs of acrocentric autosomes). Similar chromosomal rearrangements may have occurred during the speciation between D. dianensis and its close European relative D. subsilvestris (2n = 12: 3V, 2R, 1D), and between D. limingi and its close East Asian relative D. tsukubaensis (2n = 12: 3V, 2R, 1D).
Abstract:
Three new and two known species of the Drosophila (Sophophora) obscura species-group are reported from Yunnan Province, southern China. The sinobscura species-subgroup is newly established, comprising D. sinobscura, D. hubeiensis, and D. luguensis sp. nov. The geographic distribution of the obscura group in and around China is discussed, and a key to the 10 Chinese species of the group is provided.
Abstract:
Calibration of a camera system is a necessary step in any stereo metric process. It relates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily change due to external factors such as wind, vibration, or an unintentional push or touch from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations thereof to achieve calibration. However, most of these methods do not consider all constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally intended for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which encodes the epipolar constraints, and the intrinsic and extrinsic properties of the cameras are recovered from this calculation. Test results are presented along with recommendations for further improvement.
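The central computation mentioned in this abstract, estimating the fundamental matrix from matched image points, can be illustrated with the classical normalized 8-point algorithm. The sketch below is a generic NumPy illustration, not the authors' implementation; the function names and geometry are assumptions for demonstration.

```python
import numpy as np

def normalize_points(pts):
    """Translate points to their centroid and scale so the mean distance is sqrt(2)."""
    centroid = pts.mean(axis=0)
    d = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def fundamental_matrix(x1, x2):
    """Normalized 8-point estimate of F satisfying x2^T F x1 = 0."""
    p1, T1 = normalize_points(x1)
    p2, T2 = normalize_points(x2)
    # Each correspondence contributes one linear constraint on the 9 entries of F.
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)           # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    F = T2.T @ F @ T1                     # undo the normalization
    return F / F[2, 2]
```

Given at least eight correspondences, the epipolar constraint x2^T F x1 = 0 holds for every matched pair; intrinsic and extrinsic parameters can then be pursued from F, as the abstract describes.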
Abstract:
Camera motion estimation is one of the most significant steps in structure from motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, and each has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparative study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" and reconstructed using an off-the-shelf camera capturing imagery from all possible positions that maximally cover the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motion was estimated from the corresponding image points by applying each of the aforementioned algorithms, and the results were evaluated.
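Whichever of the 8-, 7-, or 5-point algorithms produces the essential matrix, recovering the camera motion then requires decomposing E into a rotation and a translation direction. A minimal NumPy sketch of the standard SVD decomposition follows; it is a textbook illustration, not the paper's code, and the function names are assumptions.

```python
import numpy as np

def skew(t):
    """Cross-product matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def decompose_essential(E):
    """Return the four (R, t) candidates from E = [t]x R via SVD."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations; flipping a factor's sign only negates E,
    # which is absorbed by the +/- t ambiguity below.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction, up to scale and sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

In practice the correct candidate among the four is chosen by a cheirality test: triangulated points must lie in front of both cameras.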
Abstract:
Commercial far-range (>10 m) infrastructure spatial data collection methods are not completely automated: they require a significant amount of manual post-processing, and in some cases the equipment costs are substantial. This paper presents a method that constitutes the first step of a stereo videogrammetric framework and holds the promise of addressing these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are then computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A set of data collected from an ongoing infrastructure project is used to show the merits of the method. A comparison with existing tools is also shown, to indicate the differences between the proposed method and existing ones in the level of automation and the accuracy of results.
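The per-frame computation described here, turning matched feature points from a calibrated camera pair into sparse 3D coordinates, is classically done by linear (DLT) triangulation. The following is a generic NumPy sketch under that assumption, not the paper's implementation; names are illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one correspondence.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Each image contributes two rows of the homogeneous system A X = 0.
    """
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector of A, the homogeneous 3D point
    return X[:3] / X[3]   # dehomogenize
```

Running this over every matched feature pair in a frame yields exactly the kind of sparse 3D point cloud the abstract describes.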
Abstract:
Automating the model generation process of infrastructure can substantially reduce the modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered. Structure from motion and bundle adjustment are explored. The method is demonstrated in a case study where the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.
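The abstract mentions motion blur and key frame selection criteria without detailing them. One common sharpness measure used for rejecting motion-blurred frames is the variance of the Laplacian; the sketch below assumes that criterion and grayscale frames stored as NumPy arrays, and is an illustration rather than the paper's actual method.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the 3x3 Laplacian response; low values suggest motion blur."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):            # correlate with the 3x3 kernel
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out.var()

def select_key_frames(frames, blur_thresh):
    """Return indices of frames sharp enough to use for reconstruction."""
    return [idx for idx, f in enumerate(frames)
            if laplacian_variance(f) > blur_thresh]
```

Key frames that survive this filter (typically combined with a minimum-baseline check between consecutive selections) would then feed the structure-from-motion and bundle adjustment stages.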
Abstract:
Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
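The final step above, converting tracked 2D pixel coordinates from a fixed, calibrated stereo pair into 3D metric coordinates, can be illustrated with the classical midpoint method: back-project each pixel into a viewing ray and take the midpoint of the rays' closest approach. The paper does not state which triangulation algorithm it selects, so this NumPy sketch is one plausible choice, with illustrative names.

```python
import numpy as np

def pixel_to_ray(K, R, c, x):
    """Back-project pixel x into a unit direction in world coordinates.

    Camera model: pixel ~ K @ (R @ (X - c)), with c the camera center.
    """
    d = R.T @ np.linalg.inv(K) @ np.array([x[0], x[1], 1.0])
    return d / np.linalg.norm(d)

def midpoint_triangulation(K1, R1, c1, K2, R2, c2, x1, x2):
    """3D point as the midpoint of closest approach of the two viewing rays."""
    d1 = pixel_to_ray(K1, R1, c1, x1)
    d2 = pixel_to_ray(K2, R2, c2, x2)
    # Find scalars s, t minimizing |(c1 + s*d1) - (c2 + t*d2)|.
    A = np.column_stack([d1, -d2])
    st, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + st[0] * d1
    p2 = c2 + st[1] * d2
    return (p1 + p2) / 2
```

With the long baseline the paper emphasizes, the two rays intersect at a wide angle, which is what makes the recovered 3D coordinates metrically reliable.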
Abstract:
We present a system for augmenting depth camera output using multispectral photometric stereo. The technique is demonstrated using a Kinect sensor and is able to produce geometry independently for each frame. Improved reconstruction is demonstrated using the Kinect's built-in RGB camera, and further improvements are achieved by introducing an additional high-resolution camera. As well as qualitative improvements in reconstruction, a quantitative reduction in temporal noise is shown. As part of the system, an approach is presented for relaxing the usual assumption of multispectral photometric stereo that scenes have constant chromaticity to the weaker assumption that scenes contain multiple piecewise-constant chromaticities.
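Under the constant-chromaticity assumption mentioned above, multispectral photometric stereo reduces to a per-pixel 3x3 linear solve: each color channel effectively sees the scene under a differently positioned light, so for a Lambertian surface the channel intensities satisfy I = albedo * (L @ n) for a known light-direction matrix L. The following NumPy sketch shows that core solve; it is a simplified illustration under those assumptions, not the authors' system.

```python
import numpy as np

def normals_from_multispectral(I, L):
    """Per-pixel surface normals from a single multispectral frame.

    I: (H, W, 3) channel intensities; L: (3, 3) matrix whose rows are the
    light directions associated with the three spectral channels.
    Assumes Lambertian shading and constant chromaticity: I = albedo * L @ n.
    """
    # Invert the lighting per pixel: g = albedo * n.
    g = np.einsum('ij,hwj->hwi', np.linalg.inv(L), I)
    albedo = np.linalg.norm(g, axis=2, keepdims=True)
    return g / np.clip(albedo, 1e-12, None)  # unit normals
```

Because every pixel is solved independently from a single frame, geometry is produced per frame, which is what lets the system refine the depth camera's output without temporal accumulation.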