337 results for 3D camera

in the Cambridge University Engineering Department Publications Database


Relevance:

40.00%

Publisher:

Abstract:

Camera motion estimation is one of the most significant steps for structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparative study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" and reconstructed using an off-the-shelf camera capturing imagery from all possible positions that maximally capture the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motion is estimated from the corresponding image points by applying the aforementioned algorithms, and the results are evaluated.
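
The comparison above can be prototyped with standard tools. The following is a minimal sketch, not the paper's implementation: SIFT stands in for SURF (which requires the opencv-contrib build), and the image file names and camera intrinsics K are placeholder assumptions.

```python
# Hypothetical sketch: comparing the 8-point, 7-point, and 5-point relative-pose
# estimators on one image pair with OpenCV. SIFT stands in for SURF; the file
# names and intrinsics K are placeholders, not values from the paper.
import cv2
import numpy as np

K = np.array([[1200.0, 0, 640], [0, 1200.0, 360], [0, 0, 1]])  # assumed intrinsics

img1 = cv2.imread("girder_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("girder_view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching of descriptors
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# The 8-point and 7-point algorithms estimate the fundamental matrix from pixel
# coordinates (the 7-point variant uses exactly 7 correspondences and may
# return up to three stacked solutions) ...
F8, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
F7, _ = cv2.findFundamentalMat(pts1[:7], pts2[:7], cv2.FM_7POINT)

# ... while the 5-point algorithm works on the essential matrix (calibrated case)
E5, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E5, pts1, pts2, K)
print("Rotation:\n", R, "\nTranslation direction:", t.ravel())
```

Note that the 8- and 7-point algorithms operate on the fundamental matrix from raw pixel coordinates, whereas the 5-point algorithm uses the calibrated (essential-matrix) formulation, which is why it requires K.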

Relevance:

40.00%

Publisher:

Abstract:

Commercial far-range (>10 m) infrastructure spatial data collection methods are not completely automated. They require a significant amount of manual post-processing work and, in some cases, the equipment costs are significant. This paper presents a method that is the first step of a stereo videogrammetric framework and holds promise to address these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are then computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A set of data collected from an ongoing infrastructure project is used to show the merits of the method. A comparison with existing tools is also presented to indicate the differences of the proposed method in the level of automation and the accuracy of results.
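
As a rough illustration of the per-frame-pair step (feature detection, matching, and triangulation into a sparse 3D point cloud), the sketch below assumes the two cameras have already been calibrated so that 3x4 projection matrices P1 and P2 are available; ORB features are used purely for illustration, not because the paper uses them.

```python
# Minimal sketch of one frame-pair step: detect and match feature points in two
# simultaneous frames, then triangulate them into a sparse 3D point cloud.
# P1 and P2 are assumed 3x4 projection matrices from a prior stereo calibration.
import cv2
import numpy as np

def sparse_cloud(frame_left, frame_right, P1, P2):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(frame_left, None)
    kp2, des2 = orb.detectAndCompute(frame_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2xN pixel coords
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T

    # Homogeneous triangulation, then conversion to metric 3D coordinates
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
    return (X_h[:3] / X_h[3]).T                        # Nx3 sparse point cloud
```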

Relevance:

30.00%

Publisher:

Abstract:

This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
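
The following toy sketch illustrates the general idea of classifying a query silhouette against projections generated from shape priors using a Random Forest; it is not the paper's pipeline. The ellipse "renderer" and Hu-moment descriptor are synthetic stand-ins for the latent shape models and silhouette coherency measure described above.

```python
# Illustrative sketch only: a Random Forest trained on simple silhouette
# descriptors (log-scaled Hu moments) of projected silhouettes, then used to
# infer the class of a query silhouette. The ellipse "renderer" is a synthetic
# stand-in for projections of the 3D shape priors.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def silhouette_descriptor(mask):
    """Log-scaled Hu moments of a binary silhouette image."""
    hu = cv2.HuMoments(cv2.moments(mask)).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def toy_projection(cls, view, size=128):
    """Synthetic silhouette: an ellipse whose shape depends on class and viewpoint."""
    mask = np.zeros((size, size), np.uint8)
    axes = (20 + 10 * cls, int(40 * abs(np.cos(view))) + 10)
    cv2.ellipse(mask, (size // 2, size // 2), axes, 0, 0, 360, 255, -1)
    return mask

views = np.linspace(0, np.pi, 30)
X = np.array([silhouette_descriptor(toy_projection(c, v)) for c in (0, 1, 2) for v in views])
y = np.array([c for c in (0, 1, 2) for _ in views])

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
query = toy_projection(1, 0.7)
print("predicted phenotype class:", forest.predict([silhouette_descriptor(query)])[0])
```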

Relevance:

30.00%

Publisher:

Abstract:

Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
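
A hedged sketch of steps 3 and 4 (2D tracking followed by triangulation) is shown below. A CSRT tracker (available in the opencv-contrib build) stands in for the paper's 2D tracking algorithm, and P_left/P_right are assumed to be the 3x4 projection matrices produced by the calibration and pose-estimation steps; video paths and initial bounding boxes are placeholders.

```python
# Sketch of per-frame 2D tracking in each camera view followed by triangulation
# into 3D metric coordinates. Requires the opencv-contrib build for TrackerCSRT.
import cv2
import numpy as np

def track_and_triangulate(video_l, video_r, bbox_l, bbox_r, P_left, P_right):
    cap_l, cap_r = cv2.VideoCapture(video_l), cv2.VideoCapture(video_r)
    trk_l, trk_r = cv2.TrackerCSRT_create(), cv2.TrackerCSRT_create()

    ok_l, frame_l = cap_l.read()
    ok_r, frame_r = cap_r.read()
    trk_l.init(frame_l, bbox_l)        # initial bounding boxes are placeholders
    trk_r.init(frame_r, bbox_r)

    trajectory = []                    # one 3D point per frame pair
    while True:
        ok_l, frame_l = cap_l.read()
        ok_r, frame_r = cap_r.read()
        if not (ok_l and ok_r):
            break
        ok1, (x1, y1, w1, h1) = trk_l.update(frame_l)
        ok2, (x2, y2, w2, h2) = trk_r.update(frame_r)
        if not (ok1 and ok2):
            continue
        p1 = np.float32([[x1 + w1 / 2], [y1 + h1 / 2]])  # 2x1 pixel coordinates
        p2 = np.float32([[x2 + w2 / 2], [y2 + h2 / 2]])
        X = cv2.triangulatePoints(P_left, P_right, p1, p2)
        trajectory.append((X[:3] / X[3]).ravel())
    return np.array(trajectory)
```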

Relevance:

30.00%

Publisher:

Abstract:

This work addresses the challenging problem of unconstrained 3D human pose estimation (HPE) from a novel perspective. Existing approaches struggle to operate in realistic applications, mainly because of their scene-dependent priors, such as background segmentation and multi-camera networks, which restrict their use in unconstrained environments. We therefore present a framework which applies action detection and 2D pose estimation techniques to infer 3D poses in an unconstrained video. Action detection offers spatiotemporal priors to 3D human pose estimation by both recognising and localising actions in space-time. Instead of holistic features, e.g. silhouettes, we leverage the flexibility of the deformable part model to detect 2D body parts as a feature for estimating 3D poses. A new unconstrained pose dataset has been collected to demonstrate the feasibility of our method, which achieved promising results, significantly outperforming the relevant state-of-the-art methods. © 2013 IEEE.
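
This is not the paper's inference algorithm, but a minimal illustration of how 2D part detections can constrain 3D pose: an exemplar lookup that scores each pose in a hypothetical 3D pose library by how well its orthographic projection matches the detected 2D joints, up to scale and translation.

```python
# Illustrative exemplar-based 2D-to-3D lifting, not the paper's method.
# joints_2d and pose_library are assumed inputs with J joints each.
import numpy as np

def lift_2d_to_3d(joints_2d, pose_library):
    """joints_2d: (J, 2) detected joints; pose_library: (N, J, 3) exemplar 3D poses."""
    q = joints_2d - joints_2d.mean(axis=0)              # remove 2D translation
    best_err, best_pose = np.inf, None
    for pose in pose_library:
        p = pose[:, :2] - pose[:, :2].mean(axis=0)      # orthographic projection (drop depth)
        scale = np.trace(q.T @ p) / max(np.trace(p.T @ p), 1e-12)  # least-squares scale
        err = np.linalg.norm(q - scale * p)             # 2D alignment residual
        if err < best_err:
            best_err, best_pose = err, pose
    return best_pose, best_err
```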

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

Physical forces generated by cells drive morphologic changes during development and can feed back to regulate cellular phenotypes. Because these phenomena typically occur within a 3-dimensional (3D) matrix in vivo, we used microelectromechanical systems (MEMS) technology to generate arrays of microtissues consisting of cells encapsulated within 3D micropatterned matrices. Microcantilevers were used to simultaneously constrain the remodeling of a collagen gel and to report forces generated during this process. By concurrently measuring forces and observing matrix remodeling at cellular length scales, we report an initial correlation and later decoupling between cellular contractile forces and changes in tissue morphology. Independently varying the mechanical stiffness of the cantilevers and collagen matrix revealed that cellular forces increased with boundary or matrix rigidity, whereas levels of cytoskeletal and extracellular matrix (ECM) proteins correlated with levels of mechanical stress. By mapping these relationships between cellular and matrix mechanics, cellular forces, and protein expression onto a bio-chemo-mechanical model of microtissue contractility, we demonstrate how intratissue gradients of mechanical stress can emerge from collective cellular contractility and, finally, how such gradients can be used to engineer protein composition and organization within a 3D tissue. Together, these findings highlight a complex and dynamic relationship between cellular forces, ECM remodeling, and cellular phenotype and describe a system to study and apply this relationship within engineered 3D microtissues.
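
As a back-of-envelope illustration of the cantilever-based force readout described above, treating each microcantilever as a linear spring gives F = k·δ, with k = 3EI/L³ from Euler-Bernoulli beam theory. The modulus and dimensions below are placeholder values, not those of the paper.

```python
# Force readout principle: cell-generated force follows from the measured tip
# deflection of a cantilever treated as a linear spring, F = k * delta, with
# k = 3*E*I/L**3 for a rectangular cantilever. All values below are assumed.
E = 2.5e6          # Young's modulus of PDMS, Pa (placeholder)
L = 100e-6         # cantilever length, m
w = 20e-6          # cantilever width, m
t = 10e-6          # cantilever thickness, m
delta = 5e-6       # measured tip deflection, m

I = w * t**3 / 12                  # second moment of area, m^4
k = 3 * E * I / L**3               # bending spring constant, N/m
F = k * delta                      # inferred contractile force, N
print(f"spring constant: {k:.3g} N/m, force: {F * 1e6:.3g} microNewtons")
```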

Relevance:

20.00%

Publisher:

Abstract:

3D thermo-electro-mechanical device simulations of a novel, fully CMOS-compatible MOSFET gas sensor operating in an SOI membrane are presented. A comprehensive stress analysis of a Si-SiO2-based multilayer membrane has been performed to ensure a high degree of mechanical reliability at high operating temperatures (e.g. up to 400°C). Moreover, the layout dimensions of the SOI membrane, in particular the aspect ratio between the membrane length and membrane thickness, have been optimised to find the best trade-off between minimal device power consumption and acceptable mechanical stress.
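
For a sense of the kind of stress such an analysis must bound, the biaxial thermal-mismatch stress in a thin film heated with its substrate is σ = E/(1−ν)·Δα·ΔT. The sketch below evaluates this with textbook-order material constants, which are assumptions rather than values from the paper.

```python
# Rough illustration only: biaxial thermal-mismatch stress in an SiO2 film on
# Si when heated from room temperature to the operating point. Constants are
# textbook-order placeholders, not the paper's values.
E_sio2 = 70e9        # Young's modulus of SiO2, Pa
nu_sio2 = 0.17       # Poisson's ratio of SiO2
alpha_sio2 = 0.5e-6  # thermal expansion coefficient of SiO2, 1/K
alpha_si = 2.6e-6    # thermal expansion coefficient of Si, 1/K
delta_T = 400 - 25   # heating from room temperature to 400 degC, K

sigma = E_sio2 / (1 - nu_sio2) * (alpha_sio2 - alpha_si) * delta_T
print(f"biaxial thermal-mismatch stress: {sigma / 1e6:.1f} MPa (negative = compressive)")
```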

Relevance:

20.00%

Publisher:

Abstract:

A direct comparison between time-resolved PLIF measurements of OH and two-dimensional slices from a full three-dimensional DNS data set of turbulent premixed flame kernels in a lean methane/air mixture was presented. The local flame structure and the degree of flame wrinkling were examined in response to differing turbulence intensities and turbulent Reynolds numbers. Simulations were performed using the SEGA DNS code, which is based on the solution of the compressible Navier-Stokes, species, and energy equations for a lean hydrocarbon mixture. For the OH PLIF measurements, a cluster of four Nd:YAG lasers was fired sequentially at high repetition rates and used to pump a dye laser. The frequency-doubled laser beam was formed into a sheet of 40 mm height using a cylindrical telescope. The combination of PLIF and DNS has been demonstrated as a powerful tool for flame analysis. This research will form the basis for the development of sub-grid-scale (SGS) models for LES of lean-premixed combustion systems such as gas turbines. This is an abstract of a paper presented at the 30th International Symposium on Combustion (Chicago, IL, 7/25-30/2004).
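
Purely as an illustration of the flame-wrinkling diagnostic mentioned above (not the authors' post-processing), a wrinkling factor can be estimated from a 2D OH or progress-variable field as the ratio of flame-front contour length to the projected flame width:

```python
# Simple wrinkling-factor estimate from a thresholded 2D scalar field.
# Assumes the flame front spans the domain width; purely illustrative.
import numpy as np
from skimage import measure

def wrinkling_factor(field, threshold):
    """field: 2D scalar array (e.g. OH PLIF intensity); returns front length / projected width."""
    contours = measure.find_contours(field, threshold)
    front_length = sum(np.sum(np.linalg.norm(np.diff(c, axis=0), axis=1)) for c in contours)
    projected_width = field.shape[1] - 1          # straight-line width in pixels
    return front_length / projected_width
```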

Relevance:

20.00%

Publisher:

Abstract:

A new liquid crystal device structure has been developed using vertically grown Multi-Wall Carbon NanoTubes (MWCNTs) as a 3D electrode structure, which allows complicated phase-only holograms to be displayed using conventional liquid crystal materials. The nanotubes act as individual electrode sites that generate an electric field profile, dictating the refractive index profile within the liquid crystal cell. Changing the applied electric field makes it possible to tune these properties to modulate the light as an ideal kinoform. A perfect 3D image can be generated by a computer-generated hologram, using the diffraction of light from the hologram pixels to create an optical wavefront that appears to come from a 3D object. A multilevel phase-modulating device based on nematic LCs is also in progress; it will be used with the LC/CNT devices on an LCOS backplane to project a full 3D image from the kinoform.
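
One standard way to compute a phase-only kinoform of the kind described above is the Gerchberg-Saxton iteration; the sketch below is a generic illustration with numpy FFTs, not the device's actual drive scheme.

```python
# Generic Gerchberg-Saxton iteration: finds a phase-only hologram whose
# far-field diffraction pattern approximates a target intensity.
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Return a phase-only hologram (radians) whose far field approximates target_amplitude."""
    phase = 2 * np.pi * np.random.rand(*target_amplitude.shape)          # random initial phase
    for _ in range(iterations):
        far_field = np.fft.fft2(np.exp(1j * phase))                      # hologram plane -> image plane
        far_field = target_amplitude * np.exp(1j * np.angle(far_field))  # impose target amplitude
        hologram = np.fft.ifft2(far_field)                               # propagate back
        phase = np.angle(hologram)                                       # keep phase only (kinoform constraint)
    return phase
```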