944 results for 3D Reconstruction
Abstract:
Reconstruction of a patient-specific 3D bone surface from 2D calibrated fluoroscopic images and a point distribution model is discussed. We present a 2D/3D reconstruction scheme combining statistical extrapolation and regularized shape deformation with an iterative image-to-model correspondence establishing algorithm, and show its application to reconstructing the surface of the proximal femur. The image-to-model correspondence is established using a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate-spline-based deformation to find a fraction of best-matched 2D point pairs between features detected in the fluoroscopic images and those extracted from the 3D model. The obtained 2D point pairs are then used to set up a set of 3D point pairs, thereby turning a 2D/3D reconstruction problem into a 3D/3D one. We designed and conducted experiments on 11 cadaveric femurs to validate the present reconstruction scheme. An average mean reconstruction error of 1.2 mm was found when two fluoroscopic images were used for each bone; it decreased to 1.0 mm when three fluoroscopic images were used.
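To make the matching step concrete, the following is a minimal Python sketch of a symmetric injective nearest-neighbor pairing between two 2D point sets, keeping only the best-matched fraction. The function name, the mutual-nearest-neighbor formulation and the `keep_fraction` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetric_nn_pairs(src, dst, keep_fraction=0.5):
    """Pair 2D points that are mutual nearest neighbours (which makes the
    mapping injective), then keep only the best-matched fraction."""
    tree_src, tree_dst = cKDTree(src), cKDTree(dst)
    d_fwd, j = tree_dst.query(src)           # src[i] -> dst[j[i]]
    _, i_back = tree_src.query(dst)          # dst[k] -> src[i_back[k]]
    mutual = [(i, j[i], d_fwd[i]) for i in range(len(src))
              if i_back[j[i]] == i]          # mutuality check
    mutual.sort(key=lambda t: t[2])          # best (closest) pairs first
    keep = mutual[: int(keep_fraction * len(mutual))]
    return (np.array([t[0] for t in keep], dtype=int),
            np.array([t[1] for t in keep], dtype=int))
```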
Abstract:
The acquisition of conventional X-ray radiographs remains the standard imaging procedure for the diagnosis of hip-related problems. However, recent studies demonstrated the benefit of using three-dimensional (3D) surface models in the clinical routine. 3D surface models of the hip joint are useful for assessing the dynamic range of motion in order to identify possible pathologies such as femoroacetabular impingement. In this paper, we present an integrated system which consists of X-ray radiograph calibration and subsequent 2D/3D hip joint reconstruction for diagnosis and planning of hip-related problems. A mobile phantom with two different sizes of fiducials, which can be robustly detected within the images, was developed for X-ray radiograph calibration. On the basis of the calibrated X-ray images, a 3D reconstruction method for the acetabulum was developed and applied together with existing techniques to reconstruct a 3D surface model of the hip joint. X-ray radiographs of dry cadaveric hip bones and one cadaveric specimen with soft tissue were used to test the robustness of the developed fiducial detection algorithm. Computed tomography scans of the cadaveric bones were used to validate the accuracy of the integrated system. The fiducial detection sensitivity was in the same range for both sizes of fiducials: 97.96% for the large fiducials and 97.62% for the small fiducials. The acetabulum and the proximal femur were reconstructed with mean surface distance errors of 1.06 and 1.01 mm, respectively. The results for fiducial detection sensitivity and 3D surface reconstruction demonstrate the capability of the integrated system for 3D hip joint reconstruction from 2D calibrated X-ray radiographs.
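As a hedged illustration of what calibration from detected fiducials can look like, the sketch below estimates a 3×4 X-ray projection matrix from matched 3D phantom fiducials and 2D detections using the classic Direct Linear Transform (DLT); the paper's actual calibration model may differ.

```python
import numpy as np

def dlt_projection_matrix(X3d, x2d):
    """X3d: (N, 3) fiducial positions on the phantom; x2d: (N, 2) detected
    image points, N >= 6. Returns the 3x4 projection matrix P (up to scale)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)              # null vector of the system
```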
Abstract:
In this paper, a novel method to simulate radio propagation is presented. The method consists of two steps: automatic 3D scenario reconstruction and propagation modeling. For 3D reconstruction, a machine learning algorithm is adopted and improved to automatically recognize objects in pictures taken from target regions, and 3D models are generated based on the recognized objects. The propagation model employs a ray tracing algorithm to compute signal strength for each point on the constructed 3D map. Our approach reduces, or even eliminates, infrastructure cost and human effort during the construction of realistic 3D scenes used in radio propagation modeling. In addition, the propagation model proves to be both accurate and efficient.
Abstract:
In this paper, a novel method to simulate radio propagation is presented. The method consists of two steps: automatic 3D scenario reconstruction and propagation modeling. For 3D reconstruction, a machine learning algorithm is adopted and improved to automatically recognize objects in pictures taken from the target region, and 3D models are generated based on the recognized objects. The propagation model employs a ray tracing algorithm to compute signal strength for each point on the constructed 3D map. Compared with other methods, the work presented in this paper contributes to reducing the human effort and cost of constructing 3D scenes; moreover, the developed propagation model demonstrates its potential in both accuracy and efficiency.
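As a rough illustration of the per-ray bookkeeping such a propagation model performs, the sketch below combines free-space path loss with a fixed per-reflection penalty; the constants are illustrative assumptions, not values from either paper.

```python
import math

def received_power_dbm(tx_power_dbm, path_m, freq_mhz,
                       n_reflections, reflection_loss_db=6.0):
    """Free-space path loss (d in km, f in MHz) plus a per-bounce loss."""
    fspl_db = 20*math.log10(path_m / 1000.0) + 20*math.log10(freq_mhz) + 32.44
    return tx_power_dbm - fspl_db - n_reflections * reflection_loss_db

# Example: a 2.4 GHz ray travelling 120 m with one wall reflection.
print(received_power_dbm(20.0, 120.0, 2400.0, 1))
```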
Abstract:
3D crop reconstruction with high temporal resolution, using non-destructive measuring technologies, can support the automation of plant phenotyping processes. The availability of such 3D data can give valuable information about plant development and the interaction of the plant genotype with the environment. This article presents a new methodology for georeferenced 3D reconstruction of maize plant structure. For this purpose a total station, an IMU, and several 2D LiDARs with different orientations were mounted on an autonomous vehicle. The multistep methodology presented, based on the ICP algorithm for point cloud fusion, made it possible to overlap the georeferenced point clouds. The overlapping algorithm showed that the aerial points (corresponding mainly to plant parts) amounted to only 1.5%–9% of the total registered data; the remaining points were redundant or ground points. Including different LiDAR viewpoints of the scene yields a more realistic representation of the surroundings, incorporating new useful information but also noise. Georeferenced 3D maize plant reconstruction at different growth stages, combined with the accuracy of the total station, could be highly useful for precision agriculture at the individual-plant level.
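For readers unfamiliar with the fusion step, here is a minimal point-to-point ICP iteration (SVD-based rigid alignment) as a sketch; production pipelines such as the one described add outlier rejection and convergence tests.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Rigidly align src (N, 3) to dst (M, 3); returns the moved src."""
    P, tree = src.copy(), cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(P)               # closest-point correspondences
        Q = dst[idx]
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - mu_p).T @ (Q - mu_q))
        R = Vt.T @ U.T                       # optimal rotation (Kabsch)
        if np.linalg.det(R) < 0:             # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_p
        P = P @ R.T + t                      # apply the rigid transform
    return P
```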
Abstract:
Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This paper proposes a Graphics Processing Unit (GPU) parallel implementation of the GNG with Compute Unified Device Architecture (CUDA). In contrast to existing algorithms, the proposed GPU implementation accelerates the learning process while maintaining a good quality of representation. Comparative experiments using iterative, parallel and hybrid implementations are carried out to demonstrate the effectiveness of the CUDA implementation. The results show that GNG learning with the proposed implementation achieves a speed-up of 6× compared with the single-threaded CPU implementation. The GPU implementation has also been applied to a real application with time constraints, the acceleration of 3D scene reconstruction for egomotion, in order to validate the proposal.
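The computational bottleneck such a GPU port targets is the per-sample search for the winner and second-nearest GNG node; a NumPy sketch of that inner loop is shown below, and it is this distance computation that a CUDA kernel would parallelise across nodes (the sketch is illustrative, not the paper's code).

```python
import numpy as np

def two_nearest_nodes(weights, x):
    """weights: (n_nodes, dim) GNG node positions; x: (dim,) input sample.
    Returns the indices of the nearest and second-nearest nodes."""
    d2 = np.sum((weights - x) ** 2, axis=1)  # squared distances to all nodes
    s1 = int(np.argmin(d2))
    d2[s1] = np.inf                          # exclude the winner
    s2 = int(np.argmin(d2))
    return s1, s2
```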
Abstract:
There is now a wide variety of panoramic lenses available on the market, some of which have surprising characteristics. Belonging to this latter category, Panomorph lenses are anamorphic panoramic lenses whose distortion profile is strongly non-uniform, producing zones of increased magnification in the field of view. In a mobile robotics context, these particularities can be exploited in stereoscopic systems for the 3D reconstruction of objects of interest, providing both good knowledge of the environment and access to finer details thanks to the zones of increased magnification. However, because of their complexity, these lenses are difficult to calibrate and, to our knowledge, no study has really addressed this issue. The main objective of this thesis is the design, development and performance evaluation of Panomorph stereoscopic systems. Calibration was carried out using an established technique based on planar targets and a widely used calibration toolbox. In addition, new mathematical techniques aiming to restore rotational symmetry in the image (circle) and to make the focal length uniform (uniform circle) were developed to see whether calibration could be facilitated in this way. First, the field of view was divided into zones within which the instantaneous focal length varies little, and calibration was performed for each of them. Then, the systems were also calibrated over the entire field of view simultaneously. The results showed that the zone-based calibration technique does not produce a significant gain in the quality of 3D reconstructions of objects of interest compared with global calibration. However, the study of this new approach made it possible to evaluate the performance of Panomorph stereoscopic systems over the entire field of view and to show that quality 3D reconstructions can be obtained in all zones. Moreover, the circle technique produced 3D reconstruction results generally equivalent to those obtained with the original coordinates. Since some calibration tools, unlike the one used in this work, have only a single degree of freedom for the focal length, this technique could make it possible to calibrate Panomorph lenses with them. Finally, some conclusions could be drawn about the determining factors influencing the quality of 3D reconstruction with Panomorph stereoscopic systems and the characteristics to favour when choosing lenses. The difficulty of calibrating Panomorph optics in the laboratory led to the development of a virtual calibration technique using optical design software and a calibration toolbox. This approach made it possible to run simulations on the impact of operating conditions on the calibration parameters and on the effect of calibration conditions on reconstruction quality. Experiments of this kind are practically impossible to carry out in the laboratory but are of clear interest to users.
Virtual calibration of a traditional lens also showed that the mean reprojection error, commonly used to assess the quality of a calibration, is not necessarily a reliable indicator of 3D reconstruction quality. Additional data are therefore needed to properly judge the quality of a calibration.
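For reference, the mean reprojection error questioned in the final paragraph can be computed as follows for an undistorted pinhole model; K, R and t denote the usual intrinsics and pose, and the function is a generic sketch rather than the thesis toolbox.

```python
import numpy as np

def mean_reprojection_error(K, R, t, X3d, x2d):
    """K: (3,3) intrinsics; R: (3,3), t: (3,) pose; X3d: (N,3) world points;
    x2d: (N,2) measured image points. Returns the mean pixel error."""
    Xc = X3d @ R.T + t                       # world -> camera coordinates
    uv = Xc @ K.T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    return float(np.mean(np.linalg.norm(uv - x2d, axis=1)))
```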
Abstract:
We aim to demonstrate unaided visual 3D pose estimation and map reconstruction using both monocular and stereo vision techniques. To date, our work has focused on collecting data from Unmanned Aerial Vehicles, which generates a number of significant issues specific to the application. Such issues include scene reconstruction degeneracy from planar data, poor structure initialisation for monocular schemes and difficult 3D reconstruction due to high feature covariance. Most modern Visual Odometry (VO) and related SLAM systems make use of a number of sensors to inform pose and map generation, including laser range-finders, radar, inertial units and vision [1]. By fusing sensor inputs, the advantages and deficiencies of each sensor type can be handled in an efficient manner. However, many of these sensors are costly and each adds to the complexity of such robotic systems. With continual advances in the capabilities, small size, passivity and low cost of visual sensors, along with the dense, information-rich data that they provide, our research focuses on the use of unaided vision to generate pose estimates and maps from robotic platforms. We propose that highly accurate (~5 cm) dense 3D reconstructions of large-scale environments can be obtained in addition to the localisation of the platform described in other work [2]. Using images taken from cameras, our algorithm simultaneously generates an initial visual odometry estimate and scene reconstruction from visible features, then passes this estimate to a bundle-adjustment routine to optimise the solution. From this optimised scene structure and the original images, we aim to create a detailed, textured reconstruction of the scene. By applying such techniques to a unique airborne scenario, we hope to expose new robotic applications of SLAM techniques. The ability to obtain highly accurate 3D measurements of an environment at low cost is critical in a number of agricultural and urban monitoring situations. We focus on cameras because such sensors are small, cheap and light-weight and can therefore be deployed in smaller aerial vehicles. This, coupled with the ability of small aerial vehicles to fly near the ground in a controlled fashion, will assist in increasing the effective resolution of the reconstructed maps.
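A core step in turning matched features into the initial scene structure, before bundle adjustment refines it, is two-view triangulation. Below is a standard linear (DLT) triangulation sketch, assuming the 3×4 projection matrices P1 and P2 are already available from the odometry estimate.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: (3,4) projection matrices; x1, x2: (2,) matched pixels.
    Returns the triangulated 3D point."""
    A = np.stack([x1[0]*P1[2] - P1[0],
                  x1[1]*P1[2] - P1[1],
                  x2[0]*P2[2] - P2[0],
                  x2[1]*P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]                              # homogeneous solution
    return Xh[:3] / Xh[3]
```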
Abstract:
Gait recognition approaches continue to struggle with challenges including view-invariance, low-resolution data, robustness to unconstrained environments, and fluctuating gait patterns due to subjects carrying goods or wearing different clothes. Although computationally expensive, model-based techniques offer promise over appearance-based techniques for these challenges, as they gather gait features and interpret gait dynamics in skeleton form. In this paper, we propose a fast 3D ellipsoidal-based gait recognition algorithm using a 3D voxel model derived from multi-view silhouette images. This approach directly addresses the limitations of view dependency and self-occlusion in existing ellipse-fitting model-based approaches. Voxel models are segmented into four components (left and right legs, above and below the knee), and ellipsoids are fitted to each region using eigenvalue decomposition. Features derived from the ellipsoid parameters are modeled using a Fourier representation to retain the temporal dynamic pattern for classification. We demonstrate the proposed approach on the CMU MoBo database and show that an improvement of 15–20% can be achieved over a 2D ellipse-fitting baseline.
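The ellipsoid fitting by eigenvalue decomposition can be sketched as a PCA of each voxel segment's coordinates; the 2·sqrt(eigenvalue) semi-axis convention used below is a common choice assumed here, not necessarily the paper's.

```python
import numpy as np

def fit_ellipsoid(voxels):
    """voxels: (N, 3) coordinates of one body segment. Returns the centroid,
    semi-axis lengths and axis directions (columns of `axes`)."""
    c = voxels.mean(axis=0)
    cov = np.cov((voxels - c).T)             # 3x3 coordinate covariance
    evals, axes = np.linalg.eigh(cov)        # ascending eigenvalues
    radii = 2.0 * np.sqrt(evals)             # ~2 standard deviations per axis
    return c, radii, axes
```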
Abstract:
Topographic structural complexity of a reef is highly correlated with coral growth rates, coral cover and overall levels of biodiversity, and is therefore integral in determining ecological processes. Modeling these processes commonly includes measures of rugosity obtained from a wide range of different survey techniques that often fail to capture rugosity at different spatial scales. Here we show that accurate estimates of rugosity can be obtained from video footage captured using underwater video cameras (i.e., monocular video). To demonstrate the accuracy of our method, we compared the results to in situ measurements of a 2 m × 20 m area of forereef from Glovers Reef atoll in Belize. Sequential pairs of images were used to compute fine-scale bathymetric reconstructions of the reef substrate from which precise measurements of rugosity and reef topographic structural complexity can be derived across multiple spatial scales. To achieve accurate bathymetric reconstructions from uncalibrated monocular video, the position of the camera for each image in the video sequence and the intrinsic parameters (e.g., focal length) must be computed simultaneously. We show that these parameters can often be determined when the data exhibit parallax-type motion, and that rugosity and reef complexity can be accurately computed from existing video sequences taken with any type of underwater camera in any reef habitat or location. This technique provides a wide array of possibilities for future coral reef research by providing a cost-effective and automated method of determining structural complexity and rugosity in both new and historical video surveys of coral reefs.
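On a gridded bathymetric reconstruction, rugosity reduces to the ratio of true surface area to planar area; the toy computation below assumes a height grid with uniform spacing dx.

```python
import numpy as np

def rugosity(z, dx=1.0):
    """z: (H, W) height grid in the same units as dx."""
    gy, gx = np.gradient(z, dx)              # local surface slopes
    # area element of the height field: sqrt(1 + zx^2 + zy^2) dx dy
    area_3d = np.sum(np.sqrt(1.0 + gx**2 + gy**2)) * dx * dx
    area_2d = z.size * dx * dx               # planar footprint
    return area_3d / area_2d
```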
Abstract:
Trajectory basis Non-Rigid Structure From Motion (NRSFM) currently faces two problems: the limit of reconstructability and the need to tune the basis size for different sequences. This paper provides a novel theoretical bound on 3D reconstruction error, arguing that the existing definition of reconstructability is fundamentally flawed in that it fails to consider system condition. This insight motivates a novel strategy whereby the trajectory's response to a set of high-pass filters is minimised. The new approach eliminates the need to tune the basis size and is more efficient for long sequences. Additionally, the truncated DCT basis is shown to have a dual interpretation as a high-pass filter. The success of trajectory filter reconstruction is demonstrated quantitatively on synthetic projections of real motion capture sequences and qualitatively on real image sequences.
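The dual reading of the truncated DCT basis can be illustrated directly: zeroing the first K DCT coefficients of a trajectory leaves exactly the part the K-atom basis cannot model, i.e. the output of the complementary high-pass filter (a generic sketch, not the paper's filter bank).

```python
import numpy as np
from scipy.fftpack import dct, idct

def highpass_residual(traj, K):
    """traj: (F,) one coordinate of a point trajectory over F frames."""
    coeffs = dct(traj, norm='ortho')
    coeffs[:K] = 0.0                         # remove the low-frequency atoms
    return idct(coeffs, norm='ortho')        # high-pass response
```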
Abstract:
In this paper we propose a method to generate a large-scale and accurate dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes, which easily scales to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps, which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resultant 3D semantic model is improved by taking moving objects in the scene into consideration. We demonstrate our method on the publicly available KITTI dataset and evaluate the performance against manually generated ground truth.
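One simple way to fuse per-frame 2D semantic labels into a voxel model, shown below as a hedged sketch, is per-voxel majority voting over the labels of the pixels that observe each voxel; the `votes` histogram and `NUM_CLASSES` constant are hypothetical stand-ins for the paper's actual fusion scheme.

```python
import numpy as np
from collections import defaultdict

NUM_CLASSES = 11                             # illustrative label-set size

votes = defaultdict(lambda: np.zeros(NUM_CLASSES, dtype=np.int64))

def fuse(voxel_key, label):
    votes[voxel_key][label] += 1             # accumulate evidence per voxel

def voxel_label(voxel_key):
    return int(np.argmax(votes[voxel_key]))  # current consensus label
```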
Abstract:
This thesis developed a method for real-time and handheld 3D temperature mapping using a combination of off-the-shelf devices and efficient computer algorithms. It contributes a new sensing and data processing framework to the science of 3D thermography, unlocking its potential for application areas such as building energy auditing and industrial monitoring. New techniques for the precise calibration of multi-sensor configurations were developed, along with several algorithms that ensure both accurate and comprehensive surface temperature estimates can be made for rich 3D models as they are generated by a non-expert user.
Abstract:
We learn from the past that invasive species have caused tremendous damage to native species and serious disruption to agricultural industries. It is crucial for us to prevent this in the future. The first step of this process is to correctly distinguish an invasive species from native ones. Current identification methods, relying mainly on 2D images, can result in low accuracy and be time-consuming. Such methods provide little help to a quarantine officer who must respond under time constraints while on duty. To deal with this problem, we propose new solutions using 3D virtual models of insects. We explain how working with insects in the 3D domain can be much better than in the 2D domain. We also describe how to create true-color 3D models of insects using an image-based 3D reconstruction method. This method is ideal for quarantine control and inspection tasks that involve the verification of a physical specimen against known invasive species. Finally we show that these insect models provide valuable material for other applications such as research, education, arts and entertainment. © 2013 IEEE.
Abstract:
Purpose: To determine the extent to which the accuracy of magnetic resonance imaging (MRI) based virtual 3-dimensional (3D) models of the intact orbit can approach that of the gold standard, computed tomography (CT) based models. The goal was to determine whether MRI is a viable alternative to CT scans in patients with isolated orbital fractures and penetrating eye injuries, pediatric patients, and patients requiring multiple scans in whom radiation exposure is ideally limited. Materials and Methods: Patients who presented with unilateral orbital fractures to the Royal Brisbane and Women's Hospital from March 2011 to March 2012 were recruited to participate in this cross-sectional study. The primary predictor variable was the imaging technique (MRI vs CT). The outcome measurements were orbital volume (primary outcome) and geometric intraorbital surface deviations (secondary outcome) between the MRI- and CT-based 3D models. Results: Eleven subjects (9 male) were enrolled. The patients' mean age was 30 years. On average, the MRI models underestimated the orbital volume of the CT models by 0.50 ± 0.19 cm³. The average intraorbital surface deviation between the MRI and CT models was 0.34 ± 0.32 mm, with 78 ± 2.7% of the surface within a tolerance of 0.5 mm. Conclusions: The volumetric differences of the MRI models are comparable to reported results from CT models. The intraorbital MRI surface deviations are smaller than the accepted tolerance for orbital surgical reconstructions. Therefore, the authors believe that MRI is an accurate radiation-free alternative to CT for the primary imaging and 3D reconstruction of the bony orbit.
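The two reported measurements can be sketched in a few lines: orbital volume from a binary segmentation mask, and a mean surface deviation approximated by nearest-vertex distances between the MRI and CT mesh vertices (a simplification of a true surface-to-surface distance; names are illustrative).

```python
import numpy as np
from scipy.spatial import cKDTree

def volume_cm3(mask, voxel_mm3):
    """mask: boolean segmentation volume; voxel_mm3: voxel volume in mm^3."""
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> cm^3

def mean_surface_deviation_mm(verts_mri, verts_ct):
    """verts_*: (N, 3) mesh vertex coordinates in mm."""
    d, _ = cKDTree(verts_ct).query(verts_mri)
    return float(d.mean())
```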