938 results for 3D Point Cloud


Relevance:

100.00%

Publisher:

Abstract:

The commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure geometric data are not fully automated because of the manual pre- and/or post-processing work they require. The amount of human intervention and, in some cases, the high equipment costs associated with these methods impede their adoption for the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an inexpensive alternative, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and the 3D coordinates of the matched feature points are calculated via triangulation. SURF features detected in successive video frames are likewise matched automatically, with the RANSAC algorithm used to discard mismatches. The quaternion motion estimation method is then used along with bundle adjustment optimization to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distances of randomly selected feature points with their corresponding tape measurements.
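The triangulation step described above can be sketched with a minimal linear (DLT) triangulation in NumPy. This is only an illustrative sketch with toy projection matrices, not the paper's implementation; all names and values are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature.
    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy calibrated stereo rig: identity camera plus one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers X_true up to numerical precision
```

In the paper's pipeline, `x1` and `x2` would be the matched SURF feature locations after RANSAC filtering.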


The use of 3D data in mobile robotics applications provides valuable information about the robot's environment, but the huge amount of 3D information is usually unmanageable given the robot's storage and computing capabilities. Data compression is therefore necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation plus a set of point/area information. The compression system can be customized to achieve different compression or accuracy ratios. It also supports a color segmentation stage that preserves the original scene color information and provides a realistic scene reconstruction. The design of the method provides fast scene reconstruction, useful for further visualization or processing tasks.
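The plane-as-triangulation representation can be illustrated with a short sketch using SciPy's Delaunay triangulation. The `compress_plane` helper is hypothetical; the paper's system additionally stores point/area information and a color segmentation stage, which are omitted here.

```python
import numpy as np
from scipy.spatial import Delaunay

def compress_plane(points):
    """Lossy-compress points lying near a common plane: store the plane
    parameters plus a 2-D Delaunay triangulation of the in-plane coordinates."""
    centroid = points.mean(axis=0)
    # The plane normal is the singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(points - centroid)
    u, v, normal = Vt[0], Vt[1], Vt[2]
    coords2d = (points - centroid) @ np.stack([u, v], axis=1)
    tri = Delaunay(coords2d)
    return {"centroid": centroid, "normal": normal,
            "vertices2d": coords2d, "simplices": tri.simplices}

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(200, 2))
plane_pts = np.column_stack([xy, np.full(200, 2.0)])  # points on the plane z = 2
model = compress_plane(plane_pts)
print(len(model["simplices"]), "triangles for", len(plane_pts), "points")
```

Tuning how aggressively near-coplanar points are merged into one plane is what would realize the different compression/accuracy ratios mentioned above.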


Rock mass characterization requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in Light Detection and Ranging (LiDAR) instrumentation allow quick and accurate 3D data acquisition, leading to the development of new methodologies for the automatic characterization of rock mass discontinuities. This paper presents a methodology for the identification and analysis of flat surfaces outcropping in a rocky slope using 3D data obtained with LiDAR. The method identifies and defines the algebraic equations of the different planes of the rock slope surface by applying an analysis based on a neighbouring-points coplanarity test, finding principal orientations by Kernel Density Estimation, and identifying clusters with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. Different sources of information (synthetic and 3D scanned data) were employed in a complete sensitivity analysis of the parameters in order to identify the optimal values of the variables of the proposed method. In addition, the raw source files and the obtained results are freely provided, to allow more straightforward method comparison and more reproducible research.
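The neighbouring-points coplanarity test can be sketched with a local PCA: the eigenvector of the smallest eigenvalue of a neighbourhood's covariance is the local normal, and the relative size of that eigenvalue measures flatness. This is only a sketch of the first stage; the KDE and DBSCAN clustering steps of the method are not reproduced, and the helper names are hypothetical.

```python
import numpy as np

def estimate_normals(points, k=12):
    """Per-point normal from PCA of the k nearest neighbours; the ratio of
    the smallest eigenvalue acts as a coplanarity test (~0 means flat)."""
    normals = np.zeros_like(points)
    flatness = np.zeros(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)      # eigenvalues in ascending order
        normals[i] = v[:, 0]            # smallest-variance direction = normal
        flatness[i] = w[0] / w.sum()    # ~0 for a coplanar neighbourhood
    return normals, flatness

rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 1, (100, 2)), np.zeros(100)])  # z = 0
normals, flatness = estimate_normals(plane)
print(flatness.max())  # near zero: every neighbourhood is coplanar
```

In the full method, the resulting normals would feed the KDE step (principal orientations) and DBSCAN (cluster extraction).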


The complete characterization of rock masses implies acquiring information on both the materials that compose the rock mass and the discontinuities that divide the outcrop. Recent advances in remote sensing techniques, such as Light Detection and Ranging (LiDAR), allow the accurate and dense acquisition of 3D information that can be used for the characterization of discontinuities. This work presents a novel methodology for calculating the normal spacing of persistent and non-persistent discontinuity sets from 3D point cloud datasets, considering the three-dimensional relationships between clusters. The approach requires a previously classified 3D dataset: the discontinuity sets have been extracted, every point is labeled with its corresponding discontinuity set, and every exposed planar surface is analytically defined. For each discontinuity set, the method then calculates the normal spacing between an exposed plane and its nearest one, considering their relationship in 3D space. This link between planes is obtained by finding, for every point, its nearest point belonging to the same discontinuity set, which identifies its nearest plane and hence allows the normal spacing to be calculated for every plane. Finally, the normal spacing of the set is taken as the mean of all these plane-to-plane spacings. The methodology is validated through three case studies using synthetic data and 3D laser scanning datasets. The first case illustrates the fundamentals and the performance of the proposed methodology. The second and third case studies correspond to two rock slopes for which datasets were acquired with a 3D laser scanner. The second case study showed that the results obtained with the traditional and the proposed approaches are reasonably similar. Nevertheless, discrepancies between the two approaches appeared when the exposed planes belonging to a discontinuity set were hard to identify, or when the pairing of planes was difficult to establish during the fieldwork campaign. The third case study also evidenced that when the number of identified exposed planes is high, the normal spacing calculated with the proposed approach is smaller than that obtained with the traditional approach.
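A minimal sketch of the spacing computation follows, assuming a pre-classified dataset in which all planes of a set share a common unit normal (so the spacing between planes n·x + d1 = 0 and n·x + d2 = 0 is |d1 − d2|). The data structures are hypothetical, and the paper's per-point pairing is simplified to one nearest plane per exposed plane.

```python
import numpy as np

def normal_spacing(points, plane_id, planes):
    """Mean normal spacing of one discontinuity set.
    points: (N, 3) points of the set; plane_id: (N,) plane label per point;
    planes: dict label -> (unit_normal, d) for the plane n.x + d = 0.
    Assumes all planes of the set are parallel with consistently oriented normals."""
    spacings = []
    for lbl, (n, d) in planes.items():
        own = plane_id == lbl
        others = ~own
        if not others.any():
            continue
        # Nearest same-set point lying on a different plane...
        dists = np.linalg.norm(points[others][:, None, :] - points[own][None, :, :], axis=2)
        j = np.argmin(dists.min(axis=1))
        # ...identifies the paired plane; spacing is measured along the normal.
        _, d2 = planes[plane_id[others][j]]
        spacings.append(abs(d - d2))
    return float(np.mean(spacings))

# Three parallel exposed planes: z = 0, z = 1 and z = 2.5, four points each.
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
pts = np.vstack([np.column_stack([xy, np.full(4, z)]) for z in (0.0, 1.0, 2.5)])
labels = np.repeat([0, 1, 2], 4)
planes = {0: (np.array([0, 0, 1.0]), 0.0),
          1: (np.array([0, 0, 1.0]), -1.0),
          2: (np.array([0, 0, 1.0]), -2.5)}
print(normal_spacing(pts, labels, planes))  # mean of 1, 1 and 1.5
```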


The .bin files should be opened using CloudCompare.


Rock mass classification systems are widely used tools for assessing the stability of rock slopes. Their calculation requires the prior quantification of several parameters during conventional fieldwork campaigns, such as the orientation of the discontinuity sets, the main properties of the existing discontinuities and the geomechanical characterization of the intact rock mass, which can be time-consuming and often risky. Conversely, the use of relatively new remote sensing data for modelling the rock mass surface by means of 3D point clouds is changing current investigation strategies in different rock slope engineering applications. In this paper, the main practical issues affecting the application of the Slope Mass Rating (SMR) to the characterization of rock slopes from 3D point clouds are reviewed through three case studies, from an end-user point of view. To this end, the SMR adjustment factors, calculated from different sources of information and processes using different software packages, are compared with those calculated from conventional fieldwork data. Special attention is paid to the differences between the SMR indexes derived from the 3D point cloud and from the conventional fieldwork approaches, to the main factors that determine the quality of the data, and to several recognized practical issues. Finally, the reliability of the Slope Mass Rating for the characterization of rock slopes is highlighted.
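For context, the SMR index combines a basic RMR with the adjustment factors mentioned above: SMR = RMR_b + F1·F2·F3 + F4. The sketch below uses the discrete Romana (1985) tables for the planar failure case; boundary conventions at the class limits may differ slightly from the original tables, so consult the original reference before using it.

```python
def smr_planar(rmr_b, joint_dipdir, joint_dip, slope_dipdir, slope_dip, f4=0):
    """Slope Mass Rating (Romana, 1985) for planar failure, discrete tables.
    Angles in degrees; f4 is the excavation method factor
    (e.g. 0 for normal blasting, +15 for a natural slope)."""
    A = abs(joint_dipdir - slope_dipdir)   # parallelism joint/slope strike
    F1 = 0.15 if A > 30 else 0.40 if A > 20 else 0.70 if A > 10 else 0.85 if A > 5 else 1.0
    B = joint_dip                          # joint dip controls F2
    F2 = 0.15 if B < 20 else 0.40 if B < 30 else 0.70 if B < 35 else 0.85 if B < 45 else 1.0
    C = joint_dip - slope_dip              # relative dip controls F3
    F3 = 0 if C > 10 else -6 if C > 0 else -25 if C == 0 else -50 if C >= -10 else -60
    return rmr_b + F1 * F2 * F3 + f4

# Example: slope 60/210, joint 35/200, basic RMR of 62, normal blasting.
print(smr_planar(62, 200, 35, 210, 60))
```

In a point cloud workflow, `joint_dipdir` and `joint_dip` would come from the orientations of the extracted discontinuity planes rather than from compass measurements.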


This paper presents the main concepts of a project under development concerning the analysis of a scene containing a large number of objects represented as unstructured point clouds. To achieve what we call the "optimal scene interpretation" (the shortest scene description satisfying the MDL principle), we follow an approach for managing 3D objects based on a semantic, ontology-based framework for adding and sharing conceptual knowledge about spatial objects.


In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
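The flavour of the reduction can be illustrated for the manifold regularization special case: deform the base Gram matrix with a graph Laplacian built over labeled and unlabeled points, then run standard kernel ridge regression on the labeled block of the deformed kernel. This is a hedged NumPy sketch of that idea, not the paper's exact construction; all names and parameter values are illustrative.

```python
import numpy as np

def deformed_gram(G, L, mu):
    """Data-dependent kernel: deform the base Gram matrix G with a graph
    Laplacian L over labeled + unlabeled points; mu weights the point cloud
    regularization term."""
    n = len(G)
    M = mu * L
    return G - G @ np.linalg.solve(np.eye(n) + M @ G, M @ G)

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 2))                 # 5 labeled + 15 unlabeled points
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
G = np.exp(-sq)                              # base RBF Gram matrix
W = np.exp(-sq)
L = np.diag(W.sum(axis=1)) - W               # graph Laplacian of the sample
Gt = deformed_gram(G, L, mu=0.5)
# Standard supervised kernel ridge regression on the labeled block:
lab = np.arange(5)
y = np.sign(X[lab, 0])
alpha = np.linalg.solve(Gt[np.ix_(lab, lab)] + 0.1 * np.eye(5), y)
pred = Gt[:, lab] @ alpha                    # predictions for all 20 points
print(pred.shape)
```

The point of the theorem is exactly this shape: once the new kernel is formed, the unlabeled data has done its work and only a standard supervised solver is needed.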


This paper describes a novel method for determining the extrinsic calibration parameters between 2D and 3D LIDAR sensors with respect to a vehicle base frame. To recover the calibration parameters we attempt to optimize the quality of a 3D point cloud produced by the vehicle as it traverses an unknown, unmodified environment. The point cloud quality metric is derived from Rényi Quadratic Entropy and quantifies the compactness of the point distribution using only a single tuning parameter. We also present a fast approximate method to reduce the computational requirements of the entropy evaluation, allowing unsupervised calibration in vast environments with millions of points. The algorithm is analyzed using real world data gathered in many locations, showing robust calibration performance and substantial speed improvements from the approximations.
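The entropy-based quality metric can be sketched directly: under a Gaussian kernel, the Rényi quadratic entropy of a cloud is the negative log of the mean pairwise kernel value, so a more compact (better calibrated) cloud scores lower. A minimal NumPy version follows, without the paper's fast approximation; `sigma` plays the role of the single tuning parameter mentioned above.

```python
import numpy as np

def renyi_quadratic_entropy(points, sigma=0.05):
    """Rényi quadratic entropy of a point cloud under a Gaussian kernel:
    H2 = -log( mean_ij N(x_i - x_j; 0, 2*sigma^2*I) ).
    More compact clouds yield lower entropy."""
    n, d = points.shape
    sq = ((points[:, None] - points[None]) ** 2).sum(-1)
    var = 2.0 * sigma ** 2
    gauss = np.exp(-sq / (2 * var)) / (2 * np.pi * var) ** (d / 2)
    return -np.log(gauss.mean())

rng = np.random.default_rng(3)
crisp = rng.normal(scale=0.01, size=(200, 3))          # well-calibrated surface
blurred = crisp + rng.normal(scale=0.05, size=(200, 3))  # miscalibrated copy
print(renyi_quadratic_entropy(crisp) < renyi_quadratic_entropy(blurred))
```

A calibration search would then minimize this entropy over the candidate extrinsic parameters; the naive O(N^2) pairwise sum here is what the paper's fast approximation avoids.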


Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach aimed at faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method is evaluated on both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared with the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
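The projection-geometry lookup can be sketched on a depth image: the neighbour candidates of a point are just the adjacent pixels, so the cost per query is constant, and a depth-jump test rejects neighbours that lie across an object boundary. Function names and thresholds below are hypothetical.

```python
import numpy as np

def grid_neighbours(depth, u, v, win=1, max_jump=0.1):
    """Neighbour lookup via the depth camera's projection geometry: candidates
    are the adjacent pixels, so lookup time is independent of cloud size.
    Pixels across a depth discontinuity (jump > max_jump) are rejected,
    reinforcing object boundaries."""
    h, w = depth.shape
    out = []
    for dv in range(-win, win + 1):
        for du in range(-win, win + 1):
            uu, vv = u + du, v + dv
            if (du or dv) and 0 <= uu < w and 0 <= vv < h:
                if abs(depth[vv, uu] - depth[v, u]) <= max_jump:
                    out.append((uu, vv))
    return out

depth = np.full((4, 4), 1.0)
depth[:, 2:] = 3.0                     # a step edge between two surfaces
print(grid_neighbours(depth, 1, 1))    # stays on the near surface
```

A KD-tree or FLANN query would instead scale with the total number of points, which is the overhead the projected lookup removes.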


Accurate three-dimensional representations of cultural heritage sites are highly valuable for scientific study, conservation, and educational purposes. In addition to their use for archival purposes, 3D models enable efficient and precise measurement of relevant natural and architectural features. Many cultural heritage sites are large and complex, consisting of multiple structures spatially distributed over tens of thousands of square metres. The process of effectively digitising such geometrically complex locations requires measurements to be acquired from a variety of viewpoints. While several technologies exist for capturing the 3D structure of objects and environments, none are ideally suited to complex, large-scale sites, mainly due to their limited coverage or acquisition efficiency. We explore the use of a recently developed handheld mobile mapping system called Zebedee in cultural heritage applications. The Zebedee system is capable of efficiently mapping an environment in three dimensions by continually acquiring data as an operator holding the device traverses the site. The system was deployed at the former Peel Island Lazaret, a culturally significant site in Queensland, Australia, consisting of dozens of buildings of various sizes spread across an area of approximately 400 × 250 m. With the Zebedee system, the site was scanned in half a day, and a detailed 3D point cloud model (with over 520 million points) was generated from the 3.6 hours of acquired data in 2.6 hours. We present results demonstrating that Zebedee was able to accurately capture both site context and building detail, with accuracy comparable to manual measurement techniques and greatly increased efficiency and scope. The scan allowed us to record derelict buildings that previously could not be measured because of the scale and complexity of the site. The resulting 3D model captures both interior and exterior features of buildings, including structure, materials, and the contents of rooms.


Reconstructing 3D motion data is highly under-constrained due to several common sources of data loss during measurement, such as projection, occlusion, or miscorrespondence. We present a statistical model of 3D motion data, based on the Kronecker structure of the spatiotemporal covariance of natural motion, as a prior on 3D motion. This prior is expressed as a matrix normal distribution, composed of separable and compact row and column covariances. We relate the marginals of the distribution to the shape, trajectory, and shape-trajectory models of prior art. When the marginal shape distribution is not available from training data, we show how placing a hierarchical prior over shapes results in a convex MAP solution in terms of the trace-norm. The matrix normal distribution, fit to a single sequence, outperforms state-of-the-art methods at reconstructing 3D motion data in the presence of significant data loss, while providing covariance estimates of the imputed points.
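The Kronecker structure of the prior can be sketched by sampling: a draw from the matrix normal MN(M, U, V) is X = M + AZB^T with U = AA^T and V = BB^T, and with row-major vectorization cov(vec X) = U ⊗ V, i.e. a separable product of the row (shape) and column (trajectory) covariances. The toy covariances below are illustrative, not fit to motion data.

```python
import numpy as np

def sample_matrix_normal(M, U, V, rng):
    """Draw X ~ MN(M, U, V): X = M + A Z B^T with U = A A^T, V = B B^T.
    With row-major vectorization, cov(vec X) = U kron V (separable form)."""
    A = np.linalg.cholesky(U)
    B = np.linalg.cholesky(V)
    Z = rng.standard_normal(M.shape)
    return M + A @ Z @ B.T

rng = np.random.default_rng(4)
p, t = 3, 5                       # toy shape dimension x trajectory length
U = np.eye(p) + 0.5               # row (shape) covariance
V = np.eye(t) + 0.2               # column (trajectory) covariance
samples = np.stack([sample_matrix_normal(np.zeros((p, t)), U, V, rng).ravel()
                    for _ in range(20000)])
emp = samples.T @ samples / len(samples)   # empirical covariance of vec X
print(np.abs(emp - np.kron(U, V)).max())   # small sampling error only
```

The compactness of this parameterization (p×p plus t×t entries instead of pt×pt) is what makes the prior fittable to a single sequence.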


Computer-generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential-occluder list for each individual hologram-plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate the occlusion computation. Letting several neighboring hologram-plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. © 2009 Optical Society of America.
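The Gaussian interpolation of discrete object points into a continuous representation can be illustrated in one dimension with a normalized Gaussian splat. This CPU sketch only shows the interpolation idea; the GPU algorithm, its occluder lists, and the hologram computation itself are not represented, and all names are hypothetical.

```python
import numpy as np

def gaussian_splat(points, values, grid, sigma):
    """Continuous reconstruction from discrete samples via normalized
    Gaussian interpolation: each grid location blends nearby sample values
    with Gaussian weights."""
    w = np.exp(-((grid[:, None] - points[None]) ** 2) / (2 * sigma ** 2))
    return (w @ values) / w.sum(axis=1)

x = np.array([0.0, 1.0, 2.0])          # discrete object sample positions
y = np.array([0.0, 1.0, 0.0])          # sample values (e.g. surface height)
grid = np.linspace(0, 2, 5)            # dense evaluation grid
print(gaussian_splat(x, y, grid, sigma=0.3))
```

The `sigma` parameter controls how far each object point's influence extends, i.e. how smooth the reconstructed surface is between samples.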


A number of methods are commonly used today to collect infrastructure spatial data (time-of-flight, visual triangulation, etc.). However, current practice lacks a solution that is simultaneously accurate, automatic, and cost-efficient. This paper presents a videogrammetric framework for acquiring infrastructure spatial data that holds the promise of addressing this limitation. It uses a calibrated set of low-cost, high-resolution video cameras that is progressively traversed around the scene and aims to produce a dense 3D point cloud that is updated in each frame, allowing progressive reconstruction as opposed to point-and-shoot capture followed by point cloud stitching. The feasibility of the framework is studied in this paper: the required steps of the process are presented, the unique challenges of each step are identified, and results specific to each step are reported.