993 results for Aerial image
Abstract:
In this paper, a methodology is proposed for the geometric refinement of laser scanning building roof contours using high-resolution aerial images and Markov Random Field (MRF) models. The proposed methodology assumes that the 3D description of each building roof reconstructed from the laser scanning data (i.e., a polyhedron) is topologically correct and that it is only necessary to improve its accuracy. Since roof ridges are accurately extracted from laser scanning data, our main objective is to use high-resolution aerial images to improve the accuracy of roof outlines. To meet this goal, the available roof contours are first projected onto the image-space. After that, the projected polygons and the straight lines extracted from the image are used to establish an MRF description, which is based on relations (relative length, proximity, and orientation) between the two sets of straight lines. The energy function associated with the MRF is minimized with a modified version of the brute-force algorithm, resulting in a grouping of straight lines for each roof object. Finally, each grouping of straight lines is topologically reconstructed based on the topology of the corresponding laser scanning polygon projected onto the image-space. The preliminary results showed that the proposed methodology is promising, since most sides of the refined polygons are geometrically better than the corresponding projected laser scanning straight lines.
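To make the grouping step concrete, here is a minimal Python sketch of the kind of cost described above, assuming hypothetical helper functions and illustrative weights (none of the names or weight values come from the paper): each projected roof side is compared with the image-extracted straight lines through relative length, proximity, and orientation, and the lowest-cost candidate is kept by exhaustive (brute-force) search.

import numpy as np

def line_attributes(p0, p1):
    """Length and orientation (radians) of a 2D segment."""
    d = np.asarray(p1, float) - np.asarray(p0, float)
    return np.hypot(d[0], d[1]), np.arctan2(d[1], d[0])

def midpoint(p0, p1):
    return (np.asarray(p0, float) + np.asarray(p1, float)) / 2.0

def matching_cost(proj_seg, img_seg, w=(1.0, 1.0, 1.0)):
    """Combined cost based on relative length, proximity and orientation.
    The weights w are illustrative, not taken from the paper."""
    lp, ap = line_attributes(*proj_seg)
    li, ai = line_attributes(*img_seg)
    rel_len = abs(lp - li) / max(lp, li)                     # relative length difference
    proximity = np.linalg.norm(midpoint(*proj_seg) - midpoint(*img_seg))
    orient = abs((ap - ai + np.pi / 2) % np.pi - np.pi / 2)  # smallest angle between lines
    return w[0] * rel_len + w[1] * proximity + w[2] * orient

def group_lines(projected_sides, image_lines):
    """Brute-force assignment: for each projected roof side, keep the
    image-extracted straight line with the lowest combined cost."""
    return [min(image_lines, key=lambda s: matching_cost(side, s))
            for side in projected_sides]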
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In this paper, a photogrammetric method is proposed for refining 3D building roof contours extracted from airborne laser scanning data. It is assumed that laser-derived planar roof faces are potentially accurate, while laser-derived building roof contours are not well defined. First, polygons representing building roof contours are extracted from a high-resolution aerial image. Next, the straight-line segments delimiting each building roof polygon are projected onto the corresponding laser-derived roof planes by using a new line-based photogrammetric model. Finally, refined 3D building roof contours are reconstructed by connecting every pair of photogrammetrically projected adjacent straight lines. The results showed that the proposed approach works properly, meaning that the integration of image data and laser scanning data yields better results than using laser scanning data alone. © 2013 IEEE.
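As an illustration of the projection step, the following simplified Python sketch (not the paper's line-based photogrammetric model) back-projects the endpoints of an image segment as rays through the camera's perspective center and intersects them with a laser-derived roof plane; the camera matrix K, rotation R, camera center C, and plane coefficients are assumed to be known.

import numpy as np

def ray_plane_intersection(camera_center, ray_dir, plane):
    """Intersect the ray X(t) = C + t*d with the plane n·X + d0 = 0."""
    n, d0 = np.asarray(plane[:3], float), float(plane[3])
    t = -(n @ camera_center + d0) / (n @ ray_dir)
    return camera_center + t * ray_dir

def project_segment_to_roof_plane(p_img, q_img, K, R, C, plane):
    """Back-project the endpoints of an image segment (pixel coordinates) and
    intersect the rays with a laser-derived roof plane a*x + b*y + c*z + d = 0.
    K: 3x3 camera matrix, R: world-to-camera rotation, C: camera center."""
    Kinv = np.linalg.inv(K)
    out = []
    for p in (p_img, q_img):
        ray_cam = Kinv @ np.array([p[0], p[1], 1.0])   # viewing ray in the camera frame
        ray_world = R.T @ ray_cam                      # rotate the ray into the world frame
        out.append(ray_plane_intersection(np.asarray(C, float), ray_world, plane))
    return out  # two 3D points defining the refined roof edge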
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Cartographic Sciences - FCT
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Graduate Program in Biological Sciences (Plant Biology) - IBRC
Abstract:
When registering spectral radiance from surface targets, the digital numbers recorded by the imaging sensor may vary. Such variation causes imperfections in the images produced by aerial surveys. Variation in image brightness related to the distance from the center of the image is known as the vignetting effect, and correcting it aims at achieving a homogeneous image brightness. The purpose of this paper is to present a methodology to determine a model that minimizes the vignetting effect, based on a model fitted by the Least Squares Method (LSM) using digital numbers (DN) from shadowed regions. The main hypothesis is that the recorded DNs of shadow pixels are suitable for modeling the vignetting effect. Considering that the vignetting effect can be modeled as a trend of spatial image variation, a trend surface analysis of a sample of pixels from shadowed regions was carried out. Two approaches were adopted to represent the shadow regions of an image: the first takes into account the R, G, B components of the aerial image within the visible spectral band, and the second considers the I component of the HSI image. To evaluate the methodology, a case study with a color aerial image was carried out. The findings showed that the best results were obtained by applying the model to the RGB components, which leads to the conclusion that the vignetting effect can be modeled based on trend surfaces fitted to the DNs of shadow regions.
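A minimal sketch of the trend-surface idea, assuming a second-order polynomial surface in image coordinates (the abstract does not state the surface order): the DNs of sampled shadow pixels are fitted by least squares and the resulting surface is used to even out the band brightness.

import numpy as np

def design_matrix(x, y):
    """Second-order trend surface: DN ≈ a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_trend_surface(shadow_xy, shadow_dn):
    """Least-squares fit of the trend surface to shadow-pixel digital numbers."""
    A = design_matrix(shadow_xy[:, 0], shadow_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(shadow_dn, float), rcond=None)
    return coeffs

def correct_vignetting(band, coeffs):
    """Divide one image band by the normalized trend surface to even out brightness."""
    h, w = band.shape
    yy, xx = np.mgrid[0:h, 0:w]
    surface = (design_matrix(xx.ravel(), yy.ravel()) @ coeffs).reshape(h, w)
    surface /= surface.max()                       # normalize so the correction is relative
    return band / np.clip(surface, 1e-6, None)     # avoid division by zero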
Automatic location of control points in aerial images based on vertical terrestrial scenes
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this paper, a method is proposed to geometrically refine laser-derived 3D roofs by using a high-resolution aerial image and Markov Random Field (MRF) models. To do so, an MRF description for grouping straight lines is developed, assuming that each projected side contour and ridge is topologically correct and that it is only necessary to improve its accuracy. Although combining laser data with image data is mainly justified for refining roof contours, the structure of the ridges can give greater robustness to the topological description of the roof structure. The MRF model is formulated based on relationships (length, proximity, and orientation) between the straight lines extracted from the image and the projected polygon, and also on rectangularity and corner constraints. The energy function associated with the MRF is minimized by a genetic algorithm, resulting in a grouping of straight lines for each roof object. Finally, each grouping of straight lines is topologically reconstructed based on the topology of the corresponding laser scanning polygon projected onto the image-space. The results obtained were satisfactory: the method was able to provide refined building roof polygons in which most of the contour sides and ridges were geometrically improved.
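For illustration, a plain genetic-algorithm sketch of the minimization step, assuming the MRF energy is available as a callable over candidate assignments of image lines to projected roof sides; the population size, crossover, and mutation settings are illustrative, not taken from the paper.

import random

def genetic_minimize(energy, n_sides, n_lines, pop_size=50, generations=200,
                     mutation_rate=0.05, seed=0):
    """Minimize energy(assignment) with a plain genetic algorithm.
    An assignment maps each projected roof side to the index of one image line."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_lines) for _ in range(n_sides)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                               # lower energy = fitter individual
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_sides) if n_sides > 1 else 1
            child = a[:cut] + b[cut:]                      # one-point crossover
            for i in range(n_sides):                       # random mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(n_lines)
            children.append(child)
        pop = survivors + children
    return min(pop, key=energy)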
Abstract:
Precise, up-to-date and increasingly detailed road maps are crucial for various advanced road applications, such as lane-level vehicle navigation and advanced driver assistance systems. With very high resolution (VHR) imagery from digital airborne sources, data acquisition, collection and updating would be greatly facilitated if road details could be automatically extracted from the aerial images. In this paper, we propose an effective approach to detect road lane information from aerial images employing object-oriented image analysis. The proposed algorithm starts by constructing the DSM and true orthophotos from the stereo images. The road lane details are detected using an object-oriented, rule-based image classification approach. Because other objects with similar spectral and geometrical attributes interfere with the detection, the extracted road lanes are filtered with the road surface obtained by a progressive two-class decision classifier. The generated road network is evaluated using datasets provided by the Queensland Department of Main Roads. The evaluation shows completeness values ranging between 76% and 98% and correctness values ranging between 82% and 97%.
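For reference, completeness and correctness are standard length-based quality measures for extracted road networks; the sketch below shows how they are computed once extracted and reference road lengths have been matched (the matching itself, usually a buffer comparison, is assumed).

def completeness(matched_reference_length, total_reference_length):
    """Fraction of the reference road network covered by the extraction."""
    return matched_reference_length / total_reference_length

def correctness(matched_extraction_length, total_extraction_length):
    """Fraction of the extracted network that matches the reference."""
    return matched_extraction_length / total_extraction_length

# Example: 4.9 km of a 5.0 km reference matched, 4.9 km of a 5.2 km extraction correct.
print(completeness(4.9, 5.0))   # 0.98
print(correctness(4.9, 5.2))    # ~0.94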
Abstract:
In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches use features to estimate motion. Conversely, the strategy we propose is based on a Multi-Resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An on-board camera in a downward-looking configuration and the assumption of planar scenes are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When the visual estimation is required (e.g., GPS drop-out), this motion is integrated with the previously known estimate of the vehicle's state, obtained from the on-board sensors (GPS/IMU), and the subsequent estimations are based only on the vision-based motion estimates. The proposed strategy is tested with real flight data in representative stages of a flight: cruise, landing, and take-off, the latter two of which are considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimations. Results show a correlation between the visual estimates obtained with MR-ICIA and the GPS/IMU data, which demonstrates that the visual estimation can provide a good approximation of the vehicle's state when it is required (e.g., GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
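A minimal sketch of the homography-decomposition step using OpenCV, assuming the frame-to-frame homography H and the camera matrix K are already available (the ICIA registration itself is not shown); under the planar-scene, downward-looking assumption, the candidate whose plane normal best matches the expected normal is kept.

import numpy as np
import cv2

def motion_from_homography(H, K, plane_normal=np.array([0.0, 0.0, 1.0])):
    """Decompose a frame-to-frame homography into rotation/translation candidates
    and keep the solution whose plane normal best matches the expected one
    (downward-looking camera over a roughly planar scene).
    Note: the translation is recovered only up to the scale of the plane distance."""
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    best = max(range(len(normals)),
               key=lambda i: float(normals[i].ravel() @ plane_normal))
    return rotations[best], translations[best]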
Abstract:
An automatic approach to road lane marking extraction from high-resolution aerial images is proposed, which detects the road surfaces in rural areas based on hierarchical image analysis. The procedure is facilitated by road centrelines obtained from low-resolution images. The lane markings are then extracted from the detected road surfaces with 2D Gabor filters. The proposed method is applied to aerial images of the Bruce Highway around Gympie, Queensland. Evaluation of the generated road surfaces and lane markings on four representative test fields validates the proposed method.
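As an illustration of the Gabor filtering step, the following Python sketch (parameter values are illustrative, not taken from the paper) filters a grayscale road-surface patch with a bank of 2D Gabor kernels at several orientations and thresholds the maximum response to obtain candidate lane-marking pixels.

import numpy as np
import cv2

def gabor_lane_response(road_surface_gray, n_orientations=8,
                        ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Apply a bank of 2D Gabor kernels at several orientations and keep the
    maximum response per pixel; bright, elongated structures such as lane
    markings respond strongly."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, 0).astype(np.float32)
        kernel -= kernel.mean()                    # zero-mean so flat areas give ~0 response
        responses.append(cv2.filter2D(road_surface_gray.astype(np.float32),
                                      cv2.CV_32F, kernel))
    response = np.max(responses, axis=0)
    # A simple threshold turns the response into a candidate lane-marking mask.
    return response > response.mean() + 2 * response.std()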
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)