921 results for Digital aerial images
Abstract:
Getting images from a digital camera is pretty straightforward. However, that is the easy part; the challenge is getting the right image and making sure your digital file is good enough for your output. Set your camera or mobile phone to the highest quality settings, as this will give you more options when you come to manipulate or edit the file. Remember to make copies of files for editing so you can always return to your original image if you need to.
Abstract:
Airborne Light Detection and Ranging (LIDAR) provides accurate height information for objects on the Earth's surface, which has made LIDAR increasingly popular in terrain and land surveying. In particular, LIDAR data offer vital and significant features for land-cover classification, an important task in many application domains. In this paper, an unsupervised approach based on an improved fuzzy Markov random field (FMRF) model is developed, by which the LIDAR data, co-registered images acquired by optical sensors (i.e. aerial color and near-infrared images), and other derived features are fused effectively to improve the ability of the LIDAR system to perform accurate land-cover classification. In the proposed FMRF model-based approach, spatial contextual information is incorporated by modeling the image as a Markov random field (MRF), and fuzzy logic is introduced simultaneously to reduce the errors caused by hard classification. Moreover, a Lagrange multiplier (LM) algorithm is employed to compute a maximum a posteriori (MAP) estimate for the classification. The experimental results show that fusing the height data and optical images is particularly well suited to land-cover classification. The proposed approach works very well for classification from airborne LIDAR data fused with its co-registered optical images, and the average accuracy is improved to 88.9%.
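To make the MAP-with-spatial-prior step concrete, here is a minimal Python sketch that fuses a LIDAR height channel with optical channels under per-class Gaussian likelihoods and a Potts MRF prior, optimized with iterated conditional modes (ICM). This is a deliberately simplified stand-in: the paper's method is a fuzzy MRF solved via a Lagrange-multiplier MAP scheme, and the class statistics, smoothness weight and ICM solver below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def icm_classify(features, means, covs, beta=1.5, n_iter=5):
    """features: (H, W, D) fused per-pixel features (e.g. LIDAR height,
    R, G, B, NIR). means[k], covs[k]: Gaussian model for class k.
    beta weights the Potts smoothness prior."""
    H, W, D = features.shape
    K = len(means)
    unary = np.empty((H, W, K))            # negative log-likelihoods
    for k in range(K):
        diff = features - means[k]
        inv = np.linalg.inv(covs[k])
        maha = np.einsum('hwi,ij,hwj->hw', diff, inv, diff)
        unary[..., k] = 0.5 * (maha + np.log(np.linalg.det(covs[k])))
    labels = unary.argmin(-1)              # maximum-likelihood start
    shifts = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_iter):                # ICM: greedy MAP sweeps
        # Potts term: count 4-neighbours disagreeing with each class.
        pairwise = np.stack(
            [sum((np.roll(labels, s, (0, 1)) != k).astype(float)
                 for s in shifts) for k in range(K)], axis=-1)
        labels = (unary + beta * pairwise).argmin(-1)
    return labels
```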
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In more remote areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, in which case water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
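As a small illustration of how WLOs are obtained by intersecting a flood extent with a DTM, the sketch below locates the boundary (waterline) pixels of a binary SAR-derived flood mask and samples the DTM heights there. This is a generic, assumed formulation, not code from the studies discussed.

```python
import numpy as np

def waterline_heights(flood_mask, dtm):
    """flood_mask: (H, W) boolean flood extent; dtm: (H, W) heights.
    Returns (rows, cols, heights) of the boundary (waterline) pixels."""
    interior = flood_mask.copy()
    for s in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        interior &= np.roll(flood_mask, s, (0, 1))
    boundary = flood_mask & ~interior      # flooded, with a dry neighbour
    rows, cols = np.nonzero(boundary)
    return rows, cols, dtm[rows, cols]     # heights = candidate WLOs
```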
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, the heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this reduces the height errors along a waterline, the waterline is only a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be achieved using the method, the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
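A hypothetical sketch of the core correction step: treating the waterline locally as a quasi-contour, each waterline height is replaced by the mean over a short along-line window, and heights between a lower and a higher waterline are then constrained by the corrected line heights. The pixel ordering along the waterline, the window size, and the use of scalar bounds in the clamping step are all simplifying assumptions for illustration.

```python
import numpy as np

def correct_waterline(heights, half_window=5):
    """heights: 1-D array of DEM heights sampled in order along a
    waterline. Each sample is replaced by the mean over a
    (2*half_window + 1)-pixel window, shrinking random error roughly
    by the square root of the window length."""
    n = len(heights)
    corrected = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        corrected[i] = heights[lo:hi].mean()
    return corrected

def clamp_between(dem, low_line_h, high_line_h, between_mask):
    """Simplified ordering constraint for pixels lying between a lower
    and a higher waterline: their heights should fall between the
    corrected heights of the two lines (scalar bounds used here)."""
    out = dem.copy()
    out[between_mask] = np.clip(out[between_mask], low_line_h, high_line_h)
    return out
```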
Abstract:
Purpose: To evaluate the accuracy of Image Tool Software 3.0 (ITS 3.0) in detecting marginal microleakage, using the stereomicroscope as the validation criterion and ITS 3.0 as the tool under study. Materials and Methods: Class V cavities were prepared at the cementoenamel junction of 61 bovine incisors, and 53 of the resulting halves were used. Under the stereomicroscope, microleakage was classified dichotomously: presence or absence. Next, ITS 3.0 was used to measure the microleakage, with 0.75 taken as the cut-off point: values equal to or greater than 0.75 indicated its presence, while values between 0.00 and 0.75 indicated its absence. Sensitivity and specificity were calculated as point estimates with 95% confidence intervals (95% CI). Results: ITS 3.0 showed a sensitivity of 0.95 (95% CI: 0.89 to 1.00) and a specificity of 0.92 (95% CI: 0.84 to 0.99). Conclusion: Digital diagnosis of marginal microleakage using ITS 3.0 was sensitive and specific.
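For reference, the validation arithmetic reported above amounts to dichotomizing the ITS 3.0 measurements at the 0.75 cut-off against the stereomicroscope gold standard and attaching Wald-type 95% confidence intervals; a minimal sketch, with the input arrays as invented placeholders:

```python
import numpy as np

def sens_spec(gold, measured, cutoff=0.75):
    """gold: boolean stereomicroscope findings; measured: ITS 3.0 values.
    Returns (estimate, lower, upper) for sensitivity and specificity."""
    pred = np.asarray(measured) >= cutoff          # predicted "leakage present"
    gold = np.asarray(gold, dtype=bool)
    tp = np.sum(pred & gold);  fn = np.sum(~pred & gold)
    tn = np.sum(~pred & ~gold); fp = np.sum(pred & ~gold)
    def with_ci(p, n):                             # Wald-type 95% CI
        half = 1.96 * np.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    sens = with_ci(tp / (tp + fn), tp + fn)
    spec = with_ci(tn / (tn + fp), tn + fp)
    return sens, spec
```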
Abstract:
The aim of this study was to analyze, using the CIE L*a*b* system, the color alterations in digital images of shade guide tabs photographed in automatic and manual modes. This study also sought to examine the observers' agreement in quantifying the coordinates. Four Vita Lumin Vacuum shade guide tabs were used: A3.5, B1, B3 and C4. A Canon EOS digital camera was used to record the digital images of the shade tabs, and the images were processed using Adobe Photoshop software. A total of 80 observations (five replicates of each shade by two observers in two modes, automatic and manual) were obtained, yielding L*, a* and b* color values. The color difference (ΔE) between the modes was calculated and classified as either clinically acceptable or unacceptable. The results indicated agreement between the two observers in obtaining the L*, a* and b* values for all tabs. The B1, B3 and C4 shade tabs had ΔE values classified as clinically acceptable (ΔE = 0.44, ΔE = 2.04 and ΔE = 2.69, respectively). The A3.5 shade tab had a ΔE value classified as clinically unacceptable (ΔE = 4.17), as it presented higher luminosity in the automatic mode (L* = 54.0) than in the manual mode (L* = 50.6). It was concluded that the B1, B3 and C4 shade tabs can be photographed in either digital camera mode (manual or automatic), unlike the A3.5 shade tab.
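The ΔE figures above are the standard CIE76 color difference between two L*a*b* triples. A one-line implementation, using the abstract's automatic/manual L* values for the A3.5 tab with invented a* and b* values as a demonstration:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# A3.5 tab, automatic vs. manual mode; a* and b* are invented here,
# so only the L* difference (54.0 vs. 50.6) contributes:
print(delta_e((54.0, 1.5, 16.0), (50.6, 1.5, 16.0)))  # -> 3.4
```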
Abstract:
Nowadays the real contribution of light to accelerating the chemical reaction of dental bleaching is viewed with skepticism, mostly because the actual mechanisms of its contribution remain obscure. Objectives: To determine the influence of the pigment of three colored bleaching gels on light distribution and absorption in teeth; bovine teeth and three colored bleaching gels were used in this experiment. It is well known that dark molecules absorb light and increase the local temperature, raising the bleaching rate; these molecules are located at the interface between the enamel and dentin. Methods: This study was conducted using a 455 nm argon laser at 150 mW, an LED with the same characteristics, and three colored gels (green, blue and red); the digital images were captured with a CCD camera connected to a PC. The images were processed in a mathematical environment (MATLAB R12). Results: The results obtained show that the color of the bleaching gel significantly influences the absorption of light at specific sites of the teeth. Conclusions: This poor absorption may be one of the major factors behind the skepticism about the contribution of light to the process that can be observed in the literature nowadays.
Abstract:
New formulations, techniques and devices have made dental whitening safer and more effective. Despite this, the degree of whitening is still verified by visual comparison, which is an empirical, subjective method, prone to errors and dependent on individual interpretation. Normally the result of whitening is expressed as the displacement between the initial and the final color, taking as reference the tones of a color scale ordered from darkest to lightest. Although it is the most widely used scale, the ordering of the Vita Classical® (Vita) scale according to the manufacturer's recommendations proves inadequate for evaluating whitening. Using digital images and the OER algorithm (ordering of the reference scale), developed especially for ScanWhite©, the tones of the Vita Classical® scale were ordered. For this, the values of the R, G and B color channels of the middle portion of the crowns were adopted as the reference for evaluation. The images were taken with a Sony Cybershot DSC F828 camera. The results of the computational ordering were compared with the sequence proposed by the manufacturer and with that obtained by visual evaluation, carried out by 10 volunteers under standardized illumination conditions. Statistical analysis demonstrated significant differences between the orderings.
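ScanWhite's actual OER algorithm is not spelled out here, but the ranking idea it describes, representing each tab by the mean R, G, B of the middle portion of the crown and ordering the tabs from darkest to lightest, can be sketched as follows; the crop geometry and the luma weighting are assumptions for illustration only.

```python
import numpy as np

def order_tabs(tab_images, names):
    """tab_images: list of (H, W, 3) uint8 crops of the tab crowns.
    Returns the tab names ordered from darkest to lightest."""
    scores = []
    for img in tab_images:
        h = img.shape[0]
        middle = img[h // 3: 2 * h // 3]           # middle third of the crown
        r, g, b = middle.reshape(-1, 3).mean(axis=0)
        scores.append(0.299 * r + 0.587 * g + 0.114 * b)  # luma proxy
    return [name for _, name in sorted(zip(scores, names))]
```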
Abstract:
A computer program, PhotoLin, written for an IBM-PC-compatible microcomputer, is described which detects linear features in aerial photographs, satellite images and topographic maps. The program accepts images saved as PCX files as input and applies noise-correction and smoothing filters and thinning routines. The output consists of a skeleton containing the median lines of linear features, which can be represented on a map. The branches of the skeleton can be broken into sections of constant length, for which mean orientations are obtained for the preparation of rose diagrams.
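A minimal sketch of the final stage of such a pipeline (not PhotoLin's code): skeletonize a binary lineament image and estimate local line orientations for a rose diagram, here via PCA over fixed-radius neighbourhoods of skeleton pixels as a stand-in for the constant-length branch sections described above. Assumes scikit-image is available.

```python
import numpy as np
from skimage.morphology import skeletonize

def skeleton_orientations(binary_img, radius=5):
    """binary_img: 2-D boolean lineament mask. Returns orientations
    (degrees, 0-180) of the local skeleton direction at each pixel."""
    skel = skeletonize(binary_img.astype(bool))
    pts = np.column_stack(np.nonzero(skel)).astype(float)
    angles = []
    for p in pts:
        d = pts - p
        near = pts[(d ** 2).sum(1) <= radius ** 2]   # local section
        if len(near) < 3:
            continue
        c = near - near.mean(0)
        # principal axis of the local point cloud = line orientation
        _, vecs = np.linalg.eigh(c.T @ c)
        vy, vx = vecs[:, -1]                          # largest eigenvector
        angles.append(np.degrees(np.arctan2(vy, vx)) % 180.0)
    return np.array(angles)       # histogram these for a rose diagram
```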
Abstract:
The identification of ground control on photographs or images is usually carried out by a human operator, who uses natural interpretation skills. In Digital Photogrammetry, which uses digital image processing techniques, extraction of ground control can be automated using an approach based on relational matching and a heuristic that exploits the analytical relation between straight features in object space and their homologues in image space. A built-in self-diagnosis is also used in this method: it is based on implementing the data-snooping statistical test in the spatial resection process using Iterated Extended Kalman Filtering (IEKF). The aim of this paper is to present the basic principles of the proposed approach and results based on real data.
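As an illustration of the self-diagnosis idea, the sketch below implements Baarda's data-snooping w-test on the residuals of a generic linear least-squares model: an observation is flagged when its standardized residual exceeds a critical value. The paper applies the test inside IEKF-based spatial resection; the plain batch formulation and the critical value used here are assumptions.

```python
import numpy as np

def data_snooping(A, y, sigma0=1.0, crit=3.29):
    """A: (n, m) design matrix; y: (n,) observations; sigma0: a priori
    standard deviation. Returns indices rejected by the w-test
    (crit = 3.29 corresponds to a two-sided alpha of about 0.001)."""
    Q = np.linalg.inv(A.T @ A)
    v = y - A @ (Q @ A.T @ y)                  # least-squares residuals
    Qv = np.eye(len(y)) - A @ Q @ A.T          # residual cofactor matrix
    w = v / (sigma0 * np.sqrt(np.diag(Qv)))    # standardized residuals
    return np.nonzero(np.abs(w) > crit)[0]
```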
Abstract:
This paper presents a dynamic programming approach for semi-automated road extraction from medium- and high-resolution images. This method is a modified version of a pre-existing dynamic programming method for road extraction from low-resolution images. The basic assumption of this pre-existing method is that roads manifest as lines in low-resolution images (pixel footprint > 2 m) and as such can be modeled and extracted as linear features. On the other hand, roads manifest as ribbon features in medium- and high-resolution images (pixel footprint ≤ 2 m) and, as a result, the focus of road extraction becomes the road centerlines. The original method cannot accurately extract road centerlines from medium- and high-resolution images. In view of this, we propose a modification of the merit function of the original approach, carried out by a constraint function embedding road edge properties. Experimental results demonstrated the modified algorithm's potential in extracting road centerlines from medium- and high-resolution images.
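A minimal sketch of the dynamic programming machinery underlying both the original and modified methods: find the minimum-cost top-to-bottom path through a cost image (low cost on the road), each step moving at most one column. The paper's contribution, a merit function embedding road-edge constraints, is omitted; this shows only the generic DP line follower.

```python
import numpy as np

def dp_path(cost):
    """cost: (H, W) cost image. Returns the column index of the
    minimum-cost top-to-bottom path, one entry per row."""
    H, W = cost.shape
    acc = cost.astype(float).copy()            # accumulated cost
    back = np.zeros((H, W), dtype=int)         # backpointers
    for i in range(1, H):
        for j in range(W):                     # step to column j-1, j or j+1
            lo, hi = max(0, j - 1), min(W, j + 2)
            k = int(np.argmin(acc[i - 1, lo:hi])) + lo
            acc[i, j] += acc[i - 1, k]
            back[i, j] = k
    path = [int(np.argmin(acc[-1]))]           # best end point, trace back
    for i in range(H - 1, 0, -1):
        path.append(back[i, path[-1]])
    return path[::-1]
```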
Abstract:
The radiopacity of esthetic restorative materials has been established as an important requirement, improving radiographic diagnosis. The aim of this study was to evaluate the radiopacity of six restorative materials using a direct digital imaging system, comparing them to the dental tissues (enamel and dentin), expressed as equivalent thickness of aluminum (millimeters of aluminum). Five specimens of each material were made. Three 2-mm thick longitudinal sections were cut from an intact extracted permanent molar tooth (including enamel and dentin). An aluminum step wedge with 9 steps was used. The samples of the different materials were placed on a phosphor plate together with a tooth section, the aluminum step wedge and a metal code letter, and were exposed using a dental X-ray unit. Five measurements of radiographic density were obtained from each image of each item assessed (restorative material, enamel, dentin, and each step of the aluminum step wedge), and the mean of these values was calculated. Radiopacity values were subsequently expressed as equivalent aluminum thickness. Analysis of variance (ANOVA) indicated significant differences in radiopacity among the materials (P < 0.0001). The radiopacity values of the restorative materials evaluated were, in decreasing order: TPH, F2000, Synergy, Prisma Flow, Degufill, Luxat. Only Luxat had significantly lower radiopacity than dentin. One material (Degufill) had radiopacity similar to enamel, and four (TPH, F2000, Synergy and Prisma Flow) had significantly higher radiopacity than enamel. In conclusion, to assess the adequacy of posterior composite restorations it is important that the restorative material used be sufficiently radiopaque to be easily distinguished from the tooth structure in the radiographic image. Knowledge of the radiopacity of different materials helps professionals select the most suitable material, along with other properties such as biocompatibility, adhesion and esthetics.
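For reference, converting a material's mean grey value to an equivalent aluminum thickness is a linear interpolation on the calibration curve given by the 9-step wedge; a minimal sketch, with the grey values in the usage example invented:

```python
import numpy as np

def mm_al_equivalent(material_gray, step_gray, step_mm):
    """step_gray: mean grey value of each wedge step (any order);
    step_mm: corresponding step thickness in mm of aluminum."""
    order = np.argsort(step_gray)              # np.interp needs ascending x
    return float(np.interp(material_gray,
                           np.asarray(step_gray)[order],
                           np.asarray(step_mm)[order]))

# e.g. (invented calibration values for a 9-step wedge):
# mm_al_equivalent(142.0,
#                  step_gray=[60, 80, 100, 120, 140, 160, 180, 200, 215],
#                  step_mm=[1, 2, 3, 4, 5, 6, 7, 8, 9])
```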
Abstract:
Purpose: To determine palpebral dimensions and their development in Brazilian children using digital images. Methods: An observational study was performed measuring eyelid angles, palpebral fissure area and interpupillary distance in 220 children aged from 4 to 72 months. Digital images were obtained with a Sony Lithium movie camera (Sony DCR-TRV110, Brazil) in frontal view from awake children in the primary ocular position, with the object of observation located at pupil height. The images were saved to tape, transferred to a Macintosh G4 computer (Apple Computer Inc., USA) and processed using NIH 1.58 software (NTIS, 5285 Port Royal Rd., Springfield, VA 22161, USA). Data were submitted to statistical analysis. Results: All parameters studied increased with age. The outer palpebral angle was greater than the inner, and the palpebral fissure and angles showed greater changes between 4 and 5 months of age and at around 24 to 36 months. Conclusion: There are significant variations in palpebral dimensions in children under 72 months of age, especially around 24 to 36 months.