396 results for PHOTOGRAMMETRY
Abstract:
As-built models have been proven useful in many project-related applications, such as progress monitoring and quality control. However, they are not widely produced in most projects because a lot of effort is still necessary to manually convert remote sensing data from photogrammetry or laser scanning to an as-built model. In order to automate the generation of as-built models, the first and fundamental step is to automatically recognize infrastructure-related elements from the remote sensing data. This paper outlines a framework for creating visual pattern recognition models that can automate the recognition of infrastructure-related elements based on their visual features. The framework starts with identifying the visual characteristics of infrastructure element types and numerically representing them using image analysis tools. The derived representations, along with their relative topology, are then used to form element visual pattern recognition (VPR) models. So far, the VPR models of four infrastructure-related elements have been created using the framework. The high recognition performance of these models validates the effectiveness of the framework in recognizing infrastructure-related elements.
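As a toy illustration of numerically representing an element's visual characteristics with image analysis tools (one ingredient of a VPR model), the sketch below computes a colour histogram and an edge-density score for a candidate image region using OpenCV; the chosen features, threshold and matching rule are illustrative assumptions, not the paper's models.

```python
# Hypothetical feature extraction for a candidate image region, illustrating the
# idea of describing an element's visual characteristics numerically.
import cv2
import numpy as np

def region_features(bgr_region):
    """Return a simple numeric description: hue histogram + edge density."""
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([hsv], [0], None, [16], [0, 180]).ravel()
    hue_hist = hue_hist / (hue_hist.sum() + 1e-9)          # normalised colour signature
    edges = cv2.Canny(cv2.cvtColor(bgr_region, cv2.COLOR_BGR2GRAY), 50, 150)
    edge_density = float(edges.mean()) / 255.0             # crude texture/linearity cue
    return np.concatenate([hue_hist, [edge_density]])

def matches_pattern(features, reference, tol=0.25):
    """Toy recognition rule: accept a region whose features are close to a reference."""
    return np.linalg.norm(features - reference) < tol
```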
Abstract:
Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily change due to external factors such as wind or vibrations, or due to an unintentional push or touch from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations and variations of these to achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally intended for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints, and the intrinsic and extrinsic properties of the cameras are acquired from this calculation. Test results are presented in this paper with recommendations for further improvement.
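The computational core described in the abstract, estimating the fundamental matrix from matched image points and deriving epipolar geometry and relative pose from it, can be sketched as below with OpenCV. This is not the paper's auto-calibration algorithm: the image file names and the intrinsic matrix K are placeholder assumptions, and the recovery of varying intrinsics (the Kruppa-equation step) is not shown.

```python
# Hypothetical sketch: epipolar geometry between two camera frames with OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("frame_cam1.png", cv2.IMREAD_GRAYSCALE)  # assumed file names
img2 = cv2.imread("frame_cam2.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two frames.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# The fundamental matrix encodes the epipolar constraint x2^T F x1 = 0.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

# With an (assumed, not self-calibrated) intrinsic matrix K, the essential matrix
# follows from F and the relative camera pose (R, t) can be recovered.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])        # placeholder intrinsics
E = K.T @ F @ K
inliers = mask.ravel() == 1
_, R, t, _ = cv2.recoverPose(E, pts1[inliers], pts2[inliers], K)
```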
Abstract:
Addressing geological hazard evaluation (GHE), this thesis takes remote sensing and GIS systems as the experimental environment, supported by some programming development, and combines knowledge of geo-hazard mechanisms, statistical learning, remote sensing (RS), hyperspectral recognition, spatial analysis, digital photogrammetry and mineralogy. Geo-hazard samples from Hong Kong and the Three Parallel Rivers region are selected as experimental data to study the two core questions of GHE: geo-hazard information acquisition and the evaluation model. For landslide information acquisition by RS, three topics are presented: image enhancement for visual interpretation, automatic landslide recognition, and quantitative mineral mapping. For the evaluation model, a powerful data mining method, the support vector machine (SVM), is introduced to the GHE field, and a series of comparative experiments is carried out to verify its feasibility and efficiency. Furthermore, this thesis proposes a method to forecast the distribution of landslides for a known future rainfall, based on historical rainfall and the corresponding landslide susceptibility map. The details are as follows. (a) Remote sensing image enhancement for geo-hazard visual interpretation. The quality of visual interpretation is determined by the RS data and the image enhancement method; the most effective and common technique is merging a high-spatial-resolution image with a multi-spectral image, yet few studies address merging methods for geo-hazard recognition. Through comparative experiments on six mainstream merging methods and combinations of different remote sensing data sources, this thesis presents the merits of each method and qualitatively analyzes the effects of spatial resolution, spectral resolution and acquisition time on the merged image. (b) Automatic recognition of shallow landslides from RS imagery. A landslide inventory is the basis of landslide forecasting and landslide research: by persistently collecting landslide events, updating the geo-hazard inventory in time and continually improving the prediction model, forecast accuracy can be raised step by step. RS is a feasible way to obtain landslide information, given the spatial distribution of geo-hazards. An automatic hierarchical approach is proposed to identify shallow landslides in vegetated regions by combining multi-spectral RS imagery with DEM derivatives, and experiments are conducted to assess its efficiency. (c) Acquisition of hazard-causing factors. Accurate environmental factors are the key to analyzing and predicting regional geological hazard risk. For predicting large debris flows, the main challenge remains determining the source material and its volume in the debris flow source region. Exploiting the merits of various RS techniques, this thesis presents methods to obtain two important hazard-causing factors, the DEM and alteration minerals, and, through spatial analysis, identifies the relationship between hydrothermal clay alteration minerals and geo-hazards in the arid-hot valleys of the Three Parallel Rivers region. (d) Applying the support vector machine (SVM) to landslide susceptibility mapping. The SVM, an efficient statistical learning method, is introduced to regional GHE; it can handle both two-class and one-class samples, avoiding the need to produce 'pseudo' samples. Fifty-five years of historical samples from a natural terrain in Hong Kong are used to assess this method, and the susceptibility maps obtained by the one-class SVM and the two-class SVM are compared with that obtained by logistic regression. It is concluded that the two-class SVM has better predictive performance than logistic regression and the one-class SVM. However, the one-class SVM, which requires only failure cases, has an advantage over the other two methods, since only "failed" case information is usually available in landslide susceptibility mapping. (e) Predicting the distribution of rainfall-induced landslides by time-series analysis. Rainfall is the dominant factor triggering landslides; more than 90% of landslide losses and casualties are caused by rainfall, so predicting landslide sites under a given rainfall is an important evaluation issue. Fully considering the contributions of stable factors (the landslide susceptibility map) and dynamic factors (rainfall), a time-series linear regression analysis between rainfall and the landslide risk map is presented, and experiments based on real samples show that the method performs well in a natural terrain region of Hong Kong. The following four practical or original findings are obtained: 1) RS methods to enhance geo-hazard imagery, automatically recognize shallow landslides, and obtain DEMs and mineral maps are studied, and detailed operating steps are given through examples; the conclusions are highly practical. 2) Exploratory research on the relationship between geo-hazards and alteration minerals in the arid-hot valley of the Jinshajiang River is presented. Based on standard USGS mineral spectra, the distribution of hydrothermal alteration minerals is mapped with the spectral angle mapper (SAM) method, and statistical analysis between debris flows and hazard-causing factors reveals and validates a strong correlation between debris flows and clay minerals. 3) SVM theory (especially one-class SVM theory) is applied to landslide susceptibility mapping and its performance is systematically evaluated, demonstrating the advantages of the SVM in this field. 4) A time-series prediction method for the distribution of rainfall-induced landslides is established. In a natural study area, the distribution of landslides induced by a storm is successfully predicted for a real maximum 24-hour rainfall, based on the regression between four historical storms and the corresponding landslides.
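For the susceptibility-mapping comparison described in (d), a minimal sketch of how one-class and two-class SVMs could be trained on causative-factor data with scikit-learn is given below; the feature matrix, labels and hyperparameters are synthetic placeholders, not the thesis's Hong Kong dataset or settings.

```python
# Hypothetical sketch of one-class vs. two-class SVM susceptibility mapping.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, OneClassSVM

# X: one row per terrain cell with causative factors (e.g. slope, curvature,
# distance to drainage); y: 1 = historical landslide cell, 0 = stable cell.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))                                  # placeholder factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.0).astype(int)

Xs = StandardScaler().fit_transform(X)

# Two-class SVM: needs both failed and stable cells (the latter may be pseudo samples).
two_class = SVC(kernel="rbf", probability=True).fit(Xs, y)
susceptibility_2c = two_class.predict_proba(Xs)[:, 1]

# One-class SVM: trained on failed cells only, which is the practical advantage
# noted in the abstract; larger decision values = more landslide-like conditions.
one_class = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(Xs[y == 1])
susceptibility_1c = one_class.decision_function(Xs)
```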
Abstract:
A modelling scheme is described which uses satellite retrieved sea-surface temperature and chlorophyll-a to derive monthly zooplankton biomass estimates in the eastern North Atlantic; this forms part of a bio-physical model of inter-annual variations in the growth and survival of larvae and post-larvae of mackerel (Scomber scombrus). The temperature and chlorophyll data are incorporated first to model copepod (Calanus) egg production rates. Egg production is then converted to available food using distribution data from the Continuous Plankton Recorder (CPR) Survey, observed population biomass per unit daily egg production and the proportion of the larval mackerel diet comprising Calanus. Results are validated in comparison with field observations of zooplankton biomass. The principal benefit of the modelling scheme is the ability to use the combination of broad scale coverage and fine scale temporal and spatial variability of satellite data as driving forces in the model; weaknesses are the simplicity of the egg production model and the broad-scale generalizations assumed in the raising factors to convert egg production to biomass.
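As a rough illustration of the satellite-to-food chain described above (SST and chlorophyll-a → Calanus egg production → food available to mackerel larvae), the sketch below uses placeholder functional forms and coefficients; it is not the paper's fitted egg-production model or CPR-derived raising factors.

```python
# Hypothetical sketch of the satellite-to-food-availability chain.
# The functional forms and all coefficients are placeholders, not the paper's model.
import numpy as np

def egg_production_rate(sst, chl, e_max=30.0, k_chl=0.5, q10=2.0, t_ref=10.0):
    """Eggs female^-1 day^-1 from SST (deg C) and chlorophyll-a (mg m^-3)."""
    food_limitation = chl / (chl + k_chl)            # saturating food response
    temp_scaling = q10 ** ((sst - t_ref) / 10.0)     # Q10-type temperature response
    return e_max * food_limitation * temp_scaling

def available_food(sst, chl, biomass_per_egg_production, calanus_diet_fraction):
    """Convert egg production to food available to mackerel larvae."""
    ep = egg_production_rate(sst, chl)
    calanus_biomass = ep * biomass_per_egg_production   # raising factor (placeholder)
    return calanus_biomass * calanus_diet_fraction

# Example with synthetic monthly gridded fields:
sst = np.full((180, 360), 12.0)
chl = np.full((180, 360), 0.8)
food = available_food(sst, chl, biomass_per_egg_production=2.5,
                      calanus_diet_fraction=0.6)
```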
Abstract:
Satellite ocean-colour sensors have life spans lasting typically five-to-ten years. Detection of long-term trends in chlorophyll-a concentration (Chl-a) using satellite ocean colour thus requires the combination of different ocean-colour missions with sufficient overlap to allow for cross-calibration. A further requirement is that the different sensors perform at a sufficient standard to capture seasonal and inter-annual fluctuations in ocean colour. For over eight years, the SeaWiFS, MODIS-Aqua and MERIS ocean-colour sensors operated in parallel. In this paper, we evaluate the temporal consistency in the monthly Chl-a time-series and in monthly inter-annual variations in Chl-a among these three sensors over the 2002–2010 time period. By subsampling the monthly Chl-a data from the three sensors consistently, we found that the Chl-a time-series and Chl-a anomalies among sensors were significantly correlated for >90% of the global ocean. These correlations were also relatively insensitive to the choice of three Chl-a algorithms and two atmospheric-correction algorithms. Furthermore, on the subsampled time-series, correlations between Chl-a and time, and correlations between Chl-a and physical variables (sea-surface temperature and sea-surface height) were not significantly different for >92% of the global ocean. The correlations in Chl-a and physical variables observed for all three sensors also reflect previous theories on coupling between physical processes and phytoplankton biomass. The results support the combining of Chl-a data from SeaWiFS, MODIS-Aqua and MERIS sensors, for use in long-term Chl-a trend analysis, and highlight the importance of accounting for differences in spatial sampling among sensors when combining ocean-colour observations.
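A minimal sketch of the kind of per-pixel consistency analysis described above, correlating consistently subsampled monthly Chl-a time series and their anomalies between two sensors, is shown below; the array layout, log-transform and observation threshold are assumptions, not the paper's exact processing.

```python
# Hypothetical per-pixel comparison of monthly Chl-a from two sensors.
# chla_a and chla_b are assumed (months, lat, lon) arrays on a common grid,
# already subsampled so both hold NaN wherever either sensor has no observation.
import numpy as np
from scipy.stats import pearsonr

def monthly_anomalies(chla):
    """Remove the mean seasonal cycle: subtract each calendar month's climatology."""
    months = np.arange(chla.shape[0]) % 12
    clim = np.array([np.nanmean(chla[months == m], axis=0) for m in range(12)])
    return chla - clim[months]

def pixelwise_correlation(x, y, min_obs=24):
    """Correlate two (time, lat, lon) series at each pixel where both are observed."""
    r = np.full(x.shape[1:], np.nan)
    p = np.full(x.shape[1:], np.nan)
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            valid = ~np.isnan(x[:, i, j]) & ~np.isnan(y[:, i, j])
            if valid.sum() >= min_obs:
                r[i, j], p[i, j] = pearsonr(x[valid, i, j], y[valid, i, j])
    return r, p

# Usage with hypothetical arrays (Chl-a is commonly compared in log10 space):
# r_ts, p_ts = pixelwise_correlation(np.log10(chla_a), np.log10(chla_b))
# r_anom, p_anom = pixelwise_correlation(monthly_anomalies(np.log10(chla_a)),
#                                        monthly_anomalies(np.log10(chla_b)))
```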
Abstract:
The process of making replicas of heritage objects has traditionally been carried out by public agencies, corporations and museums and is not commonly used in schools. Technologies now exist that allow cheap replicas to be created: new photograph-based 3D reconstruction software and low-cost 3D printers make it possible to produce replicas at a much lower cost than traditional methods. This article describes the process of creating replicas of the sculpture Goslar Warrior by the artist Henry Moore, located in Santa Cruz de Tenerife. To carry out this process, a digital model was first created using the Autodesk Recap 360, Autodesk 123D Catch and Autodesk Meshmixer applications together with MakerBot MakerWare. The physical replica was then produced in polylactic acid (PLA) on a MakerBot Replicator 2 3D printer. In addition, a cost analysis is included that compares the printer mentioned with both online and local 3D printing services. Finally, a specific activity was carried out with 141 students and 12 high-school teachers, who completed a questionnaire about the use of sculptural replicas in education.
Abstract:
A number of experimental studies have shown that postbuckling stiffened composite panels, loaded in uniaxial compression, may undergo secondary instabilities, characterised by an abrupt change in the buckled mode-shape of the skin between the supporting stiffeners. In this study, high-speed digital speckle photogrammetry is used to gain further insight into an I-stiffened panel's response during this transient phase. This energy-dissipating phenomenon will be shown to be able to cause catastrophic structural failure in vulnerable structures. It is therefore imperative that an accurate and reliable methodology is available to predict this phenomenon. The shortcomings of current non-linear implicit solution schemes, found in most commercially available finite element codes, are discussed. A robust and efficient strategy, which utilises an automated quasi-static/pseudo-transient hybrid scheme, is presented in this paper and validated using a number of experimental tests. This approach is shown to be able to predict mode-jumping with good accuracy.
Abstract:
A numerical and experimental investigation on the mode-I intralaminar toughness of a hybrid plain weave composite laminate manufactured using resin infusion under flexible tooling (RIFT) process is presented in this paper. The pre-cracked geometries consisted of overheight compact tension (OCT), double edge notch (DEN) and centrally cracked four-point-bending (4PBT) test specimens. The position as well as the strain field ahead of the crack tip during the loading stage was determined using a digital speckle photogrammetry system. The limitation on the applicability of the standard data reduction schemes for the determination of intralaminar toughness of composite materials is presented and discussed. A methodology based on the numerical evaluation of the strain energy release rate using the J-integral method is proposed to derive new geometric correction functions for the determination of the stress intensity factor for composites. The method accounts for material anisotropy and finite specimen dimension effects regardless of the geometry. The approach has been validated for alternative non-standard specimen geometries. A comparison between different methods currently available for computing the intralaminar fracture toughness in composite laminates is presented and a good agreement between numerical and experimental results using the proposed methodology was obtained.
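For reference, the standard linear-elastic relations that underlie this kind of J-integral-based derivation of a geometry correction function can be written as follows; the effective modulus and the specific correction functions fitted in the paper are not reproduced here, so this is only the assumed general form.

```latex
% Standard LEFM relations assumed here (not the paper's fitted functions): the
% numerically evaluated J-integral is converted to a stress intensity factor via
% an effective orthotropic modulus E^*, and the geometry correction f(a/W) is
% back-calculated from the applied load.
\begin{align}
  K_I &= \sqrt{J\,E^{*}} , \\
  K_I &= \frac{P}{B\sqrt{W}}\, f\!\left(\frac{a}{W}\right)
  \quad\Longrightarrow\quad
  f\!\left(\frac{a}{W}\right) = \frac{B\sqrt{W}}{P}\,\sqrt{J\,E^{*}} ,
\end{align}
where $P$ is the applied load, $B$ the specimen thickness, $W$ the characteristic
specimen width, $a$ the crack length, and $E^{*}$ an effective modulus that
depends on the laminate's in-plane elastic constants.
```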
Abstract:
Lowering intraocular pressure in adults with glaucoma may be associated with an improvement in appearance of the optic nerve head. The stage of disease, the amount of intraocular pressure reduction, and the age of the patient probably influence the occurrence of this event. The clinical relevance of 'reversal' has not been established with certainty. The reversibility of glaucomatous cupping can be detected by subjective and qualitative means (examination of the patient or of fundus photographs) or by quantitative techniques such as photogrammetry, computerized image analysis, and scanning laser tomography. Clinical and experimental studies are providing new information about the behavior of the optic nerve head tissues in response to changes in intraocular pressure.
Abstract:
This paper presents an experimental and numerical study focused on the tensile fibre fracture toughness characterisation of hybrid plain weave composite laminates using non-standardized Overheight Compact Tension (OCT) specimens. The position as well as the strain field ahead of the crack tip in the specimens was determined using a digital speckle photogrammetry system. The limitation on the applicability of standard data reduction schemes for the determination of the intralaminar fibre fracture toughness of composites is presented and discussed. A methodology based on the numerical evaluation of the strain energy release rate using the J-integral method is proposed to derive new geometric correction functions for the determination of stress intensity factor for alternative composite specimen geometries. A comparison between different methods currently available to compute the intralaminar fracture toughness in composites is also presented and discussed. Good agreement between numerical and experimental results using the proposed methodology was obtained.
Abstract:
OBJECTIVE: To assess the impact of laser peripheral iridotomy (LPI) on forward-scatter of light and subjective visual symptoms and to identify LPI parameters influencing these phenomena. DESIGN: Cohort study derived from a randomized trial, using an external control group. PARTICIPANTS: Chinese subjects initially aged 50 or older and 70 years or younger with bilateral narrow angles undergoing LPI in 1 eye selected at random, and age- and gender-matched controls. METHODS: Eighteen months after laser treatment, LPI-treated subjects underwent digital iris photography and photogrammetry to characterize the size and location of the LPI, Lens Opacity Classification System III cataract grading, and measurement of retinal straylight (C-Quant; OCULUS, Wetzlar, Germany) in the treated and untreated eyes and completed a visual symptoms questionnaire. Controls answered the questionnaire and underwent straylight measurement and (in a random one-sixth sample) cataract grading. MAIN OUTCOME MEASURES: Retinal straylight levels and subjective visual symptoms. RESULTS: Among 230 LPI-treated subjects (121 [58.8%] with LPI totally covered by the lid, 43 [19.8%] with LPI partly covered by the lid, 53 [24.4%] with LPI uncovered by the lid), 217 (94.3%) completed all testing, as did 250 (93.3%) of 268 controls. Age, gender, and prevalence of visual symptoms did not differ between treated subjects and controls, although nuclear (P<0.01) and cortical (P = 0.03) cataract were less common among controls. Neither presenting visual acuity nor straylight score differed between the treated and untreated eyes among all treated persons, nor among those (n = 96) with LPI partially or totally uncovered. Prevalence of subjective glare did not differ significantly between participants with totally covered LPI (6.61%; 95% confidence interval [CI], 3.39%-12.5%), partially covered LPI (11.6%; 95% CI, 5.07%-24.5%), or totally uncovered LPI (9.43%; 95% CI, 4.10%-10.3%). In regression models, only worse cortical cataract grade (P = 0.01) was associated significantly with straylight score, and no predictors were associated with subjective glare. None of the LPI size or location parameters were associated with straylight or subjective symptoms. CONCLUSIONS: These results suggest that LPI is safe regarding measures of straylight and visual symptoms. This randomized design provides strong evidence that treatment programs for narrow angles would be unlikely to result in important medium-term visual disability.