928 results for Digital terrain model
Abstract:
The research aims to develop a framework for semantics-based digital survey of architectural heritage. Rooted in knowledge-based modeling, which extracts mathematical constraints on geometry from architectural treatises, as-built information obtained from image-based modeling is integrated with the ideal model on a BIM platform. Knowledge-based modeling transforms the geometry and parametric relations of architectural components from 2D prints into 3D digital models and, thanks to parametric modeling, creates a large number of variations based on shape grammar in real time. It also provides prior knowledge for semantically segmenting unorganized survey data. The emergence of SfM (Structure from Motion) makes it possible to reconstruct large, complex architectural scenes with high flexibility, low cost, and full automation, but with low reliability in metric accuracy. We address this problem by combining photogrammetric approaches consisting of camera configuration, image enhancement, and bundle adjustment. Experiments show that the accuracy of image-based modeling following our workflow is comparable to that of range-based modeling. We also demonstrate positive results of our optimized approach in the digital reconstruction of a portico, where low-texture vaults and dramatic transitions in illumination pose great difficulties for the unoptimized workflow. Once the as-built model is obtained, it is integrated with the ideal model on a BIM platform, which allows multiple forms of data enrichment. Despite its promising prospects in the AEC industry, BIM has been developed with limited consideration of reverse engineering from survey data. Besides representing the architectural heritage in parallel (ideal model and as-built model) and comparing their differences, we address how to create the as-built model in BIM software, which remains an open problem. The research is intended to be foundational for the study of architectural history, for the documentation and conservation of architectural heritage, and for the renovation of existing buildings.
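For reference, the bundle adjustment step mentioned above is conventionally posed as a joint reprojection-error minimization; the following generic formulation (notation ours, not taken from the thesis) illustrates it:

$$\min_{\{C_j\},\,\{X_i\}} \; \sum_{i,j} \rho\!\left( \left\| \pi(C_j, X_i) - x_{ij} \right\|^2 \right),$$

where \(X_i\) are the reconstructed 3D points, \(C_j\) the camera parameters, \(\pi\) the projection function, \(x_{ij}\) the observed image measurements, and \(\rho\) an optional robust loss.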
Abstract:
The atmosphere is a global influence on the movement of heat and humidity between the continents and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for understanding different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis, two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, which is dated to 12,900 years before present. Taken together, the cores cover the last 60,000 years: SM3 the Holocene, and DE3 the marine isotope stages MIS-3 and MIS-2, respectively. The frequency of glacial dust storm events and their paleo wind directions are detected by high-resolution grain-size and provenance analysis of the lake sediments. Two different methods are applied: geochemical measurement of the sediment using µXRF scanning, and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections).

It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar is located on the western side of the Eifel North-South zone, so carbonate-rich aeolian sediment is most likely transported towards the Dehner dry maar by easterly winds. A methodology is developed which limits detection to the aeolian-transported carbonate particles in the sediment: the RADIUS-carbonate module.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east wind frequency are increased in comparison to MIS-2. These results lead to the suggestion that atmospheric circulation was affected by more turbulent conditions during MIS-3 than during the more stable, full glacial conditions of MIS-2.

The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east wind conditions and of east wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also reflected in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea surface temperature patterns. Furthermore, the analysis of long-persisting east wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer.
The different glacial experiments consistently show a shift of a long-lasting high from over the Baltic Sea towards the northwest, directly above the Scandinavian Ice Sheet, together with a contemporaneous enhancement of westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe. While the proxy data from the sediment cores yield only a binary signal of wind direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models succeeds in showing a possible distribution of high- and low-pressure areas, and thus the direction and strength of the wind fields capable of transporting dust. In conclusion, the combination of numerical models, which enhance the understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.
Abstract:
Wireless networks have rapidly become a fundamental pillar of everyday activities. Whether at work or elsewhere, people often benefit from always-on connections. This trend is likely to continue, and current technologies struggle to cope with the resulting increase in traffic demand. To this end, Cognitive Wireless Networks have been studied. These networks aim at a better utilization of the spectrum by understanding the environment in which they operate and adapting accordingly. In particular, national regulators have recently opened consultations on the opportunistic use of the TV bands, which became partially free with the digital TV switchover. In this work, we focus on the indoor use of TV White Spaces (TVWS). Interesting use cases such as smart metering and WiFi-like connectivity arise, and these are studied and compared against state-of-the-art technology. New measurements for TVWS networks are presented and evaluated, and fundamental characteristics of the signal are derived. Building on these, a new model of spectrum sharing, which also takes into account the height above the terrain, is presented and evaluated in a real scenario. The principal limits and performance of TVWS-operated networks are studied for two main use cases, namely Machine-to-Machine communication and wireless sensor networks, particularly in the smart grid scenario. The outcome is that TVWS are certainly worth studying and deploying, in particular as an additional offload for other wireless technologies. TVWS as the only wireless technology on a device is harder to envision: the uncertainty in channel availability is the major drawback of opportunistic networks, since, depending on the primary network's channel allocation, no channels may be available for communication. TVWS can be effectively exploited as an offloading solution, and most of the contributions presented in this work proceed in this direction.
Abstract:
In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections by moving a detector and an X-ray tube around the object within a limited angular range. Reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem whose objective function contains a data-fitting term and a regularization term. The contribution of this thesis is to apply techniques from compressed sensing, in particular replacing the standard least-squares data-fitting problem with minimization of the 1-norm of the residuals, and using Total Variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM) and a version of the more standard scaled projected gradient algorithm (SGP) that handles the 1-norm. We performed experiments and analysed the performance of the two methods, comparing relative errors, iteration counts, run times, and the quality of the reconstructed images. We conclude that the 1-norm and Total Variation are valid tools in formulating the minimization problem for image reconstruction in Digital Tomosynthesis, and that the new ADM algorithm reaches a relative error comparable to that of a version of the classic SGP algorithm while proving superior in speed and in the early appearance of the structures representing the masses.
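In generic notation (ours, not the thesis's), the reconstruction problem described above can be written as

$$\min_{x \geq 0} \; \|Ax - b\|_1 + \lambda\, \mathrm{TV}(x),$$

where \(A\) models the limited-angle projection geometry, \(b\) collects the measured low-dose projections, \(\mathrm{TV}(x)\) is the Total Variation of the image \(x\), and \(\lambda > 0\) balances data fidelity against regularization.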
Abstract:
A laser scanning microscope collects information from a thin focal plane and ignores out-of-focus information. During the past few years it has become the standard imaging method for characterising cellular morphology and structures in static as well as living samples. Laser scanning microscopy combined with digital image restoration is an excellent tool for analysing the cellular cytoarchitecture, the expression of specific proteins, and the interactions of various cell types, thus defining valid criteria for the optimisation of cell culture models. We have used this tool to establish and evaluate a three-dimensional model of the human epithelial airway wall.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models are proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, with parameters extracted from real transient data. This model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding predictions based on measured air-handling parameters.
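As an illustration of such a dynamic constraint model, the sketch below filters a commanded air-handling parameter through a generic closed-loop second-order system; the natural frequency and damping values are placeholders, not the parameters identified in this work.

```python
# Sketch: commanded air-handling parameter u(t) filtered through a
# linear second-order system to give the achieved parameter y(t).
# wn and zeta below are illustrative, not the study's fitted values.
import numpy as np

def second_order_response(u, dt, wn=2.0, zeta=0.8):
    """Euler integration of y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u."""
    y = np.zeros_like(u)
    ydot = 0.0
    for k in range(1, len(u)):
        yddot = wn**2 * (u[k-1] - y[k-1]) - 2.0 * zeta * wn * ydot
        ydot = ydot + yddot * dt
        y[k] = y[k-1] + ydot * dt
    return y

# Example: achieved parameter lagging a step command at t = 1 s.
t = np.arange(0.0, 5.0, 0.01)
commanded = np.where(t >= 1.0, 1.0, 0.0)
achieved = second_order_response(commanded, dt=0.01)
```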
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and cylinder-to-cylinder EGR distribution effects are shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
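One common way to carry out such processing (a sketch under assumed first-order sensor dynamics and a fixed transport delay; the time constant and delay below are illustrative, not the study's identified values) is:

```python
# Sketch: compensate a first-order sensor lag and a transport delay
# when time-aligning transient measurements. tau and delay_s are
# placeholder values, not those identified in the study.
import numpy as np

def compensate(measured, dt, tau=0.5, delay_s=0.2):
    """Undo a first-order sensor lag, then remove a transport delay.

    Assumes the sensor obeys tau * y_m' + y_m = y, so y ~ y_m + tau * y_m',
    and that the gas reaches the sensor delay_s seconds after the event.
    """
    deriv = np.gradient(measured, dt)        # estimate d(y_m)/dt
    reconstructed = measured + tau * deriv   # invert the first-order lag
    shift = int(round(delay_s / dt))         # transport delay in samples
    return reconstructed[shift:]             # re-align by dropping the delay
```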
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing needed for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, in order to prevent extrapolation during the optimization process, is proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from becoming quasi-static. Second-order linear dynamic constraint models are proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
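For reference, the statistical leverage of a candidate point \(x_i\) with respect to a training design matrix \(X\) is the standard hat-matrix diagonal

$$h_i = x_i^{\mathsf{T}} \left( X^{\mathsf{T}} X \right)^{-1} x_i,$$

so constraining the leverage distribution of visited solutions relative to that of the starting solution, as proposed above, amounts to rejecting moves whose \(h_i\) values drift far beyond those seen in training (the exact constraint used in the study is not specified in this abstract).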
Abstract:
We show that the variation of flow stress with strain rate and grain size in a magnesium alloy deformed at a constant strain rate and 450 °C can be predicted by a crystal plasticity model that includes grain boundary sliding and diffusion. The model predicts the grain size dependence of the critical strain rate that will cause a transition in deformation mechanism from dislocation creep to grain boundary sliding, and yields estimates for grain boundary fluidity and diffusivity.
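The predicted transition can be rationalized with generic power-law rate equations (the forms and exponents here are illustrative assumptions, not the fitted values of the model): dislocation creep is grain-size independent while grain boundary sliding is not,

$$\dot{\varepsilon}_{\mathrm{disl}} = A\,\sigma^{n}, \qquad \dot{\varepsilon}_{\mathrm{GBS}} = B\,\frac{\sigma^{p}}{d^{q}},$$

so equating the two contributions at a given stress yields a critical strain rate that decreases as the grain size \(d\) grows, consistent with the grain-size-dependent mechanism transition described above.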
Abstract:
The Gaussian-2, Gaussian-3, complete basis set- (CBS-) QB3, and CBS-APNO methods have been used to calculate ΔH° and ΔG° values for neutral clusters of water, (H2O)n, where n = 2–6. The structures are similar to those determined from experiment and from previous high-level calculations. The thermodynamic calculations by the G2, G3, and CBS-APNO methods compare well against the estimated MP2(CBS) limit. The cyclic pentamer and hexamer structures release the most heat per hydrogen bond formed of any of the clusters. While the cage and prism forms of the hexamer are the lowest-energy structures at very low temperatures, as temperature is increased the cyclic structure is favored. The free energies of cluster formation at different temperatures reveal interesting insights, the most striking being that the cyclic trimer, cyclic tetramer, and cyclic pentamer, like the dimer, should be detectable in the lower troposphere. We predict water dimer concentrations of 9 × 10^14 molecules/cm^3, trimer concentrations of 2.6 × 10^12 molecules/cm^3, tetramer concentrations of approximately 5.8 × 10^11 molecules/cm^3, and pentamer concentrations of approximately 3.5 × 10^10 molecules/cm^3 in saturated air at 298 K. These results have important implications for understanding the gas-phase chemistry of the lower troposphere.
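Such concentration estimates follow from the computed free energies through standard equilibrium thermodynamics; the sketch below (with a placeholder ΔG°, not one of the paper's values) shows the conversion for the clustering reaction n H2O ⇌ (H2O)n.

```python
# Sketch: number density of a water cluster (H2O)n in saturated air at
# 298 K from a standard free energy of clustering. The DeltaG value in
# the example is a placeholder, not a result from the paper.
import math

R = 8.314462       # gas constant, J/(mol K)
KB = 1.380649e-23  # Boltzmann constant, J/K
T = 298.15         # temperature, K
P0 = 101325.0      # standard-state pressure, Pa
P_H2O = 3169.0     # saturation vapor pressure of water at 298 K, Pa

def cluster_density(delta_g_kcal_per_mol, n):
    """Molecules/cm^3 of (H2O)n for n H2O <-> (H2O)n with the given dG."""
    k_eq = math.exp(-delta_g_kcal_per_mol * 4184.0 / (R * T))
    p_cluster = P0 * k_eq * (P_H2O / P0) ** n    # cluster partial pressure
    return p_cluster / (KB * T) / 1.0e6          # Pa -> molecules/cm^3

# Example with an assumed dimer DeltaG of +2.0 kcal/mol:
print(f"(H2O)2: {cluster_density(2.0, 2):.1e} molecules/cm^3")
```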
Abstract:
The supermolecule approach has been used to model the hydration of cyclic 3′,5′-adenosine monophosphate, cAMP. Model building combined with PM3 optimizations predicts that the anti conformer of cAMP is capable of hydrogen bonding to one additional solvent water molecule compared to the syn conformer. The addition of one water to the syn superstructure, with concurrent rotation of the base about the glycosyl bond to form the anti superstructure, leads to an additional enthalpy of stabilization of approximately −6 kcal/mol at the PM3 level. This specific solute–solvent interaction is an example of a large solvent effect, as the method predicts that cAMP has a conformational preference for the anti isomer in solution. This conformational preference results from a change in the number of specific solute–solvent interactions in this system, a prediction that could be tested by NMR techniques. The number of waters predicted to be in the first hydration sphere around cAMP is in agreement with the results of hydration studies of nucleotides in DNA. In addition, the detailed picture of solvation about this cyclic nucleotide is in agreement with infrared experimental results.
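For context, the stabilization energy in a supermolecule treatment is conventionally defined relative to the separated species (generic definition, not the paper's exact expression):

$$\Delta E_{\mathrm{stab}} = E\big(\mathrm{cAMP}\cdot n\,\mathrm{H_2O}\big) - E\big(\mathrm{cAMP}\big) - n\,E\big(\mathrm{H_2O}\big),$$

with the reported −6 kcal/mol corresponding to the enthalpic difference between the anti and syn superstructures after one further water is added.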
Abstract:
The Gaussian-2, Gaussian-3, Complete Basis Set-QB3, and Complete Basis Set-APNO methods have been used to calculate geometries of neutral clusters of water, (H2O)n, where n = 2–6. The structures are in excellent agreement with those determined from experiment and those predicted from previous high-level calculations. These methods also provide excellent thermochemical predictions for water clusters, and thus can be used with confidence in evaluating the structures and thermochemistry of water clusters.
Abstract:
The Gaussian-3 method developed by Pople and coworkers has been used to calculate the free energy of neutral octamer clusters of water, (H2O)8. The most energetically stable structures are in excellent agreement with those determined from experiment and those predicted from previous high-level calculations. Cubic structures are favored over noncubic structures over all temperature ranges studied. The D2d cubic structure is the lowest free energy structure and dominates the potential energy and free energy hypersurfaces from 0 K to 298 K.
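The statement that the D2d cube dominates from 0 K to 298 K follows from Boltzmann weighting of the isomer free energies (a standard relation, not specific to this paper):

$$p_i(T) = \frac{e^{-G_i/RT}}{\sum_j e^{-G_j/RT}},$$

so an isomer whose \(G_i\) lies lowest by more than a few \(RT\) carries essentially all of the population over the whole temperature range.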
Abstract:
A series of CCSD(T) single-point calculations on MP4(SDQ) geometries and the W1 model chemistry method have been used to calculate ΔH° and ΔG° values for 17 gas-phase deprotonation reactions whose experimental values have reported accuracies within 1 kcal/mol. These values have been compared with previous calculations using the G3 and CBS model chemistries and two DFT methods. The most accurate CCSD(T) method uses the aug-cc-pVQZ basis set. Extrapolation of the aug-cc-pVTZ and aug-cc-pVQZ results yields the best agreement with experiment, with a standard deviation of 0.58 kcal/mol for ΔG° and 0.70 kcal/mol for ΔH°. Standard deviations from experiment for ΔG° and ΔH° with the W1 method are 0.95 and 0.83 kcal/mol, respectively. The G3 and CBS-APNO results are competitive with W1 and are much less expensive. Any of the model chemistry methods, or the CCSD(T)/aug-cc-pVQZ method, can serve as a valuable check on the accuracy of experimental data reported in the National Institute of Standards and Technology (NIST) database.
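A common two-point scheme for the extrapolation mentioned above (one standard choice; the abstract does not specify the exact formula used) combines the aug-cc-pVTZ (X = 3) and aug-cc-pVQZ (X = 4) energies as

$$E_{\mathrm{CBS}} \approx \frac{4^{3}\,E_{\mathrm{QZ}} - 3^{3}\,E_{\mathrm{TZ}}}{4^{3} - 3^{3}},$$

exploiting the approximately \(X^{-3}\) convergence of the correlation energy with the cardinal number of the basis set.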
Abstract:
The G2, G3, CBS-QB3, and CBS-APNO model chemistry methods and the B3LYP, B3P86, mPW1PW, and PBE1PBE density functional theory (DFT) methods have been used to calculate ΔH° and ΔG° values for ionic clusters of the ammonium ion complexed with water and ammonia. Results for the clusters NH4+(NH3)n and NH4+(H2O)n, where n = 1–4, are reported in this paper and compared against experimental values. Agreement with the experimental values for ΔH° and ΔG° for formation of the NH4+(NH3)n clusters is excellent. Comparison between experiment and theory for formation of the NH4+(H2O)n clusters is quite good considering the uncertainty in the experimental values. The four DFT methods yield excellent agreement with experiment and the model chemistry methods when the aug-cc-pVTZ basis set is used for energetic calculations and the 6-31G* basis set is used for geometries and frequencies. On the basis of these results, we predict that all ions in the lower troposphere will be saturated with at least one complete first hydration shell of water molecules.