897 results for estimation of dynamic structural models
Abstract:
Doctorate in Economics.
Abstract:
Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands on, and threats to, forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that involves generalizing sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly and, as a consequence, spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest-based k-Nearest Neighbors (RF-kNN) imputation approach to couple remote sensing and geospatial data with field inventory collected by different sampling methods and generate forest inventory information across large spatial extents. Forest inventory data collected by the FIA program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for part of the Lake States and species-specific site index maps for the entire Lake States region. Targeting small-area application of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method, called variable plot sampling, in the Ford Forest of Michigan Tech to derive a standing volume map in a cost-effective way. The outputs of the RF-kNN imputation were compared with independent validation datasets and extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or to estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.
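A minimal sketch of an RF-kNN style imputation in Python, assuming tabular remote-sensing predictors and a single field-measured response (e.g. biomass). The variable names, the proximity rule (fraction of trees sharing a terminal node) and k = 5 are illustrative assumptions, not the exact FIA/RF-kNN workflow described above.

```python
# Hedged sketch: synthetic predictors/response; proximity rule is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(200, 6))        # predictors at reference (field) plots
y_ref = rng.gamma(2.0, 50.0, size=200)   # e.g. biomass (Mg/ha) at those plots
X_tgt = rng.normal(size=(1000, 6))       # predictors at unsampled target pixels

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_ref, y_ref)

# Proximity: fraction of trees in which a target pixel and a reference plot
# end up in the same terminal node.
leaves_ref = rf.apply(X_ref)             # shape (n_ref, n_trees)
leaves_tgt = rf.apply(X_tgt)             # shape (n_tgt, n_trees)
prox = (leaves_tgt[:, None, :] == leaves_ref[None, :, :]).mean(axis=2)

k = 5
nn = np.argsort(-prox, axis=1)[:, :k]    # k most proximate reference plots
y_imputed = y_ref[nn].mean(axis=1)       # imputed attribute for each target pixel
print(y_imputed[:5])
```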
Abstract:
Tall buildings are wind-sensitive structures and can experience large wind-induced effects. Aerodynamic boundary layer wind tunnel testing has been the most commonly used method for estimating wind effects on tall buildings. Design wind effects on tall buildings are estimated through analytical processing of the data obtained from aerodynamic wind tunnel tests. Even though it is widely agreed that the data obtained from wind tunnel testing are fairly reliable, the post-test analytical procedures are still argued to carry remarkable uncertainties. This research work assessed in detail the uncertainties arising at different stages of the post-test analytical procedures and suggests improved techniques for reducing them. Results of the study showed that traditionally used simplifying approximations, particularly in the frequency domain approach, can cause significant uncertainties in estimating aerodynamic wind-induced responses. Based on the identified shortcomings, a more accurate dual aerodynamic data analysis framework that works in both the frequency and time domains was developed. The comprehensive analysis framework allows the modal, resultant and peak values of various wind-induced responses of a tall building to be estimated more accurately. Estimating design wind effects on tall buildings also requires synthesizing the wind tunnel data with local climatological data of the study site. A novel copula-based approach was developed for accurately synthesizing aerodynamic and climatological data, upon investigating the causes of significant uncertainties in currently used synthesizing techniques. The improvement of the new approach over existing techniques was also illustrated with a case study on a 50-story building. Finally, a practical dynamic optimization approach was suggested for tuning the structural properties of tall buildings toward optimum performance against wind loads with fewer design iterations.
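A minimal sketch of a Gaussian-copula pairing of two wind-related variables in Python, assuming synthetic data for a wind speed and a response coefficient. The Weibull marginal, the empirical second margin and the single correlation parameter are illustrative assumptions, not the copula formulation developed in the thesis.

```python
# Hedged sketch: marginals, correlation and variable names are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
speed = stats.weibull_min.rvs(2.0, scale=10.0, size=500, random_state=rng)
coeff = 0.05 * speed + stats.norm.rvs(scale=0.2, size=500, random_state=rng)

# 1) Transform each margin to (0, 1) via empirical CDFs, then to normal scores.
u = stats.rankdata(speed) / (len(speed) + 1)
v = stats.rankdata(coeff) / (len(coeff) + 1)
z = stats.norm.ppf(np.column_stack([u, v]))

# 2) Fit the Gaussian-copula correlation and simulate new dependent pairs.
rho = np.corrcoef(z, rowvar=False)[0, 1]
sim = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
u_sim, v_sim = stats.norm.cdf(sim).T

# 3) Map the simulated uniforms back through the fitted/empirical marginals.
speed_sim = stats.weibull_min.ppf(u_sim, *stats.weibull_min.fit(speed, floc=0))
coeff_sim = np.quantile(coeff, v_sim)    # empirical inverse CDF for the 2nd margin
print(rho, speed_sim.mean(), coeff_sim.mean())
```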
Abstract:
During its history, several significant earthquakes have shaken the Lower Tagus Valley (Portugal). These earthquakes were destructive; some strong earthquakes were produced by large ruptures in offshore structures located southwest of the Portuguese coastline, and other moderate earthquakes were produced by local faults. In recent years, several studies have successfully obtained strong ground motion syntheses for the Lower Tagus Valley using the finite difference method. To confirm the velocity model of this sedimentary basin obtained from geophysical and geological data, we analysed ambient seismic noise measurements by applying the horizontal-to-vertical spectral ratio (HVSR) method. This study reveals the dependence of the frequency and amplitude of the low-frequency HVSR peaks (0.2–2 Hz) on the sediment thickness. We obtained the depth of the Cenozoic basement along a profile transverse to the basin by inversion of these ratios, imposing constraints from seismic reflection, boreholes, seismic soundings, and gravimetric and magnetic potential-field data. This technique enables us to improve the existing three-dimensional model of the Lower Tagus Valley structure. The improved model will be decisive for improving strong motion predictions in the earthquake hazard analysis of this highly populated basin. The methodology discussed can be applied to any other sedimentary basin.
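A minimal sketch of the single-station HVSR computation in Python, assuming three synthetic ambient-noise components sampled at 100 Hz. The window length, the use of Welch power spectra and the quadratic mean of the horizontal components are common choices taken here as assumptions; the inversion for basement depth is not shown.

```python
# Hedged sketch: synthetic noise records; processing choices are assumptions.
import numpy as np
from scipy.signal import welch

fs = 100.0                                  # sampling rate (Hz)
t = np.arange(0, 3600, 1 / fs)              # one hour of synthetic noise
rng = np.random.default_rng(2)
z = rng.normal(size=t.size)                 # vertical component
n = rng.normal(size=t.size)                 # north-south component
e = rng.normal(size=t.size)                 # east-west component

nperseg = int(50 * fs)                      # ~50 s windows resolve low frequencies
f, Pzz = welch(z, fs=fs, nperseg=nperseg)
_, Pnn = welch(n, fs=fs, nperseg=nperseg)
_, Pee = welch(e, fs=fs, nperseg=nperseg)

# Quadratic mean of the horizontal spectra divided by the vertical spectrum.
hvsr = np.sqrt((Pnn + Pee) / (2.0 * Pzz))

band = (f >= 0.2) & (f <= 2.0)              # low-frequency band of interest
f0 = f[band][np.argmax(hvsr[band])]         # fundamental resonance frequency
print(f"HVSR peak near f0 = {f0:.2f} Hz")
```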
Abstract:
The work carried out in this thesis aims at:
- Studying, by both simulation and experiment, the effect of electrical transients (i.e., voltage polarity reversals (VPRs), temporary overvoltages (TOVs), and superimposed switching impulses (SSIs)) on aging phenomena in HVDC extruded cable insulation. Dielectric spectroscopy, conductivity measurements, Fourier transform infrared (FTIR) spectroscopy, and space charge measurements show variations in the insulating properties of the aged cross-linked polyethylene (XLPE) specimens compared to non-aged ones. Scission of XLPE bonds and the formation of aging-related chemical bonds are also noticed in aged insulation, due to possible oxidation reactions. The aged materials show a greater ability to accumulate space charge than non-aged ones, and an increase in both the DC electrical conductivity and the imaginary permittivity is also observed.
- Developing a life-based geometric design of HVDC cables, with a detailed parametric analysis of all parameters that affect the design. The effect of both electrical and thermal transients on the design is also investigated.
- Analyzing the intrinsic thermal instability in HVDC cables and the effect of insulation characteristics on thermal stability using a temperature and field iterative loop solved numerically with the finite difference method (FDM); a simplified sketch of such a loop is given after this list. The dielectric loss coefficient is also calculated for DC cables and found to be less than that in AC cables. This emphasizes that the intrinsic thermal instability is critical in HVDC cables.
- Fitting electrical conductivity models to the experimental measurements, using both models found in the literature and modified models, to find the best fit by considering the synergistic effect between the field and temperature coefficients of electrical conductivity.
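A minimal sketch of a temperature/field iterative loop for a DC cable insulation in Python, modelled in one radial dimension with finite differences. The conductivity law sigma(T, E) = sigma0*exp(a*T)*exp(b*E), the geometry, the losses and the boundary conditions are illustrative assumptions, not the parameters used in the thesis.

```python
# Hedged sketch: geometry, conductivity law and losses are assumptions.
import numpy as np

r_in, r_out = 0.02, 0.04            # inner/outer insulation radii (m)
U = 320e3                           # applied DC voltage (V)
W_cond = 30.0                       # conductor Joule loss per unit length (W/m)
k_ins = 0.3                         # thermal conductivity of the insulation (W/m/K)
T_amb = 20.0                        # temperature at the outer surface (deg C)
sigma0, a, b = 1e-16, 0.08, 3e-8    # sigma(T,E) = sigma0*exp(a*T)*exp(b*E)

N = 200
r = np.linspace(r_in, r_out, N)
T = np.full(N, T_amb)
E = np.full(N, U / (r_out - r_in))

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral with a leading zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

for _ in range(200):
    # 1) Field update: in steady state sigma*E*r is constant in r and
    #    the field integrates to the applied voltage U.
    sigma = sigma0 * np.exp(a * T) * np.exp(b * E)
    w = 1.0 / (sigma * r)
    E_new = U * w / cumtrapz0(w, r)[-1]

    # 2) Temperature update: heat crossing radius r equals the conductor loss
    #    plus the dielectric loss sigma*E^2 generated inside r.
    q = sigma * E_new**2
    Q = W_cond + 2.0 * np.pi * cumtrapz0(q * r, r)
    g = Q / (2.0 * np.pi * k_ins * r)          # -dT/dr from Fourier's law
    cum = cumtrapz0(g, r)
    T_new = T_amb + (cum[-1] - cum)            # integrate inward from r_out

    if np.max(np.abs(T_new - T)) < 1e-4:
        T, E = T_new, E_new
        break
    T, E = T_new, E_new

print(f"max insulation temperature = {T[0]:.1f} C, "
      f"field at inner/outer radius = {E[0]/1e6:.1f}/{E[-1]/1e6:.1f} kV/mm")
```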
Abstract:
Fourier transform near infrared (FT-NIR) spectroscopy was evaluated as an analytical tool for monitoring residual lignin, kappa number and hexenuronic acids (HexA) content in kraft pulps of Eucalyptus globulus. Sets of pulp samples were prepared under different cooking conditions to obtain a wide range of compound concentrations, which were characterised by conventional wet-chemistry analytical methods. The sample group was also analysed by FT-NIR spectroscopy in order to establish prediction models for the pulp characteristics. Several models were applied to correlate the chemical composition of the samples with the NIR spectral data by means of PCR or PLS algorithms. Calibration curves were built using all the spectral data or selected regions. The best calibration models for the quantification of lignin, kappa number and HexA presented R² values of 0.99. The calibration models were used to predict the properties of 20 external samples in a validation set. The lignin concentration and kappa number, in the ranges of 1.4-18% and 8-62, respectively, were predicted fairly accurately (standard error of prediction, SEP, of 1.1% for lignin and 2.9 for kappa number). The HexA concentration (range of 5-71 mmol kg⁻¹ pulp) was more difficult to predict: the SEP was 7.0 mmol kg⁻¹ pulp in a model of HexA quantified by an ultraviolet (UV) technique and 6.1 mmol kg⁻¹ pulp in a model of HexA quantified by anion-exchange chromatography (AEC). Even among the wet-chemistry procedures used for HexA determination there is no good agreement between methods, as demonstrated by the UV and AEC methods described in the present work. NIR spectroscopy did provide a rapid estimate of HexA content in kraft pulps prepared in routine cooking experiments.
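A minimal sketch of a PLS calibration of NIR spectra against a reference pulp property (kappa number here) in Python with scikit-learn. The synthetic spectra, the number of latent variables and the 10-fold cross-validation are illustrative assumptions, not the study's calibration data or model selection.

```python
# Hedged sketch: synthetic spectra; model settings are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 60, 500
kappa = rng.uniform(8, 62, size=n_samples)               # reference kappa numbers
band = np.exp(-np.linspace(-2, 2, n_wavelengths) ** 2)   # one broad absorption band
X = kappa[:, None] * band[None, :] + rng.normal(scale=0.05,
                                                size=(n_samples, n_wavelengths))

pls = PLSRegression(n_components=5)
kappa_cv = cross_val_predict(pls, X, kappa, cv=10).ravel()

rmsecv = np.sqrt(np.mean((kappa_cv - kappa) ** 2))
r2 = 1 - np.sum((kappa_cv - kappa) ** 2) / np.sum((kappa - kappa.mean()) ** 2)
print(f"RMSECV = {rmsecv:.2f}, R2 = {r2:.3f}")
```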
Abstract:
The aim of this study is to quantify the mass transfer velocity using turbulence parameters obtained from simultaneous measurements of oxygen concentration fields and velocity fields. The surface divergence model was considered in more detail, using data obtained for the lower range of beta (surface divergence). It is shown that the existing models that use the divergence concept furnish good predictions of the transfer velocity even for the low values of beta covered in this study. Additionally, traditional conceptual models, such as the film model, the penetration-renewal model, and the large eddy model, were tested using the simultaneous information on concentration and velocity fields. It is shown that the film and surface divergence models predicted the mass transfer velocity over the entire range of equipment Reynolds numbers used here. The velocity measurements showed viscosity effects close to the surface, which indicates that the surface was contaminated with some surfactant. The results indicate that this contamination had only a slight effect on the mass transfer predictions.
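A minimal sketch, in Python, comparing the classical estimates of the mass transfer velocity k_L named above (film, penetration and surface divergence models). The diffusivity, film thickness, renewal time, rms surface divergence and the empirical constant are illustrative assumptions, not the measured values of the study.

```python
# Hedged sketch: all numerical values are assumed, order-of-magnitude inputs.
import numpy as np

D = 2.0e-9          # molecular diffusivity of O2 in water (m^2/s)
delta = 50e-6       # film thickness (m), film model
t_renew = 1.0       # surface renewal time (s), penetration model
beta_rms = 0.5      # rms surface divergence (1/s), surface divergence model
c = 0.5             # empirical constant of the surface divergence model

k_film = D / delta                                    # film model
k_penetration = 2.0 * np.sqrt(D / (np.pi * t_renew))  # penetration model
k_divergence = c * np.sqrt(D * beta_rms)              # surface divergence model

for name, k in [("film", k_film), ("penetration", k_penetration),
                ("surface divergence", k_divergence)]:
    print(f"{name:>18s}: k_L = {k:.2e} m/s")
```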
Abstract:
Dynamic experiments in a nonadiabatic packed bed were carried out to evaluate the response to disturbances in wall temperature and inlet airflow rate and temperature. A two-dimensional, pseudo-homogeneous, axially dispersed plug-flow model was numerically solved and used to interpret the results. The model parameters were fitted in distinct stages: effective radial thermal conductivity (K_r) and wall heat transfer coefficient (h_w) were estimated from steady-state data and the characteristic packed bed time constant (τ) from transient data. A new correlation for K_r in packed beds of cylindrical particles was proposed. It was experimentally proved that temperature measurements using radially inserted thermocouples and a ring-shaped sensor were not distorted by heat conduction across the thermocouple or by the thermal inertia effect of the temperature sensors.
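A minimal sketch, in Python, of estimating K_r and h_w by fitting a steady-state radial temperature profile computed with a simplified 2-D pseudo-homogeneous plug-flow model (explicit finite differences marched along the bed). The geometry, flow conditions and the synthetic "measurements" are illustrative assumptions; axial dispersion and the transient time-constant fitting of the study are not included.

```python
# Hedged sketch: bed geometry, flow conditions and data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

R, L = 0.025, 0.10              # bed radius and length (m)
G_cp = 1500.0                   # superficial mass flux x heat capacity (W/m^2/K)
T_in, T_wall = 30.0, 80.0       # inlet and wall temperatures (deg C)
nr, nz = 21, 1000
r = np.linspace(0.0, R, nr)
dr, dz = r[1] - r[0], L / nz

def outlet_profile(r_obs, K_r, h_w):
    """March the radial energy balance along z; return T(r_obs) at z = L."""
    T = np.full(nr, T_in)
    for _ in range(nz):
        lap = np.empty(nr)
        lap[1:-1] = ((T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
                     + (T[2:] - T[:-2]) / (2 * dr * r[1:-1]))
        lap[0] = 4.0 * (T[1] - T[0]) / dr**2                    # symmetry at axis
        T_gh = T[-2] + 2 * dr * (h_w / K_r) * (T_wall - T[-1])  # wall heat-flux BC
        lap[-1] = ((T_gh - 2 * T[-1] + T[-2]) / dr**2
                   + (T_gh - T[-2]) / (2 * dr * r[-1]))
        T = T + dz * K_r * lap / G_cp
    return np.interp(r_obs, r, T)

# Synthetic "measurements": the model evaluated at known parameters plus noise.
rng = np.random.default_rng(4)
r_obs = np.linspace(0.0, R, 9)
T_obs = outlet_profile(r_obs, 0.4, 60.0) + rng.normal(scale=0.2, size=r_obs.size)

(K_r_fit, h_w_fit), _ = curve_fit(outlet_profile, r_obs, T_obs, p0=[0.3, 40.0],
                                  bounds=([0.05, 5.0], [2.0, 500.0]))
print(f"K_r = {K_r_fit:.3f} W/m/K, h_w = {h_w_fit:.1f} W/m^2/K")
```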
Abstract:
Traditional field sampling approaches for ecological studies of restored habitat can only cover small areas in detail, can be time-consuming, and are often invasive and destructive. Spatially extensive and non-invasive remotely sensed data can make field sampling more focused and efficient. The objective of this work was to investigate the feasibility and accuracy of hand-held and airborne remotely sensed data for estimating vegetation structural parameters for an indicator plant species in a restored wetland. High-spatial-resolution digital multispectral camera images were captured from an aircraft over Sweetwater Marsh (San Diego County, California) during each growing season between 1992 and 1996. Field data were collected concurrently, including plant heights, proportional ground cover, canopy architecture type, and spectral radiometer measurements. Spartina foliosa (Pacific cordgrass) is the indicator species for the restoration monitoring. A conceptual model summarizing the controls on the spectral reflectance properties of Pacific cordgrass was established. Empirical models were developed relating the stem length, density, and canopy architecture of cordgrass to normalized difference vegetation index (NDVI) values. The most promising results were obtained from empirical estimates of total ground cover using image data that had been stratified into high, middle, and low marsh zones. As part of ongoing restoration monitoring activities, this model is being used to provide maps of estimated vegetation cover.
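A minimal sketch, in Python, of the kind of empirical NDVI model described above: compute NDVI from red and near-infrared reflectance and regress a vegetation parameter (percent cover) against it. The reflectance values and the linear model form are illustrative assumptions, not the study's calibration.

```python
# Hedged sketch: synthetic reflectances and cover values.
import numpy as np

rng = np.random.default_rng(5)
red = rng.uniform(0.03, 0.15, size=80)      # red-band reflectance
nir = rng.uniform(0.10, 0.45, size=80)      # near-infrared reflectance
ndvi = (nir - red) / (nir + red)

# Synthetic "field" percent cover loosely tied to NDVI.
cover = np.clip(120 * ndvi - 10 + rng.normal(scale=8, size=80), 0, 100)

slope, intercept = np.polyfit(ndvi, cover, 1)
cover_hat = slope * ndvi + intercept
r2 = 1 - np.sum((cover - cover_hat) ** 2) / np.sum((cover - cover.mean()) ** 2)
print(f"cover = {slope:.1f} * NDVI + {intercept:.1f}  (R2 = {r2:.2f})")
```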
Abstract:
The experiment examined the influence of memory for prior instances on aircraft conflict detection. Participants saw pairs of similar aircraft repeatedly conflict with each other. Performance improvements suggest that participants attributed the conflict status of familiar aircraft pairs to repeated static features, such as speed, and dynamic features, such as the aircraft's relative position. Participants missed conflicts when a conflict pair resembled a pair that had repeatedly passed safely. As a result of false memory-based expectations, participants either did not attend to the bearing of the aircraft or did not interpret it correctly. Implications for instance models and situational awareness in dynamic systems are discussed.
Abstract:
The magnitude of the basic reproduction ratio R0 of an epidemic can be estimated in several ways, namely from the final size of the epidemic, from the average age at first infection, or from the initial growth phase of the outbreak. In this paper, we discuss this last method for estimating R0 for vector-borne infections. Implicit in these models is the assumption that there is an exponential phase of the outbreak, which implies that in all cases R0 > 1. We demonstrate that an outbreak is possible even in cases where R0 is less than one, provided that the vector-to-human component of R0 is greater than one and that a certain number of infected vectors are introduced into the affected population. This theory is applied to two real epidemiological dengue situations in the southeastern part of Brazil, one where R0 is less than one and another where R0 is greater than one. In both cases, the model mirrors the real situations with reasonable accuracy.
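A minimal sketch, in Python, of estimating R0 from the initial growth phase: fit an exponential to early incidence to obtain the growth rate and convert it to R0. The simple SIR-type conversion R0 = 1 + Lambda/gamma and the synthetic case counts are illustrative assumptions; the expression appropriate to vector-borne transmission used in the paper is more involved.

```python
# Hedged sketch: synthetic case counts; SIR-type conversion is an assumption.
import numpy as np

days = np.arange(0, 30)
rng = np.random.default_rng(6)
cases = np.round(5 * np.exp(0.12 * days) * rng.lognormal(sigma=0.1, size=days.size))

# Exponential growth rate from a log-linear fit to the early phase.
Lambda, log_c0 = np.polyfit(days, np.log(cases), 1)

gamma = 1.0 / 7.0                    # recovery rate = 1 / infectious period (per day)
R0 = 1.0 + Lambda / gamma
print(f"growth rate = {Lambda:.3f}/day, R0 = {R0:.2f}")
```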
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics 44(2): 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. A naive implementation of the procedure can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
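A minimal sketch, in Python, of the computational kernel mentioned above: evaluating the probability mass of a bivariate normal mixture component over each cell of a binning grid via inclusion-exclusion on the multivariate normal CDF (the integrals needed at every EM iteration). The grid, the component parameters and the variable names are illustrative assumptions; the full binned/truncated EM of the paper is not reproduced.

```python
# Hedged sketch: grid and component parameters are assumptions.
import numpy as np
from scipy.stats import multivariate_normal

# Bin edges for, say, (red-cell volume, haemoglobin concentration).
x_edges = np.linspace(60.0, 120.0, 13)
y_edges = np.linspace(24.0, 36.0, 13)

mean = np.array([90.0, 30.0])
cov = np.array([[40.0, 5.0], [5.0, 4.0]])
comp = multivariate_normal(mean, cov)

def bin_probabilities(x_edges, y_edges, comp):
    """P(component falls in each rectangular bin), by inclusion-exclusion."""
    X, Y = np.meshgrid(x_edges, y_edges, indexing="ij")
    F = comp.cdf(np.dstack([X, Y]))        # CDF evaluated at every bin corner
    return F[1:, 1:] - F[:-1, 1:] - F[1:, :-1] + F[:-1, :-1]

P = bin_probabilities(x_edges, y_edges, comp)
# In an EM pass these probabilities weight the observed bin counts when
# computing responsibilities; their sum is the component mass inside the grid.
print(P.sum())
```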
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance; however, they are known to have a significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties in optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders-of-magnitude performance improvements over the existing techniques in certain classes of networks. It also provides reliability bounds with little overhead.
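A minimal sketch, in Python, of crude Monte Carlo estimation of all-terminal network reliability: sample each edge up or down with its own probability and check connectivity of the surviving graph. The small bridge network and edge reliabilities are illustrative assumptions, and neither the graph-evolution estimators nor the hybrid time/edge partitioning schemes discussed in the article are shown.

```python
# Hedged sketch: example network and edge reliabilities are assumptions.
import numpy as np

# A small 4-node "bridge" network: (u, v, probability the edge is working).
edges = [(0, 1, 0.9), (0, 2, 0.9), (1, 2, 0.8), (1, 3, 0.9), (2, 3, 0.9)]
n_nodes, n_trials = 4, 50_000
rng = np.random.default_rng(7)

def connected(up_edges):
    """Union-find check that all nodes end up in one component."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in up_edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n_nodes)}) == 1

p = np.array([e[2] for e in edges])
up = rng.random((n_trials, len(edges))) < p        # sampled edge states
hits = sum(connected([(u, v) for (u, v, _), ok in zip(edges, row) if ok])
           for row in up)
rel = hits / n_trials
half_width = np.sqrt(rel * (1 - rel) / n_trials)
print(f"estimated all-terminal reliability = {rel:.4f} +/- {half_width:.4f}")
```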
Abstract:
Recent literature has proved that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to “pathological meaningless situations”, since traders can build sequences of portfolios whose risk level tends to −∞ while their expected return tends to +∞, i.e., (risk = −∞, return = +∞). Such a sequence of strategies may be called a “good deal”. This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided too. We point out that a critical property is the absence of short sales. In such a case we first construct a “shadow riskless asset” (SRA) without short sales, and then the good deal is obtained by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
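A minimal sketch, in Python, of the empirical VaR and CVaR of a simulated loss distribution at a given confidence level, to make the two risk measures concrete. The simulated returns and the 95% level are illustrative assumptions; the good-deal and shadow-riskless-asset constructions of the paper are not reproduced.

```python
# Hedged sketch: simulated returns and the confidence level are assumptions.
import numpy as np

rng = np.random.default_rng(8)
losses = -rng.normal(loc=0.05, scale=0.2, size=100_000)   # loss = -return
alpha = 0.95

var = np.quantile(losses, alpha)            # Value at Risk at level alpha
cvar = losses[losses >= var].mean()         # CVaR: expected loss beyond VaR
print(f"VaR_{alpha:.0%} = {var:.3f},  CVaR_{alpha:.0%} = {cvar:.3f}")
```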