994 results for Model calibration


Relevance: 60.00%

Publisher:

Abstract:

Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation’s highway systems. To alleviate their impact on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, the Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and selections of traffic flow parameters. To assess the performance of the different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights into the design of AID for arterial street applications.
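The evaluation metrics used above (DR, FAR, MTTD) are straightforward to compute from a labeled run. The sketch below is a minimal illustration assuming a boolean per-interval alarm series and ground-truth incident windows; the names and the 30-second analysis interval are illustrative, not taken from the dissertation.

    import numpy as np

    def evaluate_aid(incidents, alarms, interval_s=30):
        """Score one AID run.

        incidents: list of (start_idx, end_idx) ground-truth incident windows
        alarms:    boolean array, one entry per analysis interval (True = alarm)
        Returns detection rate (DR), false alarm rate (FAR), and mean time
        to detect (MTTD) in seconds, following the usual definitions.
        """
        detected, delays = 0, []
        covered = np.zeros(len(alarms), dtype=bool)
        for start, end in incidents:
            covered[start:end + 1] = True
            hits = np.flatnonzero(alarms[start:end + 1])
            if hits.size:                      # first alarm inside the window
                detected += 1
                delays.append(hits[0] * interval_s)
        dr = detected / len(incidents)
        # FAR: share of incident-free intervals on which an alarm was raised
        false_alarms = np.logical_and(alarms, ~covered).sum()
        far = false_alarms / max((~covered).sum(), 1)
        mttd = float(np.mean(delays)) if delays else float("nan")
        return dr, far, mttd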


Relevance: 60.00%

Publisher:

Abstract:

Far-field stresses are those present in a volume of rock prior to excavations being created. Estimates of the orientation and magnitude of far-field stresses, often used in mine design, are generally obtained from single-point measurements of stress or from large-scale regional trends. Point measurements can be a poor representation of far-field stresses as a result of excavation-induced stresses and geological structures. For these reasons, far-field stress estimates can be associated with high levels of uncertainty. The purpose of this thesis is to investigate the practical feasibility, applications, and limitations of calibrating far-field stress estimates through tunnel deformation measurements captured using LiDAR imaging. A method that estimates the orientation and magnitude of excavation-induced principal stress changes through back-analysis of deformation measurements from LiDAR-imaged tunnels was developed and tested using synthetic data. If excavation-induced stress change orientations and magnitudes can be accurately estimated, they can be used to calibrate the far-field stress input to numerical models. LiDAR point clouds have proven useful in a number of underground applications, which motivates exploring their use in numerical model calibration. The back-analysis method is founded on the superposition of stresses and requires a two-dimensional numerical model of the deforming tunnel. Principal stress changes of known orientation and magnitude are applied to the model to create calibration curves. Estimation can then be performed by minimizing the squared differences between the measured tunnel deformations and those of the calibration curves. In addition to the back-analysis estimation method, a procedure for measuring tunnel deformation with LiDAR imaging, assembled from previously existing techniques, was documented. Under ideal conditions, the back-analysis method estimated principal stress change orientations within ±5° and magnitudes within ±2 MPa. Results were comparable for four different tunnel profile shapes. Preliminary testing using plastic deformation, a rough tunnel profile, and profile occlusions suggests that the method can work under more realistic conditions. The results from this thesis lay the groundwork for the continued development of a new, inexpensive, and efficient far-field stress estimate calibration method.
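The selection step of such a back-analysis lends itself to a compact sketch: given precomputed calibration-curve deformations, pick the stress change whose curve minimizes the squared differences against the measured profile. This is a minimal illustration assuming deformations sampled at fixed angles around the profile; the data structures are hypothetical, not the thesis implementation.

    import numpy as np

    def estimate_stress_change(measured, calibration_curves):
        """Least-squares match of a measured radial deformation profile
        against precomputed calibration curves.

        measured: array of radial deformations sampled at fixed angles
                  around the tunnel profile (e.g. from LiDAR change detection)
        calibration_curves: dict mapping (orientation_deg, magnitude_MPa)
                  -> deformation array sampled at the same angles, produced
                  beforehand by the 2D numerical model
        """
        best_key, best_sse = None, np.inf
        for key, curve in calibration_curves.items():
            sse = float(np.sum((measured - curve) ** 2))
            if sse < best_sse:
                best_key, best_sse = key, sse
        return best_key, best_sse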

Relevance: 60.00%

Publisher:

Abstract:

This thesis presents a numerical study of the viscoelastic behavior of polypropylene macro-synthetic fiber-reinforced concrete (MSFRC) subjected to temperature variations (-30 °C to +60 °C). The LDPM (lattice discrete particle model), a meso-scale model for heterogeneous composites, is used. To reproduce the MSFRC structural behavior, an extended version of LDPM that includes fiber effects through fiber-concrete interface micromechanics, called LDPM-F, is applied. Model calibration is performed based on three-point bending, cube, and cylinder tests for plain concrete and MSFRC. This is followed by a comprehensive literature study on the variation of mechanical properties with temperature for individual fibers and plain concrete. This literature study and past experimental test results constitute the inputs for the final numerical simulations. The numerical response of the MSFRC three-point bending test is replicated and compared with previously conducted experimental results, and conclusions are drawn. The LDPM numerical model is successfully calibrated using experimental responses on plain concrete. Fiber-concrete interface micromechanical parameters are subsequently fixed, and LDPM-F models are calibrated based on the MSFRC three-point bending test at room temperature. The number of fibers contributing to the crack-bridging mechanism is computed and found to be in good agreement with experimental counts. Temperature-dependence models for the individual constituents of MSFRC, fibers and plain concrete, are implemented in LDPM-F. The model is validated against the MSFRC three-point bending stress-CMOD (crack mouth opening displacement) response reproduced at -30 °C, -15 °C, 0 °C, +20 °C, +40 °C, and +60 °C. It is found that the model describes the temperature-dependent behavior of MSFRC well. At positive temperatures, simulated responses are in good agreement with the experiments; slight disagreement in the negative regime suggests the need for an in-depth study of fiber-matrix interface bond behavior at varying temperatures.
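One simple way to realize such temperature-dependence models is to scale the room-temperature constitutive parameters by interpolated factors from property-vs-temperature tables drawn from the literature. The sketch below illustrates the idea only; the factor values are placeholders, not data from the thesis.

    import numpy as np

    # Illustrative property tables (placeholder values, not thesis data):
    # each maps temperature in Celsius to a scaling factor applied to the
    # corresponding room-temperature parameter.
    TEMPS = np.array([-30.0, -15.0, 0.0, 20.0, 40.0, 60.0])
    MATRIX_TENSILE_FACTOR = np.array([1.20, 1.12, 1.05, 1.00, 0.94, 0.88])
    FIBER_BOND_FACTOR     = np.array([1.15, 1.10, 1.04, 1.00, 0.90, 0.80])

    def scaled_parameter(base_value, temperature_c, factors):
        """Scale a room-temperature LDPM/LDPM-F parameter to a target
        temperature by piecewise-linear interpolation of a factor table."""
        return base_value * np.interp(temperature_c, TEMPS, factors)

    # Example: matrix tensile strength of 3.0 MPa evaluated at -15 °C
    ft_cold = scaled_parameter(3.0, -15.0, MATRIX_TENSILE_FACTOR)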

Relevance: 40.00%

Publisher:

Abstract:

The sensitivity of altitudinal and latitudinal tree-line ecotones to climate change, particularly that of temperature, has received much attention. To improve our understanding of the factors affecting tree-line position, we used the spatially explicit dynamic forest model TreeMig. Although well-suited because of its landscape dynamics functions, TreeMig features a parabolic temperature growth response curve, which has recently been questioned, and the species parameters are not specifically calibrated for cold temperatures. Our main goals were to improve the theoretical basis of the temperature growth response curve in the model and develop a method for deriving that curve's parameters from tree-ring data. We replaced the parabola with an asymptotic curve, calibrated for the main species at the subalpine (Swiss Alps: Pinus cembra, Larix decidua, Picea abies) and boreal (Fennoscandia: Pinus sylvestris, Betula pubescens, P. abies) tree-lines. After fitting new parameters, the growth curve matched observed tree-ring widths better. For the subalpine species, the minimum degree-day sum allowing growth (kDDMin) was lowered by around 100 degree-days; in the case of Larix, the maximum potential ring-width was increased to 5.19 mm. At the boreal tree-line, the kDDMin for P. sylvestris was lowered by 210 degree-days and its maximum ring-width increased to 2.943 mm; for Betula (new in the model) kDDMin was set to 325 degree-days and the maximum ring-width to 2.51 mm; the values from the only boreal sample site for Picea were similar to the subalpine ones, so the same parameters were used. However, adjusting the growth response alone did not improve the model's output concerning species' distributions and their relative importance at tree-line. Minimum winter temperature (MinWiT, mean of the coldest winter month), which controls seedling establishment in TreeMig, proved more important for determining distribution. Picea, P. sylvestris and Betula did not previously have minimum winter temperature limits, so these values were set to the 95th percentile of each species' coldest MinWiT site (respectively -7, -11, -13). In a case study for the Alps, the original and newly calibrated versions of TreeMig were compared with biomass data from the National Forest Inventory (NFI). Both models gave similar, reasonably realistic results. In conclusion, this method of deriving temperature responses from tree-rings works well. However, regeneration and its underlying factors seem more important for controlling species' distributions than previously thought. More research on regeneration ecology, especially at the upper limit of forests, is needed to improve predictions of tree-line responses to climate change further.
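Fitting an asymptotic growth response of this kind to tree-ring data can be sketched as a standard nonlinear least-squares problem. The functional form below is a generic saturating curve chosen for illustration (the paper's exact parameterization may differ), with the minimum degree-day sum and the maximum ring-width appearing as fitted parameters.

    import numpy as np
    from scipy.optimize import curve_fit

    def asymptotic_growth(dd, rw_max, k, dd_min):
        """Generic asymptotic growth response: ring width rises from zero
        at the minimum degree-day sum dd_min and saturates at rw_max.
        (Illustrative form; the thesis-specific curve may differ.)"""
        return rw_max * (1.0 - np.exp(-k * np.clip(dd - dd_min, 0.0, None)))

    def fit_growth_curve(dd_sums, ring_widths):
        """dd_sums: per-year degree-day sums; ring_widths: matched ring
        widths in mm (both numpy arrays)."""
        p0 = [ring_widths.max(), 1e-3, dd_sums.min()]  # rough starting values
        popt, _ = curve_fit(asymptotic_growth, dd_sums, ring_widths,
                            p0=p0, maxfev=10000)
        return dict(zip(["rw_max", "k", "dd_min"], popt))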

Relevance: 40.00%

Publisher:

Abstract:

In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
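The two-part structure separates the probability of any consumption from the amount consumed given consumption. The sketch below illustrates that idea with off-the-shelf regression tools; the column names, covariates, and log-transform are illustrative assumptions, not the EPIC specification.

    import numpy as np
    import statsmodels.formula.api as smf

    def two_part_calibration(df):
        """Two-part calibration sketch for an episodically consumed food.

        df columns (illustrative names): 'recall' - 24-hour recall amount
        (reference measurement, many zeros), 'ffq' - questionnaire intake,
        plus covariates such as 'age' and 'sex'.
        """
        df = df.copy()
        df["consumed"] = (df["recall"] > 0).astype(int)

        # Part 1: probability of any consumption on the recall day
        prob = smf.logit("consumed ~ ffq + age + sex", data=df).fit(disp=0)

        # Part 2: amount consumed, modeled among consumers only
        # (a log transform is one common way to handle skewness)
        consumers = df[df["recall"] > 0].copy()
        consumers["log_amount"] = np.log(consumers["recall"])
        amt = smf.ols("log_amount ~ ffq + age + sex", data=consumers).fit()

        # Calibrated intake: P(consumption) x expected amount if consumed
        p = prob.predict(df)
        sigma2 = amt.mse_resid               # lognormal back-transform term
        amount = np.exp(amt.predict(df) + 0.5 * sigma2)
        return p * amount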

Relevance: 40.00%

Publisher:

Abstract:

Using a suitable Hull and White type formula, we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation, we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and requires minimal computational cost.
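Once a closed-form implied-volatility approximation is available, the calibration itself reduces to a small least-squares problem over the five Heston parameters. The sketch below assumes such an approximation is supplied as a callable; the paper's second-order formula is not reproduced here, and the starting values and bounds are generic illustrations.

    import numpy as np
    from scipy.optimize import least_squares

    def calibrate_heston(strikes, maturities, market_vols, implied_vol_approx):
        """Least-squares calibration of Heston parameters
        (v0, kappa, theta, sigma, rho) against quoted implied vols.

        implied_vol_approx(params, k, t) stands in for a closed-form
        short-maturity approximation such as the paper's second-order one.
        """
        def residuals(params):
            model = np.array([implied_vol_approx(params, k, t)
                              for k, t in zip(strikes, maturities)])
            return model - market_vols

        x0 = np.array([0.04, 1.5, 0.04, 0.5, -0.7])  # typical starting point
        lb = [1e-6, 1e-6, 1e-6, 1e-6, -0.999]
        ub = [2.0, 20.0, 2.0, 5.0, 0.999]
        fit = least_squares(residuals, x0, bounds=(lb, ub))
        return dict(zip(["v0", "kappa", "theta", "sigma", "rho"], fit.x))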

Relevance: 40.00%

Publisher:

Abstract:

During the winters of 1999 and 2000, large avalanches occurred in the ski resort of Las Leñas (Los Andes, Mendoza, Argentina). On 8 September 1999, an avalanche of new, dry snow ran over a path with a 1000 m vertical drop. On 30 June and 1 July 2000, five avalanches of similar vertical drop started as new snow, entrained very wet snow during their descent, and evolved into dense snow avalanches. To use the MN2D dynamics model correctly, calibration of the model parameters is necessary; moreover, no previous work applying dynamics models exists in South America. The events used to calibrate the model occurred during the winters of 1999 and 2000 and are a good sample of the kind of avalanches that can occur in this area of the Andes range. By considering the slope morphology and topography, the snow and meteorological conditions, and the results of the model simulations, it was estimated that these avalanches were not extreme events with a return period greater than one hundred years. This implies that, under natural conditions, bigger, extreme avalanches could happen. In this work, the MN2D dynamics model is calibrated with two different avalanches of the same magnitude: dry and wet. The importance of the topographic data in the simulation is evaluated. It is concluded that the MN2D dynamics model can be used to simulate dry extreme avalanches in the Argentinean Andes but not to simulate extreme wet avalanches, which are much more sensitive to the topography.
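Calibrating an avalanche dynamics model against observed events is often done by sweeping the friction parameters and comparing simulated with observed runout. The sketch below shows that pattern with a generic Voellmy-type (mu, xi) pair; the actual MN2D parameter set and calibration criteria may differ, and simulate_runout is a hypothetical wrapper around a model run.

    import itertools

    def calibrate_friction(simulate_runout, observed_runout_m,
                           mu_grid, xi_grid):
        """Grid-search sketch for two-parameter friction calibration:
        pick the (mu, xi) pair whose simulated runout best matches the
        observed avalanche. simulate_runout(mu, xi) wraps one run of the
        dynamics model and returns runout in meters."""
        best = min(itertools.product(mu_grid, xi_grid),
                   key=lambda p: abs(simulate_runout(*p) - observed_runout_m))
        return best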

Relevance: 40.00%

Publisher:

Abstract:

In the previous study, moisture loss indices were developed based on field measurements from one CIR-foam and one CIR-emulsion construction site. To calibrate these moisture loss indices, additional CIR construction sites were monitored using embedded moisture and temperature sensors. In addition, to determine the optimum timing of an HMA overlay on the CIR layer, the potential of using the stiffness of the CIR layer measured by geo-gauge, instead of the moisture measurement by a nuclear gauge, was explored. Based on monitoring of the moisture and stiffness at seven CIR project sites, the following conclusions are drawn:

1. In some cases the in-situ stiffness remained constant and, in other cases, despite some rainfall, the stiffness of the CIR layers steadily increased during the curing time.

2. The stiffness measured by geo-gauge was affected by a significant amount of rainfall.

3. The moisture indices developed for CIR sites can be used for predicting the moisture level in a typical CIR project. The initial moisture content and temperature were the most significant factors in predicting the future moisture content in the CIR layer.

4. The stiffness of a CIR layer is an extremely useful tool for contractors to use in timing their HMA overlay. To determine the optimal timing of an HMA overlay, it is recommended that the moisture loss index be used in conjunction with the stiffness of the CIR layer.

Relevance: 40.00%

Publisher:

Abstract:

Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' (QUASAR) is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte-Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined.
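A Monte-Carlo identifiability check of this kind can be sketched as a regional sensitivity analysis: sample parameters from their priors, split the runs into behavioral and non-behavioral by goodness of fit, and compare the marginal parameter distributions of the two groups. The sketch below uses a Kolmogorov-Smirnov statistic as the sensitivity score; parameters scoring near zero are effectively insensitive. All names and thresholds are illustrative, not the paper's toolbox.

    import numpy as np
    from scipy.stats import ks_2samp

    def identifiability_scores(run_model, priors, obs, n_samples=2000,
                               behavioral_quantile=0.1):
        """priors: dict name -> (low, high) uniform sampling ranges;
        run_model(params) -> simulated series compared with obs via SSE."""
        rng = np.random.default_rng(0)
        names = list(priors)
        samples = {n: rng.uniform(*priors[n], n_samples) for n in names}
        sse = np.array([
            np.sum((run_model({n: samples[n][i] for n in names}) - obs) ** 2)
            for i in range(n_samples)])
        cut = np.quantile(sse, behavioral_quantile)
        good = sse <= cut                     # behavioral parameter sets
        # Small KS statistic => behavioral and non-behavioral samples look
        # alike => the parameter is effectively non-identifiable.
        return {n: ks_2samp(samples[n][good], samples[n][~good]).statistic
                for n in names}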

Relevance: 40.00%

Publisher:

Abstract:

Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows a complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Thus, better descriptions of physical processes at the watershed scale need to be incorporated into hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations originated through concurrent hydrological processes within a storm event was significant on the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and the roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressure (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations. The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt flood forecasting of a stratiform event with markedly different behavior.
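GLUE conditioning of the kind described here follows a simple recipe: Monte-Carlo sample parameter sets, score each simulated hydrograph against observations, keep the 'behavioral' sets, and weight their predictions to form uncertainty bounds. A minimal sketch follows, using the Nash-Sutcliffe efficiency as the informal likelihood (a common but not unique choice); the threshold and function names are illustrative.

    import numpy as np

    def weighted_quantile(values, weights, q):
        """Quantile of `values` under (normalized) likelihood weights."""
        idx = np.argsort(values)
        cum = np.cumsum(weights[idx])
        return np.interp(q, cum / cum[-1], values[idx])

    def glue(run_model, sample_params, obs, n_samples=5000, nse_threshold=0.5):
        """Minimal GLUE sketch: sample parameter sets, keep 'behavioral'
        ones scoring above a Nash-Sutcliffe threshold, and return
        likelihood-weighted 5-95% bounds for the simulated hydrograph."""
        sims, likes = [], []
        for _ in range(n_samples):
            sim = run_model(sample_params())      # one simulated hydrograph
            nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
            if nse > nse_threshold:               # behavioral simulation
                sims.append(sim)
                likes.append(nse)
        sims, likes = np.asarray(sims), np.asarray(likes)
        lower = np.array([weighted_quantile(sims[:, t], likes, 0.05)
                          for t in range(sims.shape[1])])
        upper = np.array([weighted_quantile(sims[:, t], likes, 0.95)
                          for t in range(sims.shape[1])])
        return lower, upper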

Relevance: 40.00%

Publisher:

Abstract:

The main objective of this paper is to discuss maximum likelihood inference for the comparative structural calibration model (Barnett, Biometrics 25:129-142, 1969), which is frequently used in the problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of n experimental units. We consider asymptotic tests to answer the outlined questions. The methodology is applied to a real data set, and a small simulation study is presented.
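For reference, the comparative structural calibration setup is usually written in the following form (a standard formulation stated here as an assumption; distributional details follow Barnett's paper):

    x_{ij} = \alpha_i + \beta_i \xi_j + \varepsilon_{ij},
    \qquad i = 1, \dots, p, \quad j = 1, \dots, n,

where \xi_j is the latent true value for unit j, the errors \varepsilon_{ij} \sim N(0, \sigma_i^2) are independent, the pair (\alpha_i, \beta_i) describes the relative calibration of instrument i, and \sigma_i^2 its relative accuracy.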

Relevance: 40.00%

Publisher:

Abstract:

In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, one deals with this problem using the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with the errors in the response. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out in order to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are considered to verify the performance of the new approach.
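The effect of ignoring such measurement errors, and a generic moment-based correction, can be shown with a tiny simulation. This sketch is illustrative only: it uses a standard attenuation correction with known heteroscedastic error variances, not the estimators proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    alpha, beta = 1.0, 2.0
    x = rng.uniform(0, 10, n)                  # true concentrations
    tau2 = (0.2 + 0.1 * x) ** 2                # heteroscedastic error variances
    X = x + rng.normal(0.0, np.sqrt(tau2))     # observed, error-prone covariate
    y = alpha + beta * x + rng.normal(0.0, 0.5, n)

    # Naive OLS slope is attenuated by the measurement error in X
    b_naive = np.cov(X, y, ddof=1)[0, 1] / np.var(X, ddof=1)

    # Method-of-moments correction using the (assumed known) error variances
    b_corr = np.cov(X, y, ddof=1)[0, 1] / (np.var(X, ddof=1) - tau2.mean())

    print(f"true {beta:.2f}  naive {b_naive:.3f}  corrected {b_corr:.3f}")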