973 results for model calibration
Abstract:
Car following (CF) and lane changing (LC) are two primary driving tasks observed in traffic flow, and are thus vital components of traffic flow theory, traffic operations and control. Over the past decades a large number of CF models have been developed in an attempt to describe CF behaviour under a wide range of traffic conditions. Although CF has been widely studied for many years, LC did not receive much attention until recently. Over the last decade, researchers have slowly but surely realized the critical role that LC plays in traffic operations and traffic safety; this realization has motivated significant efforts to model LC decision-making and its impact on traffic. Despite notable progress in modelling CF and LC, our knowledge of these two important topics remains incomplete because of issues related to data, model calibration and validation, and human factors, to name a few. This special issue therefore focuses on the latest developments in modelling, calibrating, and validating these two primary vehicular interactions observed in traffic flow: CF and LC.
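As a concrete illustration of the kind of CF model this special issue is concerned with (not a model taken from the issue itself), the sketch below implements the Intelligent Driver Model; all parameter values are illustrative assumptions.

```python
import numpy as np

def idm_acceleration(v, dv, s, v0=33.3, T=1.6, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model (IDM) acceleration of a following vehicle.

    v  : speed of the follower [m/s]
    dv : approach rate, follower speed minus leader speed [m/s]
    s  : bumper-to-bumper gap to the leader [m]
    Remaining arguments are illustrative IDM parameters (desired speed,
    time headway, max acceleration, comfortable deceleration, jam gap).
    """
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))  # desired dynamic gap
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

# Example: follower at 25 m/s closing on a slower leader 30 m ahead.
print(idm_acceleration(v=25.0, dv=5.0, s=30.0))
```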
Abstract:
Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management, all of which require accurate volatility estimates. The task has, however, become challenging since the 1987 stock market crash, as implied volatilities recovered from stock index options display two patterns, the volatility smirk (skew) and the volatility term structure, which, when examined jointly, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, which consists of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS by extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. The constant-volatility model fails to explain the variation in the rich IVS. It is then found that three factors can explain about 69-88% of the variance in the IVS; of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and a further 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the Hibbert et al. (2008) model. The results show a strongly negative asymmetric return-volatility relationship at various quantiles of the IV distribution, which increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%); OLS therefore underestimates this relationship at upper quantiles. Additionally, the asymmetric relationship is more pronounced for the smirk- (skew-) adjusted volatility index measure than for the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new-VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index, and implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Second, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
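As a rough sketch of the quantile-regression approach described in the second essay (not the thesis's actual specification or data), the following example regresses simulated volatility-index changes on the positive and negative parts of returns at several quantiles; the data-generating step and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: daily index returns and same-day changes in an
# implied-volatility index (a VIX-style series); both are simulated here.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 1000)
ret_pos = np.where(returns > 0, returns, 0.0)
ret_neg = np.where(returns < 0, returns, 0.0)
dvix = -1.5 * ret_pos - 4.0 * ret_neg + rng.normal(0.0, 0.005, 1000)

# Splitting returns into positive and negative parts lets the slopes differ,
# which is how an asymmetric return-volatility relation is captured.
X = sm.add_constant(pd.DataFrame({"ret_pos": ret_pos, "ret_neg": ret_neg}))
for q in (0.05, 0.50, 0.95):
    fit = sm.QuantReg(dvix, X).fit(q=q)
    print(f"q={q:.2f}  beta_pos={fit.params['ret_pos']:+.2f}  "
          f"beta_neg={fit.params['ret_neg']:+.2f}")
```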
Abstract:
Climate change is expected to significantly affect many hydrologic systems, which in turn affects water availability, runoff, and flow in rivers. This study evaluates the impacts of possible future climate change scenarios on the hydrology of the catchment area of the Tungabhadra River, upstream of the Tungabhadra dam. The Hydrologic Engineering Center's Hydrologic Modeling System version 3.4 (HEC-HMS 3.4) is used for hydrological modelling of the study area. The linear-regression-based Statistical DownScaling Model version 4.2 (SDSM 4.2) is used to downscale the daily maximum and minimum temperature and daily precipitation in the four sub-basins of the study area. The large-scale climate variables for the A2 and B2 scenarios obtained from the Hadley Centre Coupled Model version 3 are used. After model calibration and testing of the downscaling procedure, the hydrological model is run for three future periods: 2011-2040, 2041-2070, and 2071-2099. The impacts of climate change on basin hydrology are assessed by comparing the present and future streamflow and evapotranspiration estimates. Results of the water balance study suggest increasing precipitation and runoff and decreasing actual evapotranspiration losses over the sub-basins in the study area.
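A minimal sketch of the kind of baseline-versus-future comparison described above, using hypothetical water-balance numbers rather than the study's HEC-HMS output:

```python
import pandas as pd

# Hypothetical annual water-balance summaries (mm/year) for one sub-basin;
# the actual study derives these from HEC-HMS runs driven by SDSM output.
periods = ["baseline", "2011-2040", "2041-2070", "2071-2099"]
wb = pd.DataFrame(
    {"precipitation": [820.0, 860.0, 905.0, 960.0],
     "runoff":        [310.0, 340.0, 375.0, 420.0],
     "actual_et":     [430.0, 425.0, 415.0, 400.0]},
    index=periods,
)

# Percentage change of each future period relative to the baseline period.
change = 100.0 * (wb.loc[periods[1:]] - wb.loc["baseline"]) / wb.loc["baseline"]
print(change.round(1))
```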
Abstract:
Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90.
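The following is a schematic sketch of a GMM criterion of the general form used here, i.e. minimizing a quadratic form in averaged moment conditions; the toy moment function is a placeholder standing in for the scores of the nonparametric conditional-density estimate that the paper actually matches.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_objective(theta, moment_fn, data, W):
    """Quadratic-form GMM criterion g(theta)' W g(theta)."""
    g = moment_fn(theta, data).mean(axis=0)  # sample average of moment conditions
    return g @ W @ g

def toy_moments(theta, data):
    # Placeholder moments (mean and variance matching); in the paper's setting
    # these would be scores evaluated on model-simulated data.
    mu, sigma = theta
    return np.column_stack([data - mu, (data - mu) ** 2 - sigma ** 2])

data = np.random.default_rng(1).normal(0.2, 1.5, 500)
W = np.eye(2)  # identity weighting; an efficient weight matrix could replace it
res = minimize(gmm_objective, x0=[0.0, 1.0], args=(toy_moments, data, W),
               method="Nelder-Mead")
print(res.x)
```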
Abstract:
This article describes an approach for quantifying microsphere deposition onto iron-oxide-coated sand under the influence of adsorbed Suwannee River Humic Acid (SRHA). The experimental technique involved a triple pulse injection of model latex microspheres (microspheres) in pulses of (1) microspheres, followed by (2) SRHA, and then (3) microspheres, into a column filled with iron-coated quartz sand as a water-saturated porous medium. A random sequential adsorption model (RSA) simulated the gradual rise in the first (microsphere) breakthrough curve (BTC). Using the same model calibration parameters a dramatic increase in concentration at the start of the second particle BTC, generated after SRHA injection, could be simulated by matching microsphere concentrations to extrapolated RSA output. RSA results and microsphere/SRHA recoveries showed that 1 mg of SRHA could block 5.90 ± 0.14 × 10^9 microsphere deposition sites. This figure was consistent between experiments injecting different SRHA masses, despite contrasting microsphere deposition/release regimes generating the second microsphere BTC.
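For illustration, one commonly used third-order RSA available-area (blocking) function is sketched below; it is not necessarily the exact formulation applied in the article, and the coverage values are illustrative.

```python
def rsa_available_area(theta):
    """Low-coverage RSA available-area (blocking) function for hard spheres.

    theta is the fractional surface coverage; this third-order expansion
    (1 - 4*theta + 3.308*theta**2 + 1.407*theta**3) is one commonly used
    form, not necessarily the one used in the cited work.
    """
    return 1.0 - 4.0 * theta + 3.308 * theta ** 2 + 1.407 * theta ** 3

# Deposition slows as sites are blocked: d(theta)/dt = k_dep * C * B(theta),
# which is what produces the gradual rise of the first microsphere BTC.
for theta in (0.0, 0.05, 0.10, 0.20):
    print(theta, round(rsa_available_area(theta), 3))
```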
Integrating Multiple Point Statistics with Aerial Geophysical Data to assist Groundwater Flow Models
Abstract:
Approaches for accounting for heterogeneity have advanced significantly in statistical research, primarily within the framework of stochastic analysis and the development of multiple-point statistics (MPS). Among MPS techniques, the direct sampling (DS) method is tested to determine its ability to delineate heterogeneity from aerial magnetics data in a regional sandstone aquifer intruded by low-permeability volcanic dykes in Northern Ireland, UK. The use of two two-dimensional bivariate training images aids in creating spatial probability distributions of heterogeneities of hydrogeological interest, despite relatively ‘noisy’ magnetics data (i.e. including hydrogeologically irrelevant urban noise and regional geological effects). These distributions are incorporated into a hierarchical system in which previously published density function and upscaling methods are applied to derive regional distributions of the equivalent hydraulic conductivity tensor K. Several K models, determined from several stochastic realisations of MPS dyke locations, are computed within groundwater flow models and evaluated by comparing modelled heads with field observations. Results show a significant improvement in model calibration compared to a simplistic homogeneous and isotropic aquifer model that does not account for the dyke occurrence evidenced by airborne magnetic data. The best model is obtained when normal and reverse polarity dykes are computed separately within the MPS simulations and a probability threshold of 0.7 is applied. The presented stochastic approach also provides an improvement over a previously published deterministic anisotropic model based on the unprocessed (i.e. noisy) airborne magnetics. This demonstrates the potential of coupling MPS with airborne geophysical data for regional groundwater modelling.
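A minimal sketch of the kind of head-based evaluation described above, ranking stochastic K realisations by the misfit between modelled and observed heads; the head values are hypothetical.

```python
import numpy as np

# Hypothetical observed heads (m) and modelled heads for several stochastic
# MPS realisations of the dyke-conditioned K field (one row per realisation).
observed = np.array([52.1, 48.7, 45.3, 50.2, 47.8])
modelled = np.array([
    [51.8, 49.1, 45.9, 50.5, 47.2],   # realisation 1
    [53.4, 47.2, 44.1, 51.8, 49.0],   # realisation 2
    [52.0, 48.9, 45.1, 50.0, 47.9],   # realisation 3
])

# Root-mean-square error of heads, the kind of statistic used to rank
# candidate K models against field observations.
rmse = np.sqrt(((modelled - observed) ** 2).mean(axis=1))
print(rmse.round(2), "best realisation:", int(rmse.argmin()) + 1)
```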
Abstract:
Low-velocity impact damage can drastically reduce the residual strength of a composite structure even when the damage is barely visible. The ability to computationally predict the extent of damage and the compression-after-impact (CAI) strength of a composite structure can potentially lead to the exploration of a larger design space without incurring significant time and cost penalties. A high-fidelity three-dimensional composite damage model, able to predict both low-velocity impact damage and the CAI strength of composite laminates, has been developed and implemented as a user material subroutine in the commercial finite element package ABAQUS/Explicit. The intralaminar damage model component accounts for physically-based tensile and compressive failure mechanisms of the fibres and matrix when subjected to a three-dimensional stress state. Cohesive behaviour was employed to model interlaminar failure between plies, with a bi-linear traction–separation law capturing damage onset and subsequent damage evolution. The virtual tests, set up in ABAQUS/Explicit, were executed in three steps: the first to capture the impact damage, the second to stabilize the specimen by imposing the new boundary conditions required for compression testing, and the third to predict the CAI strength. The observed intralaminar damage features and delamination damage area, as well as the residual strength, are discussed. It is shown that the predicted results for impact damage and CAI strength correlate well with experimental testing without the need for model calibration, which is often required with other damage models.
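As an illustration of the bi-linear traction-separation law mentioned above (a generic single-mode form, not the calibrated properties of the cited model), the following sketch returns the cohesive traction for a given separation:

```python
def bilinear_traction(delta, K=1e5, t0=60.0, Gc=0.5):
    """Bilinear cohesive traction-separation response (single mode).

    delta : separation [mm]; K : penalty stiffness [N/mm^3];
    t0 : onset traction [MPa]; Gc : fracture toughness [N/mm].
    Parameter values are illustrative, not those of the cited model.
    """
    d0 = t0 / K            # separation at damage onset
    df = 2.0 * Gc / t0     # separation at complete failure (area under curve = Gc)
    if delta <= d0:
        return K * delta                         # undamaged linear-elastic branch
    if delta >= df:
        return 0.0                               # fully debonded
    d = df * (delta - d0) / (delta * (df - d0))  # scalar damage variable, 0..1
    return (1.0 - d) * K * delta                 # linear softening branch

for delta in (0.0002, 0.0006, 0.005, 0.02):
    print(delta, round(bilinear_traction(delta), 2))
```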
Abstract:
Dissertation for obtaining the degree of Master in Civil Engineering, in the specialization area of Road Infrastructure and Transport.
Abstract:
INTRODUCTION: New scores have been developed and validated in the US for in-hospital mortality risk stratification in patients undergoing coronary angioplasty: the National Cardiovascular Data Registry (NCDR) risk score and the Mayo Clinic Risk Score (MCRS). We sought to validate these scores in a European population with acute coronary syndrome (ACS) and to compare their predictive accuracy with that of the GRACE risk score. METHODS: In a single-center ACS registry of patients undergoing coronary angioplasty, we used the area under the receiver operating characteristic curve (AUC), a graphical representation of observed vs. expected mortality, and net reclassification improvement (NRI)/integrated discrimination improvement (IDI) analysis to compare the scores. RESULTS: A total of 2148 consecutive patients were included, mean age 63 years (SD 13), 74% male and 71% with ST-segment elevation ACS. In-hospital mortality was 4.5%. The GRACE score showed the best AUC (0.94, 95% CI 0.91-0.96) compared with NCDR (0.87, 95% CI 0.83-0.91, p=0.0003) and MCRS (0.85, 95% CI 0.81-0.90, p=0.0003). In model calibration analysis, GRACE showed the best predictive power. With GRACE, patients were more often correctly classified than with MCRS (NRI 78.7, 95% CI 59.6-97.7; IDI 0.136, 95% CI 0.073-0.199) or NCDR (NRI 79.2, 95% CI 60.2-98.2; IDI 0.148, 95% CI 0.087-0.209). CONCLUSION: The NCDR and Mayo Clinic risk scores are useful for risk stratification of in-hospital mortality in a European population of patients with ACS undergoing coronary angioplasty. However, the GRACE score is still to be preferred.
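A minimal sketch of the discrimination comparison described above, computing the AUC for two competing risk scores and a simple integrated discrimination improvement; the patient data are made up and the IDI formula shown is the standard definition rather than the registry's exact analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: in-hospital death indicator and predicted mortality risk
# from two scores for the same patients (values are made up).
died = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
risk_grace = np.array([0.02, 0.05, 0.60, 0.03, 0.45, 0.10, 0.04, 0.70, 0.02, 0.08])
risk_ncdr = np.array([0.03, 0.08, 0.40, 0.06, 0.20, 0.12, 0.05, 0.55, 0.04, 0.09])

# Discrimination: area under the ROC curve for each score.
print("GRACE AUC:", roc_auc_score(died, risk_grace))
print("NCDR AUC:", roc_auc_score(died, risk_ncdr))

# Integrated discrimination improvement (IDI): change in the mean separation
# of predicted risk between events and non-events when switching scores.
def idi(y, p_new, p_old):
    return ((p_new[y == 1].mean() - p_new[y == 0].mean())
            - (p_old[y == 1].mean() - p_old[y == 0].mean()))

print("IDI (GRACE vs NCDR):", round(idi(died, risk_grace, risk_ncdr), 3))
```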
Abstract:
Dissertation to obtain the degree of Master in Chemical and Biochemical Engineering
Abstract:
This thesis deals with the problems of option pricing and hedging in a regime-switching exponential Lévy model. Such a model is built on a Markov additive process, much as the Black-Scholes model is based on Brownian motion. Because several sources of randomness are present, the market is incomplete, which renders the theoretical developments initiated by Black, Scholes and Merton for complete markets inoperative. We show in this thesis that certain results from the theory of Markov additive processes make it possible to solve the pricing and hedging problems. In particular, we characterize the martingale measure that minimizes the entropy relative to the historical probability measure; we also derive explicitly, under certain conditions, the optimal portfolio that allows an agent to locally minimize the associated quadratic risk. Furthermore, from a more practical perspective, we characterize the price of a European option as the unique viscosity solution of a system of nonlinear integro-differential equations. This is a first step towards the construction of numerical schemes for approximating that price.
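As a rough illustration of the regime-switching exponential Lévy setting (not the thesis's model or calibration), the sketch below simulates one path of a Markov-chain-modulated jump-diffusion; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-state Markov chain modulating the parameters of an exponential Levy
# (here jump-diffusion) model; all parameter values are illustrative.
P = np.array([[0.98, 0.02],      # regime transition probabilities per step
              [0.05, 0.95]])
mu    = np.array([0.08, -0.02])  # drift per year, by regime
sigma = np.array([0.15, 0.35])   # diffusion volatility, by regime
lam   = np.array([2.0, 10.0])    # jump intensity (jumps/year), by regime
jump_mu, jump_sd = -0.02, 0.05   # log-jump size distribution

T, n = 1.0, 252
dt = T / n
regime, logS = 0, np.log(100.0)
path = [np.exp(logS)]
for _ in range(n):
    # Diffusion increment plus a compound-Poisson jump part for this regime.
    n_jumps = rng.poisson(lam[regime] * dt)
    jumps = rng.normal(jump_mu, jump_sd, n_jumps).sum()
    logS += (mu[regime] - 0.5 * sigma[regime] ** 2) * dt \
            + sigma[regime] * np.sqrt(dt) * rng.normal() + jumps
    path.append(np.exp(logS))
    regime = rng.choice(2, p=P[regime])   # regime switch for the next step

print(path[-1])
```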
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is provided by land-use models. Through the application of land-use models it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of the driving forces. Modeling land use and land-use change has a long tradition. On the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable efficient development, testing and use of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes, etc.) and takes responsibility for their administration. By means of a scripting language (Python), extended with language features specific to land-use modeling, these data structures can be accessed and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and support for the analysis of simulation results, have been integrated into the generic framework. During the implementation of SITE, particular emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, the socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics over the historical period from 1981 to 2002. Analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use change in the Indonesian research area was mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Because of the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, and respective reference land-use maps were compiled for this period. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and of the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
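As an illustration of the calibration approach described above (a genetic algorithm optimizing a figure-of-merit map comparison), the following minimal Python sketch couples a toy land-use model to a simple GA. The toy_landuse_model function, the synthetic maps, and the single calibration parameter are placeholders and not part of SITE; only the overall structure of the loop mirrors the description in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def figure_of_merit(simulated, reference, baseline):
    """Figure of merit: hits / (hits + misses + wrong change + false alarms)."""
    obs_change = reference != baseline
    sim_change = simulated != baseline
    hits = np.sum(obs_change & sim_change & (simulated == reference))
    wrong = np.sum(obs_change & sim_change & (simulated != reference))
    misses = np.sum(obs_change & ~sim_change)
    false_alarms = np.sum(~obs_change & sim_change)
    return hits / max(hits + wrong + misses + false_alarms, 1)

def toy_landuse_model(params, baseline, suitability):
    """Placeholder land-use model run: converts the most 'suitable' forest
    cells to agriculture; params = (conversion threshold,)."""
    threshold, = params
    result = baseline.copy()
    result[(baseline == 0) & (suitability > threshold)] = 1   # 0=forest, 1=agriculture
    return result

# Synthetic maps standing in for the 1981 baseline and the 2002 reference map.
baseline = np.zeros((50, 50), dtype=int)
suitability = rng.random((50, 50))
reference = toy_landuse_model((0.7,), baseline, suitability)

# Minimal genetic algorithm (mutation only, since there is a single parameter).
pop = rng.uniform(0.0, 1.0, size=(20, 1))
for generation in range(30):
    fitness = np.array([figure_of_merit(toy_landuse_model(p, baseline, suitability),
                                        reference, baseline) for p in pop])
    parents = pop[np.argsort(fitness)[-10:]]                   # truncation selection
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.05, (10, 1))
    pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])

best = pop[np.argmax([figure_of_merit(toy_landuse_model(p, baseline, suitability),
                                      reference, baseline) for p in pop])]
print("calibrated threshold:", best)
```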
Abstract:
We are investigating how to program robots so that they learn from experience. Our goal is to develop principled methods of learning that can improve a robot's performance of a wide range of dynamic tasks. We have developed task-level learning that successfully improves a robot's performance of two complex tasks, ball-throwing and juggling. With task-level learning, a robot practices a task, monitors its own performance, and uses that experience to adjust its task-level commands. This learning method serves to complement other approaches, such as model calibration, for improving robot performance.
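A toy illustration of the task-level learning loop described above (practice, monitor performance, adjust the task-level command), using a hypothetical ball-throwing example rather than the authors' robot or algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

def execute_throw(commanded_speed):
    """Stand-in for the real robot and environment: the achieved landing
    distance differs from the nominal model (unmodelled dynamics, noise)."""
    return 0.085 * commanded_speed ** 2 + rng.normal(0.0, 0.02)

target_distance = 2.0                     # desired landing distance [m]
command = np.sqrt(target_distance / 0.1)  # initial command from a crude nominal model

# Task-level learning loop: practice, observe the task-level error, and adjust
# the command directly, rather than re-identifying the underlying dynamics.
for trial in range(10):
    achieved = execute_throw(command)
    error = target_distance - achieved
    command += 0.5 * command / (2.0 * achieved) * error  # damped Newton-like update
    print(trial, round(achieved, 3), round(command, 3))
```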
Abstract:
This work presents the sensitivity analysis of a brand-perception and marketing-investment-adjustment model developed in the Simulation Laboratory of the Universidad del Rosario. This degree project consists of an introduction to sensitivity analysis and its complement, uncertainty analysis. Both analyses are then demonstrated on a simple application example of the model, by exhaustively and rigorously applying the steps described in the first part. This is followed by a discussion of the problem of measuring magnitudes, which proves to be the most complex aspect of applying the model in a practical context, and finally conclusions are drawn about the results of the analyses.
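As a toy illustration of the two analyses discussed (one-at-a-time sensitivity analysis and Monte Carlo uncertainty analysis), the sketch below uses a hypothetical brand-perception response function, not the Simulation Laboratory's model:

```python
import numpy as np

rng = np.random.default_rng(7)

def brand_perception(ad_spend, promo_spend, churn_rate):
    """Hypothetical stand-in for the simulation model's output (perceived value)."""
    return 40.0 + 12.0 * np.log1p(ad_spend) + 6.0 * np.sqrt(promo_spend) - 55.0 * churn_rate

base = {"ad_spend": 10.0, "promo_spend": 4.0, "churn_rate": 0.15}

# One-at-a-time sensitivity: perturb each input by +/-10% around the base case
# and record the swing in the output.
for name, value in base.items():
    lo = brand_perception(**{**base, name: 0.9 * value})
    hi = brand_perception(**{**base, name: 1.1 * value})
    print(f"{name}: output swing = {hi - lo:+.2f}")

# Uncertainty analysis: propagate input distributions through the model.
samples = brand_perception(rng.lognormal(np.log(10.0), 0.3, 5000),
                           rng.lognormal(np.log(4.0), 0.3, 5000),
                           rng.uniform(0.10, 0.20, 5000))
print("output mean:", samples.mean().round(2), "std:", samples.std().round(2))
```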