956 results for Local composition models
Abstract:
It has been estimated that the entire Earth generates heat at a rate of about 40 TW (equivalent to 10,000 nuclear power plants), considered to originate mainly from the radioactive decay of elements such as U, Th and K deposited in the crust and mantle of the Earth. The radioactivity of these elements produces not only heat but also antineutrinos (called geo-antineutrinos), which can be observed by terrestrial detectors. We investigate the possibility of discriminating among Earth composition models predicting different total radiogenic heat generation by observing such geo-antineutrinos at Kamioka and Gran Sasso, assuming KamLAND- and Borexino-type detectors, respectively, at these places. By simulating future geo-antineutrino data as well as reactor antineutrino background contributions, we try to establish to what extent we can discriminate among Earth composition models for given exposures (in units of kt·yr) at these two sites on our planet. We also use information on neutrino mixing parameters coming from solar neutrino data as well as KamLAND reactor antineutrino data in order to estimate the number of geo-antineutrino-induced events. © SISSA/ISAS 2003.
Abstract:
In recent years it has become increasingly important to handle credit risk. Credit risk is the risk associated with the possibility of bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but the counterparty defaults before that time, the payment cannot be effectively performed at maturity, so the owner of the contract loses it entirely or in part. This means that the payoff of the derivative, and consequently its price, depends on the underlying of the basic derivative and on the risk of bankruptcy of the counterparty. To value and hedge credit risk in a consistent way, one needs a quantitative model. We have studied analytical approximation formulas and numerical methods such as the Monte Carlo method in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call prices with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate. We have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
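The comparison of analytical bond-price formulas against a Monte Carlo benchmark can be illustrated with a minimal sketch. The JDCEV dynamics of the abstract are not reproduced here; instead a toy Vasicek short-rate model is used, since it admits a closed-form zero-coupon bond price against which the Monte Carlo estimate can be checked. All parameter values and function names are illustrative assumptions.

```python
import numpy as np

def vasicek_bond_mc(r0=0.03, a=0.5, b=0.04, sigma=0.01, T=1.0,
                    n_paths=50_000, n_steps=100, seed=42):
    """Monte Carlo price of a zero-coupon bond under the Vasicek short rate
    dr = a*(b - r)*dt + sigma*dW, via an Euler scheme and pathwise discounting."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)  # running integral of r over [0, T]
    for _ in range(n_steps):
        integral += r * dt
        r += a * (b - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(np.exp(-integral).mean())

def vasicek_bond_exact(r0=0.03, a=0.5, b=0.04, sigma=0.01, T=1.0):
    """Closed-form Vasicek zero-coupon bond price P(0, T) = A * exp(-B * r0)."""
    B = (1 - np.exp(-a * T)) / a
    A = np.exp((B - T) * (a**2 * b - sigma**2 / 2) / a**2
               - sigma**2 * B**2 / (4 * a))
    return float(A * np.exp(-B * r0))
```

With enough paths the Monte Carlo estimate agrees with the closed form to a few basis points, which is the kind of accuracy check the abstract applies to its series approximations.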
Abstract:
Phase equilibrium data regression is an unavoidable task in obtaining appropriate parameter values for any model to be used in separation equipment design for chemical process simulation and optimization. The accuracy of this process depends on several factors, such as the quality of the experimental data, the selected model and the calculation algorithm. The present paper summarizes the results and conclusions of our research on the capabilities and limitations of existing GE models and on strategies that can be included in correlation algorithms to improve convergence and avoid inconsistencies. The NRTL model has been selected as a representative local composition model. New capabilities of this model, but also several relevant limitations, have been identified, and some examples of the application of a modified NRTL equation are discussed. Furthermore, a regression algorithm has been developed that allows the advisable simultaneous regression of all the condensed-phase equilibrium regions present in ternary systems at constant T and P. It includes specific strategies designed to avoid some of the pitfalls frequently found in commercial regression tools for phase equilibrium calculations. Most of the proposed strategies are based on the geometrical interpretation of the lowest common tangent plane equilibrium criterion, which allows an unambiguous understanding of the behavior of the mixtures. The paper aims to present this work as a whole in order to reveal the effort that must still be devoted to overcoming the difficulties that remain in the phase equilibrium data regression problem.
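The NRTL local composition model that the paper takes as representative can be written down compactly for a binary mixture. The sketch below is the standard binary NRTL activity-coefficient expression, not the authors' modified equation; parameter values in the usage check are illustrative.

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) from the binary NRTL model.

    tau_ij are dimensionless interaction parameters and alpha is the
    non-randomness parameter (commonly 0.2-0.47); G_ij = exp(-alpha*tau_ij).
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)
```

Two standard consistency checks: a pure component is ideal (gamma = 1), and at infinite dilution ln gamma1 reduces to tau21 + tau12*exp(-alpha*tau12).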
Abstract:
Mountain ecosystems will likely be affected by global warming during the 21st century, with substantial biodiversity loss predicted by species distribution models (SDMs). Depending on the geographic extent, elevation range and spatial resolution of the data used in making these models, different rates of habitat loss have been predicted, with associated risk of species extinction. Few coordinated across-scale comparisons have been made using data of different resolution and geographic extent. Here, we assess whether climate-change-induced habitat losses predicted at the European scale (10x10' grid cells) are also predicted from local-scale data and modeling (25x25m grid cells) in two regions of the Swiss Alps. We show that local-scale models predict persistence of suitable habitats for up to 100% of the species that were predicted by a European-scale model to lose all their suitable habitats in the area. The proportion of habitat loss depends on the climate change scenario and the study area. We find good agreement between the mismatch in predictions across scales and the fine-grain elevation range within 10x10' cells. The greatest prediction discrepancy for alpine species occurs in the area with the largest nival zone. Our results suggest elevation range as the main driver of the observed prediction discrepancies. Local-scale projections may better reflect the possibility for species to track their climatic requirements toward higher elevations.
Abstract:
We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called Sparse Pseudo-input GP as a global model. The algorithm is tested on four benchmark problems, whose number of starting points ranges from 10^2 to 10^4. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
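The standard expected improvement criterion that SpLEGO applies inside each local GP model has a closed form given the GP posterior mean and standard deviation at a candidate point. A minimal sketch for minimization (the function name and interface are assumptions, not from the paper):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization at a candidate point, given the GP posterior
    mean mu and standard deviation sigma, and the best observed value f_best:
        EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma
    """
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)  # degenerate (noise-free, known) point
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (f_best - mu) * Phi + sigma * phi
```

EI is maximized over candidate points to balance exploitation (low mean) against exploration (high uncertainty), which is the trade-off the abstract refers to.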
Abstract:
In this work we address the problem of finding efficient and reliable analytical approximation formulas for the calculation of forward implied volatility in LSV models, a problem which reduces to the calculation of option prices as an expansion around the price of the same financial asset under Black-Scholes dynamics. Our approach involves an expansion of the differential operator whose solution represents the price under local stochastic volatility dynamics. Further calculations then allow us to obtain an expansion of the implied volatility without the aid of any special functions or computationally expensive procedures, yielding explicit formulas that are fast to calculate yet as accurate as possible.
Abstract:
The importance of renewable energies for the European electricity market is growing rapidly. This presents transmission grids and the power market in general with new challenges stemming from the higher spatiotemporal variability of power generation. This uncertainty arises because renewable power production depends on weather phenomena, making it difficult to plan and control. We present a sensitivity study of a total solar eclipse in central Europe in March. The weather in Germany and Europe was modeled using the German Weather Service's local area models COSMO-DE and COSMO-EU, respectively (http://www.cosmo-model.org/). The simulations were performed with and without a solar eclipse for the following three situations:
1. An idealized, clear-sky situation for the entire model area (Europe, COSMO-EU)
2. A real weather situation with mostly cloudy skies (Germany, COSMO-DE)
3. A real weather situation with mostly clear skies (Germany, COSMO-DE)
The data should help to evaluate the effects of a total solar eclipse on the weather in the planetary boundary layer. The results show that a total solar eclipse has significant effects, particularly on the main variables for renewable energy production, such as solar irradiation and temperature near the ground.
Abstract:
Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights into the domain which can be helpful in developing powerful models, but they need a modelling framework that helps them to use these insights. Data visualisation is an effective technique for presenting data and eliciting feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of the input space, often work better, since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective and also lacks the involvement of domain experts who could guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The resulting segmentation is then used to develop effective local regression models.
Abstract:
Prognostic procedures can be based on ranked linear models. Ranked regression-type models are designed on the basis of feature vectors combined with a set of relations defined on selected pairs of these vectors. Feature vectors are composed of numerical results of measurements on particular objects or events. Ranked relations defined on selected pairs of feature vectors represent additional knowledge and can reflect experts' opinions about the objects considered. Ranked models have the form of linear transformations of feature vectors onto a line which preserve a given set of relations as well as possible. Ranked models can be designed through the minimization of a special type of convex and piecewise-linear (CPL) criterion function. Some sets of ranked relations cannot be well represented by a single ranked model; decomposing the global model into a family of local ranked models can improve the representation. A procedure for the decomposition of ranked models is described in this paper.
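A CPL criterion of the kind described can be sketched as a hinge-type penalty over ranked pairs: each relation "object i should map below object j" contributes zero if the linear transformation preserves the order with some margin, and a linear penalty otherwise, giving a function that is convex and piecewise linear in the weight vector w. The names and the margin parameter below are assumptions for illustration, not the authors' exact formulation.

```python
def cpl_criterion(w, ranked_pairs, delta=1.0):
    """Convex piecewise-linear (CPL) ranking criterion.

    ranked_pairs is a list of (x_lo, x_hi) feature-vector pairs, meaning
    'x_lo should be mapped below x_hi' by the projection x -> w . x.
    Each relation violated (or satisfied with margin < delta) adds a
    linear penalty, so the total is convex and piecewise linear in w.
    """
    total = 0.0
    for x_lo, x_hi in ranked_pairs:
        proj_gap = sum(wi * (hi - lo) for wi, hi, lo in zip(w, x_hi, x_lo))
        total += max(0.0, delta - proj_gap)
    return total
```

A weight vector that orders all pairs with sufficient margin attains criterion value zero; reversing the order of a pair produces a penalty growing linearly with the violation.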
Abstract:
Chemical stratigraphy, or the study of the variation of chemical elements within sedimentary sequences, has gradually become a well-established tool in the research and correlation of global geologic events. In this paper, 87Sr/86Sr ratios of the Triassic marine carbonates (Muschelkalk facies) of the southeast Iberian Ranges, Iberian Peninsula, are presented and a representative Sr-isotope curve is constructed for the upper Ladinian interval. The studied stratigraphic succession is 102 meters thick, continuous, and well preserved. Previous paleontological data from macro- and microfossil assemblages (ammonites, bivalves, foraminifera, conodonts) and from palynological assemblages suggest a Fassanian-Longobardian age (Late Ladinian). Although diagenetic minerals are present in small amounts, the elemental content of bulk carbonate samples, especially the Sr content, shows a major variation that probably reflects palaeoenvironmental changes. The 87Sr/86Sr curve rises from 0.707649 near the base of the section to 0.707741, then declines rapidly to 0.707624, with a final rise to 0.70787 in the upper part. The data up to meter 80 of the studied succession broadly agree with 87Sr/86Sr ratios from sequences of similar age and complement those data. Moreover, the sequence stratigraphic framework and its key surfaces, which are difficult to recognise on the basis of facies analysis alone, are characterised by combining variations of the Ca, Mg, Mn, Sr and CaCO3 contents.
Abstract:
The inverse Weibull distribution can model failure rates that are quite common in reliability and biological studies. A three-parameter generalized inverse Weibull distribution with decreasing and unimodal failure rate is introduced and studied. We provide a comprehensive treatment of the mathematical properties of the new distribution, including expressions for the moment generating function and the rth generalized moment. The mixture model of two generalized inverse Weibull distributions is investigated and its identifiability is demonstrated. For the first time, we propose a location-scale regression model based on the log-generalized inverse Weibull distribution for modeling lifetime data. In addition, we develop some diagnostic tools for sensitivity analysis. Two applications to real data illustrate the potential of the proposed regression model.
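The abstract does not state the parametrization; a common three-parameter generalized inverse Weibull form takes F(x) = exp(-gamma * (beta/x)^alpha), which reduces to the ordinary inverse Weibull at gamma = 1. A sketch of the cdf and density under that assumption:

```python
import math

def giw_cdf(x, alpha, beta, gamma):
    """CDF of a generalized inverse Weibull: F(x) = exp(-gamma*(beta/x)**alpha),
    for x > 0 with shape alpha > 0, scale beta > 0 and gamma > 0."""
    return math.exp(-gamma * (beta / x) ** alpha)

def giw_pdf(x, alpha, beta, gamma):
    """Density obtained by differentiating the CDF above with respect to x."""
    return (gamma * alpha * beta**alpha * x ** (-(alpha + 1))
            * math.exp(-gamma * (beta / x) ** alpha))
```

As x grows the cdf tends to 1, and a central-difference derivative of the cdf matches the density, which is a quick sanity check on the algebra.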
Abstract:
The aim of this paper is to develop models for experimental open-channel water delivery systems and to assess the use of three data-driven modeling tools toward that end. Water delivery canals are nonlinear dynamical systems and thus should be modeled to meet given operational requirements while capturing all relevant dynamics, including transport delays. Typically, the derivation of first-principles models for open-channel systems is based on the Saint-Venant equations for shallow water, which is a time-consuming task and demands specific expertise. The present paper proposes and assesses the use of three data-driven modeling tools: artificial neural networks, composite local linear models and fuzzy systems. The canal of the Hydraulics and Canal Control Nucleus (Évora University, Portugal) is used as a benchmark: the models are identified using data collected from the experimental facility, and their performance is then assessed against suitable validation criteria. The performance of all models is compared among each other and against the experimental data to show the effectiveness of such tools in capturing all significant dynamics within the canal system and, therefore, in providing accurate nonlinear models that can be used for simulation or control. The models are available upon request to the authors.
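Composite local linear models of the kind assessed can be sketched as a weighted blend of affine submodels, each valid in a region of the input space, combined through normalized Gaussian validity functions (a LOLIMOT-style construction; the names and the one-dimensional interface are assumptions, not the authors' implementation):

```python
import numpy as np

def blend_local_linear(x, centers, widths, coefs):
    """Predict with a composite of local linear models y_i = a_i*x + b_i,
    blended by normalized Gaussian validity functions centred on each region.

    centers/widths define where each local model is valid; coefs is a list
    of (a_i, b_i) pairs, one affine model per region.
    """
    centers = np.asarray(centers, dtype=float)
    widths = np.asarray(widths, dtype=float)
    w = np.exp(-0.5 * ((x - centers) / widths) ** 2)  # validity of each model
    w = w / w.sum()                                    # normalize to sum to 1
    local = np.array([a * x + b for a, b in coefs])    # each local prediction
    return float(w @ local)                            # weighted blend
```

When all local models coincide, the blend reproduces that single line exactly; near a region's centre, the prediction is dominated by that region's local model.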
Abstract:
Conference: CONTROLO’2012, 16-18 July 2012, Funchal