61 results for Fantôme de calibration
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We show how to calibrate CES production and utility functions when indirect taxation affecting inputs and consumption is present. These calibrated functions can then be used in computable general equilibrium models. Taxation modifies the standard calibration procedures since any taxed good has two associated prices and a choice of reference value units has to be made. We also provide an example of computer code to solve the calibration of CES utilities under two alternate normalizations. To our knowledge, this paper fills a methodological gap in the CGE literature.
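The paper's own code is not reproduced in this listing. As a minimal sketch of the underlying idea, assuming a consumer-side CES utility with share parameters alpha_i, ad valorem consumption taxes, and a benchmark valued at gross-of-tax prices (all names and numbers below are illustrative, not the authors' implementation):

```python
# Minimal sketch of CES utility calibration under an ad valorem
# consumption tax; all names and numbers are illustrative, not the
# paper's code. U(x) = (sum_i alpha_i * x_i**rho)**(1/rho).

def calibrate_ces(quantities, net_prices, tax_rates, sigma):
    """Return CES share parameters alpha_i, normalized to sum to one.

    quantities : benchmark consumption levels x_i
    net_prices : producer (net-of-tax) prices
    tax_rates  : ad valorem consumption tax rates t_i
    sigma      : elasticity of substitution (sigma != 1)
    """
    rho = (sigma - 1.0) / sigma
    # Consumers optimize against gross prices p_i = (1 + t_i) * net price;
    # choosing gross rather than net prices as the reference value units is
    # the kind of normalization choice the abstract refers to.
    gross = [(1.0 + t) * p for p, t in zip(net_prices, tax_rates)]
    # First-order conditions imply alpha_i proportional to p_i * x_i**(1 - rho).
    raw = [p * x ** (1.0 - rho) for p, x in zip(gross, quantities)]
    total = sum(raw)
    return [a / total for a in raw]

# Benchmark: two goods, the first taxed at 20%
print(calibrate_ces(quantities=[40.0, 60.0],
                    net_prices=[1.0, 1.0],
                    tax_rates=[0.20, 0.0],
                    sigma=0.5))
```

Valuing the benchmark at net-of-tax prices instead is the other natural normalization; the calibrated shares change with that choice, which is the point the abstract makes about reference value units.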
Abstract:
Catadioptric sensors combine mirrors and lenses to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about its nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of existing omnidirectional sensors and eventually leads to reduced production costs.
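For context, the single-viewpoint property such sensors exploit is usually stated for a hyperboloidal mirror of revolution (a standard formulation in the catadioptric literature, not quoted from the paper):

$$\frac{(z-c)^2}{a^2} - \frac{x^2+y^2}{b^2} = 1, \qquad c = \sqrt{a^2+b^2},$$

whose foci lie at the origin and at $(0,0,2c)$: placing the camera's projection centre at one focus makes every reflected ray pass through the other, so a single effective viewpoint is preserved.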
Abstract:
This paper points out an empirical puzzle that arises when an RBC economy with a job matching function is used to model unemployment. The standard model can generate sufficiently large cyclical fluctuations in unemployment, or a sufficiently small response of unemployment to labor market policies, but it cannot do both. Variable search and separation, finite UI benefit duration, efficiency wages, and capital all fail to resolve this puzzle. However, both sticky wages and match-specific productivity shocks help the model reproduce the stylized facts: both make the firm's flow of surplus more procyclical, thus making hiring more procyclical too.
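The matching function the abstract refers to is, in this literature, almost always Cobb-Douglas (a standard specification, not a detail quoted from the paper):

$$m_t = \mu\, u_t^{\alpha}\, v_t^{1-\alpha}, \qquad f_t = \frac{m_t}{u_t}, \qquad q_t = \frac{m_t}{v_t},$$

where $u_t$ is unemployment, $v_t$ vacancies, $\mu$ matching efficiency, and $f_t$ and $q_t$ are the job-finding and vacancy-filling rates whose cyclicality the paper targets.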
Abstract:
Using a suitable Hull and White type formula, we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation, we accurately calibrate the full set of parameters of the Heston model. One of the reasons our calibration for short maturities is so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and requires minimal computational cost.
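For reference, the full Heston parameter set the abstract mentions is $(v_0, \kappa, \theta, \sigma, \rho)$ in the standard dynamics

$$dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S}, \qquad dv_t = \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{v}, \qquad d\langle W^{S}, W^{v}\rangle_t = \rho\,dt,$$

with $v_0$ the initial variance, $\kappa$ the mean-reversion speed, $\theta$ the long-run variance, $\sigma$ the volatility of variance, and $\rho$ the spot-variance correlation.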
Abstract:
This paper theoretically and empirically documents a puzzle that arises when an RBC economy with a job matching function is used to model unemployment. The standard model can generate sufficiently large cyclical fluctuations in unemployment, or a sufficiently small response of unemployment to labor market policies, but it cannot do both. Variable search and separation, finite UI benefit duration, efficiency wages, and capital all fail to resolve this puzzle. However, either sticky wages or match-specific productivity shocks can improve the model's performance by making the firm's flow of surplus more procyclical, which makes hiring more procyclical too.
Abstract:
Achieving high quality in final products is a challenge in the pharmaceutical industry that requires the control and supervision of all manufacturing steps. This requirement has created the need for fast and accurate analytical methods. Near-infrared (NIR) spectroscopy together with chemometrics fulfills this growing demand: the speed with which it provides relevant information and its versatility across different types of samples make this combination one of the most appropriate. This study focuses on the development of a calibration model able to determine the amount of active pharmaceutical ingredient (API) in industrial granulates using NIR spectroscopy, chemometrics, and process spectra methodology.
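The abstract does not spell out the regression technique; partial least squares (PLS) is the workhorse of NIR chemometrics, so the following is an illustrative sketch with synthetic data, not the authors' model:

```python
# Illustrative sketch of an NIR calibration model via partial least
# squares (PLS) regression with cross-validation. PLS is assumed here
# as the chemometric method; the data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 400))         # 80 NIR spectra, 400 wavelength channels
y = rng.uniform(5.0, 15.0, size=80)    # reference API content (% w/w)

pls = PLSRegression(n_components=5)    # number of latent variables to tune
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV: {rmsecv:.2f} % w/w")
```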
Abstract:
During the winters of 1999 and 2000, large avalanches occurred in the ski resort of Las Leñas (Los Andes, Mendoza, Argentina). On 8 September 1999 an avalanche of new, dry snow ran over a path with a 1000 m vertical drop. On 30 June and 1 July 2000, five avalanches of similar vertical drop, which started as new snow, entrained very wet snow during their descent and evolved into dense snow avalanches. To use the MN2D dynamics model correctly, calibration of the model parameters is necessary; moreover, no previous work using dynamics models exists in South America. The events used to calibrate the model occurred during the winters of 1999 and 2000 and are a good sample of the kind of avalanches that can occur in this area of the Andes range. By considering the slope morphology and topography, the snow and meteorological conditions, and the results of the model simulations, it was estimated that these avalanches were not extreme events with a return period greater than one hundred years. This implies that, in natural conditions, bigger, extreme avalanches could happen. In this work, the MN2D dynamics model is calibrated with two different avalanches of the same magnitude: dry and wet. The importance of the topographic data in the simulation is evaluated. It is concluded that the MN2D dynamics model can be used to simulate dry extreme avalanches in the Argentinean Andes but not extreme wet avalanches, which are much more sensitive to the topography.
Abstract:
Intravascular brachytherapy with beta sources has become a useful technique to prevent restenosis after cardiovascular intervention. In particular, the Beta-Cath high-dose-rate system, manufactured by Novoste Corporation, is a commercially available 90Sr/90Y source for intravascular brachytherapy that is achieving widespread use. Its dosimetric characterization has attracted considerable attention in recent years. Unfortunately, the short ranges of the emitted beta particles and the associated large dose gradients make experimental measurements particularly difficult. This circumstance has motivated a number of papers addressing the characterization of this source by means of Monte Carlo simulation techniques.
Abstract:
Chemical analysis is a well-established procedure for the provenancing of archaeological ceramics. Various analytical techniques are routinely used, and large amounts of data have accumulated so far in data banks. However, in order to exchange results obtained by different laboratories, the respective analytical procedures need to be tested for inter-comparability. In this study, the schemes of analysis used in four laboratories routinely involved in archaeological pottery studies were compared. The techniques investigated were neutron activation analysis (NAA), X-ray fluorescence analysis (XRF), inductively coupled plasma optical emission spectrometry (ICP-OES), and inductively coupled plasma mass spectrometry (ICP-MS). For this comparison, series of measurements on different geological standard reference materials (SRM) were carried out and the results were statistically evaluated. An attempt was also made to establish calibration factors between pairs of analytical setups in order to smooth out the systematic differences among the results.
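As a toy illustration of such a calibration factor (the paper's statistical treatment is not reproduced here; the numbers and the zero-intercept model are assumptions), one setup's SRM results can be regressed on another's:

```python
# Toy illustration (made-up numbers, assumed zero-intercept model):
# a per-element calibration factor between two setups, fitted over
# shared standard reference materials.
import numpy as np

# One element's concentrations (ppm) in five SRMs, setup A vs. setup B
lab_a = np.array([12.1, 45.3, 78.0, 102.5, 150.2])
lab_b = np.array([11.5, 43.0, 74.1, 97.8, 142.9])

# Least-squares factor k minimizing ||lab_a - k * lab_b||^2
k = lab_a @ lab_b / (lab_b @ lab_b)
rms = np.sqrt(np.mean((lab_a - k * lab_b) ** 2))
print(f"calibration factor A<-B: {k:.3f}, RMS residual: {rms:.2f} ppm")
```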
Abstract:
Recently there has been renewed research interest in the properties of non-survey updates of input-output tables and social accounting matrices (SAM). Along with the venerable and well-known RAS scaling method, several alternative procedures related to entropy minimization and other metrics have been suggested, tested, and used in the literature. Whether these procedures will eventually substitute for or merely complement the RAS approach is still an open question. The performance of many of the updating procedures has been tested using some kind of proximity or closeness measure to a reference input-output table or SAM. The first goal of this paper, in contrast, is to propose checking the operational performance of updating mechanisms by comparing the simulation results that ensue from adopting alternative databases for the calibration of a reference applied general equilibrium model. The second goal is to introduce a new updating procedure based on information retrieval principles. This new procedure is then compared, as far as performance is concerned, to two well-known updating approaches: RAS and cross-entropy. The rationale for the suggested cross-validation is that the driving force for having more up-to-date databases is to be able to conduct more current, and hopefully more credible, policy analyses.
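For concreteness, the RAS step itself is a biproportional scaling that can be sketched in a few lines (illustrative data; not the paper's code):

```python
# Minimal sketch of the RAS (biproportional scaling) update mentioned
# above: a prior matrix A0 is alternately row- and column-scaled until
# its margins match the new totals u and v. Data are illustrative.
import numpy as np

def ras(A0, u, v, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) update of A0 to row sums u and column sums v."""
    A = A0.astype(float).copy()
    for _ in range(max_iter):
        A *= (u / A.sum(axis=1))[:, None]   # row scaling (the "r" multipliers)
        A *= v / A.sum(axis=0)              # column scaling (the "s" multipliers)
        if np.allclose(A.sum(axis=1), u, atol=tol):
            break
    return A

A0 = np.array([[10.0, 5.0], [3.0, 12.0]])  # prior flows
u = np.array([18.0, 14.0])                 # target row totals
v = np.array([15.0, 17.0])                 # target column totals (sum(u) == sum(v))
print(ras(A0, u, v))
```

RAS can itself be derived as the solution of a particular cross-entropy minimization subject to the margin constraints, which is one reason the two benchmarks in the comparison are natural companions.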
Abstract:
We present a standard model in which the optimal tax reform is to cut labor taxes and leave capital taxes very high in the short and medium run; only in the very long run would capital taxes be zero. Our model is a version of Chamley's, with heterogeneous agents, no lump-sum transfers, an upper bound on capital taxes, and a focus on Pareto-improving plans. For our calibration, labor taxes should be low for the first ten to twenty years, while capital taxes should be at their maximum. This policy ensures that all agents benefit from the tax reform and that capital grows quickly once the reform begins. Therefore, the long-run optimal tax mix is the opposite of the short- and medium-run tax mix. The initial labor tax cut is financed by deficits that lead to a positive long-run level of government debt, reversing the standard prediction that the government accumulates savings in models with optimal capital taxes. If labor supply is somewhat elastic, the benefits from tax reform are high and can be shifted entirely to capitalists or workers by varying the length of the transition. With inelastic labor supply, the equilibrium frontier has an increasing part, which means that the scope for benefiting the workers is limited and the total benefits from reforming taxes are much lower.
Abstract:
In this project, we first developed a Matlab algorithm implementing the TRL calibration method, whose operation we verified initially through simulations and subsequently with a real example. We then developed another Matlab algorithm implementing the LRM calibration method, which we were only able to verify at the simulation level. Next, using the two algorithms, we compared both calibration schemes through simulations. Finally, by analyzing the results of several simulations calibrated with our TRL program, we investigated the possible causes of the unwanted spikes that appear and identified one of them.
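The project's Matlab code is not included in this listing. As an illustrative sketch (synthetic data, Python rather than Matlab) of the step that TRL calibration hinges on, the propagation constant of the line standard can be recovered from the thru and line measurements alone, independently of the unknown error boxes:

```python
# Illustrative sketch (synthetic data, not the project's Matlab code):
# in TRL calibration the propagation constant gamma of the line standard
# can be recovered from the thru and line measurements alone, because the
# unknown error boxes cancel in T_line @ inv(T_thru).
import numpy as np

gamma, length = 0.2 + 1.5j, 1.0            # assumed "true" gamma (1/m) and extra line length (m)
A = np.array([[1.0, 0.1], [0.05, 1.1]])    # unknown error box, port 1 (cascade form)
B = np.array([[0.9, 0.02], [0.1, 1.0]])    # unknown error box, port 2

L = np.diag([np.exp(-gamma * length), np.exp(gamma * length)])
T_thru = A @ B            # thru standard: the two error boxes back to back
T_line = A @ L @ B        # line standard: extra line section in between

# T_line @ inv(T_thru) = A @ L @ inv(A): its eigenvalues are exp(-gamma*l)
# and exp(+gamma*l), independent of A and B.
eig = np.linalg.eigvals(T_line @ np.linalg.inv(T_thru))
gamma_est = -np.log(eig[np.abs(eig) < 1][0]) / length
print(gamma_est)          # ~ (0.2+1.5j)
```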
Abstract:
Research project carried out during a stay at the National Oceanography Centre of Southampton (NOCS), Great Britain, between May and July 2006. The ability to obtain a precise estimate of sea surface salinity (SSS) is important for investigating and predicting the extent of climate change. The Soil Moisture and Ocean Salinity (SMOS) mission was selected by the European Space Agency (ESA) to obtain sea surface salinity maps at global scale with a short revisit time. Ahead of the SMOS launch, the horizontal variability of SSS and the potential of data retrieved from SMOS measurements to reproduce known oceanographic behaviour are to be analyzed. The overall objective is to fill the gap between reliable input/auxiliary data sources and the tools developed to simulate and process data acquired in the SMOS configuration. The SMOS End-to-end Performance Simulator (SEPS) is an ad hoc simulator developed by the Universitat Politècnica de Catalunya (UPC) to generate data in the SMOS configuration. SEPS input data came from the Ocean Circulation and Climate Advanced Modeling (OCCAM) project, used at NOCS, at different spatial resolutions. By modifying SEPS to accept OCCAM data as input, simulated brightness temperatures were obtained over one month for the different ascending passes covering the selected area. The tasks carried out during the stay at NOCS aimed to provide a reliable technique for external calibration, and hence for bias cancellation; a methodology for temporally averaging the different acquisitions over the ascending passes; and a determination of the best cost-function configuration, before exploiting and investigating the potential of the SEPS/OCCAM data for deriving retrieved SSS with high-resolution patterns.
Abstract:
To reconstruct the climate evolution of the last two million years in the Benguela Current (off the west coast of South Africa) from sea surface temperatures, 60 samples from core ODP 175-1084 were analyzed. To do so, the new calibration of the TEX86 index (Kim et al., 2007) was used, and the results were plotted together with the values obtained from other indices for mean annual air temperature (MAAT) and the degree of continental sedimentary input to marine sediments (BIT). They were also compared with temperature records from other studies in the same area obtained from different proxies. The TEX86 results show some agreement with values obtained in other studies and suggest hypotheses that directly relate the three indices calculated.
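For reference, the TEX86 index is defined from the relative abundances of isoprenoidal glycerol dialkyl glycerol tetraethers (GDGTs) and is mapped linearly onto temperature by the calibration (the Kim et al., 2007 coefficients are not reproduced here):

$$\mathrm{TEX}_{86} = \frac{[\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren'}]}{[\mathrm{GDGT\text{-}1}] + [\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren'}]}, \qquad \mathrm{SST} = a\,\mathrm{TEX}_{86} + b.$$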