983 results for Monte-Carlo Method
Abstract:
Most distributed generation and smart grid research is dedicated to studies of network operation parameters, reliability, and related topics. However, many of these works use traditional test systems, for instance, the IEEE test systems. This paper proposes voltage magnitude and reliability studies in the presence of fault conditions, considering realistic conditions found in countries like Brazil. The methodology combines a hybrid fuzzy-set/Monte Carlo method, based on fuzzy-probabilistic models, with a remedial-action algorithm based on optimal power flow. To illustrate the application of the proposed method, the paper includes a case study of a real 12-bus sub-transmission network.
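A minimal sketch of how such a hybrid fuzzy/Monte Carlo reliability loop can look, assuming a toy three-feeder system, made-up outage rates, and a triangular fuzzy load; the paper's actual 12-bus network and remedial-action OPF are not reproduced here:

```python
# Hybrid fuzzy/Monte Carlo reliability sketch (illustrative only; the feeder
# count, outage rates, and the voltage-drop proxy are assumptions).
import numpy as np

rng = np.random.default_rng(42)

FAILURE_RATE = np.array([0.02, 0.05, 0.01])   # assumed per-scenario outage probabilities
N_SCENARIOS = 100_000

def fuzzy_load(alpha, rng):
    """Sample a load multiplier from a triangular fuzzy number (0.8, 1.0, 1.2)
    restricted to its alpha-cut, a common fuzzy-probabilistic device."""
    lo, mode, hi = 0.8, 1.0, 1.2
    lo_a = lo + alpha * (mode - lo)            # alpha-cut lower bound
    hi_a = hi - alpha * (hi - mode)            # alpha-cut upper bound
    return rng.uniform(lo_a, hi_a)

violations = 0
for _ in range(N_SCENARIOS):
    outages = rng.random(3) < FAILURE_RATE     # sample component outages
    load = fuzzy_load(alpha=0.5, rng=rng)      # fuzzy load level
    # crude proxy: each outage depresses voltage; heavier load makes it worse
    voltage = 1.0 - 0.04 * outages.sum() * load
    if voltage < 0.95:                         # lower voltage limit
        violations += 1

print(f"P(voltage violation) ~ {violations / N_SCENARIOS:.4f}")
```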
Abstract:
This work is divided into two distinct parts. The first part is a study of the metal-organic framework UiO-66Zr, where the aim was to determine the force field that best describes the adsorption equilibrium properties of two different gases, methane and carbon dioxide. The second part focuses on the topology of single-wall carbon nanotubes for ethane adsorption; the aim was to simplify the solid-fluid force field model as much as possible to increase the computational efficiency of the Monte Carlo simulations. Both adsorbents were chosen for their potential use in adsorption processes such as carbon dioxide capture and storage, natural gas storage, separation of biogas components, and olefin/paraffin separations. The adsorption studies on the two porous materials were performed by molecular simulation using the grand canonical Monte Carlo (μ,V,T) method, over the temperature range 298-343 K and the pressure range 0.06-70 bar. Calibration curves of pressure and density as a function of chemical potential and temperature for the three adsorbates under study were obtained by Monte Carlo simulation in the canonical (N,V,T) ensemble; polynomial fitting and interpolation of the resulting data allowed the pressure and gas density to be determined at any chemical potential. The adsorption equilibria of methane and carbon dioxide in UiO-66Zr were simulated and compared with the experimental data obtained by Jasmina H. Cavka et al. The results show that the best force field for both gases is a chargeless united-atom force field based on the TraPPE model. Using this validated force field, it was possible to estimate the isosteric heats of adsorption and the Henry constants. For the grand canonical Monte Carlo simulations of carbon nanotubes, we conclude that the fastest runs are obtained with a force field that approximates the nanotube as a smooth cylinder; this approximation gives execution times 1.6 times faster than typical atomistic runs.
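For orientation, below is a minimal sketch of the grand canonical (μ,V,T) acceptance rules that drive such adsorption simulations, with toy methane-like numbers; it is a textbook-style sketch, not the thesis's simulation code:

```python
# Grand canonical Monte Carlo acceptance rules (standard forms); the mu, dU,
# box volume, and particle count below are illustrative assumptions.
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def de_broglie(m, T):
    """Thermal de Broglie wavelength Lambda = h / sqrt(2*pi*m*kB*T)."""
    return H / np.sqrt(2.0 * np.pi * m * K_B * T)

def p_insert(mu, V, T, N, dU, lam):
    """Acceptance probability of a trial insertion, dU = U(N+1) - U(N)."""
    beta = 1.0 / (K_B * T)
    return min(1.0, V / (lam**3 * (N + 1)) * np.exp(beta * (mu - dU)))

def p_delete(mu, V, T, N, dU, lam):
    """Acceptance probability of a trial deletion, dU = U(N-1) - U(N)."""
    beta = 1.0 / (K_B * T)
    return min(1.0, lam**3 * N / V * np.exp(-beta * (mu + dU)))

# toy numbers: a methane-like particle in a 10 nm^3 box at 298 K
m_ch4 = 16.04 * 1.66054e-27          # kg
lam = de_broglie(m_ch4, 298.0)
print(p_insert(mu=-6e-20, V=1e-26, T=298.0, N=100, dU=-2e-20, lam=lam))
```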
Abstract:
Doctoral Thesis in Science and Engineering of Polymers and Composites
Abstract:
Extreme value theory (EVT) deals with the occurrence of extreme phenomena. The tail index is a very important parameter appearing in the estimation of the probability of rare events. Under a semiparametric framework, inference requires the choice of a number k of upper order statistics to be considered. This is the crux of the matter, and there is no definitive formula for choosing it, since a small k leads to high variance while large values of k tend to increase the bias. Several methodologies have emerged in the literature, especially concerning the most popular Hill estimator (Hill, 1975). In this work we compare through simulation well-known procedures presented in Drees and Kaufmann (1998), Matthys and Beirlant (2000), Beirlant et al. (2002) and de Sousa and Michailidis (2004) with a heuristic scheme considered in Frahm et al. (2005), proposed for the estimation of a different tail measure but in a similar context. We will see that the new method may be an interesting alternative.
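A minimal sketch of the Hill estimator and its sensitivity to the choice of k, using a synthetic Pareto sample rather than the paper's simulation design:

```python
# Hill (1975) tail-index estimator on a toy Pareto sample (true alpha = 2);
# the sample size and k values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.pareto(2.0, size=5000) + 1.0

def hill(sample, k):
    """Hill estimate of gamma = 1/alpha from the k largest order statistics."""
    xs = np.sort(sample)[::-1]               # descending order statistics
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

for k in (50, 200, 1000):
    g = hill(x, k)
    print(f"k={k:5d}  gamma_hat={g:.3f}  alpha_hat={1/g:.2f}")
```

Running this shows the bias-variance trade-off the abstract describes: small k gives noisy estimates, large k gives biased ones.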
Abstract:
Extreme value models are widely used in different areas. The Birnbaum–Saunders distribution is receiving considerable attention due to its physical arguments and its good properties. We propose a methodology based on extreme value Birnbaum–Saunders regression models, which includes model formulation, estimation, inference and checking. We further conduct a simulation study to evaluate its performance. A statistical analysis of real-world extreme value environmental data using the methodology is provided as an illustration.
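As a small illustration of the distribution at the core of the methodology, here is a sketch of sampling from the Birnbaum–Saunders distribution via its standard normal representation; the regression structure of the paper is not reproduced, and the parameter values are assumptions:

```python
# Birnbaum-Saunders sampling via T = beta * (a*Z/2 + sqrt((a*Z/2)^2 + 1))^2,
# Z ~ N(0,1); checked against the known mean beta * (1 + alpha^2 / 2).
import numpy as np

def rbs(alpha, beta, size, rng):
    z = rng.standard_normal(size)
    w = alpha * z / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0))**2

rng = np.random.default_rng(1)
t = rbs(alpha=0.5, beta=2.0, size=100_000, rng=rng)
print(f"sample mean {t.mean():.3f} vs theory {2.0 * (1 + 0.5**2 / 2):.3f}")
```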
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This area has developed very rapidly. Nevertheless, several empirical studies suggest that the performance of such models is not always adequate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as kurtosis and symmetry, as well as two estimators (method of moments and maximum likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented to illustrate some of the theoretical results. An empirical study is undertaken for the DM-US exchange rate.
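To make the Monte Carlo idea concrete, here is a small sketch simulating a plain ARCH(1) process and checking its kurtosis against the known closed form; the QMACH specification itself is not reproduced, and the parameter values are assumptions:

```python
# Monte Carlo check of ARCH(1) kurtosis against the closed form
# 3*(1 - a1^2)/(1 - 3*a1^2), valid for 3*a1^2 < 1 with Gaussian innovations.
import numpy as np

rng = np.random.default_rng(2)
omega, a1 = 0.1, 0.3                  # ARCH(1): sigma_t^2 = omega + a1*eps_{t-1}^2
T, n_rep = 4_000, 50

kurts = []
for _ in range(n_rep):
    eps = np.zeros(T)
    eps[0] = rng.standard_normal() * np.sqrt(omega / (1 - a1))  # unconditional sd
    for t in range(1, T):
        sig2 = omega + a1 * eps[t - 1]**2
        eps[t] = np.sqrt(sig2) * rng.standard_normal()
    e = eps[500:]                     # drop burn-in
    kurts.append(((e - e.mean())**4).mean() / e.var()**2)

theory = 3 * (1 - a1**2) / (1 - 3 * a1**2)
print(f"MC kurtosis {np.mean(kurts):.2f} vs theory {theory:.2f}")
```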
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimating general DLV models.
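A minimal sketch of the kernel-smoothing step, estimating a conditional moment from a long simulation with a Nadaraya–Watson smoother; the toy linear model and bandwidth are assumptions:

```python
# Nadaraya-Watson estimate of E[y|x] from a long simulated sample
# (toy data-generating process; bandwidth h is an assumption).
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = rng.standard_normal(n)                     # conditioning variable from simulation
y = 0.5 * x + rng.standard_normal(n) * 0.3     # simulated outcome

def nw_moment(x_eval, x_sim, y_sim, h=0.1):
    """Gaussian-kernel Nadaraya-Watson estimate of E[y | x = x_eval]."""
    w = np.exp(-0.5 * ((x_sim - x_eval) / h) ** 2)
    return np.sum(w * y_sim) / np.sum(w)

print(nw_moment(1.0, x, y))   # ~0.5, the true conditional mean at x = 1
```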
Abstract:
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. We show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
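For intuition, a minimal sketch of the random-walk state equation underlying TVP-VARs; the dimensions and innovation covariance are illustrative assumptions, and the paper's evolving-cointegrating-space specification is more involved than this:

```python
# Random-walk evolution of VAR coefficients, beta_t = beta_{t-1} + eta_t,
# eta_t ~ N(0, Q); T, k, Q, and the initial state are toy assumptions.
import numpy as np

rng = np.random.default_rng(4)
T, k = 200, 3                       # periods, number of coefficients
Q = 0.01 * np.eye(k)                # state innovation covariance (assumed)

beta = np.zeros((T, k))
beta[0] = np.array([0.5, -0.2, 0.1])
for t in range(1, T):
    beta[t] = beta[t - 1] + rng.multivariate_normal(np.zeros(k), Q)

print(beta[-1])                     # drifted coefficients at the final period
```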
Abstract:
Spatio-temporal clusters in the 1997-2003 fire sequences of the Tuscany region (central Italy) have been identified and analysed using the scan statistic, a method originally devised to detect clusters in epidemiology. The results show that the method reliably finds clusters of events and evaluates their significance via Monte Carlo replication. Evaluating the presence and significance of spatial and temporal patterns in fire occurrence could have a great impact on forthcoming studies of fire occurrence prediction.
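A minimal sketch of the Monte Carlo significance test behind the scan statistic: the observed maximum window count is compared against replicates generated under spatial randomness (the 1-D grid, window size, and clustered toy data are assumptions):

```python
# Monte Carlo p-value for a 1-D scan statistic: max count in a sliding window,
# compared against replicates under complete spatial randomness (CSR).
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_events, win = 100, 300, 5

def max_window_count(events):
    counts = np.bincount(events, minlength=n_cells)
    return max(counts[i:i + win].sum() for i in range(n_cells - win + 1))

observed = max_window_count(rng.integers(0, n_cells // 4, n_events))  # clustered toy data
replicates = [max_window_count(rng.integers(0, n_cells, n_events))    # CSR null
              for _ in range(999)]
p = (1 + sum(r >= observed for r in replicates)) / (1 + len(replicates))
print(f"Monte Carlo p-value: {p:.3f}")
```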
Abstract:
The recent developments in high magnetic field 13C magnetic resonance spectroscopy, with improved localization and shimming techniques, have led to important gains in sensitivity and spectral resolution of 13C in vivo spectra in the rodent brain, enabling the separation of several 13C isotopomers of glutamate and glutamine. In this context, the assumptions used in spectral quantification might have a significant impact on the determination of the 13C concentrations and the related metabolic fluxes. In this study, the time domain spectral quantification algorithm AMARES (advanced method for accurate, robust and efficient spectral fitting) was applied to 13C magnetic resonance spectroscopy spectra acquired in the rat brain at 9.4 T, following infusion of [1,6-13C2]glucose. Using both Monte Carlo simulations and in vivo data, the goals of this work were: (1) to validate the quantification of in vivo 13C isotopomers using AMARES; (2) to assess the impact of prior knowledge on the quantification of in vivo 13C isotopomers using AMARES; (3) to compare AMARES and LCModel (linear combination of model spectra) for the quantification of in vivo 13C spectra. AMARES led to accurate and reliable 13C spectral quantification, similar to that obtained using LCModel, when the frequency shifts, J-coupling constants and phase patterns of the different 13C isotopomers were included as prior knowledge in the analysis.
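As a rough illustration of the Monte Carlo validation idea, the sketch below fits the amplitude of a synthetic damped resonance with its frequency and damping fixed as prior knowledge, then inspects the spread over noise realizations; all values are toy assumptions, not the AMARES implementation:

```python
# Monte Carlo assessment of a time-domain amplitude fit: with frequency and
# damping fixed as prior knowledge, the amplitude fit reduces to linear least
# squares (dwell time, damping, frequency, and noise level are assumptions).
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(2048) * 1e-3                                # time axis, s
model = np.exp(-t / 0.05) * np.cos(2 * np.pi * 50 * t)    # fixed "prior knowledge" shape
true_amp = 3.0

amps = []
for _ in range(500):
    fid = true_amp * model + rng.standard_normal(t.size) * 0.5
    amps.append(np.dot(model, fid) / np.dot(model, model))  # linear LS amplitude

print(f"amplitude: {np.mean(amps):.3f} +/- {np.std(amps):.3f} (true {true_amp})")
```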
Abstract:
As part of a project to use the long-lived (T1/2 = 1200 a) 166mHo as reference source in its reference ionisation chamber, IRA standardised a commercially acquired solution of this nuclide using the 4πβ-γ coincidence and 4πγ (NaI) methods. The 166mHo solution supplied by Isotope Product Laboratories was measured to have about 5% europium impurities (3% 154Eu, 0.94% 152Eu and 0.9% 155Eu). Holmium had therefore to be separated from europium, and this was carried out by means of ion-exchange chromatography. The holmium fractions were collected without europium contamination: 162 h long HPGe gamma measurements indicated no europium impurity (detection limits of 0.01% for 152Eu and 154Eu, and 0.03% for 155Eu). The primary measurement of the purified 166mHo solution with the 4π (PC) β-γ coincidence technique was carried out at three gamma energy settings: a window around the 184.4 keV peak and gamma thresholds at 121.8 and 637.3 keV. The results show very good self-consistency, and the activity concentration of the solution was evaluated to be 45.640 ± 0.098 kBq/g (0.21% with k = 1). The activity concentration of this solution was also measured by integral counting with a well-type 5″ × 5″ NaI(Tl) detector and efficiencies computed by Monte Carlo simulations using the GEANT code. These measurements were mutually consistent, while the resulting weighted average of the 4π NaI(Tl) method was found to agree within 0.15% with the result of the 4πβ-γ coincidence technique. An ampoule of this solution and the measured value of the concentration were submitted to the BIPM as a contribution to the Système International de Référence.
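A minimal sketch of the idealized principle behind the 4πβ-γ coincidence method: the product Nβ·Nγ/Nc recovers the source activity independently of the detection efficiencies (the counts below are simulated toys, not IRA's measurement data):

```python
# Idealized 4pi beta-gamma coincidence counting: with independent beta and gamma
# efficiencies e_b, e_g, N_beta*N_gamma/N_coinc = (e_b*N0)(e_g*N0)/(e_b*e_g*N0) = N0.
# Activity, live time, and efficiencies are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
activity, t_live = 45_640.0, 100.0           # Bq and seconds (toy values)
e_beta, e_gamma = 0.6, 0.25

decays = rng.poisson(activity * t_live)
beta_hit = rng.random(decays) < e_beta
gamma_hit = rng.random(decays) < e_gamma

n_b, n_g = beta_hit.sum(), gamma_hit.sum()
n_c = (beta_hit & gamma_hit).sum()
print(f"estimated activity: {n_b * n_g / n_c / t_live:.0f} Bq (true {activity:.0f})")
```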
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. It is shown that as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of DLV models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
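Complementing the kernel-smoothing sketch shown after the earlier version of this abstract, here is a minimal sketch of the method-of-moments step that sits on top of the simulated moments, using a toy linear model, common random numbers across parameter values, and a simple slope as the moment summary (identity weighting; not the paper's estimator):

```python
# Simulated method of moments on a toy model y = theta*x + noise: match the
# data moment (regression slope) to the moment implied by a long simulation.
import numpy as np

rng = np.random.default_rng(8)
theta_true = 0.7
x_obs = rng.standard_normal(1_000)
y_obs = theta_true * x_obs + 0.2 * rng.standard_normal(1_000)

x_sim = rng.standard_normal(200_000)   # common random numbers across theta values
u_sim = rng.standard_normal(200_000)

def simulated_moment(theta):
    y_sim = theta * x_sim + 0.2 * u_sim
    return np.polyfit(x_sim, y_sim, 1)[0]          # slope as moment summary

m_data = np.polyfit(x_obs, y_obs, 1)[0]

def objective(theta):
    g = m_data - simulated_moment(theta)
    return g * g                                    # identity weighting

grid = np.linspace(0.0, 1.5, 151)
print(f"theta_hat = {grid[np.argmin([objective(t) for t in grid])]:.2f}")
```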
Abstract:
Since 1895, when X-rays were discovered, ionizing radiation has become part of our life. Its use in medicine has brought significant health benefits to the population globally. The benefit of any diagnostic procedure is to reduce the uncertainty about the patient's health. However, there are potential detrimental effects of radiation exposure, and radiation protection authorities have therefore become strict regarding the control of radiation risks.

There are various situations where the radiation risk needs to be evaluated. International authority bodies point to the increasing number of radiologic procedures and recommend population surveys. These surveys provide valuable data to public health authorities, helping them to prioritize and focus on the patient groups in the population that are most highly exposed. On the other hand, physicians need to be aware of the radiation risks of diagnostic procedures in order to justify and optimize each procedure and inform the patient.

The aim of this work was to examine the different aspects of radiation protection and investigate a new method to estimate patient radiation risks.

The first part of this work concerned radiation risk assessment from the regulatory authority point of view. A population dose survey was performed to evaluate the annual population exposure. This survey determined the contribution of different imaging modalities to the total collective dose, as well as the annual effective dose per caput. It revealed that although interventional procedures are not very frequent, they contribute significantly to the collective dose. Among the main results of this work, it was shown that interventional cardiology procedures are dose-intensive, and more attention should therefore be paid to optimizing this exposure.

The second part of the project was related to patient- and physician-oriented risk assessment. In this part, interventional cardiology procedures were studied by means of Monte Carlo simulations. Organ radiation doses as well as effective doses were estimated. Cancer incidence risks for different organs were calculated for each sex and age at exposure using the lifetime attributable risks provided by the Biological Effects of Ionizing Radiations (BEIR) VII report. The advantages and disadvantages of this approach were examined as an alternative method for estimating radiation risks. The results show that this method is the most accurate currently available for estimating radiation risks. The conclusions of this work may guide future studies in the field of radiation protection in medicine.
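As a rough illustration of the second part, the sketch below combines Monte Carlo organ doses with lifetime attributable risk (LAR) coefficients; both the organ doses and the per-organ coefficients are placeholders, not values from the BEIR VII tables:

```python
# Combining per-organ doses with LAR coefficients (cases per 100,000 persons
# per 100 mGy). All numbers are placeholders, NOT BEIR VII values.
organ_dose_mGy = {"lung": 12.0, "breast": 4.0, "stomach": 2.5}         # assumed MC output
lar_per_100k_per_100mGy = {"lung": 300, "breast": 310, "stomach": 34}  # placeholders

risk = sum(organ_dose_mGy[o] / 100.0 * lar_per_100k_per_100mGy[o] / 100_000
           for o in organ_dose_mGy)
print(f"illustrative lifetime cancer incidence risk: {risk:.2e}")
```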
Abstract:
This paper proposes a new methodology to compute Value at Risk (VaR) for quantifying losses in credit portfolios. We approximate the cumulative distribution of the loss function by a finite combination of Haar wavelet basis functions and calculate the coefficients of the approximation by inverting its Laplace transform. The Wavelet Approximation (WA) method is especially suitable for non-smooth distributions, which often arise in small or concentrated portfolios when the hypotheses of the Basel II formulas are violated. To test the methodology we consider the Vasicek one-factor portfolio credit loss model as our model framework. WA is an accurate, robust and fast method, allowing VaR to be estimated much more quickly than with a Monte Carlo (MC) method at the same level of accuracy and reliability.
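For scale, a minimal Monte Carlo benchmark of the kind WA is compared against: the portfolio loss quantile in the Vasicek one-factor model, simulated via the conditional default probability given the systematic factor (portfolio size, PD, correlation, and unit LGD are toy assumptions, not the paper's test portfolios):

```python
# Vasicek one-factor loss model via Monte Carlo: given factor Z, defaults are
# conditionally independent with p(Z) = Phi((Phi^-1(PD) - sqrt(rho)*Z)/sqrt(1-rho)).
import numpy as np
from math import erf, sqrt
from statistics import NormalDist

rng = np.random.default_rng(9)
n_obligors, pd_, rho = 1_000, 0.02, 0.15
n_sims, level = 50_000, 0.999

thr = NormalDist().inv_cdf(pd_)                    # default threshold Phi^{-1}(PD)
phi = lambda u: 0.5 * (1.0 + erf(u / sqrt(2.0)))   # standard normal CDF

z = rng.standard_normal(n_sims)                    # one systematic factor per scenario
p_z = np.array([phi((thr - sqrt(rho) * zi) / sqrt(1.0 - rho)) for zi in z])
losses = rng.binomial(n_obligors, p_z) / n_obligors  # loss fraction with LGD = 1

print(f"99.9% VaR (MC): {np.quantile(losses, level):.4f}")
```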
Credit risk contributions under the Vasicek one-factor model: a fast wavelet expansion approximation
Abstract:
Measuring the contribution of individual transactions to the total risk of a credit portfolio is a major issue in financial institutions. VaR Contributions (VaRC) and Expected Shortfall Contributions (ESC) have become two popular ways of quantifying these risks. However, the usual Monte Carlo (MC) approach is known to be a very time-consuming method for computing these risk contributions. In this paper we consider the Wavelet Approximation (WA) method for Value at Risk (VaR) computation presented in [Mas10] in order to calculate the Expected Shortfall (ES) and the risk contributions under the Vasicek one-factor model framework. We decompose the VaR and the ES as a sum of sensitivities representing the marginal impact on the total portfolio risk. Moreover, we present technical improvements to the Wavelet Approximation (WA) that considerably reduce the computational effort of the approximation while at the same time increasing its accuracy.
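A minimal Monte Carlo sketch of ES and per-obligor ES contributions, ESC_i = E[L_i | L >= VaR], under a toy Vasicek one-factor portfolio; the exposures, PDs, and correlation are assumptions, and the paper computes these quantities via WA rather than by simulation:

```python
# Monte Carlo ES and ES contributions under a Vasicek one-factor model; the
# contributions sum to the portfolio ES by construction.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(10)
exposure = np.array([1.0, 2.0, 0.5, 1.5, 3.0])      # toy heterogeneous exposures
pd_ = np.array([0.01, 0.02, 0.05, 0.03, 0.01])
rho, n_sims, level = 0.2, 200_000, 0.99
thr = np.array([NormalDist().inv_cdf(p) for p in pd_])

z = rng.standard_normal((n_sims, 1))                 # systematic factor
eps = rng.standard_normal((n_sims, len(pd_)))        # idiosyncratic factors
defaults = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < thr
loss_i = defaults * exposure                         # per-obligor losses
loss = loss_i.sum(axis=1)

var = np.quantile(loss, level)
tail = loss >= var
esc = loss_i[tail].mean(axis=0)                      # ESC_i = E[L_i | L >= VaR]
print(f"ES = {loss[tail].mean():.3f}; contributions {np.round(esc, 3)} "
      f"(sum {esc.sum():.3f})")
```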