963 results for Monte-Carlo Simulation Method
Abstract:
The effects of structural breaks in dynamic panels are more complicated than in time series models, as the bias can be either negative or positive. This paper focuses on the effects of mean shifts in otherwise stationary processes within an instrumental variable panel estimation framework. We show the sources of the bias, and a Monte Carlo analysis calibrated on United States bank lending data demonstrates its size for a range of autoregressive parameters. We also propose additional moment conditions that can be used to reduce the biases caused by shifts in the mean of the data.
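As a rough illustration of how such a bias arises, the sketch below simulates an AR(1) panel with fixed effects, adds a common mean shift mid-sample, and applies a simple Anderson-Hsiao-type IV estimator. This is chosen purely for illustration; the paper's estimator, moment conditions, and bank-lending calibration are not reproduced, and every parameter value is made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_panel(n=200, t=20, rho=0.5, shift=0.0, t_break=10):
    """AR(1) panel with fixed effects and a common mean shift from t_break on."""
    alpha = rng.normal(loc=1.0, size=n)            # fixed effects (nonzero mean)
    y = np.zeros((n, t))
    y[:, 0] = alpha + rng.normal(size=n)
    for s in range(1, t):
        d = shift if s >= t_break else 0.0
        y[:, s] = rho * y[:, s - 1] + alpha + d + rng.normal(size=n)
    return y

def anderson_hsiao(y):
    """IV estimate of rho: first differences, with y_{t-2} instrumenting dy_{t-1}."""
    dy = np.diff(y, axis=1)                        # dy[:, s] = y_{s+1} - y_s
    num = den = 0.0
    for s in range(1, dy.shape[1]):
        z = y[:, s - 1]                            # instrument: lagged level
        num += np.sum(z * dy[:, s])
        den += np.sum(z * dy[:, s - 1])
    return num / den

reps = 500
for shift in (0.0, 2.0):
    est = [anderson_hsiao(simulate_panel(shift=shift)) for _ in range(reps)]
    print(f"shift={shift}: mean rho_hat = {np.mean(est):.3f} (true 0.5)")
```

With shift=0 the estimator is close to the true value; the unmodelled break enters the differenced equation as an omitted shock correlated with the instrument, which is where the bias comes from.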
Abstract:
This paper proposes a new methodology to compute Value at Risk (VaR) for quantifying losses in credit portfolios. We approximate the cumulative distribution of the loss function by a finite combination of Haar wavelet basis functions and calculate the coefficients of the approximation by inverting its Laplace transform. The Wavelet Approximation (WA) method is especially suitable for non-smooth distributions, often arising in small or concentrated portfolios, when the hypotheses of the Basel II formulas are violated. To test the methodology we consider the Vasicek one-factor portfolio credit loss model as our model framework. WA is an accurate, robust and fast method, allowing VaR to be estimated much more quickly than with a Monte Carlo (MC) method at the same level of accuracy and reliability.
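The wavelet inversion itself is beyond a short example, but the Monte Carlo benchmark it is compared against is easy to sketch under the Vasicek one-factor model. All portfolio parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n_obligors = 100       # small, concentrated portfolio (assumption)
pd_ = 0.02             # single-obligor default probability
rho = 0.15             # asset correlation
alpha = 0.999          # VaR confidence level
n_sims = 100_000

threshold = norm.ppf(pd_)
z = rng.normal(size=n_sims)                        # common systematic factor
eps = rng.normal(size=(n_sims, n_obligors))        # idiosyncratic factors
assets = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
losses = (assets < threshold).mean(axis=1)         # portfolio loss fraction

var_mc = np.quantile(losses, alpha)
# Asymptotic (infinitely granular) Vasicek quantile, for comparison:
var_asym = norm.cdf((threshold + np.sqrt(rho) * norm.ppf(alpha)) / np.sqrt(1 - rho))
print(f"MC VaR at {alpha}: {var_mc:.4f}   asymptotic Vasicek: {var_asym:.4f}")
```

For a finite portfolio of 100 names the MC quantile sits on a discrete loss grid, which is exactly the non-smooth regime where the abstract argues the Basel-style asymptotic formula breaks down.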
Abstract:
We explore in depth the validity of a recently proposed scaling law for earthquake inter-event time distributions in the case of Southern California, using the waveform cross-correlation catalog of Shearer et al. Two statistical tests are used: on the one hand, the standard two-sample Kolmogorov-Smirnov test is in agreement with the scaling of the distributions. On the other hand, the one-sample Kolmogorov-Smirnov statistic complemented with Monte Carlo simulation of the inter-event times, as done by Clauset et al., supports the validity of the gamma distribution as a simple model of the scaling function appearing in the scaling law, for rescaled inter-event times above 0.01, except for the largest data set (magnitude greater than 2). A discussion of these results is provided.
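The one-sample test with Monte Carlo calibration follows the general recipe of Clauset et al.: fit the model, record the KS distance, then refit on synthetic samples drawn from the fitted model and compare. A minimal sketch, using simulated stand-in data rather than the Shearer et al. catalog (truncation at the 0.01 threshold is ignored in the fit, for simplicity):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rescaled = rng.gamma(shape=0.7, scale=1.4, size=2000)   # stand-in inter-event times
sample = rescaled[rescaled > 0.01]                      # threshold from the abstract

def ks_gamma(x):
    a, loc, scale = stats.gamma.fit(x, floc=0)          # fit gamma, location fixed at 0
    d = stats.kstest(x, "gamma", args=(a, loc, scale)).statistic
    return d, a, scale

d_obs, a_hat, scale_hat = ks_gamma(sample)

n_mc = 500
d_sim = np.empty(n_mc)
for i in range(n_mc):
    synth = rng.gamma(shape=a_hat, scale=scale_hat, size=sample.size)
    d_sim[i], _, _ = ks_gamma(synth)                    # refit on each synthetic set

p_value = (d_sim >= d_obs).mean()
print(f"KS D = {d_obs:.4f}, Monte Carlo p = {p_value:.3f}")
```

The refitting step is the essential point: comparing the observed KS distance against distances from samples that were themselves refitted keeps the test honest about the estimated parameters.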
Abstract:
This thesis aims to investigate the extremal properties of certain risk models of interest in various applications from insurance, finance and statistics. The thesis develops along two principal lines. In the first part, we focus on two univariate risk models, namely deflated risk and reinsurance risk models. Therein we investigate their tail expansions under certain tail conditions on the common risks. Our main results are illustrated by typical examples and numerical simulations. Finally, the findings are applied in insurance, for instance to approximations of Value-at-Risk, conditional tail expectations, etc. The second part of the thesis is devoted to three bivariate models. The first model is concerned with bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability. These estimators are flexible due to a tuning parameter, and their asymptotic distributions are obtained under some second-order bivariate slowly varying conditions on the model. Then, we give some examples and present a small Monte Carlo simulation study, followed by an application to a real data set from insurance.
The objective of our second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are extensively useful in statistical applications and are generated mainly by normal mean-variance mixtures and scaled skew-normal mixtures, which distinguish the tail dependence structure as shown by our principal results. The third bivariate risk model is concerned with the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
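The tail dependence coefficient studied here has a simple empirical counterpart that is easy to sketch. The estimator below is a generic rank-based version evaluated at finite levels, not the thesis's estimator class; the simulated data are a stand-in.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)
n = 10_000
z = rng.standard_normal(n)
x = z + 0.5 * rng.standard_normal(n)     # two correlated components
y = z + 0.5 * rng.standard_normal(n)

u_x = rankdata(x) / (n + 1)              # pseudo-observations (copula scale)
u_y = rankdata(y) / (n + 1)

# Empirical upper tail dependence lambda_U(u) = P(U > u, V > u) / (1 - u)
for u in (0.90, 0.95, 0.99):
    lam = np.mean((u_x > u) & (u_y > u)) / (1 - u)
    print(f"u={u}: empirical tail dependence = {lam:.3f}")
```

For this Gaussian-type example the true limit as u tends to 1 is zero, so the estimates should shrink as u grows; heavy-tailed mixtures such as the skew slash family keep a strictly positive limit, which is what distinguishes their tail dependence structure.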
Abstract:
Monte Carlo simulations were carried out to study the response of a thyroid monitor for measuring intake activities of ¹²⁵I and ¹³¹I. The aim of the study was threefold: to cross-validate the Monte Carlo simulation programs, to study the response of the detector using different phantoms and to study the effects of anatomical variations. Simulations were performed using the Swiss reference phantom and several voxelised phantoms. Determining the position of the thyroid is crucial for an accurate determination of radiological risks. The detector response using the Swiss reference phantom was in fairly good agreement with the response obtained using adult voxelised phantoms for ¹³¹I, but should be revised for a better calibration for ¹²⁵I and for any measurements taken on paediatric patients.
Abstract:
Although the histogram is the most widely used density estimator, it is well known that the appearance of a constructed histogram for a given bin width can change markedly for different choices of anchor position. In this paper we construct a stability index $G$ that assesses the potential changes in the appearance of histograms for a given data set and bin width as the anchor position changes. If a particular bin width choice leads to an unstable appearance, the arbitrary choice of any one anchor position is dangerous, and a different bin width should be considered. The index is based on the statistical roughness of the histogram estimate. We show via Monte Carlo simulation that densities with more structure are more likely to lead to histograms with unstable appearance. In addition, ignoring the precision to which the data values are provided when choosing the bin width leads to instability. We provide several real data examples to illustrate the properties of $G$. Applications to other binned density estimators are also discussed.
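The exact definition of $G$ is given in the paper; as a rough stand-in, the sketch below quantifies anchor sensitivity by evaluating the histogram density for many anchor positions on a common grid and averaging the pointwise variance. This is a hypothetical proxy for illustration only, not the paper's index.

```python
import numpy as np

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
h = 0.8                                      # bin width under study
grid = np.linspace(data.min(), data.max(), 400)

def hist_density_on_grid(x, anchor, h, grid):
    """Histogram density with the given anchor, evaluated on a fine grid."""
    lo = anchor + h * np.floor((x.min() - anchor) / h)
    edges = np.arange(lo, x.max() + h, h)
    counts, edges = np.histogram(x, bins=edges, density=True)
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1,
                  0, len(counts) - 1)
    return counts[idx]

anchors = np.linspace(0.0, h, 16, endpoint=False)    # shift anchor across one bin
curves = np.array([hist_density_on_grid(data, a, h, grid) for a in anchors])
instability = curves.var(axis=0).mean()              # anchor-to-anchor variability
print(f"anchor instability proxy for h={h}: {instability:.5f}")
```

A bimodal sample like this one typically produces a larger value than a unimodal sample of the same size, matching the abstract's point that densities with more structure are more anchor-sensitive.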
Abstract:
This paper investigates the comparative performance of five small area estimators. We use Monte Carlo simulation in the context of both theoretical and empirical populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights, and two composite estimators with estimated weights: one that assumes homogeneity of within-area variance and squared bias, and another that uses area-specific estimates of variance and squared bias. It is found that among the feasible estimators, the best choice is the one that uses area-specific estimates of variance and squared bias.
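A toy version of such a comparison is easy to set up. The sketch below uses one common form of area-specific composite weights, with the grand mean standing in for the indirect (synthetic) estimator; these are assumptions for illustration, not necessarily the paper's exact estimators or populations.

```python
import numpy as np

rng = np.random.default_rng(5)
n_areas, n_per_area, reps = 20, 10, 2000
true_means = rng.normal(50, 3, size=n_areas)      # area-level truth

mse = np.zeros(3)
for _ in range(reps):
    samples = true_means[:, None] + rng.normal(0, 8, size=(n_areas, n_per_area))
    direct = samples.mean(axis=1)                 # unbiased but noisy
    indirect = np.full(n_areas, samples.mean())   # synthetic: stable but biased
    # Composite: weight direct vs. indirect by estimated variance and squared bias
    var_hat = samples.var(axis=1, ddof=1) / n_per_area
    bias2_hat = np.maximum((direct - indirect) ** 2 - var_hat, 0)
    w = bias2_hat / (bias2_hat + var_hat + 1e-12)
    composite = w * direct + (1 - w) * indirect
    for k, est in enumerate((direct, indirect, composite)):
        mse[k] += np.mean((est - true_means) ** 2)

for name, m in zip(("direct", "indirect", "composite"), mse / reps):
    print(f"{name:9s} MSE: {m:.3f}")
```

The composite typically beats both extremes because the weight grows toward the direct estimator exactly in the areas where the indirect one appears badly biased.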
Abstract:
In a recent paper [Phys. Rev. B 50, 3477 (1994)], P. Fratzl and O. Penrose present the results of the Monte Carlo simulation of the spinodal decomposition problem (phase separation) using the vacancy dynamics mechanism. They observe that the $t^{1/3}$ growth regime is reached faster than when using the standard Kawasaki dynamics. In this Comment we provide a simple explanation for the phenomenon based on the role of interface diffusion, which they claim is irrelevant for the observed behavior.
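For readers unfamiliar with the baseline, here is a minimal sketch of standard Kawasaki dynamics (nearest-neighbour particle-hole exchange with Metropolis acceptance) in the Ising lattice-gas representation; lattice size and temperature are arbitrary choices, and the vacancy-dynamics variant discussed above instead moves particles only through a single diffusing vacancy.

```python
import numpy as np

rng = np.random.default_rng(6)
L, T, J = 32, 1.0, 1.0
spins = rng.choice([-1, 1], size=(L, L))           # +1 particle, -1 empty site

def local_energy(s, i, j):
    """Interaction energy of site (i, j) with its four neighbours (periodic)."""
    return -J * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j]
                           + s[i, (j+1) % L] + s[i, (j-1) % L])

def kawasaki_step(s, beta):
    i, j = rng.integers(L, size=2)
    moves = ((0, 1), (1, 0), (0, -1), (-1, 0))
    di, dj = moves[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L
    if s[i, j] == s[ni, nj]:
        return                                     # exchange changes nothing
    e_before = local_energy(s, i, j) + local_energy(s, ni, nj)
    s[i, j], s[ni, nj] = s[ni, nj], s[i, j]
    e_after = local_energy(s, i, j) + local_energy(s, ni, nj)
    d_e = e_after - e_before                       # shared-bond double count cancels
    if d_e > 0 and rng.random() >= np.exp(-beta * d_e):
        s[i, j], s[ni, nj] = s[ni, nj], s[i, j]    # reject: swap back

for sweep in range(100):
    for _ in range(L * L):
        kawasaki_step(spins, beta=1.0 / T)

energy = sum(local_energy(spins, i, j) for i in range(L) for j in range(L)) / 2
print(f"energy per site after 100 sweeps: {energy / L**2:.3f}")
```

The exchange move conserves the particle number, which is what makes this the standard conserved-order-parameter dynamics against which the vacancy mechanism is compared.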
Abstract:
PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: a findings-based algorithm following a linear or branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from literature and expert opinion. Diagnostic strategies' validity was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows for all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not be as good in other disease settings.
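A schematic version of the serial-versus-parallel contrast can be simulated directly. Everything below (prevalences, the treatment threshold, the noisy posterior model) is invented for illustration and does not reproduce the study's data or decision aids.

```python
import numpy as np

rng = np.random.default_rng(7)
diseases = ["crypto_meningitis", "toxoplasmosis", "tb_meningitis",
            "bact_meningitis", "malaria"]
prevalence = np.array([0.20, 0.15, 0.10, 0.05, 0.25])   # illustrative only
threshold = 0.10                                        # treat if P(disease) >= 10%

n_patients = 10_000
truth = rng.random((n_patients, len(diseases))) < prevalence
# Post-workup disease probabilities: a noisy signal around the truth
post = np.clip(truth * 0.6 + 0.05 + rng.normal(0, 0.1, truth.shape), 0, 1)

parallel_treat = post >= threshold                      # review every disease
serial_treat = np.zeros_like(parallel_treat)
order = np.argsort(-prevalence)                         # work up most likely first
for p in range(n_patients):
    for d in order:
        if post[p, d] >= threshold:
            serial_treat[p, d] = True
            break                                       # serial: stop at first hit

def sens_spec(treat):
    tp = (treat & truth).sum(); fn = (~treat & truth).sum()
    tn = (~treat & ~truth).sum(); fp = (treat & ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

for name, t in (("parallel", parallel_treat), ("serial", serial_treat)):
    se, sp = sens_spec(t)
    print(f"{name:8s} sensitivity={se:.2f} specificity={sp:.2f}")
```

The serial rule stops at the first disease crossing the threshold, so co-infections are systematically missed; reviewing all diseases in parallel recovers them at the cost of more false positives, which mirrors the sensitivity/specificity trade-off reported above.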
Abstract:
We have analyzed a two-dimensional lattice-gas model of cylindrical molecules which can exhibit four possible orientations. The Hamiltonian of the model contains positional and orientational energy interaction terms. The ground state of the model has been investigated on the basis of Karl's theorem. Monte Carlo simulation results have confirmed the predicted ground state. The model is able to reproduce, with appropriate values of the Hamiltonian parameters, both a smectic-nematic-like transition and a nematic-isotropic-like transition. We have also analyzed the phase diagram of the system by mean-field techniques and Monte Carlo simulations. Mean-field calculations agree qualitatively well with Monte Carlo results but overestimate transition temperatures.
Abstract:
The kinetic domain-growth exponent is studied by Monte Carlo simulation as a function of temperature for a nonconserved order-parameter model. In the limit of zero temperature, the model belongs to the $n = 1/4$ slow-growth universality class. This is indicative of a temporal pinning in the domain-boundary network of mixed-, zero-, and finite-curvature boundaries. At finite temperature the growth kinetics is found to cross over to the Allen-Cahn exponent $n = 1/2$. We find that the pinning time of the zero-curvature boundary decreases rapidly with increasing temperature.
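Extracting such an exponent from simulation output is typically a log-log fit of the domain size $L(t) \sim t^n$; a minimal sketch on synthetic data (the noisy power law below stands in for measured domain sizes):

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.logspace(1, 4, 30)                                    # simulation times
L_t = 2.0 * t ** 0.25 * np.exp(rng.normal(0, 0.02, t.size))  # noisy n = 1/4 law

# slope of log L vs. log t estimates the growth exponent
n_hat, log_amp = np.polyfit(np.log(t), np.log(L_t), 1)
print(f"estimated growth exponent n = {n_hat:.3f}")
```

A crossover like the one reported above shows up as two distinct slopes in the log-log plot, so in practice the fit is restricted to the early-time or late-time window.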
Abstract:
BACKGROUND: Anal condylomata acuminata (ACA) are caused by human papillomavirus (HPV) infection, which is transmitted by close physical and sexual contact. Surgical treatment of ACA has an overall success rate of 71% to 93%, with a recurrence rate between 4% and 29%. The aim of this study was to assess a possible association between HPV type and ACA recurrence after surgical treatment. METHODS: We performed a retrospective analysis of 140 consecutive patients who underwent surgery for ACA from January 1990 to December 2005 at our tertiary University Hospital. We confirmed ACA by histopathological analysis and determined the HPV type using the polymerase chain reaction. Patients gave consent for HPV testing and completed a questionnaire. We looked at the association of ACA, HPV type, and HIV disease. We used chi-square, Monte Carlo simulation, and Wilcoxon tests for statistical analysis. RESULTS: Among the 140 patients (123 M/17 F), HPV 6 and 11 were the most frequently encountered viruses (51% and 28%, respectively). Recurrence occurred in 35 (25%) patients. HPV 11 was present in 19 (41%) of these recurrences, which is statistically significant when compared with other HPVs. There was no significant difference between recurrence rates in the 33 (24%) HIV-positive and the HIV-negative patients. CONCLUSIONS: HPV 11 is associated with a higher recurrence rate of ACA. This makes routine clinical HPV typing questionable. Follow-up is required to identify recurrence and to treat it early, especially if HPV 11 has been identified.
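A Monte Carlo test for a contingency table of this kind can be sketched as a permutation test that holds both margins fixed; the counts below are made up and only illustrate the mechanics.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(9)
# Made-up 2x3 table: recurrence (rows: yes/no) by HPV type (columns)
table = np.array([[16, 10, 9],
                  [55, 29, 21]])

chi2_obs = chi2_contingency(table)[0]

# Expand to one row/column label per patient, then permute the column labels:
rows = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
cols = np.concatenate([np.repeat(np.arange(table.shape[1]), r) for r in table])

n_mc = 5000
hits = 0
for _ in range(n_mc):
    t = np.zeros_like(table)
    np.add.at(t, (rows, rng.permutation(cols)), 1)   # same margins as observed
    if chi2_contingency(t)[0] >= chi2_obs:
        hits += 1
print(f"Monte Carlo p-value: {(hits + 1) / (n_mc + 1):.4f}")
```

Simulated p-values of this kind stay valid when expected cell counts are too small for the chi-square approximation, which is the usual reason to pair the test with Monte Carlo simulation.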
Abstract:
A joint project between the Paul Scherrer Institut (PSI) and the Institute of Radiation Physics was initiated to characterise the PSI whole body counter in detail through measurements and Monte Carlo simulation. Accurate knowledge of the detector geometry is essential for reliable simulations of human body phantoms filled with known activity concentrations. Unfortunately, the technical drawings provided by the manufacturer are often not detailed enough and sometimes the specifications do not agree with the actual set-up. Therefore, the exact detector geometry and the position of the detector crystal inside the housing were determined through radiographic images. X-rays were used to analyse the structure of the detector, and ⁶⁰Co radiography was employed to measure the core of the germanium crystal. Moreover, the precise axial alignment of the detector within its housing was determined through a series of radiographic images with different incident angles. The information thus obtained enables us to optimise the Monte Carlo geometry model and to perform much more accurate and reliable simulations.
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at the group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers may control the group size and the number of interactions within dyads.
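One common formulation of the skew-symmetry statistic decomposes the sociomatrix X into symmetric and skew-symmetric parts and takes Φ = tr(K'K)/tr(X'X), ranging from 0 (perfect reciprocity) to 0.5 (complete skew-symmetry). Assuming that form (the paper's exact definition may differ), its null sampling distribution can be estimated as below, with group size and interaction counts as free choices.

```python
import numpy as np

rng = np.random.default_rng(10)

def skew_symmetry(x):
    k = (x - x.T) / 2.0                              # skew-symmetric part of X
    return np.trace(k.T @ k) / np.trace(x.T @ x)     # 0 = reciprocal, 0.5 = max skew

group_size, n_interactions, n_mc = 8, 200, 5000
phi_null = np.empty(n_mc)
for i in range(n_mc):
    x = np.zeros((group_size, group_size))
    pairs = rng.integers(group_size, size=(n_interactions, 2))
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]        # drop self-interactions
    np.add.at(x, (pairs[:, 0], pairs[:, 1]), 1)      # random dyadic interactions
    phi_null[i] = skew_symmetry(x)

crit = np.quantile(phi_null, 0.95)
print(f"95th percentile of the null distribution of phi: {crit:.4f}")
```

An observed Φ above this percentile would lead to rejecting the hypothesis of social reciprocity at the 5% level for that group size and interaction count, which is how the simulated sampling distribution supports the statistical decisions described above.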
Abstract:
Despite the considerable evidence showing that dispersal between habitat patches is often asymmetric, most metapopulation models assume symmetric dispersal. In this paper, we develop a Monte Carlo simulation model to quantify the effect of asymmetric dispersal on metapopulation persistence. Our results suggest that metapopulation extinctions are more likely when dispersal is asymmetric. Metapopulation viability in systems with symmetric dispersal mirrors results from a mean-field approximation, where the system persists if the expected per-patch colonization probability exceeds the expected per-patch local extinction rate. For asymmetric cases, the mean-field approximation underestimates the number of patches necessary for maintaining population persistence. If we use a model assuming symmetric dispersal when dispersal is actually asymmetric, the estimate of metapopulation persistence is wrong in more than 50% of cases. Metapopulation viability depends on patch connectivity in symmetric systems, whereas in the asymmetric case the number of patches is more important. These results have important implications for managing spatially structured populations when asymmetric dispersal may occur. Future metapopulation models should account for asymmetric dispersal, while empirical work is needed to quantify the patterns and consequences of asymmetric dispersal in natural metapopulations.
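A toy patch-occupancy simulation makes the contrast concrete. All parameters below are invented, and the asymmetric case simply routes all dispersal one way (doubling the per-link rate so the total dispersal effort stays comparable); it is a sketch of the idea, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(11)
n_patch, e_loc, c, t_max, reps = 10, 0.2, 0.1, 500, 200

def dispersal_matrix(asymmetric):
    m = np.full((n_patch, n_patch), c)     # m[i, j]: colonization of j from i
    if asymmetric:
        m = np.triu(m, 1) * 2.0            # all dispersal flows one way
    else:
        np.fill_diagonal(m, 0.0)
    return m

def persists(m):
    occ = np.ones(n_patch, dtype=bool)
    for _ in range(t_max):
        colon_p = 1 - np.prod(1 - m[occ], axis=0)     # P(colonized) per patch
        occ = np.where(occ,
                       rng.random(n_patch) >= e_loc,   # survive local extinction
                       rng.random(n_patch) < colon_p)  # or be colonized
        if not occ.any():
            return False
    return True

for label, asym in (("symmetric", False), ("asymmetric", True)):
    p = np.mean([persists(dispersal_matrix(asym)) for _ in range(reps)])
    print(f"{label:10s} dispersal: persistence probability = {p:.2f}")
```

In the asymmetric case the most upstream patch receives no immigrants, so its extinction is irreversible and losses cascade downstream, which is the intuition behind the result that asymmetric systems need more patches than the mean-field calculation suggests.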