991 results for Sequential Monte Carlo
Abstract:
Since the discovery of X-rays in 1895, ionizing radiation has become part of our lives. Its use in medicine has brought significant health benefits to the population globally: the benefit of any diagnostic procedure is to reduce uncertainty about the patient's health. However, radiation exposure also has potential detrimental effects, and radiation protection authorities have therefore become strict regarding the control of radiation risks.
There are various situations in which the radiation risk needs to be evaluated. International authorities point to the increasing number of radiologic procedures and recommend population surveys. These surveys provide valuable data to public health authorities, helping them to prioritize and focus on the patient groups that are most highly exposed. Physicians, on the other hand, need to be aware of the radiation risks of diagnostic procedures in order to justify and optimize each procedure and inform the patient.
The aim of this work was to examine the different aspects of radiation protection and to investigate a new method for estimating patient radiation risks.
The first part of this work concerned radiation risk assessment from the regulatory authority's point of view. A population dose survey was performed to evaluate the annual population exposure. This survey determined the contribution of the different imaging modalities to the total collective dose as well as the annual effective dose per caput. It revealed that although interventional procedures are relatively infrequent, they contribute significantly to the collective dose. Among the main results, interventional cardiology procedures were shown to be dose-intensive, and more attention should therefore be paid to optimizing these exposures.
The second part of the project was devoted to patient- and physician-oriented risk assessment. Interventional cardiology procedures were studied by means of Monte Carlo simulations, and organ doses as well as effective doses were estimated. Cancer incidence risks for different organs were calculated for each sex and age at exposure using the lifetime attributable risks provided by the Biological Effects of Ionizing Radiation (BEIR) VII report. The advantages and disadvantages of this approach were examined as an alternative way to estimate radiation risks. The results show that this method is the most accurate currently available for estimating radiation risks. The conclusions of this work may guide future studies in the field of radiation protection in medicine.
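For illustration, a minimal sketch (in Python) of how organ doses can be combined with lifetime attributable risk (LAR) coefficients to produce a cancer incidence estimate; the dose and LAR values below are placeholders, not figures from the BEIR VII report or from this work.

```python
# Minimal sketch of combining organ doses with lifetime attributable risk (LAR)
# coefficients. All numbers are placeholders, NOT values from BEIR VII or this thesis.
organ_dose_mgy = {"lung": 12.0, "breast": 8.0, "oesophagus": 3.0}            # dose per procedure
lar_per_100k_per_100mgy = {"lung": 30.0, "breast": 50.0, "oesophagus": 5.0}  # placeholder LARs

# Scale each LAR (quoted per 100 mGy) by the actual organ dose and sum over organs.
total_cases_per_100k = sum(
    organ_dose_mgy[organ] / 100.0 * lar_per_100k_per_100mgy[organ]
    for organ in organ_dose_mgy
)
print(f"Estimated incidence: {total_cases_per_100k:.2f} cases per 100,000 exposed")
```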
Abstract:
The purpose of this study was to develop a two-compartment metabolic model of brain metabolism to assess oxidative metabolism from [1-¹¹C]acetate radiotracer experiments, using an approach previously applied in ¹³C magnetic resonance spectroscopy (MRS), and to compare it with a one-tissue compartment model previously used in brain [1-¹¹C]acetate studies. Compared with ¹³C MRS studies, ¹¹C radiotracer measurements provide a single uptake curve representing the sum of all labeled metabolites, without chemical differentiation, but with higher temporal resolution. The reliability of the adjusted metabolic fluxes was analyzed with Monte Carlo simulations using synthetic ¹¹C uptake curves, based on a typical arterial input function and previously published values of the neuroglial fluxes V_TCA^g, V_x, V_NT, and V_TCA^n measured in dynamic ¹³C MRS experiments. Assuming V_x^g = 10 × V_TCA^g and V_x^n = V_TCA^n, it was possible to assess the composite glial tricarboxylic acid (TCA) cycle flux V_gt^g (V_gt^g = V_x^g × V_TCA^g / (V_x^g + V_TCA^g)) and the neurotransmission flux V_NT from ¹¹C tissue-activity curves obtained within 30 minutes in the rat cortex with a beta-probe after a bolus infusion of [1-¹¹C]acetate (n = 9), resulting in V_gt^g = 0.136 ± 0.042 and V_NT = 0.170 ± 0.103 μmol/g per minute (mean ± s.d. of the group), in good agreement with ¹³C MRS measurements.
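For illustration, a small sketch of the composite glial flux formula under the assumption stated above (V_x^g = 10 × V_TCA^g); the flux values used here are illustrative, not the study's fitted values.

```python
# Sketch of the composite glial TCA-cycle flux V_gt = V_x * V_TCA / (V_x + V_TCA)
# under the assumption V_x^g = 10 * V_TCA^g stated in the abstract.
# The numerical values are illustrative only.
def composite_flux(v_x, v_tca):
    """Harmonic-style combination of the exchange flux and the TCA cycle flux."""
    return v_x * v_tca / (v_x + v_tca)

v_tca_g = 0.15                    # glial TCA cycle flux, umol/g/min (illustrative)
v_x_g = 10.0 * v_tca_g            # assumed transmitochondrial exchange flux
print(composite_flux(v_x_g, v_tca_g))   # ~0.136, i.e. about 10/11 of V_TCA^g
```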
Abstract:
Given a sample from a fully specified parametric model, let Z_n be a given finite-dimensional statistic, for example an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Z_n. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator, which we refer to as a Bayesian indirect likelihood (BIL) estimator. In most cases the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher-order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias-reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications, including dynamic and nonlinear panel data models, a structural auction model and two DSGE models, show that the proposed estimators indeed have attractive finite-sample properties.
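As a rough illustration of the simulated indirect likelihood idea, the sketch below approximates the density of a statistic Z_n by a kernel density estimate over simulated copies and maximizes it on a grid; the exponential model, the choice of statistic and the grid are invented for this example and are not taken from the paper.

```python
# Hedged sketch of a *simulated* maximum indirect likelihood estimator:
# the statistic Z_n is (sample mean, log sample variance), its density at each
# candidate parameter is approximated by a Gaussian KDE over simulated copies,
# and the parameter maximizing that density is selected on a grid.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 200
theta_true = 1.5                                   # scale of an exponential model (illustrative)
data = rng.exponential(theta_true, size=n)
z_obs = np.array([data.mean(), np.log(data.var())])

def simulate_statistic(theta, reps=500):
    """Simulate `reps` copies of Z_n under the candidate parameter theta."""
    x = rng.exponential(theta, size=(reps, n))
    return np.column_stack([x.mean(axis=1), np.log(x.var(axis=1))])

grid = np.linspace(1.0, 2.0, 41)
loglik = []
for theta in grid:
    sims = simulate_statistic(theta)
    kde = gaussian_kde(sims.T)                     # approximate density of Z_n under theta
    loglik.append(np.log(kde(z_obs)[0] + 1e-300))
print("Simulated MIL estimate:", grid[int(np.argmax(loglik))])
```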
Abstract:
This paper proposes a new methodology to compute Value at Risk (VaR) for quantifying losses in credit portfolios. We approximate the cumulative distribution of the loss function by a finite combination of Haar wavelet basis functions and calculate the coefficients of the approximation by inverting its Laplace transform. The Wavelet Approximation (WA) method is especially suitable for non-smooth distributions, often arising in small or concentrated portfolios, when the hypotheses of the Basel II formulas are violated. To test the methodology we consider the Vasicek one-factor portfolio credit loss model as our model framework. WA is an accurate, robust and fast method, allowing VaR to be estimated much more quickly than with a Monte Carlo (MC) method at the same level of accuracy and reliability.
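For context, a plain Monte Carlo estimate of VaR under the Vasicek one-factor model, i.e. the slow benchmark that the wavelet approximation is designed to outperform; the portfolio parameters are illustrative.

```python
# Plain Monte Carlo baseline for VaR under the Vasicek one-factor credit loss model.
# Homogeneous portfolio, unit LGD; parameters are illustrative, not from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_obligors, pd_uncond, rho, alpha = 1000, 0.01, 0.15, 0.999
n_sims = 200_000

y = rng.standard_normal(n_sims)                                        # systemic factor
p_cond = norm.cdf((norm.ppf(pd_uncond) - np.sqrt(rho) * y) / np.sqrt(1 - rho))
n_defaults = rng.binomial(n_obligors, p_cond)                          # defaults given the factor
loss = n_defaults / n_obligors                                         # portfolio loss fraction
print("Monte Carlo VaR at 99.9%:", np.quantile(loss, alpha))
```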
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. The requirement is then calculated, alternatively, using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
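As an illustration of the internal-model approach described above, the sketch below joins two lines of business with a Gaussian copula and reads the SCR off the simulated one-year underwriting result; all marginals, premiums and the correlation are invented inputs, not the Spanish market data used in the paper.

```python
# Illustrative internal-model style calculation: simulate one-year underwriting
# results for two lines of business linked by a Gaussian copula and take the SCR
# as the 99.5% quantile of the aggregate loss over its mean (one common convention).
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(2)
n_sims, corr = 100_000, 0.25
cov = np.array([[1.0, corr], [corr, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=n_sims)
u = norm.cdf(z)                                          # Gaussian copula uniforms

claims_1 = lognorm.ppf(u[:, 0], s=0.3, scale=100.0)      # line 1 annual claims (illustrative)
claims_2 = lognorm.ppf(u[:, 1], s=0.5, scale=60.0)       # line 2 annual claims (illustrative)
premiums = 110.0 + 70.0
loss = claims_1 + claims_2 - premiums                    # net underwriting loss
scr = np.quantile(loss, 0.995) - loss.mean()             # capital over the expected result
print("SCR (99.5% VaR over the mean):", round(scr, 2))
```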
Credit risk contributions under the Vasicek one-factor model: a fast wavelet expansion approximation
Abstract:
Measuring the contribution of individual transactions to the total risk of a credit portfolio is a major issue for financial institutions. VaR Contributions (VaRC) and Expected Shortfall Contributions (ESC) have become two popular ways of quantifying these risks. However, the usual Monte Carlo (MC) approach is known to be a very time-consuming method for computing these risk contributions. In this paper we use the Wavelet Approximation (WA) method for Value at Risk (VaR) computation presented in [Mas10] to calculate the Expected Shortfall (ES) and the risk contributions under the Vasicek one-factor model framework. We decompose the VaR and the ES as a sum of sensitivities representing the marginal impact on the total portfolio risk. Moreover, we present technical improvements to the WA method that considerably reduce the computational effort of the approximation while at the same time increasing its accuracy.
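For reference, a brute-force Monte Carlo version of the risk contributions (Euler allocation of the Expected Shortfall) that the wavelet method is meant to accelerate; the portfolio is illustrative.

```python
# Monte Carlo sketch of Euler-style Expected Shortfall contributions in a
# Vasicek one-factor portfolio: ESC_i is estimated as E[L_i | L >= VaR_alpha].
# Exposures and default probabilities are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_sims, alpha, rho = 50_000, 0.99, 0.2
exposures = np.array([1.0, 2.0, 5.0, 10.0])
pds = np.array([0.02, 0.01, 0.005, 0.002])

y = rng.standard_normal((n_sims, 1))                           # common factor
eps = rng.standard_normal((n_sims, len(exposures)))            # idiosyncratic factors
defaults = np.sqrt(rho) * y + np.sqrt(1 - rho) * eps < norm.ppf(pds)
losses_i = defaults * exposures                                # per-obligor losses
loss = losses_i.sum(axis=1)

var_alpha = np.quantile(loss, alpha)
tail = loss >= var_alpha
esc = losses_i[tail].mean(axis=0)                              # contributions sum to the ES
print("ES:", loss[tail].mean(), "contributions:", esc)
```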
Abstract:
This paper examines why a financial entity's solvency capital might be underestimated if the total amount required is obtained directly from a risk measurement. Using Monte Carlo simulation we show that, in some instances, a common risk measure such as Value-at-Risk is not subadditive when certain dependence structures are considered. Higher risk evaluations are obtained under independence between random variables than under comonotonicity. The paper therefore stresses the relationship between dependence structures and capital estimation.
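A minimal Monte Carlo demonstration of the non-subadditivity phenomenon, using a textbook-style two-point loss distribution rather than the paper's case study:

```python
# Two independent losses that are each zero with probability 0.96 have
# VaR_95% = 0 individually, yet their sum has VaR_95% = 100, so subadditivity
# fails; under comonotonicity VaR is additive for this example.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x = np.where(rng.random(n) < 0.04, 100.0, 0.0)           # independent losses
y = np.where(rng.random(n) < 0.04, 100.0, 0.0)

var = lambda z, a=0.95: np.quantile(z, a)
print("VaR(X) + VaR(Y) =", var(x) + var(y))              # 0
print("VaR(X + Y)       =", var(x + y))                  # 100 -> subadditivity fails

u = rng.random(n)                                        # comonotonic coupling
x_c = np.where(u < 0.04, 100.0, 0.0)
y_c = np.where(u < 0.04, 100.0, 0.0)
print("Comonotonic VaR(X + Y) =", var(x_c + y_c))        # 0 = VaR(X) + VaR(Y)
```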
Abstract:
Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered a prominent mechanism for information processing within, and communication between, brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and to oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e., improved sensitivity) when using the adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurement of cross-frequency couplings through precise extraction of neuronal oscillations.
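For illustration, a sketch of the classical band-pass/Hilbert estimate of phase-amplitude coupling (mean-vector-length index) on a synthetic signal; this is the filter-bank baseline the study compares against, not the adaptive frequency tracking algorithm itself.

```python
# Classical filter-bank estimate of phase-amplitude coupling on a synthetic signal
# in which the amplitude of a 40 Hz oscillation is modulated by an 8 Hz phase.
# The coupling index is a normalized mean vector length; all parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(5)
slow = np.sin(2 * np.pi * 8 * t)                               # 8 Hz phase signal
fast = (1 + 0.8 * slow) * np.sin(2 * np.pi * 40 * t)           # 40 Hz carrier, modulated
sig = slow + fast + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(sig, 6, 10, fs)))            # low-frequency phase
amp = np.abs(hilbert(bandpass(sig, 30, 50, fs)))               # wide band keeps the sidebands
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp) # normalized mean vector length
print("Phase-amplitude coupling (MVL):", round(mvl, 3))
```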
Abstract:
We explore in depth the validity of a recently proposed scaling law for earthquake inter-event time distributions in the case of Southern California, using the waveform cross-correlation catalog of Shearer et al. Two statistical tests are used. On the one hand, the standard two-sample Kolmogorov-Smirnov test is in agreement with the scaling of the distributions. On the other hand, the one-sample Kolmogorov-Smirnov statistic, complemented with Monte Carlo simulation of the inter-event times as done by Clauset et al., supports the validity of the gamma distribution as a simple model of the scaling function appearing in the scaling law, for rescaled inter-event times above 0.01, except for the largest data set (magnitude greater than 2). A discussion of these results is provided.
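A sketch of the Clauset-et-al.-style goodness-of-fit procedure referred to above, applied here to synthetic data rather than the Shearer et al. catalog: fit a gamma distribution, compute the one-sample KS statistic, and obtain a Monte Carlo p-value by refitting simulated samples.

```python
# One-sample KS test of a fitted gamma distribution with a Monte Carlo
# (parametric bootstrap) p-value, in the spirit of Clauset et al.
# The data below are synthetic stand-ins for rescaled inter-event times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.gamma(shape=0.7, scale=1.4, size=2000)              # stand-in inter-event times

def fit_and_ks(x):
    shape, loc, scale = stats.gamma.fit(x, floc=0)             # fix the location at 0
    ks = stats.kstest(x, "gamma", args=(shape, loc, scale)).statistic
    return ks, shape, scale

ks_obs, shape_hat, scale_hat = fit_and_ks(data)
n_mc, count = 200, 0
for _ in range(n_mc):
    synthetic = rng.gamma(shape_hat, scale_hat, size=data.size)
    if fit_and_ks(synthetic)[0] >= ks_obs:                     # refit each synthetic sample
        count += 1
print(f"KS = {ks_obs:.4f}, Monte Carlo p-value = {count / n_mc:.2f}")
```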
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can either be estimated through repeated measurements of the pollutant concentration in air, expert judgment or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte Carlo samples from these distributions feed two level-2 models: a physical two-compartment model, and a non-parametric neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
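As a simplified illustration of the Bayesian updating step only (not the full two-level model), the sketch below combines a prior on the log geometric mean with a few measurements under an assumed known geometric standard deviation; all numbers are invented.

```python
# Minimal sketch of the Bayesian updating step: worker exposure is assumed
# lognormal, a prior on log(GM) (e.g. from models or expert input) is combined
# with measurements via a conjugate normal update with a known log-GSD.
# Priors, measurements and the OEL are invented for illustration.
import numpy as np

prior_mu, prior_sd = np.log(0.5), 0.8        # prior on log(GM), mg/m3 (illustrative)
log_gsd = np.log(2.5)                        # assumed known geometric standard deviation
measurements = np.array([0.35, 0.80, 0.55])  # shift-long measurements, mg/m3 (illustrative)

log_x = np.log(measurements)
n = log_x.size
post_var = 1.0 / (1.0 / prior_sd**2 + n / log_gsd**2)          # conjugate normal update
post_mu = post_var * (prior_mu / prior_sd**2 + log_x.sum() / log_gsd**2)

# Monte Carlo draw from the posterior to estimate the probability that the
# long-term GM exceeds an occupational exposure limit of 1 mg/m3 (illustrative).
rng = np.random.default_rng(7)
gm_draws = np.exp(rng.normal(post_mu, np.sqrt(post_var), size=100_000))
print("P(GM > OEL):", np.mean(gm_draws > 1.0))
```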
Abstract:
Chromosomes of eukaryotic organisms are composed of chromatin loops. Using Monte Carlo simulations, we investigate how the topological exclusion between loops belonging to different chromosomes affects chromosome behaviour. We show that in a confined space the topological exclusion limiting catenation between loops belonging to different chromosomes entropically drives the formation of chromosomal territories. The same topological exclusion, in connection with interchromosomal binding via transcription factories, explains why actively transcribed genes are found preferentially at the peripheries of their chromosomal territories. This paper is based in part on the results presented in J. Dorier and A. Stasiak, Nucl. Acids Res. 37 (2009), 6316 and 38 (2010), 7410.
Abstract:
Leaders must scan the internal and external environment, chart strategic and task objectives, and provide performance feedback. These instrumental leadership (IL) functions go beyond the motivational and quid pro quo leader behaviors that comprise the full-range (transformational, transactional, and laissez-faire) leadership model. In four studies we examined the construct validity of IL. We found evidence for a four-factor IL model that was highly prototypical of good leadership. IL predicted top-level leader emergence while controlling for the full-range factors, initiating structure, and consideration. It also explained unique variance in outcomes beyond the full-range factors; the effects of transformational leadership were vastly overstated when IL was omitted from the model. We discuss the importance of a "fuller full-range" leadership theory for theory and practice. We also showcase our methodological contributions regarding corrections for common method variance (i.e., endogeneity) bias using two-stage least squares (2SLS) regression and Monte Carlo split-sample designs.
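For readers unfamiliar with the correction mentioned above, a generic two-stage least squares (2SLS) sketch on simulated data with an endogenous predictor and an instrument; the data-generating process is invented and does not reproduce the paper's split-sample design.

```python
# Two-stage least squares (2SLS) by hand on simulated data: an unobserved
# confounder u makes x endogenous, the instrument z drives x, the first stage
# regresses x on z, and the second stage uses the fitted values of x.
import numpy as np

rng = np.random.default_rng(8)
n = 5000
z = rng.standard_normal(n)                         # instrument
u = rng.standard_normal(n)                         # confounder -> endogeneity
x = 0.8 * z + u + 0.3 * rng.standard_normal(n)
y = 1.0 + 0.5 * x + u + rng.standard_normal(n)     # true effect of x is 0.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
beta_ols = ols(np.column_stack([ones, x]), y)               # biased upward by u
x_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
beta_2sls = ols(np.column_stack([ones, x_hat]), y)          # close to 0.5
print("OLS slope:", round(beta_ols[1], 3), " 2SLS slope:", round(beta_2sls[1], 3))
```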
Abstract:
The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Ionization chambers are therefore calibrated in terms of absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation on the scale of the secondary particle range, so the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in the low-density medium, D̄^low_(w,Q,V₁), averaged over a scoring volume V₁ for a geometry where V₁ is filled with the low-density medium, to the absorbed dose to water, D̄^low_(w,Q,V₂), averaged over a volume V₂ for a geometry where V₂ is filled with water. In the Monte Carlo simulations, D̄^low_(w,Q,V₂) is obtained by replacing the volume of the ionization chamber by an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
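A small sketch of how such a correction factor would be applied to a chamber reading, with the two scored doses entered as illustrative numbers rather than simulation output:

```python
# Applying a Monte Carlo correction factor of the kind described above: the ratio
# of the dose scored over the cavity filled with the low-density medium (V1) to the
# dose scored with the cavity replaced by water (V2) rescales the chamber reading.
# All numbers are illustrative, not results from the paper's simulations.
dose_v1_low_medium = 1.00   # D^low_(w,Q,V1): scoring volume filled with the low-density medium (a.u.)
dose_v2_water = 1.25        # D^low_(w,Q,V2): scoring volume replaced by water (a.u.)

# Factor taken < 1 here, consistent with the chamber over-response described in the abstract.
correction_factor = dose_v1_low_medium / dose_v2_water
chamber_reading_gy = 2.10                                # measured dose to water, illustrative
print("Corrected dose:", round(chamber_reading_gy * correction_factor, 3), "Gy")
```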
Abstract:
Axial deflection of DNA molecules in solution results from thermal motion and from intrinsic curvature related to the DNA sequence. In order to measure the contribution of thermal motion directly, we constructed intrinsically straight DNA molecules and measured their persistence length by cryo-electron microscopy. The persistence length of such intrinsically straight DNA molecules, suspended in thin layers of cryo-vitrified solutions, is about 80 nm. To test our experimental approach, we measured the apparent persistence length of DNA molecules with natural "random" sequences. The result of about 45 nm is consistent with the generally accepted value of the apparent persistence length of natural DNA sequences. By comparing the apparent persistence length of intrinsically straight DNA with that of natural DNA, it is possible to determine both the dynamic and the static contributions to the apparent persistence length.
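Assuming the usual harmonic decomposition of the apparent persistence length into dynamic and static parts (a relation not spelled out in the abstract), the two measurements quoted above imply a static contribution of roughly 100 nm:

```latex
% Assumed decomposition: 1/P_apparent = 1/P_dynamic + 1/P_static.
% With P_dynamic ~ 80 nm (intrinsically straight DNA) and
% P_apparent ~ 45 nm (natural-sequence DNA):
\[
  \frac{1}{P_\text{static}} = \frac{1}{45\,\text{nm}} - \frac{1}{80\,\text{nm}}
  \quad\Longrightarrow\quad P_\text{static} \approx 103\,\text{nm}.
\]
```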
Abstract:
MOTIVATION: Regulatory gene networks contain generic modules, such as feedback loops, that are essential for the regulation of many biological functions. The study of the stochastic mechanisms of gene regulation is instrumental for understanding how cells maintain expression at levels commensurate with their biological role, as well as for engineering gene expression switches with appropriate behavior. The lack of precise knowledge of the steady-state distribution of gene expression requires the use of Gillespie algorithms and Monte Carlo approximations. METHODOLOGY: In this study, we provide new exact formulas and efficient numerical algorithms for computing the steady-state distribution of a class of self-regulated genes, and we use them to model the stochastic expression of a gene of interest in an engineered network introduced into mammalian cells. The behavior of the genetic network is then analyzed experimentally in living cells. RESULTS: Stochastic models often reveal counter-intuitive experimental behaviors, and we find that this genetic architecture displays unimodal behavior in mammalian cells, which was unexpected given its known bimodal response in unicellular organisms. We provide a molecular rationale for this behavior and incorporate it into the mathematical model to explain the experimental results obtained from this network.
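For illustration, a Gillespie stochastic simulation of a simple negatively self-regulated gene, whose long-run samples approximate the steady-state distribution discussed above; the rate constants are illustrative and the model is not the engineered network studied in the paper.

```python
# Gillespie (SSA) sketch of a negatively self-regulated gene: protein is produced
# at a rate repressed by its own copy number (Hill function) and degraded linearly;
# samples collected after a burn-in approximate the steady-state distribution.
import numpy as np

rng = np.random.default_rng(9)
k_max, K, n_hill, gamma = 20.0, 30.0, 2.0, 0.1   # production, repression threshold, Hill coeff., degradation

def gillespie(t_end=2000.0, burn_in=200.0, p0=0):
    t, p, samples = 0.0, p0, []
    while t < t_end:
        a_prod = k_max / (1.0 + (p / K) ** n_hill)   # repressed production propensity
        a_deg = gamma * p                            # first-order degradation propensity
        a_tot = a_prod + a_deg
        t += rng.exponential(1.0 / a_tot)            # time to the next reaction
        p += 1 if rng.random() < a_prod / a_tot else -1
        if t > burn_in:                              # discard the initial transient
            samples.append(p)
    return np.array(samples)

steady_state = gillespie()
print("steady-state mean:", steady_state.mean(), " variance:", steady_state.var())
```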