992 results for Monte Carlo.


Relevance:

60.00%

Abstract:

Any electoral system has an electoral formula that converts vote proportions into parliamentary seats. Pre-electoral polls usually focus on estimating vote proportions and then applying the electoral formula to give a forecast of the parliament's composition. We describe the problems arising from this approach: there is always a bias in the forecast. We study the origin of the bias and some methods to evaluate and reduce it. We propose rules to compute the sample size required for a given forecast accuracy. We show by Monte Carlo simulation the performance of the proposed methods using data from recent Spanish elections. We also propose graphical methods to visualize how electoral formulae and parliamentary forecasts work (or fail).
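A minimal sketch of the mechanism described here, assuming hypothetical vote shares and the D'Hondt highest-averages rule (the formula used in Spanish elections): even when the poll is an unbiased estimate of vote proportions, plugging it into the nonlinear electoral formula yields a biased seat forecast.

```python
import numpy as np

def dhondt(votes, seats):
    """Allocate seats by the D'Hondt highest-averages rule."""
    alloc = np.zeros(len(votes), dtype=int)
    for _ in range(seats):
        alloc[np.argmax(votes / (alloc + 1))] += 1
    return alloc

rng = np.random.default_rng(0)
p_true = np.array([0.42, 0.33, 0.15, 0.10])  # hypothetical true vote shares
seats, n_poll, n_sim = 10, 1000, 20000

true_alloc = dhondt(p_true, seats)
forecasts = np.empty((n_sim, p_true.size))
for i in range(n_sim):
    poll = rng.multinomial(n_poll, p_true) / n_poll  # one simulated poll
    forecasts[i] = dhondt(poll, seats)

# The plug-in forecast is biased: its mean differs from the seats implied
# by the true shares, because the seat-allocation formula is nonlinear.
print("seats at true shares :", true_alloc)
print("mean forecast (MC)   :", forecasts.mean(axis=0).round(2))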

Relevance:

60.00%

Abstract:

A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared correlations of residuals obtained after filtering a series through ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model. This is so because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness-of-fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
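A hedged sketch of the idea for the simplest ARFIMA(0, d, 0) case on simulated data (the paper's estimator covers general ARFIMA parameters): choose d to minimize the sum of squared residual autocorrelations after fractional differencing; scaled by the sample size, this criterion is the Box-Pierce statistic of the filtered series.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(x, d):
    """Apply the (1 - L)^d filter via its binomial expansion."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(n)])

def acf(e, m):
    """First m autocorrelations of e."""
    e = e - e.mean()
    denom = e @ e
    return np.array([e[k:] @ e[:-k] / denom for k in range(1, m + 1)])

def criterion(d, x, m=20):
    """Sum of squared residual correlations; n * criterion = Box-Pierce."""
    return np.sum(acf(frac_diff(x, d), m) ** 2)

# Simulate ARFIMA(0, 0.3, 0) by applying the inverse filter to white noise.
rng = np.random.default_rng(1)
n, d_true = 500, 0.3
x = frac_diff(rng.standard_normal(n), -d_true)

res = minimize_scalar(criterion, bounds=(-0.4, 0.9), args=(x,), method="bounded")
print(f"true d = {d_true}, estimated d = {res.x:.3f}")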

Relevance:

60.00%

Abstract:

A national survey designed to estimate a specific population quantity is sometimes also used to estimate that quantity for a small area, such as a province. Budget constraints do not allow a greater sample size for the small area, so other means of improving the estimation have to be devised. We investigate such methods and assess them by a Monte Carlo study. We explore how a complementary survey can be exploited in small area estimation. We use the context of the Spanish Labour Force Survey (EPA) and the Barometer in Spain for our study.

Relevance:

60.00%

Abstract:

We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples of the typical length of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced; that is, rather than estimating the parameters for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
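A stylized illustration of the heterogeneity point, not the paper's learning model: individuals make repeated binary choices with a logistic probability driven by an individual parameter theta_i ~ N(mu, sigma^2). A pooled maximum likelihood estimate that ignores the heterogeneity is attenuated, while a simulated maximum likelihood that treats the individual parameters as random draws and integrates them out recovers both mu and sigma.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(2)
N, T = 200, 20
mu_true, sig_true = 0.8, 1.0
theta = rng.normal(mu_true, sig_true, N)           # individual parameters
y = rng.random((N, T)) < 1 / (1 + np.exp(-theta[:, None]))
k = y.sum(axis=1)                                  # successes per individual

def pooled_nll(m):
    """Negative log-likelihood pretending everyone shares one parameter m."""
    p = 1 / (1 + np.exp(-m))
    return -(k.sum() * np.log(p) + (N * T - k.sum()) * np.log(1 - p))

base = rng.standard_normal(500)                    # fixed draws for simulated ML

def simulated_nll(params):
    """Integrate the random individual parameter out by Monte Carlo."""
    m, log_s = params
    p = 1 / (1 + np.exp(-(m + np.exp(log_s) * base)))
    lik = p ** k[:, None] * (1 - p) ** (T - k[:, None])   # N x R likelihoods
    return -np.log(lik.mean(axis=1)).sum()         # binomial coefficient omitted

m_pool = minimize_scalar(pooled_nll, bounds=(-5, 5), method="bounded").x
m_het, log_s = minimize(simulated_nll, x0=[0.0, 0.0], method="Nelder-Mead").x
print(f"pooled mu-hat = {m_pool:.2f} (attenuated toward 0); with heterogeneity: "
      f"mu-hat = {m_het:.2f}, sigma-hat = {np.exp(log_s):.2f}")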

Relevance:

60.00%

Abstract:

This paper investigates the comparative performance of five small area estimators. We use Monte Carlo simulation in the context of both theoretical and empirical populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights and two composite estimators with estimated weights: one that assumes homogeneity of the within-area variance and squared bias, and another that uses area-specific estimates of the variance and squared bias. We find that among the feasible estimators, the best choice is the one that uses area-specific estimates of the variance and squared bias.
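A minimal sketch on simulated areas, with hypothetical numbers, of the composite idea: weight a noisy direct estimate against a stable but biased indirect (synthetic) one, with the weight built from area-specific estimates of the variance and squared bias.

```python
import numpy as np

rng = np.random.default_rng(3)
A = 50                                        # number of small areas
mu = rng.normal(100, 10, A)                   # true area means
var_d = rng.uniform(4, 25, A)                 # direct-estimator variances
direct = mu + rng.normal(0, np.sqrt(var_d))   # unbiased but noisy
indirect = mu.mean() + rng.normal(0, 1, A)    # stable but biased per area

# Area-specific weight: w = MSE(indirect) / (MSE(indirect) + Var(direct)),
# with the indirect estimator's squared bias estimated from the data.
sq_bias = np.maximum((direct - indirect) ** 2 - var_d, 0.0)
w = sq_bias / (sq_bias + var_d)
composite = w * direct + (1 - w) * indirect

for name, est in [("direct", direct), ("indirect", indirect),
                  ("composite", composite)]:
    print(f"{name:9s} empirical MSE: {np.mean((est - mu) ** 2):6.2f}")
```

On typical draws the composite beats both of its components in empirical MSE, which is the pattern the abstract reports for the area-specific feasible estimator.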

Relevance:

60.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
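A small sketch of the maximal discrepancy computation, assuming a toy class of one-dimensional decision stumps (chosen because exact empirical risk minimization reduces to a brute-force threshold scan): flipping the labels of one half of the sample and running ERM on the modified set gives the maximal discrepancy as 1 - 2 x (minimized error), and averaging over random splits of the fixed sample gives a Monte Carlo estimate of the expected maximal discrepancy (a simplification of the expectation described in the abstract).

```python
import numpy as np

rng = np.random.default_rng(4)

def erm_stump_error(x, y):
    """Smallest 0-1 error over stumps sign(x - t), both polarities."""
    best = 1.0
    for t in np.concatenate(([-np.inf], x)):
        err = np.mean(np.where(x > t, 1, -1) != y)
        best = min(best, err, 1 - err)       # 1 - err covers the flipped stump
    return best

n, rounds = 200, 100
x = rng.standard_normal(n)
y = np.where(x > 0, 1, -1)
y[rng.random(n) < 0.2] *= -1                 # 20% label noise

disc = []
for _ in range(rounds):
    half = rng.permutation(n)[n // 2:]       # random second half
    y_mod = y.copy()
    y_mod[half] *= -1                        # flip its labels ...
    disc.append(1 - 2 * erm_stump_error(x, y_mod))  # ... then run ERM
print(f"Monte Carlo expected maximal discrepancy ~ {np.mean(disc):.3f}")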

Relevance:

60.00%

Abstract:

Mammalian sex chromosomes have undergone profound changes since evolving from ancestral autosomes. By examining retroposed genes in the human and mouse genomes, we demonstrate that, during evolution, the mammalian X chromosome has generated and recruited a disproportionately high number of functional retroposed genes, whereas the autosomes experienced lower gene turnover. Most autosomal copies originating from X-linked genes exhibited testis-biased expression. Such export is incompatible with mutational bias and is likely driven by natural selection to attain male germline function. However, the excess recruitment is consistent with a combination of both natural selection and mutational bias.

Relevance:

60.00%

Abstract:

Second cancer risk assessment for radiotherapy is controversial due to the large uncertainties of the dose-response relationship. This could be improved by a better assessment of the peripheral doses to healthy organs in future epidemiological studies. In this framework, we developed a simple Monte Carlo (MC) model of the Siemens Primus 6 MV linac for both open and wedged fields, which we then validated against dose profiles measured in a water tank up to 30 cm from the central axis. The differences between the measured and calculated doses were comparable to those of other, more complex MC models and never exceeded 50%. We then compared our simple MC model with the peripheral dose profiles of five different linacs with different collimation systems. We found that the peripheral dose between two linacs could differ by up to a factor of 9 for small fields (5 × 5 cm²) and up to a factor of 10 for wedged fields. Considering that an uncertainty of 50% in dose estimation could be acceptable in the context of risk assessment, the MC model can be used as a generic model for large open fields (≥10 × 10 cm²) only. The uncertainties in peripheral doses should be considered in future epidemiological studies when designing the width of the dose bins used to stratify the risk as a function of dose.

Relevance:

60.00%

Abstract:

PURPOSE: Few studies compare the variabilities that characterize environmental monitoring (EM) and biological monitoring (BM) data, yet comparing their respective variabilities can help to identify the best strategy for evaluating occupational exposure. The objective of this study is to quantify the biological variability associated with 18 bio-indicators currently used in work environments. METHOD: Intra-individual (BV(intra)), inter-individual (BV(inter)), and total biological variability (BV(total)) were quantified using validated physiologically based toxicokinetic (PBTK) models coupled with Monte Carlo simulations. Two environmental exposure profiles with different levels of variability were considered (GSDs of 1.5 and 2.0). RESULTS: PBTK models coupled with Monte Carlo simulations were successfully used to predict the biological variability of biological exposure indicators. The predicted values follow a lognormal distribution, characterized by GSDs ranging from 1.1 to 2.3. Our results show that there is a link between biological variability and the half-life of a bio-indicator: BV(intra) and BV(total) both decrease as the bio-indicator's half-life increases. BV(intra) is always lower than the variability in the air concentrations. On an individual basis, this means that the variability associated with the measurement of biological indicators is always lower than the variability characterizing airborne levels of contaminants. For a group of workers, BM is less variable than EM for bio-indicators with half-lives longer than 10-15 h. CONCLUSION: The variability data obtained in the present study can be useful in the development of BM strategies for exposure assessment and can be used to calculate the number of samples required, guiding industrial hygienists and physicians in decision-making.
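A toy illustration of the half-life effect reported in the results, using a one-compartment model with hypothetical parameters and only exposure variability (all physiological parameters fixed, unlike the validated PBTK models used in the study): the GSD of end-of-shift biomarker levels falls as the half-life grows, even though the daily air concentrations keep a GSD of 2.0.

```python
import numpy as np

rng = np.random.default_rng(5)

def biomarker_gsd(half_life_h, n_workers=200, n_days=20, air_gsd=2.0):
    """GSD of end-of-shift biomarker levels across simulated workers."""
    k = np.log(2) / half_life_h              # first-order elimination (1/h)
    levels = []
    for _ in range(n_workers):
        burden = 0.0
        for day in range(n_days):
            c_air = rng.lognormal(0.0, np.log(air_gsd))  # today's air level
            for hour in range(24):
                intake = c_air if hour < 8 else 0.0      # 8-h work shift
                burden = burden * np.exp(-k) + intake    # hourly update
                if hour == 7:
                    end_shift = burden                   # sampling moment
        levels.append(end_shift)             # keep the last day's sample
    log_levels = np.log(levels)
    return float(np.exp(log_levels.std()))   # geometric standard deviation

for t_half in (2, 10, 50):
    print(f"half-life {t_half:2d} h -> biomarker GSD ~ {biomarker_gsd(t_half):.2f}")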

Relevance:

60.00%

Abstract:

Biological monitoring of occupational exposure is characterized by substantial variability, due both to variability in the environment and to biological differences between workers. A quantitative description and understanding of this variability is important for a dependable application of biological monitoring. This work describes this variability, using a toxicokinetic model, for a large range of chemicals for which biological reference values exist. A toxicokinetic compartmental model describing both the parent compound and its metabolites was used. For each chemical, compartments were given physiological meaning. Models were elaborated based on physiological, physicochemical, and biochemical data when available, and on half-lives and central compartment concentrations otherwise. Fourteen chemicals were studied (arsenic, cadmium, carbon monoxide, chromium, cobalt, ethylbenzene, ethylene glycol monomethyl ether, fluorides, lead, mercury, methyl isobutyl ketone, pentachlorophenol, phenol, and toluene), representing 20 biological indicators. Occupational exposures were simulated using Monte Carlo techniques with realistic distributions of both individual physiological parameters and exposure conditions. The resulting biological indicator levels were then analyzed to identify the contribution of environmental and biological variability to total variability. Comparison of predicted biological indicator levels with biological exposure limits showed a high correlation for 19 out of 20 indicators. Variability associated with changes in exposure levels (GSDs of 1.5 and 2.0) is shown to be mainly influenced by the kinetics of the biological indicator: for short half-lives (less than 7 hr), the predicted variability is very similar to the environmental variability, whereas for longer half-lives the estimated variability decreased. Thus, with regard to variability, we can conclude that, for the 14 chemicals modeled, biological monitoring would be preferable to air monitoring. [Supplementary materials are available for this article. Go to the publisher's online edition of the Journal of Occupational and Environmental Hygiene for the following free supplemental resources: tables detailing the CBTK models for all 14 chemicals and the symbol nomenclature that was used.]

Relevance:

60.00%

Abstract:

Recent experiments showed that the linear double-stranded DNA in bacteriophage capsids is both highly knotted and neatly structured. What is the physical basis of this organization? Here we show evidence from stochastic simulation techniques that suggests that a key element is the tendency of contacting DNA strands to order, as in cholesteric liquid crystals. This interaction favors their preferential juxtaposition at a small twist angle, thus promoting an approximately nematic (and apolar) local order. The ordering effect dramatically impacts the geometry and topology of DNA inside phages. Accounting for this local potential allows us to reproduce the main experimental data on DNA organization in phages, including the cryo-EM observations and detailed features of the spectrum of DNA knots formed inside viral capsids. The DNA knots we observe are strongly delocalized and, intriguingly, this is shown not to interfere with genome ejection out of the phage.

Relevance:

60.00%

Abstract:

Using Monte Carlo simulations and reanalyzing the data of a validation study of the AEIM emotional intelligence test, we demonstrated that an atheoretical approach and the use of weak statistical procedures can result in biased validity estimates. These procedures included stepwise regression (and, more generally, failing to include important theoretical controls), extreme-scores analysis, and ignoring heteroscedasticity as well as measurement error. The authors of the AEIM test responded by offering more complete information about their analyses, allowing us to further examine the perils of ignoring theory and correct statistical procedures. In this paper we show with extended analyses that the AEIM test is invalid.
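A quick sketch of one of the perils named above, with made-up dimensions: when stepwise selection is allowed to scan many pure-noise predictors, the first variable it picks shows an R^2 far above the threshold that would make a single preplanned predictor nominally significant.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(6)
n, p, reps = 100, 30, 500
best_r2 = []
for _ in range(reps):
    X = rng.standard_normal((n, p))          # pure-noise predictors
    y = rng.standard_normal(n)               # outcome unrelated to X
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    best_r2.append(np.max(r ** 2))           # stepwise's first pick

# R^2 that a single *preplanned* predictor needs for p < 0.05:
f_crit = f_dist.ppf(0.95, 1, n - 2)
r2_crit = f_crit / (f_crit + n - 2)
print(f"mean R^2 of best noise predictor: {np.mean(best_r2):.3f}")
print(f"R^2 needed for nominal p < 0.05 : {r2_crit:.3f}")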

Relevance:

60.00%

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available.
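As a generic illustration of the machinery behind these probabilistic inversions, here is a minimal Metropolis MCMC sampler on a toy one-dimensional problem (not the thesis code; the forward model, prior bounds, and noise level are all assumptions): it recovers the posterior mean and spread of model parameters from noisy data.

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(m, x):
    """Toy 'physics': an exponentially decaying response."""
    return m[0] * np.exp(-m[1] * x)

x = np.linspace(0, 5, 40)
m_true, sigma = np.array([2.0, 0.7]), 0.05
data = forward(m_true, x) + rng.normal(0, sigma, x.size)

def log_post(m):
    if np.any(m <= 0) or np.any(m > 10):     # uniform prior on (0, 10]^2
        return -np.inf
    resid = data - forward(m, x)
    return -0.5 * np.sum((resid / sigma) ** 2)

m = np.array([1.0, 1.0])
lp = log_post(m)
chain = []
for _ in range(20000):
    prop = m + rng.normal(0, 0.05, 2)        # random-walk Gaussian proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        m, lp = prop, lp_prop
    chain.append(m.copy())
chain = np.array(chain[5000:])               # discard burn-in
print("posterior mean:", chain.mean(axis=0).round(3))
print("posterior std :", chain.std(axis=0).round(3))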

Relevance:

60.00%

Abstract:

The activity of radiopharmaceuticals in nuclear medicine is measured with radionuclide calibrators before patient injection. In Switzerland, the general requirements for quality controls are defined in a federal ordinance and a directive of the Federal Office of Metrology (METAS), which require each instrument to be verified. A set of three gamma sources (Co-57, Cs-137, and Co-60) is used to verify the response of radionuclide calibrators over the gamma energy range of their use. A beta source, a mixture of (90)Sr and (90)Y in secular equilibrium, is used as well. Manufacturers are responsible for the calibration factors. The main goal of the study was to monitor the validity of the calibration factors by using two sources: a (90)Sr/(90)Y source and an (18)F source. The three types of commercial radionuclide calibrators tested do not have a calibration factor for the mixture but only for (90)Y. Activity measurements of a (90)Sr/(90)Y source made with the (90)Y calibration factor must therefore be corrected for the extra contribution of (90)Sr. The value of the correction factor was found to be 1.113, whereas Monte Carlo simulations of the radionuclide calibrators estimated it to be 1.117. Measurements with (18)F sources in a specific geometry were also performed. Since this radionuclide is widely used in Swiss hospitals equipped with PET and PET-CT scanners, the metrology of (18)F is very important. The (18)F response normalized to the (137)Cs response shows that the difference from a reference value does not exceed 3% for the three types of radionuclide calibrators.

Relevance:

60.00%

Abstract:

In this work we analyze how patchy distributions of CO2 and brine within sand reservoirs may lead to significant attenuation and velocity dispersion effects, which in turn may have a profound impact on surface seismic data. The ultimate goal of this paper is to contribute to the understanding of these processes within the framework of the seismic monitoring of CO2 sequestration, a key strategy for mitigating global warming. We first carry out a Monte Carlo analysis to study the statistical behavior of the attenuation and velocity dispersion of compressional waves traveling through rocks with properties similar to those of the Utsira Sand, Sleipner field, containing quasi-fractal patchy distributions of CO2 and brine. These results show that the mean patch size and CO2 saturation play key roles in the observed wave-induced fluid flow effects, which can be remarkably important when CO2 concentrations are low and mean patch sizes are relatively large. To analyze these effects on the corresponding surface seismic data, we perform numerical simulations of wave propagation considering reservoir models and CO2 accumulation patterns similar to those of the CO2 injection site in the Sleipner field. These numerical experiments suggest that wave-induced fluid flow effects may produce changes in the reservoir's seismic response, significantly modifying the main seismic attributes usually employed in the characterization of these environments. Consequently, determining the nature of the fluid distributions, as well as properly modeling the seismic data, are important aspects that should not be ignored in the seismic monitoring of CO2 sequestration.