93 results for Monte Carlo.
Abstract:
Robust estimators are proposed for accelerated failure time models with asymmetric (or symmetric) error distributions and censored observations. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or down-weighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, rejects no observations when the data are generated according to the model. The final estimate therefore attains full efficiency at the model, relative to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations, and two examples with real data are discussed.
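The three-step logic described above can be sketched in a heavily simplified form. The sketch below is illustrative only: it ignores censoring, substitutes a median/MAD estimate for the high-breakdown S-estimate, and uses a fixed rather than adaptive cut-off; `robust_aft_fit` and all constants are hypothetical names, not the authors' implementation.

```python
import math
import random
import statistics

def robust_aft_fit(log_times, cutoff=2.5):
    """Three-step sketch of the proposed procedure, heavily simplified:
    Step 1: high-breakdown initial location/scale estimate (median/MAD
            stands in for the S-estimate).
    Step 2: reject observations with large standardized residuals
            (fixed cut-off stands in for the adaptive one).
    Step 3: maximum likelihood on the retained observations,
            assuming Gaussian errors on the log scale."""
    # Step 1: initial robust location and scale
    mu0 = statistics.median(log_times)
    mad = statistics.median(abs(x - mu0) for x in log_times) / 0.6745
    # Step 2: rejection rule based on standardized residuals
    kept = [x for x in log_times if abs(x - mu0) / mad <= cutoff]
    # Step 3: maximum likelihood estimate on the retained observations
    mu = statistics.fmean(kept)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in kept) / len(kept))
    return mu, sigma, len(log_times) - len(kept)

random.seed(0)
# 95 clean log-lifetimes plus 5 gross outliers
data = [random.gauss(1.0, 0.5) for _ in range(95)] + [8.0] * 5
mu, sigma, n_rejected = robust_aft_fit(data)
```

The outliers inflate a non-robust mean badly, whereas the trimmed fit recovers location and scale close to the clean values while rejecting (at least) the five contaminated points.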
Abstract:
BACKGROUND: Anal condylomata acuminata (ACA) are caused by human papillomavirus (HPV) infection, which is transmitted by close physical and sexual contact. Surgical treatment of ACA has an overall success rate of 71% to 93%, with a recurrence rate between 4% and 29%. The aim of this study was to assess a possible association between HPV type and ACA recurrence after surgical treatment. METHODS: We performed a retrospective analysis of 140 consecutive patients who underwent surgery for ACA from January 1990 to December 2005 at our tertiary university hospital. We confirmed ACA by histopathological analysis and determined HPV type using the polymerase chain reaction. Patients gave consent for HPV testing and completed a questionnaire. We examined the association between ACA, HPV type, and HIV disease. We used chi-square, Monte Carlo simulation, and Wilcoxon tests for statistical analysis. RESULTS: Among the 140 patients (123 M/17 F), HPV 6 and 11 were the most frequently encountered types (51% and 28%, respectively). Recurrence occurred in 35 (25%) patients. HPV 11 was present in 19 (41%) of these recurrences, a statistically significant excess compared with the other HPV types. There was no significant difference in recurrence rates between the 33 (24%) HIV-positive and the HIV-negative patients. CONCLUSIONS: HPV 11 is associated with a higher recurrence rate of ACA. This makes routine clinical HPV typing questionable. Follow-up is required to identify recurrence and to treat it early, especially if HPV 11 has been identified.
Abstract:
A joint project between the Paul Scherrer Institut (PSI) and the Institute of Radiation Physics was initiated to characterise the PSI whole-body counter in detail through measurements and Monte Carlo simulation. Accurate knowledge of the detector geometry is essential for reliable simulations of human body phantoms filled with known activity concentrations. Unfortunately, the technical drawings provided by the manufacturer are often not detailed enough, and sometimes the specifications do not agree with the actual set-up. Therefore, the exact detector geometry and the position of the detector crystal inside the housing were determined through radiographic images. X-rays were used to analyse the structure of the detector, and (60)Co radiography was employed to measure the core of the germanium crystal. Moreover, the precise axial alignment of the detector within its housing was determined through a series of radiographic images with different incident angles. The information thus obtained enables us to optimise the Monte Carlo geometry model and to perform much more accurate and reliable simulations.
Abstract:
A number of geophysical methods, such as ground-penetrating radar (GPR), have the potential to provide valuable information on hydrological properties in the unsaturated zone. In particular, the stochastic inversion of such data within a coupled geophysical-hydrological framework may allow for the effective estimation of vadose zone hydraulic parameters and their corresponding uncertainties. A critical issue in stochastic inversion is choosing prior parameter probability distributions from which potential model configurations are drawn and tested against observed data. A well-chosen prior should reflect as honestly as possible the initial state of knowledge regarding the parameters and be neither overly specific nor too conservative. In a Bayesian context, combining the prior with available data yields a posterior state of knowledge about the parameters, which can then be used statistically for predictions and risk assessment. Here we investigate the influence of prior information regarding the van Genuchten-Mualem (VGM) parameters, which describe vadose zone hydraulic properties, on the stochastic inversion of crosshole GPR data collected under steady-state, natural-loading conditions. We do this using a Bayesian Markov chain Monte Carlo (MCMC) inversion approach, considering first noninformative uniform prior distributions and then more informative priors derived from soil property databases. For the informative priors, we further explore the effect of including information regarding parameter correlation. Analysis of both synthetic and field data indicates that the geophysical data alone contain valuable information regarding the VGM parameters. However, significantly better results are obtained when we combine these data with a realistic, informative prior.
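The role of the prior in such an MCMC inversion can be illustrated with a minimal random-walk Metropolis sampler. This is a sketch only: a toy one-parameter forward model `g(m) = m**2` stands in for the GPR/hydrological simulator, and the uniform and Gaussian priors are illustrative, not the soil-database priors used in the study.

```python
import math
import random

def metropolis(log_prior, log_like, n=5000, step=0.1, x0=0.5, seed=1):
    """Minimal random-walk Metropolis sampler (sketch)."""
    rng = random.Random(seed)
    x = x0
    lp = log_prior(x) + log_like(x)
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = log_prior(y) + log_like(y)
        if math.log(rng.random()) < lq - lp:  # Metropolis accept/reject
            x, lp = y, lq
        chain.append(x)
    return chain

# Toy forward model g(m) = m**2 stands in for the geophysical simulator;
# one observed datum d_obs with Gaussian noise of standard deviation sigma.
d_obs, sigma = 0.25, 0.05
log_like = lambda m: -((m * m - d_obs) ** 2) / (2.0 * sigma ** 2)

# Noninformative uniform prior on (0, 1) versus an informative Gaussian prior.
flat_prior = lambda m: 0.0 if 0.0 < m < 1.0 else -math.inf
inform_prior = lambda m: -((m - 0.5) ** 2) / (2.0 * 0.1 ** 2)

post_flat = metropolis(flat_prior, log_like)
post_inform = metropolis(inform_prior, log_like)
```

Both chains concentrate near the true parameter value (0.5 here); with real, noisier data the informative prior additionally narrows the posterior, which is the effect examined in the abstract.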
Abstract:
When researchers introduce a new test, they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme-scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) lead to incorrect validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
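The kind of Monte Carlo demonstration the abstract describes is easy to reproduce for the stepwise-selection case. The sketch below is illustrative and not the authors' simulation: it generates pure-noise predictors, keeps only the single best correlate of the outcome (as stepwise selection effectively does), and shows that the apparent validity is inflated well above its true value of zero; all sample sizes are arbitrary choices.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def best_of_many(n=50, n_predictors=20, reps=300, seed=6):
    """Average 'validity' obtained by keeping only the single best of
    `n_predictors` pure-noise predictors, as stepwise selection would."""
    rng = random.Random(seed)
    best = []
    for _ in range(reps):
        y = [rng.gauss(0.0, 1.0) for _ in range(n)]
        best.append(max(
            abs(pearson([rng.gauss(0.0, 1.0) for _ in range(n)], y))
            for _ in range(n_predictors)))
    return sum(best) / reps

mean_best_r = best_of_many()  # true validity is zero by construction
```

Even though every predictor is noise, the selected "best" correlation averages around .3 at this sample size, which is exactly the selection bias that invalidates naive stepwise validity claims.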
Abstract:
Geophysical methods have the potential to provide valuable information on hydrological properties in the unsaturated zone. In particular, time-lapse geophysical data, when coupled with a hydrological model and inverted stochastically, may allow for the effective estimation of subsurface hydraulic parameters and their corresponding uncertainties. In this study, we use a Bayesian Markov-chain-Monte-Carlo (MCMC) inversion approach to investigate how much information regarding vadose zone hydraulic properties can be retrieved from time-lapse crosshole GPR data collected at the Arrenaes field site in Denmark during a forced infiltration experiment.
Abstract:
Compartmental and physiologically based toxicokinetic modeling coupled with Monte Carlo simulation were used to quantify the impact of biological variability (physiological, biochemical, and anatomic parameters) on the values of a series of bio-indicators of metal and organic industrial chemical exposures. A variability extent index and the main parameters affecting biological indicators were identified. Results show a large diversity in interindividual variability for the different categories of biological indicators examined. Measurement of the unchanged substance in blood, alveolar air, or urine is much less variable than the measurement of metabolites, both in blood and urine. In most cases, the alveolar flow and cardiac output were identified as the prime parameters determining biological variability, thus suggesting the importance of workload intensity on absorbed dose for inhaled chemicals.
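The Monte Carlo propagation of biological variability described above can be sketched with a toy steady-state model. Everything in this sketch is an assumption for illustration: the log-normal parameter distributions, the units, the dose-rate constant, and the one-compartment clearance model are not values from the study.

```python
import math
import random
import statistics

def variability_extent_index(n=5000, seed=4):
    """Monte Carlo sketch: sample physiological parameters from
    illustrative log-normal distributions, push them through a toy
    steady-state uptake model for an inhaled solvent, and summarise
    inter-individual variability of the blood-level bio-indicator
    by the P97.5/P2.5 percentile ratio."""
    rng = random.Random(seed)
    levels = []
    for _ in range(n):
        alveolar_flow = rng.lognormvariate(math.log(7.0), 0.20)    # L/min
        cardiac_output = rng.lognormvariate(math.log(6.0), 0.15)   # L/min
        extraction = min(1.0, rng.lognormvariate(math.log(0.7), 0.10))
        uptake = alveolar_flow * 0.5             # inhaled dose rate (arbitrary)
        clearance = cardiac_output * extraction  # hepatic-like clearance
        levels.append(uptake / clearance)        # steady-state blood level
    pct = statistics.quantiles(levels, n=40)     # cut points in 2.5% steps
    return pct[-1] / pct[0]                      # P97.5 / P2.5

vei = variability_extent_index()
```

The returned ratio is one possible form of a "variability extent index": the wider the physiological distributions feeding the bio-indicator, the larger it becomes, which is how the relative variability of different indicators can be compared.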
Abstract:
Despite considerable evidence that dispersal between habitat patches is often asymmetric, most metapopulation models assume symmetric dispersal. In this paper, we develop a Monte Carlo simulation model to quantify the effect of asymmetric dispersal on metapopulation persistence. Our results suggest that metapopulation extinctions are more likely when dispersal is asymmetric. Metapopulation viability in systems with symmetric dispersal mirrors results from a mean-field approximation, where the system persists if the expected per-patch colonization probability exceeds the expected per-patch local extinction rate. For asymmetric cases, the mean-field approximation underestimates the number of patches necessary for maintaining population persistence. If we use a model assuming symmetric dispersal when dispersal is actually asymmetric, the estimate of metapopulation persistence is wrong in more than 50% of cases. Metapopulation viability depends on patch connectivity in symmetric systems, whereas in the asymmetric case the number of patches is more important. These results have important implications for managing spatially structured populations when asymmetric dispersal may occur. Future metapopulation models should account for asymmetric dispersal, while empirical work is needed to quantify the patterns and the consequences of asymmetric dispersal in natural metapopulations.
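The qualitative effect reported above can be reproduced with a minimal stochastic patch-occupancy simulation. This is a sketch under strong assumptions, not the authors' model: dispersal asymmetry is taken to the extreme (strictly one-directional), and the colonization and extinction probabilities are arbitrary illustrative values.

```python
import random

def persistence(n_patches, colonize, extinct, asym, steps=200, reps=200, seed=2):
    """Fraction of Monte Carlo runs in which the metapopulation survives.

    Each occupied patch goes locally extinct with probability `extinct`
    unless rescued by immigration, and each empty patch is colonised
    independently by every occupied source patch with pairwise
    probability `colonize`.  With asym=True dispersal is one-directional:
    patch j can only colonise patches i > j (an extreme asymmetry)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(reps):
        occ = [True] * n_patches
        for _ in range(steps):
            new = []
            for i in range(n_patches):
                if asym:
                    sources = sum(1 for j in range(i) if occ[j])
                else:
                    sources = sum(1 for j in range(n_patches) if j != i and occ[j])
                col = 1.0 - (1.0 - colonize) ** sources
                if occ[i]:
                    # survives local extinction, or is rescued by immigration
                    new.append(rng.random() > extinct or rng.random() < col)
                else:
                    new.append(rng.random() < col)
            occ = new
            if not any(occ):
                break
        survived += any(occ)
    return survived / reps

p_sym = persistence(8, colonize=0.08, extinct=0.2, asym=False)
p_asym = persistence(8, colonize=0.08, extinct=0.2, asym=True)
```

With identical per-link dispersal rates, the symmetric system persists in most runs while the one-directional system collapses: once an upstream patch is lost it is never recolonized, which mirrors the abstract's finding that asymmetry raises extinction risk.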
Abstract:
Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected in time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data into the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
Abstract:
Tumors in non-Hodgkin lymphoma (NHL) patients are often proximal to the major blood vessels in the abdomen or neck. In external-beam radiotherapy, these tumors present a challenge because imaging resolution prevents the beam from being targeted to the tumor lesion without also irradiating the artery wall. This problem has led to potentially life-threatening delayed toxicity. Because radioimmunotherapy has resulted in long-term survival of NHL patients, we investigated whether the absorbed dose (AD) to the artery wall in radioimmunotherapy of NHL is of potential concern for delayed toxicity. SPECT resolution is not sufficient to enable dosimetric analysis of anatomic features on the scale of the aortic wall thickness. Therefore, we present a model of aortic wall toxicity based on data from 4 patients treated with (131)I-tositumomab. METHODS: Four NHL patients with periaortic tumors were administered pretherapeutic (131)I-tositumomab. Abdominal SPECT and whole-body planar images were obtained at 48, 72, and 144 h after tracer administration. Blood-pool activity concentrations were obtained from regions of interest drawn on the heart on the planar images. Tumor and blood activity concentrations, scaled to therapeutic administered activities (both standard and myeloablative), were input into a geometry and tracking model (GEANT, version 4) of the aorta. The simulated energy deposited in the arterial walls was collected and fitted, and the AD and biologic effective dose values to the aortic wall and tumors were obtained for standard therapeutic and hypothetical myeloablative administered activities. RESULTS: Arterial wall ADs from standard therapy were lower (0.6-3.7 Gy) than those typical from external-beam therapy, as were the tumor ADs (1.4-10.5 Gy). The ratios of tumor AD to arterial wall AD were greater for radioimmunotherapy by a factor of 1.9-4.0.
For myeloablative therapy, artery wall ADs were in general less than those typical for external-beam therapy (9.4-11.4 Gy for 3 of 4 patients) but comparable for 1 patient (32.6 Gy). CONCLUSION: Blood vessel radiation dose can be estimated using the software package 3D-RD combined with GEANT modeling. The dosimetry analysis suggested that arterial wall toxicity is highly unlikely in standard dose radioimmunotherapy but should be considered a potential concern and limiting factor in myeloablative therapy.
Abstract:
PURPOSE: To implement and characterize an isotropic three-dimensional cardiac T2 mapping technique. METHODS: A self-navigated three-dimensional radial segmented balanced steady-state free precession pulse sequence with an isotropic 1.7-mm spatial resolution was implemented at 3T with a variable T2 preparation module. Bloch equation and Monte Carlo simulations were performed to determine the influence of the heart rate, B1 inhomogeneity and noise on the T2 fitting accuracy. In a phantom study, the accuracy of the pulse sequence was studied through comparison with a gold-standard spin-echo T2 mapping method. The robustness and homogeneity of the technique were ascertained in a study of 10 healthy adult human volunteers, while first results obtained in patients are reported. RESULTS: The numerical simulations demonstrated that the heart rate and B1 inhomogeneity cause only minor deviations in the T2 fitting, whereas the phantom study showed good agreement of the technique with the gold standard. The volunteer study demonstrated an average myocardial T2 of 40.5 ± 3.3 ms and a <15% T2 gradient in the base-apex and anterior-inferior direction. In three patients, elevated T2 values were measured in regions with expected edema. CONCLUSION: This respiratory self-navigated isotropic three-dimensional technique allows for accurate and robust in vitro and in vivo T2 quantification. Magn Reson Med 73:1549-1554, 2015. © 2014 Wiley Periodicals, Inc.
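Monte Carlo assessment of T2-fitting accuracy, as used in the numerical part of this study, can be sketched as follows. The sketch is illustrative only: it assumes a simple mono-exponential signal model fitted by log-linear least squares, and the preparation times, noise level, and repetition count are assumptions rather than the study's protocol.

```python
import math
import random
import statistics

def t2_fit_mc(t2_true=40.5, prep_times=(0.0, 25.0, 50.0), noise_sd=0.02,
              reps=2000, seed=5):
    """Monte Carlo sketch of T2-fitting accuracy: simulate a mono-exponential
    T2-prepared signal at a few preparation times (ms), add Gaussian noise,
    and refit T2 by log-linear least squares."""
    rng = random.Random(seed)
    fits = []
    for _ in range(reps):
        xs = list(prep_times)
        ys = [math.log(max(1e-6, math.exp(-t / t2_true) + rng.gauss(0.0, noise_sd)))
              for t in xs]
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        fits.append(-1.0 / slope)  # signal ~ exp(-t/T2), so slope = -1/T2
    return statistics.fmean(fits), statistics.stdev(fits)

t2_mean, t2_sd = t2_fit_mc()
```

Repeating the noisy fit many times yields the distribution of the T2 estimate, from which bias and precision of the fitting procedure under a given noise level can be read off, analogous to the simulations reported in the abstract.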
Abstract:
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic inversion approaches, probabilistic inversion provides the full posterior probability density function of the saturation field and accounts for the uncertainties inherent in the petrophysical parameters relating the resistivity to saturation. In this study, the data are from benchtop ERT experiments conducted during gas injection into a quasi-2D brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. The saturation fields are estimated by Markov chain Monte Carlo inversion of the measured data and compared to independent saturation measurements from light transmission through the chamber. Different model parameterizations are evaluated in terms of the recovered saturation and petrophysical parameter values. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values in structural elements whose shape and location are assumed known or are represented by an arbitrary Gaussian bell structure. Results show that the estimated saturation fields are in overall agreement with saturations measured by light transmission, but differ strongly in terms of parameter estimates, parameter uncertainties and computational intensity. Discretization in the frequency domain (as in the discrete cosine transform parameterization) provides more accurate models at a lower computational cost compared to spatially discretized (Cartesian) models. A priori knowledge about the expected geologic structures allows for non-discretized model descriptions with markedly reduced degrees of freedom.
Constraining the solutions to the known injected gas volume improved estimates of saturation and parameter values of the petrophysical relationship. (C) 2014 Elsevier B.V. All rights reserved.
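The dimensionality reduction behind the discrete-cosine-transform parameterization can be illustrated directly. The sketch below is an assumption-laden toy: a naive O(N^4) DCT-II on a small grid and a synthetic smooth "saturation" field stand in for the chamber data, showing how a handful of low-frequency coefficients reproduces a smooth field that would otherwise need one parameter per pixel.

```python
import math

def dct2(field):
    """Naive 2-D DCT-II of a small square grid (O(N^4), sketch only)."""
    n = len(field)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(
                field[x][y]
                * math.cos(math.pi * (x + 0.5) * u / n)
                * math.cos(math.pi * (y + 0.5) * v / n)
                for x in range(n) for y in range(n))
    return out

def idct2(coef):
    """Inverse 2-D DCT (orthogonality gives the c0 = 1/2 and 4/n^2 factors)."""
    n = len(coef)
    out = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[x][y] = 4.0 / (n * n) * sum(
                (0.5 if u == 0 else 1.0) * (0.5 if v == 0 else 1.0)
                * coef[u][v]
                * math.cos(math.pi * (x + 0.5) * u / n)
                * math.cos(math.pi * (y + 0.5) * v / n)
                for u in range(n) for v in range(n))
    return out

# Smooth synthetic "saturation" field on an 8x8 grid (illustrative only).
n = 8
field = [[math.exp(-((x - 3.5) ** 2 + (y - 3.5) ** 2) / 8.0)
          for y in range(n)] for x in range(n)]

# Keep only the 3x3 lowest-frequency DCT coefficients:
# 9 parameters describe the field instead of 64 pixel values.
coef = dct2(field)
trunc = [[coef[u][v] if (u < 3 and v < 3) else 0.0 for v in range(n)]
         for u in range(n)]
recon = idct2(trunc)
max_err = max(abs(recon[x][y] - field[x][y])
              for x in range(n) for y in range(n))
```

Because most of a smooth field's energy sits in the low frequencies, the truncated reconstruction stays close to the original; in an MCMC inversion this translates into far fewer unknowns and hence lower computational cost, as the abstract reports.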
Abstract:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
Abstract:
Notre consommation en eau souterraine, en particulier comme eau potable ou pour l'irrigation, a considérablement augmenté au cours des années. De nombreux problèmes font alors leur apparition, allant de la prospection de nouvelles ressources à la remédiation des aquifères pollués. Indépendamment du problème hydrogéologique considéré, le principal défi reste la caractérisation des propriétés du sous-sol. Une approche stochastique est alors nécessaire afin de représenter cette incertitude en considérant de multiples scénarios géologiques et en générant un grand nombre de réalisations géostatistiques. Nous rencontrons alors la principale limitation de ces approches qui est le coût de calcul dû à la simulation des processus d'écoulements complexes pour chacune de ces réalisations. Dans la première partie de la thèse, ce problème est investigué dans le contexte de propagation de l'incertitude, où un ensemble de réalisations est identifié comme représentant les propriétés du sous-sol. Afin de propager cette incertitude à la quantité d'intérêt tout en limitant le coût de calcul, les méthodes actuelles font appel à des modèles d'écoulement approximés. Cela permet l'identification d'un sous-ensemble de réalisations représentant la variabilité de l'ensemble initial. Le modèle complexe d'écoulement est alors évalué uniquement pour ce sous-ensemble, et, sur la base de ces réponses complexes, l'inférence est faite. Notre objectif est d'améliorer la performance de cette approche en utilisant toute l'information à disposition. Pour cela, le sous-ensemble de réponses approximées et exactes est utilisé afin de construire un modèle d'erreur, qui sert ensuite à corriger le reste des réponses approximées et prédire la réponse du modèle complexe. Cette méthode permet de maximiser l'utilisation de l'information à disposition sans augmentation perceptible du temps de calcul. La propagation de l'incertitude est alors plus précise et plus robuste.
La stratégie explorée dans le premier chapitre consiste à apprendre d'un sous-ensemble de réalisations la relation entre les modèles d'écoulement approximé et complexe. Dans la seconde partie de la thèse, cette méthodologie est formalisée mathématiquement en introduisant un modèle de régression entre les réponses fonctionnelles. Comme ce problème est mal posé, il est nécessaire d'en réduire la dimensionnalité. Dans cette optique, l'innovation du travail présenté provient de l'utilisation de l'analyse en composantes principales fonctionnelles (ACPF), qui non seulement effectue la réduction de dimensionnalité tout en maximisant l'information retenue, mais permet aussi de diagnostiquer la qualité du modèle d'erreur dans cet espace fonctionnel. La méthodologie proposée est appliquée à un problème de pollution par une phase liquide non aqueuse et les résultats obtenus montrent que le modèle d'erreur permet une forte réduction du temps de calcul tout en estimant correctement l'incertitude. De plus, pour chaque réponse approximée, une prédiction de la réponse complexe est fournie par le modèle d'erreur. Le concept de modèle d'erreur fonctionnel est donc pertinent pour la propagation de l'incertitude, mais aussi pour les problèmes d'inférence bayésienne. Les méthodes de Monte Carlo par chaîne de Markov (MCMC) sont les algorithmes les plus communément utilisés afin de générer des réalisations géostatistiques en accord avec les observations. Cependant, ces méthodes souffrent d'un taux d'acceptation très bas pour les problèmes de grande dimensionnalité, résultant en un grand nombre de simulations d'écoulement gaspillées. Une approche en deux temps, le "MCMC en deux étapes", a été introduite afin d'éviter les simulations inutiles du modèle complexe par une évaluation préliminaire de la réalisation. Dans la troisième partie de la thèse, le modèle d'écoulement approximé couplé à un modèle d'erreur sert d'évaluation préliminaire pour le "MCMC en deux étapes".
Nous démontrons une augmentation du taux d'acceptation par un facteur de 1.5 à 3 en comparaison avec une implémentation classique de MCMC. Une question reste sans réponse : comment choisir la taille de l'ensemble d'entraînement et comment identifier les réalisations permettant d'optimiser la construction du modèle d'erreur. Cela requiert une stratégie itérative afin que, à chaque nouvelle simulation d'écoulement, le modèle d'erreur soit amélioré en incorporant les nouvelles informations. Ceci est développé dans la quatrième partie de la thèse, où cette méthodologie est appliquée à un problème d'intrusion saline dans un aquifère côtier. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
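The two-stage (delayed-acceptance) MCMC idea described above can be sketched with a one-parameter toy problem. This is a sketch only: simple Gaussian log-likelihoods stand in for the exact flow model and for the proxy-plus-error-model approximation, and all constants are illustrative.

```python
import math
import random

def two_stage_mcmc(exact_loglike, proxy_loglike, n=3000, step=0.3, x0=0.0, seed=3):
    """Two-stage (delayed-acceptance) Metropolis sketch: a proposal must
    first pass an accept/reject test based on the cheap proxy likelihood;
    only then is the expensive exact likelihood evaluated and a second
    test applied, so the chain still targets the exact posterior."""
    rng = random.Random(seed)
    x = x0
    lp_proxy = proxy_loglike(x)
    lp_exact = exact_loglike(x)
    chain = []
    exact_calls = 0
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq_proxy = proxy_loglike(y)
        # Stage 1: screen the proposal with the proxy only.
        if math.log(rng.random()) < lq_proxy - lp_proxy:
            # Stage 2: correct with the exact model.
            exact_calls += 1
            lq_exact = exact_loglike(y)
            ratio = (lq_exact - lp_exact) - (lq_proxy - lp_proxy)
            if math.log(rng.random()) < ratio:
                x, lp_proxy, lp_exact = y, lq_proxy, lq_exact
        chain.append(x)
    return chain, exact_calls

# Toy posteriors: the "exact" model is expensive in practice; the proxy is a
# slightly biased approximation, as a proxy corrected by an error model would be.
exact_loglike = lambda m: -((m - 1.0) ** 2) / 0.08
proxy_loglike = lambda m: -((m - 1.1) ** 2) / 0.08

chain, n_exact_calls = two_stage_mcmc(exact_loglike, proxy_loglike)
```

The second-stage ratio cancels the proxy bias, so the stationary distribution is that of the exact model, while the exact likelihood is only evaluated for proposals that survive the cheap first stage; this is the mechanism that cuts the number of wasted flow simulations.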
Abstract:
A thorough literature review about the current situation on the implementation of eye lens monitoring has been performed in order to provide recommendations regarding dosemeter types, calibration procedures and practical aspects of eye lens monitoring for interventional radiology personnel. The most relevant data and recommendations from about 100 papers have been analysed and classified into the following topics: current challenges in eye lens monitoring; conversion coefficients, phantoms and calibration procedures for eye lens dose evaluation; correction factors and dosemeters for eye lens dose measurements; dosemeter position and influence of protective devices. The major findings of the review can be summarised as follows: the recommended operational quantity for eye lens monitoring is Hp(3). At present, several dosemeters are available for eye lens monitoring and calibration procedures are being developed. However, in practice, alternative methods are very often used to assess the dose to the eye lens. A summary of correction factors found in the literature for the assessment of the eye lens dose is provided. These factors can give an estimate of the eye lens dose when alternative methods, such as the use of a whole-body dosemeter, are used. The wide range of values found indicates the large uncertainty associated with these simplified methods. Reduction factors from the most common protective devices, obtained experimentally and using Monte Carlo calculations, are presented. The paper concludes that a dosemeter placed at collar level outside the lead apron can provide a useful first estimate of the eye lens exposure. However, for workplaces where the estimated annual equivalent dose to the eye lens is close to the dose limit, specific eye lens monitoring should be performed. Finally, training of the involved medical staff on the risks of ionising radiation for the eye lens and on the correct use of protective systems is strongly recommended.