54 results for Monte-Carlo analysis
Abstract:
In this paper, we report on an optical tolerance analysis of the submillimeter atmospheric multi-beam limb sounder, STEAMR. Physical optics and ray-tracing methods were used to quantify and separate errors in beam pointing and distortion due to reflector misalignment and primary reflector surface deformations. Simulations were performed concurrently with the manufacturing of a multi-beam demonstrator of the relay optical system, which shapes and images the beams onto their corresponding receiver feed horns. Results from Monte Carlo simulations show that the inserts used for reflector mounting should be positioned with an overall accuracy better than 100 μm (~1/10 wavelength). Analyses of primary reflector surface deformations show that a deviation of magnitude 100 μm is tolerable before deployment, whereas the corresponding variations should be less than 30 μm during operation. The most sensitive optical elements in terms of misalignment are found near the focal plane; this localized sensitivity is attributed to the off-axis nature of the beams at that location. Post-assembly mechanical measurements of the reflectors in the demonstrator show that alignment better than 50 μm could be obtained.
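A minimal sketch of how such a Monte Carlo tolerance analysis can be set up: reflector insert positions are perturbed at random and the resulting beam pointing error is propagated through a linearized sensitivity model. The sensitivity coefficients, tolerance, and pointing specification below are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sensitivities of beam pointing to each reflector insert's
# position error (arcsec per micrometre); NOT values from the paper.
# The larger coefficients stand in for elements near the focal plane.
sensitivity = np.array([0.8, 0.5, 1.6, 2.4])

n_trials = 100_000
sigma_um = 100.0 / 3.0  # treat a 100 um tolerance as a 3-sigma placement bound

# Draw independent normal misalignments for every insert in every trial
errors_um = rng.normal(0.0, sigma_um, size=(n_trials, sensitivity.size))

# Root-sum-square pointing error per trial under the linearized model
pointing_err = np.sqrt(((errors_um * sensitivity) ** 2).sum(axis=1))

spec_arcsec = 60.0  # illustrative pointing requirement
print(f"95th-percentile pointing error: {np.percentile(pointing_err, 95):.1f} arcsec")
print(f"Fraction of trials within spec: {(pointing_err < spec_arcsec).mean():.3f}")
```

In the actual analysis, the linear propagation step would be replaced by physical-optics or ray-tracing evaluations of each perturbed optical train.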
Abstract:
Excess adiposity is associated with increased risks of developing adult malignancies. To inform public health policy and guide further research, the incident cancer burden attributable to excess body mass index (BMI ≥ 25 kg/m²) across 30 European countries was estimated. Population attributable risks (PARs) were calculated using European- and gender-specific risk estimates from a published meta-analysis and gender-specific mean BMI estimates from the World Health Organization Global Infobase. Country-specific numbers of new cancers were derived from Globocan 2002. A ten-year lag period between risk exposure and cancer incidence was assumed, and 95% confidence intervals (CI) were estimated in Monte Carlo simulations. In 2002, there were 2,171,351 new all-cancer diagnoses in the 30 countries of Europe. Estimated PARs were 2.5% (95% CI 1.5-3.6%) in men and 4.1% (2.3-5.9%) in women. These collectively corresponded to 70,288 (95% CI 40,069-100,668) new cases. Sensitivity analyses revealed that estimates were most influenced by the assumed shape of the BMI distribution in the population and by cancer-specific risk estimates. In a scenario analysis of a plausible contemporary (2008) population, the estimated PARs increased to 3.2% (2.1-4.3%) in men and 8.6% (5.6-11.5%) in women. Endometrial, post-menopausal breast and colorectal cancers accounted for 65% of these cancers. This analysis quantifies the burden of incident cancers attributable to excess BMI in Europe. The estimates reported here provide a baseline for future modelling and underline the need for research into interventions to control weight in the context of endometrial, breast and colorectal cancer.
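One standard way to obtain Monte Carlo confidence intervals like those quoted above is to propagate uncertainty in the risk estimate through Levin's population attributable risk formula, PAR = p(RR − 1) / (1 + p(RR − 1)). A minimal sketch of that calculation follows; the relative risk, its CI, and the prevalence of BMI ≥ 25 kg/m² are illustrative placeholders, while the all-cancer case count is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 10_000

# Illustrative inputs (not the paper's estimates): a relative risk per
# 5 kg/m^2 BMI increment with its 95% CI, and the population prevalence
# of BMI >= 25 kg/m^2.
rr_mean, rr_lo, rr_hi = 1.15, 1.05, 1.26
prevalence = 0.45

# Sample RR on the log scale, reading the CI as +/- 1.96 standard errors
log_se = (np.log(rr_hi) - np.log(rr_lo)) / (2 * 1.96)
rr = np.exp(rng.normal(np.log(rr_mean), log_se, n_sims))

# Levin's population attributable risk for each simulated RR
par = prevalence * (rr - 1) / (1 + prevalence * (rr - 1))

lo, mid, hi = np.percentile(par, [2.5, 50, 97.5])
print(f"PAR: {mid:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# Attributable cases, scaled by the all-cancer count from the abstract
new_cases = 2_171_351
c_lo, c_mid, c_hi = np.percentile(par * new_cases, [2.5, 50, 97.5])
print(f"Attributable cases: {c_mid:,.0f} (95% CI {c_lo:,.0f} to {c_hi:,.0f})")
```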
Abstract:
Little is known about how the skills needed to perform ultrasound- or nerve stimulator-guided peripheral nerve blocks are learnt. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance versus residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience of ultrasound guidance received ultrasound training, and another ten residents with no previous experience of nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of data from our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique in combination with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster, and lead to a higher final success rate, than those for nerve stimulator-guided axillary brachial plexus block.
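A minimal sketch of the bootstrapping-plus-Monte-Carlo idea for pooling individual success rates into an institutional learning curve: residents are resampled with replacement many times, and the pooled success rate at each case number is summarized across resamples. The simulated block outcomes below stand in for the database records and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

n_residents, n_cases = 10, 30
# Illustrative success probability rising with case number (a learning
# effect); in the study, outcomes came from the anaesthesia database.
p_success = np.minimum(0.5 + 0.015 * np.arange(n_cases), 0.95)
cases = rng.binomial(1, p_success, size=(n_residents, n_cases))

n_boot = 5_000
curves = np.empty((n_boot, n_cases))
for b in range(n_boot):
    # Bootstrap step: resample residents with replacement, then pool
    idx = rng.integers(0, n_residents, n_residents)
    curves[b] = cases[idx].mean(axis=0)  # pooled success rate per case number

lo, med, hi = np.percentile(curves, [2.5, 50, 97.5], axis=0)
print(f"Success rate at case 30: {med[-1]:.2f} (95% CI {lo[-1]:.2f}-{hi[-1]:.2f})")
```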
Abstract:
Objectives: To examine the extent of multiplicity of data in trial reports and to assess the impact of multiplicity on meta-analysis results.
Design: Empirical study on a cohort of Cochrane systematic reviews.
Data sources: All Cochrane systematic reviews published from issue 3 in 2006 to issue 2 in 2007 that presented a result as a standardised mean difference (SMD). We retrieved trial reports contributing to the first SMD result in each review, and downloaded review protocols. We used these SMDs to identify a specific outcome for each meta-analysis from its protocol.
Review methods: Reviews were eligible if SMD results were based on two to ten randomised trials and if protocols described the outcome. We excluded reviews if they only presented results of subgroup analyses. Based on review protocols and index outcomes, two observers independently extracted the data necessary to calculate SMDs from the original trial reports for any intervention group, time point, or outcome measure compatible with the protocol. From the extracted data, we used Monte Carlo simulations to calculate all possible SMDs for every meta-analysis.
Results: We identified 19 eligible meta-analyses (including 83 trials). Published review protocols often lacked information about which data to choose. Twenty-four (29%) trials reported data for multiple intervention groups, 30 (36%) reported data for multiple time points, and 29 (35%) reported the index outcome measured on multiple scales. In 18 meta-analyses, we found multiplicity of data in at least one trial report; the median difference between the smallest and largest SMD results within a meta-analysis was 0.40 standard deviation units (range 0.04 to 0.91).
Conclusions: Multiplicity of data can affect the findings of systematic reviews and meta-analyses. To reduce the risk of bias, reviews and meta-analyses should comply with prespecified protocols that clearly identify time points, intervention groups, and scales of interest.
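A minimal sketch of the Monte Carlo approach to multiplicity: wherever a trial report offers several eligible results (intervention groups, time points, or scales), one is drawn at random per trial, the meta-analysis is pooled, and the procedure is repeated to map the range of possible pooled SMDs. The trial data and the fixed-effect pooling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative trial-level data: each trial offers one or more eligible
# (smd, variance) pairs, e.g. from multiple scales or time points.
trials = [
    [(0.30, 0.04), (0.55, 0.05)],                 # two eligible outcome scales
    [(0.10, 0.03)],                               # single eligible result
    [(0.45, 0.06), (0.20, 0.06), (0.35, 0.05)],   # three eligible time points
]

def pooled_smd(choices):
    """Fixed-effect inverse-variance pooled SMD for one set of choices."""
    smds = np.array([c[0] for c in choices])
    weights = 1.0 / np.array([c[1] for c in choices])
    return (weights * smds).sum() / weights.sum()

# Monte Carlo over the data choices: pick one eligible result per trial
# at random, pool, and repeat many times.
results = np.array([
    pooled_smd([opts[rng.integers(len(opts))] for opts in trials])
    for _ in range(20_000)
])

print(f"Range of possible pooled SMDs: {results.min():.2f} to {results.max():.2f}")
print(f"Spread (largest - smallest): {results.max() - results.min():.2f} SD units")
```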
Abstract:
A dynamic deterministic simulation model was developed to assess the impact of different putative control strategies on the seroprevalence of Neospora caninum in female Swiss dairy cattle. The model structure comprised compartments of "susceptible" and "infected" animals (SI model), and the cattle population was divided into 12 age classes. A reference model (Model 1) was developed to simulate the current (status quo) situation (present seroprevalence in Switzerland: 12%), taking into account available demographic and seroprevalence data for Switzerland. Model 1 was modified to represent four putative control strategies: testing and culling of seropositive animals (Model 2), discontinued breeding with offspring from seropositive cows (Model 3), chemotherapeutic treatment of calves from seropositive cows (Model 4), and vaccination of susceptible and infected animals (Model 5). Models 2-4 considered different sub-scenarios with regard to the frequency of diagnostic testing. Multivariable Monte Carlo sensitivity analysis was used to assess the impact of uncertainty in the input parameters. A policy of annual testing and culling of all seropositive cattle in the population reduced the seroprevalence effectively and rapidly, from 12% to <1% in the first year of simulation. The control strategies of discontinued breeding with offspring from all seropositive cows, chemotherapy of calves, and vaccination of all cattle reduced the prevalence more slowly than culling but were still very effective (reduction of prevalence below 2% within 11, 23 and 3 years of simulation, respectively). However, sensitivity analyses revealed that the effectiveness of these strategies depended strongly on the quality of the input parameters used, such as the horizontal and vertical transmission factors, the sensitivity of the diagnostic test, and the efficacy of medication and vaccination. Finally, all models confirmed that it was not possible to completely eradicate N. caninum as long as the horizontal transmission process was not interrupted.
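A minimal sketch of the SI-model mechanics, collapsed to a single age class and an annual time step (the paper's model tracked 12 age classes with Swiss demographic data). All rates are illustrative placeholders; the sketch only shows why annual test-and-cull acts so much faster than the other strategies.

```python
# Minimal deterministic SI sketch with one age class and an annual time
# step; all rates below are illustrative placeholders, not the paper's values.
beta_h = 0.02    # horizontal transmission rate per year
p_vert = 0.90    # probability an infected dam produces an infected calf
repl = 0.25     # fraction of the herd replaced by heifer calves each year
test_se = 0.95   # diagnostic test sensitivity (test-and-cull scenario)

def simulate(years, cull=False):
    i = 0.12                                  # initial seroprevalence 12%
    for _ in range(years):
        s = 1.0 - i
        new_h = beta_h * s * i                # horizontal infections
        births_i = repl * i * p_vert          # vertically infected replacements
        i = i * (1 - repl) + new_h + births_i
        if cull:
            i *= 1 - test_se                  # detected seropositives culled
    return i

print(f"Prevalence after 1 year with annual test-and-cull: {simulate(1, cull=True):.3f}")
print(f"Prevalence after 10 years with no intervention:    {simulate(10):.3f}")
```

Even in this toy version, culling removes almost the entire infected compartment each year, while strategies that act only on vertical transmission leave the standing infected stock to decay at the replacement rate.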
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
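Because the normal-gamma structure described above is conditionally conjugate, the random-effects layer of this model admits exact Gibbs updates. The sketch below samples (μ, σ) given a set of logit(S_t) values under the abstract's priors; the full CJS analysis would additionally update logit(p) and the latent survival parameters from the capture histories, which is omitted here, and the "observed" logit(S_t) values are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative "observed" occasion-specific logit survival values; in the
# full model these would be latent and updated from capture histories.
logit_s = rng.normal(0.5, 0.3, size=14)
t = len(logit_s)

n_iter, mu, tau = 10_000, 0.0, 1.0
samples = np.empty((n_iter, 2))
for it in range(n_iter):
    # Conjugate normal update for mu | tau, data; prior mu ~ N(0, 100^2)
    prec = 1 / 100**2 + t * tau
    mean = t * tau * logit_s.mean() / prec
    mu = rng.normal(mean, np.sqrt(1 / prec))
    # Conjugate gamma update for tau = 1/sigma^2 | mu, data;
    # prior tau ~ Gamma(0.001, 0.001), per the abstract
    tau = rng.gamma(0.001 + t / 2,
                    1 / (0.001 + 0.5 * ((logit_s - mu) ** 2).sum()))
    samples[it] = mu, np.sqrt(1 / tau)

mu_post, sigma_post = samples[2000:].mean(axis=0)  # discard burn-in
print(f"Posterior mean of mu: {mu_post:.3f}, of sigma: {sigma_post:.3f}")
```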
Abstract:
The Genesis mission Solar Wind Concentrator was built to enhance fluences of solar wind by an average of 20× over the 2.3 years that the mission exposed substrates to the solar wind. The Concentrator targets survived the hard landing upon return to Earth and were used to determine the isotopic composition of solar-wind (and hence solar) oxygen and nitrogen. Here we report on the flight operation of the instrument and on simulations of its performance. Concentration and fractionation patterns obtained from simulations are given for He, Li, N, O, Ne, Mg, Si, S, and Ar in SiC targets, and are compared with measured concentrations and isotope ratios for the noble gases. Carbon is also modeled for a Si target. Predicted differences in instrumental fractionation between elements are discussed. Additionally, as the Concentrator was designed only for ions ≤ 22 amu, implications of analyzing elements as heavy as argon are discussed. Post-flight simulations of instrumental fractionation as a function of radial position on the targets incorporate solar-wind velocity and angular distributions measured in flight, and predict fractionation patterns for various elements and isotopes of interest. A tighter angular distribution, mostly due to better spacecraft spin stability than assumed in pre-flight modeling, results in a steeper isotopic fractionation gradient between the center and the perimeter of the targets. Using the distribution of solar-wind velocities encountered during flight, which are higher than those used in pre-flight modeling, results in elemental abundance patterns slightly less peaked at the center. Mean fractionations trend with atomic mass, with differences relative to the measured isotopes of neon of +4.1±0.9 ‰/amu for Li, between -0.4 and +2.8 ‰/amu for C, +1.9±0.7 ‰/amu for N, +1.3±0.4 ‰/amu for O, -7.5±0.4 ‰/amu for Mg, -8.9±0.6 ‰/amu for Si, and -22.0±0.7 ‰/amu for S (uncertainties reflect Monte Carlo statistics). The slopes of the fractionation trends depend to first order only on the relative differential mass ratio, Δm/m. This article and a companion paper (Reisenfeld et al. 2012, this issue) provide post-flight information necessary for the analysis of the Genesis solar wind samples, and thus serve to complement the Space Science Reviews volume, The Genesis Mission (v. 105, 2003).
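A small illustration of the stated mass trend, using only the per-amu fractionation values quoted above: a straight-line fit against mean atomic mass. This is purely descriptive of the quoted numbers (carbon is omitted because only a range was reported) and is not the paper's Δm/m analysis.

```python
import numpy as np

# Per-amu fractionations relative to neon, from the abstract, paired with
# approximate mean atomic masses for Li, N, O, Mg, Si, S.
mass = np.array([7.0, 14.0, 16.0, 24.3, 28.1, 32.1])
frac = np.array([4.1, 1.9, 1.3, -7.5, -8.9, -22.0])  # permil per amu

# First-order check of the stated trend: per-amu fractionation declines
# with atomic mass (equivalently, with the relative mass ratio dm/m for
# dm = 1 amu). The straight-line fit is only an illustration of the trend.
slope, intercept = np.polyfit(mass, frac, 1)
print(f"Fitted trend: {slope:.2f} permil/amu per amu of mean mass "
      f"(intercept {intercept:.1f} permil/amu)")
```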
Abstract:
A high-resolution α, x-ray, and γ-ray coincidence spectroscopy experiment was conducted at the GSI Helmholtzzentrum für Schwerionenforschung. Thirty correlated α-decay chains were detected following the fusion-evaporation reaction ⁴⁸Ca + ²⁴³Am. The observations are consistent with previous assignments of similar decay chains as originating from element Z = 115. For the first time, precise spectroscopy allows the derivation of excitation schemes of isotopes along the decay chains starting with elements of Z > 112. Comprehensive Monte Carlo simulations accompany the data analysis. Nuclear structure models provide a first-level interpretation.
Abstract:
One of the main problems of flood hazard assessment in ungauged or poorly gauged basins is the lack of runoff data. In an attempt to overcome this problem we have combined archival records, dendrogeomorphic time series and instrumental data (daily rainfall and discharge) from four ungauged and poorly gauged mountain basins in Central Spain with the aim of reconstructing and compiling information on 41 flash flood events since the end of the 19th century. Estimation of historical discharge and the incorporation of uncertainty for the at-site and regional flood frequency analysis were performed with an empirical rainfall–runoff assessment as well as stochastic and Bayesian Markov Chain Monte Carlo (MCMC) approaches. Results for each of the ungauged basins include flood frequency, severity, seasonality and triggers (synoptic meteorological situations). The reconstructed data series clearly demonstrates how uncertainty can be reduced by including historical information, but also points to the considerable influence of different approaches on quantile estimation. This uncertainty should be taken into account when these data are used for flood risk management.
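A minimal sketch of a Bayesian MCMC flood frequency analysis that combines a short systematic record with censored historical information, in the spirit of the approach described above: a random-walk Metropolis sampler fits a Gumbel distribution to annual maxima plus a binomial likelihood for historical threshold exceedances. The data, distribution choice, and flat priors are illustrative assumptions, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative systematic record of annual maximum discharges (m^3/s),
# plus censored historical information: 3 floods above 90 m^3/s in 120 years.
systematic = np.array([35., 42., 28., 55., 61., 38., 47., 70., 33., 52.])
hist_years, hist_exceed, hist_threshold = 120, 3, 90.0

def log_lik(loc, scale):
    if scale <= 0:
        return -np.inf
    z = (systematic - loc) / scale
    ll = np.sum(-np.log(scale) - z - np.exp(-z))      # Gumbel log-density
    # Binomial likelihood for historical exceedances of the threshold
    p_exc = 1 - np.exp(-np.exp(-(hist_threshold - loc) / scale))
    p_exc = min(max(p_exc, 1e-12), 1 - 1e-12)
    ll += hist_exceed * np.log(p_exc) + (hist_years - hist_exceed) * np.log(1 - p_exc)
    return ll

# Random-walk Metropolis with flat priors on (loc, scale > 0)
theta = np.array([45.0, 15.0])
ll = log_lik(*theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, [2.0, 1.0])
    ll_prop = log_lik(*prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                    # discard burn-in
loc, scale = samples.mean(axis=0)
q100 = loc - scale * np.log(-np.log(1 - 1 / 100))     # 100-year quantile
print(f"Posterior mean 100-year flood: {q100:.0f} m^3/s")
```

Dropping the historical term from `log_lik` and rerunning shows how much the censored pre-instrumental information tightens the upper quantiles, which is the uncertainty reduction the abstract describes.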