64 results for Monte-Carlo Simulation Method
Abstract:
Long-term electrocardiogram (ECG) recordings often suffer from considerable noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are designed for prolonged recording. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be weighed against the increase in signal-to-baseline ratio (SBR). Here we present a graphics processing unit (GPU) based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite impulse response, infinite impulse response, moving mean, and moving median filters. Individual filter parameters were optimized with respect to the SBR increase using ECGs from the Physionet database superimposed on real baseline wander modeled as an auto-regressive process. A Monte Carlo simulation showed that for low input SBR the moving median filter outperforms any other method but negatively affects ECG wave detection, whereas the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter runs 500 and 4 times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 megasamples filtered as a single unit, wavelet filtering of a 7-day high-resolution ECG is computed in less than 3 seconds. Given its high filtering speed, the GPU wavelet filter is the most efficient method to remove baseline wander from long-term ECGs and strongly reduces the computational burden.
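As an illustration of the moving-median approach to baseline wander removal and of the SBR metric described above, a minimal CPU sketch (Python/NumPy/SciPy) is given below; the window length, sampling rate, and synthetic signals are illustrative assumptions and do not reproduce the paper's GPU-parallelized filters.

```python
# Minimal CPU sketch of moving-median baseline removal and an SBR metric.
# Window length, sampling rate, and the synthetic signals are illustrative
# assumptions; the paper's GPU-parallelized filters are not reproduced here.
import numpy as np
from scipy.signal import medfilt

fs = 500                                        # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t)         # stand-in for the ECG component
baseline = 0.5 * np.sin(2 * np.pi * 0.3 * t)    # slow baseline wander
x = ecg + baseline

# Moving-median baseline estimate: window longer than one heartbeat (odd length).
win = int(0.8 * fs) | 1
baseline_est = medfilt(x, kernel_size=win)
filtered = x - baseline_est

def sbr_db(signal, residual_baseline):
    """Signal-to-baseline ratio in dB."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(residual_baseline ** 2))

print("input SBR:  %.1f dB" % sbr_db(ecg, baseline))
print("output SBR: %.1f dB" % sbr_db(ecg, filtered - ecg))
```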
Abstract:
Over the last few years, interest in proton radiotherapy has been increasing rapidly. Protons offer superior physical properties compared with conventional radiotherapy using photons: the depth-dose curve has a large dose peak at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties increase the flexibility of proton radiotherapy, but also raise the demands on accurate dose estimation. Accurate dose calculation first requires a detailed characterization of the physical proton beam exiting the treatment head for both currently available delivery techniques, scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track and simulate the interactions from first principles, this technique is ideally suited to accurately model the treatment head; nevertheless, careful validation of these MC models is necessary. For the dose estimation itself, pencil beam algorithms provide the advantage of fast computation but are limited in accuracy. MC dose calculation algorithms overcome these limitations and, due to recent improvements in efficiency, are expected to improve the accuracy of the calculated dose distributions and to enter clinical routine in the near future.
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
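To make the "a posteriori" idea concrete, the sketch below adaptively distributes extra Monte Carlo samples to the pixels whose sample variance of the mean is largest; the toy 1-D "image" and its integrand are assumptions introduced purely for illustration.

```python
# Minimal sketch of "a posteriori" adaptive sampling: per-pixel Monte Carlo
# estimates are refined where the sample variance of the mean is largest.
# The toy scene (a 1-D "image" of analytic integrands) is an assumption.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, init_spp, extra_budget = 64, 8, 64 * 24

def render_sample(p, u):
    # Hypothetical integrand per pixel: smooth term plus pixel-dependent noise.
    return np.sin(0.2 * p) + (0.1 + 0.05 * p) * (u - 0.5)

samples = [list(render_sample(p, rng.random(init_spp))) for p in range(n_pixels)]

for _ in range(extra_budget):
    # Error estimate: variance of the pixel mean from the samples gathered so far.
    err = [np.var(s, ddof=1) / len(s) for s in samples]
    p = int(np.argmax(err))                 # refine the currently worst pixel
    samples[p].append(render_sample(p, rng.random()))

image = np.array([np.mean(s) for s in samples])
spp = np.array([len(s) for s in samples])
print("samples per pixel range:", spp.min(), "-", spp.max())
```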
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
Abstract:
The decomposition technique introduced by Blinder (1973) and Oaxaca (1973) is widely used to study outcome differences between groups. For example, the technique is commonly applied to the analysis of the gender wage gap. However, despite the procedure's frequent use, very little attention has been paid to the issue of estimating the sampling variances of the decomposition components. We therefore suggest an approach that introduces consistent variance estimators for several variants of the decomposition. The accuracy of the new estimators under ideal conditions is illustrated with the results of a Monte Carlo simulation. As a second check, the estimators are compared to bootstrap results obtained using real data. In contrast to previously proposed statistics, the new method takes into account the extra variation imposed by stochastic regressors.
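A minimal sketch of a two-fold Blinder-Oaxaca decomposition, together with a Monte Carlo check of the sampling variability of its components, might look as follows; the data-generating values are illustrative assumptions, and the code is not the variance estimator proposed in the paper.

```python
# Minimal sketch of a two-fold Blinder-Oaxaca decomposition with a Monte Carlo
# check of the sampling variability of its components. Data-generating values
# are illustrative assumptions, not those of the paper.
import numpy as np

rng = np.random.default_rng(1)

def simulate_groups(n=500):
    # Stochastic regressor x, different coefficients for groups A and B.
    xa, xb = rng.normal(1.0, 1.0, n), rng.normal(0.6, 1.0, n)
    ya = 1.0 + 0.8 * xa + rng.normal(0, 1, n)
    yb = 0.7 + 0.5 * xb + rng.normal(0, 1, n)
    return xa, ya, xb, yb

def decompose(xa, ya, xb, yb):
    Xa = np.column_stack([np.ones_like(xa), xa])
    Xb = np.column_stack([np.ones_like(xb), xb])
    ba, _, _, _ = np.linalg.lstsq(Xa, ya, rcond=None)
    bb, _, _, _ = np.linalg.lstsq(Xb, yb, rcond=None)
    explained = (Xa.mean(0) - Xb.mean(0)) @ bb      # endowments, weighted by B's coefficients
    unexplained = Xa.mean(0) @ (ba - bb)            # coefficients (unexplained) part
    return explained, unexplained

reps = np.array([decompose(*simulate_groups()) for _ in range(2000)])
print("explained:   mean %.3f, MC std %.3f" % (reps[:, 0].mean(), reps[:, 0].std()))
print("unexplained: mean %.3f, MC std %.3f" % (reps[:, 1].mean(), reps[:, 1].std()))
```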
Abstract:
The available results of deep imaging searches for planetary companions around nearby stars provide useful constraints on the frequency of giant planets in very wide orbits. Here we present preliminary results of a Monte Carlo simulation that compares the published detection limits with generated planetary masses and orbital parameters. This allows us to consider the implications of the null detections from direct imaging for the distributions of mass and semimajor axis derived from other search techniques, and also to check the agreement of the observations with available planetary formation models.
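The Monte Carlo comparison described above could be sketched roughly as follows: draw planetary masses and semimajor axes from assumed distributions, convert them to projected separations, and test them against a hypothetical detection limit; all distributions and limits below are assumptions, not the study's actual inputs.

```python
# Minimal sketch of the described Monte Carlo: draw planet masses and
# semimajor axes from assumed distributions, convert to projected separations,
# and check them against a hypothetical survey detection limit to estimate
# what fraction of such planets an imaging search should have seen.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

def sample_power_law(exponent, lo, hi, size):
    # Inverse-CDF sampling of p(x) proportional to x**exponent on [lo, hi].
    u = rng.random(size)
    k = exponent + 1.0
    return (lo**k + u * (hi**k - lo**k)) ** (1.0 / k)

# Assumed population: dN/dM ~ M^-1.3 over 1-15 M_Jup, dN/da ~ a^-0.6 over 5-500 AU.
mass = sample_power_law(-1.3, 1.0, 15.0, n)        # Jupiter masses
a = sample_power_law(-0.6, 5.0, 500.0, n)          # semimajor axis in AU

# Random orbital phase -> projected separation (circular, edge-on simplification).
sep = a * np.abs(np.sin(2 * np.pi * rng.random(n)))

distance_pc = 20.0                                  # assumed target distance
sep_arcsec = sep / distance_pc

# Hypothetical detection limit: planets above 5 M_Jup beyond 0.5" are detectable.
detectable = (mass > 5.0) & (sep_arcsec > 0.5)
print("fraction of simulated planets detectable: %.3f" % detectable.mean())
```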
Abstract:
Little is known about how the skills needed to perform ultrasound- or nerve stimulator-guided peripheral nerve blocks are learned. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance versus residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience of ultrasound received ultrasound training, and another ten residents with no previous experience of nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of data from our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique in combination with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster and lead to a higher final success rate than those required for nerve stimulator-guided axillary brachial plexus block.
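A minimal sketch of how a bootstrap over trainees combined with Monte Carlo resampling can yield an institutional learning curve with an uncertainty band is given below; the simulated trainees and success probabilities are illustrative assumptions, not the study's data.

```python
# Minimal sketch of pooling individual success/failure sequences and using a
# bootstrap over trainees to obtain an institutional learning curve with an
# uncertainty band. The simulated trainees are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_trainees, n_blocks = 10, 30

# Hypothetical individual sequences: success probability rises with experience.
p = 0.55 + 0.4 * (1 - np.exp(-np.arange(n_blocks) / 10.0))
outcomes = rng.random((n_trainees, n_blocks)) < p      # True = successful block

def learning_curve(data):
    # Cumulative success rate per block, pooled over the (resampled) trainees.
    return data.mean(axis=0)

boot = np.array([
    learning_curve(outcomes[rng.integers(0, n_trainees, n_trainees)])
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("block 1:  %.2f (%.2f-%.2f)" % (learning_curve(outcomes)[0], lo[0], hi[0]))
print("block 30: %.2f (%.2f-%.2f)" % (learning_curve(outcomes)[-1], lo[-1], hi[-1]))
```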
Abstract:
The present study was conducted to estimate the direct losses due to Neospora caninum in Swiss dairy cattle and to assess the costs and benefits of different potential control strategies. A Monte Carlo simulation spreadsheet module was developed to estimate the direct costs caused by N. caninum, with and without control strategies, and to estimate the costs of these control strategies in a financial analysis. The control strategies considered were "testing and culling of seropositive female cattle", "discontinued breeding with offspring from seropositive cows", "chemotherapeutical treatment of female offspring" and "vaccination of all female cattle". Each parameter in the module that was considered to be uncertain was described using a probability distribution. The simulations were run with 20,000 iterations over a time period of 25 years. The median annual losses due to N. caninum in the Swiss dairy cow population were estimated at 9.7 million euros. All control strategies that required yearly serological testing of all cattle in the population produced high costs and thus were not financially profitable. Among the other control strategies, two showed benefit-cost ratios (BCR) >1 and positive net present values (NPV): "discontinued breeding with offspring from seropositive cows" (BCR=1.29, NPV=25 million euros) and "chemotherapeutical treatment of all female offspring" (BCR=2.95, NPV=59 million euros). In economic terms, the best control strategy currently available would therefore be "discontinued breeding with offspring from seropositive cows".
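The kind of stochastic cost-benefit calculation described above can be sketched as follows: uncertain inputs are drawn from probability distributions, losses are simulated with and without a control strategy over 25 years, and NPV and BCR are derived; all distributions and monetary values below are illustrative assumptions.

```python
# Minimal sketch of a Monte Carlo cost-benefit calculation: uncertain inputs
# drawn from probability distributions, losses simulated with and without a
# control strategy over 25 years, then NPV and BCR derived. All distributions
# and monetary values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_iter, years, discount = 20_000, 25, 0.03
disc = 1.0 / (1.0 + discount) ** np.arange(years)

# Uncertain inputs (hypothetical): annual losses and effect of the strategy.
baseline_loss = rng.triangular(7e6, 9.7e6, 13e6, n_iter)       # EUR per year
loss_reduction = rng.uniform(0.3, 0.6, n_iter)                 # fraction avoided
control_cost = rng.normal(2.5e6, 0.5e6, n_iter)                # EUR per year

benefits = (baseline_loss * loss_reduction)[:, None] * disc    # avoided losses
costs = control_cost[:, None] * disc
npv = (benefits - costs).sum(axis=1)
bcr = benefits.sum(axis=1) / costs.sum(axis=1)

print("median NPV: %.1f million EUR" % (np.median(npv) / 1e6))
print("median BCR: %.2f, P(BCR > 1) = %.2f" % (np.median(bcr), (bcr > 1).mean()))
```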
Abstract:
The aim of our study was to develop a modeling framework suitable to quantify the incidence, absolute number and economic impact of osteoporosis-attributable hip, vertebral and distal forearm fractures, with a particular focus on change over time, and with application to the situation in Switzerland from 2000 to 2020. A Markov process model was developed and analyzed by Monte Carlo simulation. A demographic scenario provided by the Swiss Federal Statistical Office and various Swiss and international data sources were used as model inputs. Demographic and epidemiologic input parameters were reproduced correctly, confirming the internal validity of the model. The proportion of the Swiss population aged 50 years or over will rise from 33.3% in 2000 to 41.3% in 2020. At the total population level, osteoporosis-attributable incidence will rise from 1.16 to 1.54 per 1,000 person-years in the case of hip fracture, from 3.28 to 4.18 per 1,000 person-years in the case of radiographic vertebral fracture, and from 0.59 to 0.70 per 1,000 person-years in the case of distal forearm fracture. Osteoporosis-attributable hip fracture numbers will rise from 8,375 to 11,353, vertebral fracture numbers will rise from 23,584 to 30,883, and distal forearm fracture numbers will rise from 4,209 to 5,186. Population-level osteoporosis-related direct medical inpatient costs per year will rise from 713.4 million Swiss francs (CHF) to CHF 946.2 million. These figures correspond to 1.6% and 2.2% of Swiss health care expenditures in 2000. The modeling framework described can be applied to a wide variety of settings. It can be used to assess the impact of new prevention, diagnostic and treatment strategies. In Switzerland incidences of osteoporotic hip, vertebral and distal forearm fracture will rise by 33%, 27%, and 19%, respectively, between 2000 and 2020, if current prevention and treatment patterns are maintained. Corresponding absolute fracture numbers will rise by 36%, 31%, and 23%. Related direct medical inpatient costs are predicted to increase by 33%; however, this estimate is subject to uncertainty due to limited availability of input data.
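A minimal sketch of a Markov process analyzed by Monte Carlo simulation, in the spirit of the model described above, is shown below; the three health states, transition probabilities, and costs are illustrative assumptions rather than the study's calibrated inputs.

```python
# Minimal sketch of a Markov process analyzed by Monte Carlo simulation:
# individuals move yearly between "no fracture", "post-fracture" and "dead"
# states, and fracture events and inpatient costs are tallied. Transition
# probabilities and costs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
states = ["no_fracture", "post_fracture", "dead"]
P = np.array([                 # annual transition probabilities (assumed)
    [0.965, 0.015, 0.020],
    [0.000, 0.950, 0.050],
    [0.000, 0.000, 1.000],
])
cost_per_fracture = 30_000     # CHF, assumed inpatient cost of one fracture

n, years = 10_000, 20
state = np.zeros(n, dtype=int)
fractures, costs, person_years = 0, 0.0, 0
for _ in range(years):
    person_years += (state != 2).sum()              # alive at the start of the year
    u = rng.random(n)
    cum = np.cumsum(P[state], axis=1)
    new_state = (u[:, None] > cum).sum(axis=1)      # sample next state per person
    new_fracture = (state == 0) & (new_state == 1)
    fractures += new_fracture.sum()
    costs += new_fracture.sum() * cost_per_fracture
    state = new_state

print("fracture incidence per 1,000 person-years: %.2f" % (1000 * fractures / person_years))
print("total inpatient cost: CHF %.1f million" % (costs / 1e6))
```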
Abstract:
Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants on major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general and Cipriani's analysis in particular, raising the concern that the stated superiority of some antidepressants relative to others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted which involved replicating Cipriani's network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The following simulation strategy was implemented: (1) 1000 simulations were generated under the null hypothesis (i.e., under the assumption that there were no differences among the 12 antidepressants), (2) each of the 1000 simulations was network meta-analyzed, and (3) the total number of false positive results from the network meta-analyses was calculated. Findings: More than 7 times out of 10, the network meta-analysis resulted in one or more comparisons that indicated the superiority of at least one antidepressant when no such true differences among them existed. Interpretation: Based on our simulation study, the results indicated that under identical conditions to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.'s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have shown as well, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false positive results should be considered when conducting multiple comparison analyses.
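The simulation logic of steps (1)-(3) can be sketched as follows; for brevity, simple pairwise z-tests on simulated response rates stand in for a full network meta-analysis, and the trial dimensions are assumptions.

```python
# Minimal sketch of the simulation logic: generate trial data under the null
# hypothesis of no differences among treatments, analyze each simulated data
# set, and count how often at least one comparison is falsely "significant".
# Simple pairwise z-tests stand in here for a full network meta-analysis.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(5)
n_sims, n_treatments, n_per_arm, true_response = 1000, 12, 100, 0.5

false_positive_runs = 0
for _ in range(n_sims):
    responders = rng.binomial(n_per_arm, true_response, n_treatments)
    p_hat = responders / n_per_arm
    any_sig = False
    for i, j in combinations(range(n_treatments), 2):
        pooled = (responders[i] + responders[j]) / (2 * n_per_arm)
        se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
        z = (p_hat[i] - p_hat[j]) / se
        if 2 * (1 - stats.norm.cdf(abs(z))) < 0.05:
            any_sig = True
            break
    false_positive_runs += any_sig

print("runs with >= 1 false positive: %.1f%%" % (100 * false_positive_runs / n_sims))
```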
Abstract:
Oxygen-isotope variations were analyzed on bulk samples of shallow-water lake marl from Gerzensee, Switzerland, in order to evaluate major and minor climatic oscillations during the late-glacial. To highlight the overall signature of the Gerzensee δ18O record, δ18O records of four parallel sediment cores were first correlated by synchronizing major isotope shifts and pollen abundances. Then the records were stacked, with weights depending on their differing sampling resolutions. To develop a precise chronology, the δ18O stack was then correlated with the NGRIP δ18O record by applying a Monte Carlo simulation, relying on the assumption that the shifts in δ18O were climate-driven and synchronous in both archives. The established chronology on the GICC05 time scale is the basis for (1) comparing the δ18O changes recorded in Gerzensee with observed climatic and environmental fluctuations over the whole North Atlantic region, and (2) comparing sedimentological and biological changes during the rapid warming with smaller climatic variations during the Bølling/Allerød period. The δ18O record of Gerzensee is characterized by two major isotope shifts at the onset and at the termination of the Bølling/Allerød warm period, as well as four intervening negative shifts, labeled GI-1e2, d, c2, and b, whose amplitudes are one third to one fourth of those of the major δ18O shifts at the beginning and end of the Bølling/Allerød. Despite some inconsistency in terminology, these oscillations can be observed in various climatic proxies over wide areas of the North Atlantic region, especially as reconstructed colder temperatures, and they seem to be caused by hemispheric climatic variations.
Abstract:
The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of √s = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decays of K_S and Λ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.
Abstract:
The inclusive jet cross-section has been measured in proton-proton collisions at √s = 2.76 TeV in a dataset corresponding to an integrated luminosity of 0.20 pb⁻¹ collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-k_t algorithm with two radius parameters of 0.4 and 0.6. The inclusive jet double-differential cross-section is presented as a function of the jet transverse momentum p_T and jet rapidity y, covering a range of 20 ≤ p_T < 430 GeV and |y| < 4.4. The ratio of the cross-section to the inclusive jet cross-section measurement at √s = 7 TeV, published by the ATLAS Collaboration, is calculated as a function of both transverse momentum and the dimensionless quantity x_T = 2p_T/√s, in bins of jet rapidity. The systematic uncertainties on the ratios are significantly reduced due to the cancellation of correlated uncertainties in the two measurements. Results are compared to the prediction from next-to-leading order perturbative QCD calculations corrected for non-perturbative effects, and next-to-leading order Monte Carlo simulation. Furthermore, the ATLAS jet cross-section measurements at √s = 2.76 TeV and √s = 7 TeV are analysed within a framework of next-to-leading order perturbative QCD calculations to determine parton distribution functions of the proton, taking into account the correlations between the measurements.
Abstract:
The measurement of the jet energy resolution is presented using data recorded with the ATLAS detector in proton-proton collisions at √s = 7 TeV. The sample corresponds to an integrated luminosity of 35 pb⁻¹. Jets are reconstructed from energy deposits measured by the calorimeters and calibrated using different jet calibration schemes. The jet energy resolution is measured with two different in situ methods which are found to be in agreement within uncertainties. The total uncertainties on these measurements range from 20% to 10% for jets within |y| < 2.8 and with transverse momenta increasing from 30 GeV to 500 GeV. Overall, the Monte Carlo simulation of the jet energy resolution agrees with the data within 10%.