973 results for Monte Carlo study
Abstract:
The risk of second malignant neoplasms (SMNs) following prostate radiotherapy is a concern because of the large population of survivors and the decreasing age at diagnosis. It is known that parallel-opposed beam proton therapy carries a lower risk than photon IMRT; however, a comparison of SMN risk following proton and photon arc therapies has not previously been reported. The purpose of this study was to predict the ratio of excess relative risk (RRR) of SMN incidence following proton arc therapy to that following volumetric modulated arc therapy (VMAT). Additionally, we investigated the impact of margin size and the effect of risk-minimized proton beam weighting on the predicted RRR. Physician-approved treatment plans were created for both modalities for three patients. Therapeutic dose was obtained from differential dose-volume histograms from the treatment planning system, and stray dose was estimated from the literature or calculated with Monte Carlo simulations. Various risk models were then applied to the total dose. Additional treatment plans with varying margin size and with risk-minimized proton beam weighting were also investigated. The mean RRR ranged from 0.74 to 0.99, depending on the risk model. The additional treatment plans revealed that the RRR remained approximately constant with varying margin size and that the predicted RRR was reduced by 12% using a risk-minimized proton arc therapy planning technique. In conclusion, proton arc therapy was found to provide an advantage over VMAT with regard to the predicted risk of SMN following prostate radiotherapy. This advantage was independent of margin size and was amplified with risk-optimized proton beam weighting.
Abstract:
Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants for major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general and about Cipriani's analysis in particular, creating the concern that the stated superiority of some antidepressants relative to others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted that involved replicating Cipriani's network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The following simulation strategy was implemented: (1) 1000 simulations were generated under the null hypothesis (i.e., under the assumption that there were no differences among the 12 antidepressants), (2) each of the 1000 simulated data sets was network meta-analyzed, and (3) the total number of false-positive results from the network meta-analyses was calculated. Findings: More than 7 times out of 10, the network meta-analysis produced one or more comparisons indicating the superiority of at least one antidepressant when no such true differences existed. Interpretation: Based on our simulation study, under conditions identical to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.'s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have also shown, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false-positive results should be considered when conducting multiple-comparison analyses.
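A minimal sketch of the simulation strategy described in this abstract, written in Python under simplifying assumptions: trial data are generated under the null hypothesis and the number of runs yielding at least one false-positive comparison is counted. For brevity, unadjusted pairwise z-tests on response rates stand in for a full network meta-analysis, and the arm size and response rate are illustrative values, not those of the 117 RCTs.

```python
# Sketch only: counts how often >=1 of the 66 pairwise comparisons among 12
# treatments is falsely declared significant when no true differences exist.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
n_treatments, n_trials = 12, 117
arm_size = 100                     # illustrative arm size (assumption)
true_response_rate = 0.5           # identical for every treatment: the null hypothesis
n_sims, alpha = 1000, 0.05

false_positive_runs = 0
for _ in range(n_sims):
    # pool responder counts per treatment across the simulated trial arms
    n_patients = arm_size * (2 * n_trials // n_treatments)
    responders = rng.binomial(n_patients, true_response_rate, size=n_treatments)
    p_hat = responders / n_patients

    any_significant = False
    for i, j in combinations(range(n_treatments), 2):
        p_pool = (responders[i] + responders[j]) / (2 * n_patients)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n_patients)
        z = (p_hat[i] - p_hat[j]) / se
        if 2 * stats.norm.sf(abs(z)) < alpha:      # two-sided p-value
            any_significant = True
            break
    false_positive_runs += any_significant

print(f"simulations with at least one false positive: "
      f"{100 * false_positive_runs / n_sims:.1f}%")
```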
Abstract:
With the observation that stochasticity is important in biological systems, chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. Continuous-time models, on the other hand, are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency with the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
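As a concrete reference point for the exact stochastic algorithm mentioned above, here is a minimal Python sketch of Gillespie's direct method for a toy birth-death reaction; the reaction set and rate constants are illustrative assumptions, not the paper's models. In the hybrid scheme described in the abstract, only low-frequency processes would pass through such a loop, while high-frequency processes are integrated with deterministic rate equations.

```python
# Sketch only: Gillespie's direct method for 0 -> X (rate k_birth) and
# X -> 0 (rate k_death * X). Rates and initial state are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def gillespie_birth_death(x0=10, k_birth=5.0, k_death=0.1, t_end=100.0):
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        propensities = np.array([k_birth, k_death * x])
        a0 = propensities.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)               # time to the next reaction
        reaction = rng.choice(2, p=propensities / a0)  # which reaction fires
        x += 1 if reaction == 0 else -1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
print(f"final copy number after {times[-1]:.1f} s: {counts[-1]}")
```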
Abstract:
This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another, and (2) an extant data set derived from the National AIDS Demonstration Research (NADR) project, whose characteristics were matched to the simulation results. The four methods were then used to calculate CIs for this data set, and the results were compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was also estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30%, for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method at higher values of the CV of effectiveness, given the correlation between average costs and effects. The results for the data set indicated that the bias-corrected CIs were wider than the percentile-method CIs, in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study; however, the Taylor method is preferred for a low CV of effectiveness, and the percentile method is more favorable for a higher CV of effectiveness.
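As an illustration of one of the four approaches compared above, the following Python sketch computes a percentile-bootstrap confidence interval for a cost-effectiveness ratio. The per-patient cost and effect samples are synthetic assumptions, not the NADR data, and the Taylor, Fieller, and bias-corrected variants are not reproduced here.

```python
# Sketch only: percentile-bootstrap CI for the ratio of mean cost to mean effect.
import numpy as np

rng = np.random.default_rng(2)

# hypothetical per-patient incremental costs and effects (assumptions)
costs   = rng.normal(1000.0, 200.0, size=150)
effects = rng.normal(0.8, 0.3, size=150)

def percentile_bootstrap_ci(costs, effects, n_boot=5000, level=0.95):
    n = len(costs)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample patients with replacement
        ratios[b] = costs[idx].mean() / effects[idx].mean()
    lower = np.percentile(ratios, 100 * (1 - level) / 2)
    upper = np.percentile(ratios, 100 * (1 + level) / 2)
    return lower, upper

lo, hi = percentile_bootstrap_ci(costs, effects)
print(f"95% percentile-bootstrap CI for the CE ratio: [{lo:.1f}, {hi:.1f}]")
```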
Abstract:
The decomposition of soil organic matter (SOM) is temperature dependent, but its response to a future warmer climate remains equivocal. Enhanced rates of SOM decomposition under increased global temperatures might cause higher CO2 emissions to the atmosphere and could therefore constitute a strong positive feedback. The magnitude of this feedback remains poorly understood, however, primarily because of the difficulty in quantifying the temperature sensitivity of stored, recalcitrant carbon that comprises the bulk (>90%) of SOM in most soils. In this study we investigated the effects of climatic conditions on soil carbon dynamics using the attenuation of the 14C 'bomb' pulse as recorded in selected modern European speleothems. These new data were combined with published results to further examine soil carbon dynamics and to explore the sensitivity of labile and recalcitrant organic matter decomposition to different climatic conditions. Temporal changes in 14C activity inferred from each speleothem were modelled using a three-pool soil carbon inverse model (applying a Monte Carlo method) to constrain soil carbon turnover rates at each site. Speleothems from sites characterised by semi-arid conditions, sparse vegetation, thin soil cover and high mean annual air temperatures (MAATs) exhibit weak attenuation of the atmospheric 14C 'bomb' peak (a low damping effect, D, in the range 55–77%) and low modelled mean respired carbon ages (MRCA), indicating that decomposition is dominated by young, recently fixed soil carbon. By contrast, humid and high-MAAT sites characterised by a thick soil cover and dense, well-developed vegetation display the highest damping effect (D = c. 90%) and the highest MRCA values (in the range from 350 ± 126 years to 571 ± 128 years). This suggests that the carbon incorporated into these stalagmites originates predominantly from the decomposition of old, recalcitrant organic matter. SOM turnover rates cannot be ascribed to a single climate variable (e.g. MAAT), but instead reflect a complex interplay of climate (e.g. MAAT and moisture budget) and vegetation development.
Abstract:
We show that exotic phases arise in generalized lattice gauge theories known as quantum link models, in which classical gauge fields are replaced by quantum operators. While these quantum models with discrete variables have a finite-dimensional Hilbert space per link, the continuous gauge symmetry is still exact. An efficient cluster algorithm is used to study these exotic phases. The (2+1)-d system is confining at zero temperature with a spontaneously broken translation symmetry. A crystalline phase exhibits confinement via multi-stranded strings between charge-anti-charge pairs. A phase transition between two distinct confined phases is weakly first order and has an emergent spontaneously broken approximate SO(2) global symmetry. The low-energy physics is described by a (2+1)-d RP(1) effective field theory, perturbed by a dangerously irrelevant SO(2)-breaking operator, which prevents the interpretation of the emergent pseudo-Goldstone boson as a dual photon. This model is an ideal candidate to be implemented in quantum simulators to study phenomena that are not accessible to Monte Carlo simulations, such as the real-time evolution of the confining string and the real-time dynamics of the pseudo-Goldstone boson.
Abstract:
Photopolymerized hydrogels are commonly used for a broad range of biomedical applications. As long as the polymer volume is accessible, gels can easily be hardened using light illumination. In the clinic, however, and especially in minimally invasive surgery, controlling photopolymerization becomes highly challenging. The ratio between the polymerization volume and the radiating surface area is several orders of magnitude higher than in ex-vivo settings, and tissue scattering occurs and influences the reaction. We developed a Monte Carlo model for photopolymerization that takes into account solid/liquid phase changes, moving solid/liquid boundaries and refraction at these boundaries, as well as tissue scattering, in arbitrarily designable tissue cavities. The model provides a tool to tailor both the light probe and the scattering/absorption properties of the photopolymer for applications such as medical implants or tissue replacements. Based on the simulations, we have previously shown that adding scattering additives to the liquid monomer considerably increases the photopolymerized volume. In this study, we used bovine intervertebral disc cavities, created by enzyme digestion, as a model for spinal degeneration to study photopolymerization in vitro. Using a custom-designed probe, hydrogels were injected and photopolymerized. Magnetic resonance imaging (MRI) and visual inspection were employed to assess the photopolymerization outcomes. The results provide insights for the development of novel endoscopic light-scattering polymerization probes, paving the way for a new generation of implantable hydrogels.
Abstract:
Caregiving for individuals with Alzheimer's disease is associated with chronic stress and elevated symptoms of depression. Placement of the care receiver (CR) into a long-term care setting may be associated with improved caregiver well-being; however, the psychological mechanisms underlying this relationship are unclear. This study evaluated whether decreases in activity restriction and increases in personal mastery mediated placement-related reductions in caregiver depressive symptoms. In a 5-year longitudinal study of 126 spousal Alzheimer's disease caregivers, we used multilevel models to evaluate placement-related changes in depressive symptoms (short form of the Center for Epidemiologic Studies Depression scale), activity restriction (Activity Restriction Scale), and personal mastery (Pearlin Mastery Scale) in 44 caregivers who placed their spouses into long-term care relative to caregivers who never placed their CRs. The Monte Carlo method for assessing mediation was used to evaluate the significance of the indirect effect of activity restriction and personal mastery on postplacement changes in depressive symptoms. Placement of the CR was associated with significant reductions in depressive symptoms and activity restriction and was also associated with increased personal mastery. Lower activity restriction and higher personal mastery were associated with reduced depressive symptoms. Furthermore, both variables significantly mediated the effect of placement on depressive symptoms. Placement-related reductions in activity restriction and increases in personal mastery are important psychological factors that help explain postplacement reductions in depressive symptoms. The implications for clinical care provided to caregivers are discussed.
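For context, the Monte Carlo method for assessing mediation referenced above can be sketched as follows: the a-path and b-path coefficients are drawn repeatedly from normal distributions defined by their point estimates and standard errors, and percentiles of the product a*b form a confidence interval for the indirect effect. The coefficient values in this Python sketch are illustrative assumptions, not estimates from the caregiver study.

```python
# Sketch only: Monte Carlo confidence interval for an indirect (mediated) effect.
import numpy as np

rng = np.random.default_rng(3)

def monte_carlo_indirect_ci(a, se_a, b, se_b, n_draws=20000, level=0.95):
    a_draws = rng.normal(a, se_a, n_draws)     # a-path: predictor -> mediator
    b_draws = rng.normal(b, se_b, n_draws)     # b-path: mediator -> outcome
    indirect = a_draws * b_draws
    lower = np.percentile(indirect, 100 * (1 - level) / 2)
    upper = np.percentile(indirect, 100 * (1 + level) / 2)
    return lower, upper

# hypothetical estimates, e.g. placement -> activity restriction (a) and
# activity restriction -> depressive symptoms (b)
lo, hi = monte_carlo_indirect_ci(a=-0.40, se_a=0.10, b=0.35, se_b=0.08)
print(f"95% Monte Carlo CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# the indirect effect is deemed significant if this interval excludes zero
```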
Abstract:
The sensitivity of the gas flow field to changes in different initial conditions has been studied for the case of a highly simplified cometary nucleus model. The nucleus model simulated a homogeneously outgassing sphere with a more active ring around an axis of symmetry. The varied initial conditions were the number density of the homogeneous region, the surface temperature, and the composition of the flow (varying amounts of H2O and CO2) from the active ring. The sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) method. Direct Simulation Monte Carlo (DSMC) was used for the flow, thereby allowing strong deviations from local thermal equilibrium. The PCE approach can produce a sensitivity analysis with only four runs per modified input parameter and allows one to study and quantify non-linear responses of measurable parameters to linear changes in the input over a wide range. The PCE hence yields a functional relationship between the flow field properties at every point in the inner coma and the input conditions. It is shown, for example, that the velocity and the temperature of the background gas are not simply linear functions of the initial number density at the source. As expected, the main influence on each resulting flow field parameter is the corresponding initial parameter (i.e. the initial number density determines the background number density, the surface temperature determines the flow field temperature, etc.). However, the velocity of the flow field is also influenced by the surface temperature, while the number density is not sensitive to the surface temperature at all in our model set-up. Another example is a change in the composition of the flow over the active area: such changes can be seen in the velocity but, again, not in the number density. Although this study uses only a simple test case, we suggest that the approach, when applied to a real case in 3D, should assist in identifying the sensitivity of gas parameters measured in situ by, for example, the Rosetta spacecraft to the surface boundary conditions, and vice versa.
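As a simplified illustration of the "four runs per modified input parameter" idea, the following Python sketch fits a cubic polynomial surrogate to a toy forward model evaluated at four input values and reads off the response and its local sensitivity over the input range. The toy model stands in for a DSMC run, and a full PCE with orthogonal basis functions and quadrature is not reproduced here; all names and values are assumptions for illustration.

```python
# Sketch only: cubic polynomial surrogate from four forward-model runs.
import numpy as np

def toy_flow_model(surface_temperature):
    # stand-in for a DSMC run: a mildly non-linear gas-velocity response (assumption)
    return 300.0 + 0.8 * surface_temperature - 0.001 * surface_temperature**2

# four runs per modified input parameter, as in the PCE set-up described above
temps = np.linspace(150.0, 250.0, 4)
velocities = np.array([toy_flow_model(T) for T in temps])

coeffs = np.polyfit(temps, velocities, deg=3)   # cubic surrogate
surrogate = np.poly1d(coeffs)
sensitivity = surrogate.deriv()                 # d(velocity)/d(temperature)

for T in np.linspace(150.0, 250.0, 5):
    print(f"T = {T:6.1f} K  velocity ≈ {surrogate(T):7.2f}  "
          f"d(velocity)/dT ≈ {sensitivity(T):.3f}")
```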
Abstract:
31P MRS magnetization transfer (31P-MT) experiments allow the estimation of exchange rates of biochemical reactions, such as the creatine kinase equilibrium and adenosine triphosphate (ATP) synthesis. Although various 31P-MT methods have been used successfully on isolated organs or animals, their application to humans in clinical scanners poses specific challenges. This study compared two major 31P-MT methods on a clinical MR system using heteronuclear surface coils. Although saturation transfer (ST) is the most commonly used 31P-MT method, sequences such as inversion transfer (IT) with short pulses might be better suited to the specific hardware and software limitations of a clinical scanner. In addition, small NMR-undetectable metabolite pools can transfer magnetization to NMR-visible pools during long saturation pulses, which is prevented with short pulses. The 31P-MT sequences were adapted for limited pulse length, for heteronuclear transmit-receive surface coils with inhomogeneous B1, for the need for volume selection, and for the inherently low signal-to-noise ratio (SNR) of a clinical 3-T MR system. The ST and IT sequences were applied to skeletal muscle and liver in 10 healthy volunteers. Monte Carlo simulations were used to evaluate the behavior of the IT measurements with increasing imperfections. In skeletal muscle of the thigh, ATP synthesis yielded forward reaction constants (k) of 0.074 ± 0.022 s⁻¹ (ST) and 0.137 ± 0.042 s⁻¹ (IT), whereas the creatine kinase reaction yielded 0.459 ± 0.089 s⁻¹ (IT). In the liver, ATP synthesis resulted in k = 0.267 ± 0.106 s⁻¹ (ST), whereas the IT experiment yielded no consistent results. The ST results were close to literature values; however, the IT results were either much larger than the corresponding ST values and/or widely scattered. In summary, ST and IT experiments can both be implemented on a clinical body scanner with heteronuclear transmit-receive surface coils; however, the ST results are much more robust against experimental imperfections than the current implementation of IT.
Abstract:
Distributions sensitive to the underlying event in QCD jet events have been measured with the ATLAS detector at the LHC, based on 37 pb−1 of proton–proton collision data collected at a centre-of-mass energy of 7 TeV. Charged-particle mean pT and densities of all-particle ET and of charged-particle multiplicity and pT have been measured in regions azimuthally transverse to the hardest jet in each event. These are presented both as one-dimensional distributions and with their mean values as functions of the leading-jet transverse momentum from 20 to 800 GeV. The correlation of charged-particle mean pT with charged-particle multiplicity is also studied, and the ET densities include the forward rapidity region; these features provide extra data constraints for Monte Carlo modelling of colour reconnection and beam-remnant effects, respectively. For the first time, underlying event observables have been computed separately for inclusive jet and exclusive dijet event selections, allowing a more detailed study of the interplay of multiple partonic scattering and QCD radiation contributions to the underlying event. Comparisons to the predictions of different Monte Carlo models show a need for further model tuning, but the standard approach is found to generally reproduce the features of the underlying event in both types of event selection.
Abstract:
This paper presents a study of the performance of the muon reconstruction in the analysis of proton–proton collisions at √s = 7 TeV at the LHC, recorded by the ATLAS detector in 2010. This performance is described in terms of reconstruction and isolation efficiencies and momentum resolutions for different classes of reconstructed muons. The results are obtained from an analysis of J/ψ meson and Z boson decays to dimuons, reconstructed from a data sample corresponding to an integrated luminosity of 40 pb−1. The measured performance is compared to Monte Carlo predictions, and deviations from the predicted performance are discussed.
Abstract:
XENON is a dark matter direct detection project consisting of a time projection chamber (TPC) filled with liquid xenon as the detection medium. Construction of the next-generation detector, XENON1T, is presently taking place at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. It aims at a sensitivity to spin-independent cross sections of 2 × 10⁻⁴⁷ cm² for WIMP masses around 50 GeV/c², which requires a background reduction by two orders of magnitude compared to XENON100, the current-generation detector. An active system that is able to tag muons and muon-induced backgrounds is critical for this goal. A water Cherenkov detector of ~10 m height and diameter has therefore been developed, equipped with 8-inch photomultipliers and clad with a reflective foil. We present the design and optimization study for this detector, which has been carried out with a series of Monte Carlo simulations. The muon veto will reach very high detection efficiencies for muons (>99.5%) and for showers of secondary particles from muon interactions in the rock (>70%). Similar efficiencies will be obtained for XENONnT, the upgrade of XENON1T, which will later improve the WIMP sensitivity by another order of magnitude. With the Cherenkov water shield studied here, the background from muon-induced neutrons in XENON1T is negligible.