978 results for Monte Carlo simulations


Relevance: 90.00%

Abstract:

Clays and claystones are used as backfill and barrier materials in the design of waste repositories, because they act as hydraulic barriers and retain contaminants. Transport through such barriers occurs mainly by molecular diffusion. There is thus an interest in relating the diffusion properties of clays to their structural properties. In previous work, we developed a concept for up-scaling pore-scale molecular diffusion coefficients using a grid-based model of the sample pore structure. Here we present an operational algorithm which can generate such model pore structures for polymineral materials. The obtained pore maps match the rock's mineralogical components and its macroscopic properties such as porosity and grain and pore size distributions. Representative ensembles of grains in 2D or 3D are created by a lattice Monte Carlo (MC) method, which minimizes the interfacial energy of grains starting from an initial grain distribution. Pores are generated at grain boundaries and/or within grains. The method is general and can generate anisotropic structures with grains of approximately predetermined shapes, or with mixtures of different grain types. A specific focus of this study was the simulation of clay-like materials. The generated clay pore maps were then used to derive upscaled effective diffusion coefficients for non-sorbing tracers using a homogenization technique. The large number of generated maps allowed us to check the relations between micro-structural features of clays and their effective transport parameters, as is required to explain and extrapolate experimental diffusion results. As examples, we present a set of 2D and 3D simulations in which we investigated the effects of nanopores within particles (interlayer pores) and of micropores between particles. Archie's simple power law is followed in systems with only micropores. When nanopores are present, additional parameters are required; the data reveal that effective diffusion coefficients can be described by a sum of two power functions, related to the micro- and nanoporosity. We further used the model to investigate the relationships between particle orientation and effective transport properties of the sample.
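
Schematically, the two regimes described above correspond to Archie-type relations of the following form (the exponents and prefactors are generic placeholders, not the fitted values of this study):

    % Archie's law: effective diffusivity in a system with only micropores
    D_{\mathrm{eff}} = D_0\,\phi^{m}
    % Schematic two-term generalization when nanopores (interlayer pores) are present
    D_{\mathrm{eff}} = a\,\phi_{\mathrm{micro}}^{m_1} + b\,\phi_{\mathrm{nano}}^{m_2}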

Relevance: 90.00%

Abstract:

Radiocarbon production, solar activity, total solar irradiance (TSI) and solar-induced climate change are reconstructed for the Holocene (10 to 0 kyr BP), and TSI is predicted for the next centuries. The IntCal09/SHCal04 radiocarbon and ice core CO2 records, reconstructions of the geomagnetic dipole, and instrumental data of solar activity are applied in the Bern3D-LPJ, a fully featured Earth system model of intermediate complexity including a 3-D dynamic ocean, ocean sediments, and a dynamic vegetation model, and in formulations linking radiocarbon production, the solar modulation potential, and TSI. Uncertainties are assessed using Monte Carlo simulations and bounding scenarios. Transient climate simulations span the past 21 thousand years, thereby accounting for the time lags and uncertainties associated with the last glacial termination. Our carbon-cycle-based modern estimate of radiocarbon production of 1.7 atoms cm^-2 s^-1 is lower than previously reported for the cosmogenic nuclide production model by Masarik and Beer (2009) and is more in line with Kovaltsov et al. (2012). In contrast to earlier studies, periods of high solar activity were quite common not only in recent millennia, but throughout the Holocene. Notable deviations from earlier reconstructions are also found on decadal to centennial timescales. We show that earlier Holocene reconstructions, which do not account for the interhemispheric gradients in radiocarbon, are biased low. Solar activity is higher than the modern average (650 MeV) during 28% of the time, but the absolute values remain weakly constrained due to uncertainties in the normalisation of the solar modulation to instrumental data. A recently published solar activity–TSI relationship yields small changes in Holocene TSI of the order of 1 W m^-2, with a Maunder Minimum irradiance reduction of 0.85 ± 0.16 W m^-2. Related solar-induced variations in global mean surface air temperature are simulated to be within 0.1 K. Autoregressive modelling suggests a declining trend of solar activity in the 21st century towards average Holocene conditions.
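
A minimal sketch of the Monte Carlo uncertainty-propagation pattern used for such assessments; the toy model and parameter spreads below are illustrative placeholders, not those of the Bern3D-LPJ analysis:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000  # size of the Monte Carlo ensemble

    # Illustrative uncertain inputs (spreads are placeholders, not the study's values)
    q    = rng.normal(1.7, 0.2, n)     # radiocarbon production [atoms cm^-2 s^-1]
    phi0 = rng.normal(650.0, 50.0, n)  # normalisation of the solar modulation potential [MeV]

    def toy_tsi_anomaly(q, phi0):
        # Stand-in for the model chain: production -> modulation potential -> TSI
        return 1.0e-3 * (phi0 - 650.0) - 0.5 * (q - 1.7)

    ensemble = toy_tsi_anomaly(q, phi0)
    lo, med, hi = np.percentile(ensemble, [5, 50, 95])
    print(f"TSI anomaly: {med:+.3f} W m^-2 (90% range {lo:+.3f} to {hi:+.3f})")

Each ensemble member draws all uncertain inputs jointly and is pushed through the full model chain, so the output percentiles reflect the combined input uncertainty.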

Relevance: 90.00%

Abstract:

The risk of second malignant neoplasms (SMNs) following prostate radiotherapy is a concern due to the large population of survivors and the decreasing age at diagnosis. It is known that parallel-opposed-beam proton therapy carries a lower risk than photon IMRT. However, a comparison of SMN risk following proton and photon arc therapies has not previously been reported. The purpose of this study was to predict the ratio of excess relative risk (RRR) of SMN incidence following proton arc therapy to that after volumetric modulated arc therapy (VMAT). Additionally, we investigated the impact of margin size and the effect of risk-minimized proton beam weighting on the predicted RRR. Physician-approved treatment plans were created for both modalities for three patients. Therapeutic dose was obtained from differential dose-volume histograms in the treatment planning system, and stray dose was estimated from the literature or calculated with Monte Carlo simulations. Various risk models were then applied to the total dose. Additional treatment plans with varying margin size and risk-minimized proton beam weighting were also investigated. The mean RRR ranged from 0.74 to 0.99, depending on the risk model. The additional treatment plans revealed that the RRR remained approximately constant with varying margin size, and that the predicted RRR was reduced by 12% using a risk-minimized proton arc therapy planning technique. In conclusion, proton arc therapy was found to provide an advantage over VMAT with regard to the predicted risk of SMN following prostate radiotherapy. This advantage was independent of margin size and was amplified with risk-optimized proton beam weighting.
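
A minimal sketch of the risk-ratio computation, assuming a generic linear-exponential dose-response; the parameter values and DVH shapes are illustrative placeholders, not the risk models or treatment plans of the study:

    import numpy as np

    def excess_relative_risk(dose_gy, vol_frac, alpha=0.1, beta=0.05):
        """Organ ERR from a differential DVH (bin doses + fractional volumes).

        Linear-exponential dose-response; alpha and beta are generic
        placeholders, not the fitted parameters of any published model."""
        return np.sum(vol_frac * alpha * dose_gy * np.exp(-beta * dose_gy))

    # Toy differential DVHs for one organ at risk (140 bins up to 70 Gy)
    dose = np.linspace(0.25, 70.0, 140)
    dvh_proton = np.exp(-dose / 10.0); dvh_proton /= dvh_proton.sum()
    dvh_vmat   = np.exp(-dose / 15.0); dvh_vmat   /= dvh_vmat.sum()

    rrr = excess_relative_risk(dose, dvh_proton) / excess_relative_risk(dose, dvh_vmat)
    print(f"predicted RRR (proton arc / VMAT): {rrr:.2f}")

In the study this evaluation is repeated for several risk models and includes the stray-dose contribution, which is why the reported mean RRR spans a range rather than a single value.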

Relevance: 90.00%

Abstract:

Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants for major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general, and about Cipriani's analysis in particular, raising the concern that the stated superiority of some antidepressants relative to others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted which involved replicating Cipriani's network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The following simulation strategy was implemented: (1) 1000 simulations were generated under the null hypothesis (i.e., under the assumption that there were no differences among the 12 antidepressants), (2) each of the 1000 simulations was network meta-analyzed, and (3) the total number of false positive results from the network meta-analyses was calculated. Findings: More than 7 times out of 10, the network meta-analysis resulted in one or more comparisons that indicated the superiority of at least one antidepressant when no such true differences among them existed. Interpretation: Based on our simulation study, the results indicated that under conditions identical to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.'s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have shown as well, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false positive results should be considered when conducting multiple-comparison analyses.
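
The multiplicity mechanism behind this result is easy to reproduce in simplified form. A minimal sketch, assuming normally distributed outcomes and using simple pairwise t-tests as a stand-in for the full network meta-analysis of the 117-trial network:

    import numpy as np
    from itertools import combinations
    from scipy import stats

    rng = np.random.default_rng(1)
    n_sims, n_drugs, n_per_arm = 1000, 12, 100  # arm size is illustrative

    n_false = 0
    for _ in range(n_sims):
        # Under the null, all 12 drugs share the same response distribution
        arms = rng.normal(0.0, 1.0, size=(n_drugs, n_per_arm))
        # Evaluate all pairwise comparisons at alpha = 0.05
        n_false += any(stats.ttest_ind(arms[i], arms[j]).pvalue < 0.05
                       for i, j in combinations(range(n_drugs), 2))

    print(f"fraction of simulations with >= 1 false positive: {n_false / n_sims:.2f}")

With 66 pairwise comparisons among 12 treatments, the chance that at least one comparison crosses the nominal threshold by luck alone is far above 5%, which is the inflation the full simulation quantifies.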

Relevance: 90.00%

Abstract:

The main goal of the AEgIS experiment at CERN is to test the weak equivalence principle for antimatter. AEgIS will measure the free fall of an antihydrogen beam traversing a moiré deflectometer. The goal is to determine the gravitational acceleration with an initial relative accuracy of 1% by using an emulsion detector combined with a silicon μ-strip detector to measure the time of flight. Nuclear emulsions can measure the annihilation vertex of antihydrogen atoms with a precision of ~1–2 μm r.m.s. We present here results for emulsion detectors operated in vacuum using low-energy antiprotons from the CERN antiproton decelerator. We compare with Monte Carlo simulations, and discuss the impact on the AEgIS project.

Relevance: 90.00%

Abstract:

The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of sqrt(s) = 7 TeV, corresponding to an integrated luminosity of 38 pb^-1. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT > 20 GeV and pseudorapidities |eta| < 4.5. The JES systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams. The JES uncertainty is less than 2.5% in the central calorimeter region (|eta| < 0.8) for jets with 60 < pT < 800 GeV, and is maximally 14% for pT < 30 GeV in the most forward region 3.2 < |eta| < 4.5. The uncertainty due to additional energy from multiple proton-proton collisions in the same bunch crossing is less than 1.5% per additional collision for jets with pT > 50 GeV after a dedicated correction for this effect. The JES is validated for jet transverse momenta up to 1 TeV to the level of a few percent using several in situ techniques, by comparing with a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented, based on calorimeter cell energy density weighting or hadronic properties of jets, providing an improved jet energy resolution and a reduced flavour dependence of the jet response. The JES systematic uncertainty determined from a combination of in situ techniques is consistent with that derived from single-hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets.

Relevance: 90.00%

Abstract:

In this contribution, a first look at simulations using maximally twisted mass Wilson fermions at the physical point is presented. A lattice action including clover and twisted mass terms is presented and the Monte Carlo histories of one run with two mass-degenerate flavours at a single lattice spacing are shown. Measurements from the light and heavy-light pseudoscalar sectors are compared to previous Nf = 2 results and their phenomenological values. Finally, the strategy for extending simulations to Nf = 2+1+1 is outlined.
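
For orientation, the lattice fermion action referred to here combines the Wilson term, a clover term and a twisted mass term; in the standard notation of the literature it takes the schematic form (sign and normalisation conventions vary between collaborations):

    S_F = a^4 \sum_x \bar\chi(x)\,\Big[ D_W + m_0
          + \tfrac{i}{4}\, c_{\mathrm{SW}}\, \sigma_{\mu\nu} F_{\mu\nu}
          + i\,\mu\,\gamma_5 \tau^3 \Big]\,\chi(x)

Here D_W is the massless Wilson operator, mu is the twisted mass, tau^3 acts in the flavour space of the mass-degenerate doublet chi, and c_SW is the Sheikholeslami-Wohlert (clover) coefficient; maximal twist corresponds to tuning the untwisted mass m_0 to its critical value.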

Relevance: 90.00%

Abstract:

We show that exotic phases arise in generalized lattice gauge theories known as quantum link models, in which classical gauge fields are replaced by quantum operators. While these quantum models with discrete variables have a finite-dimensional Hilbert space per link, the continuous gauge symmetry is still exact. An efficient cluster algorithm is used to study these exotic phases. The (2+1)-d system is confining at zero temperature with a spontaneously broken translation symmetry. A crystalline phase exhibits confinement via multi-stranded strings between charge-anti-charge pairs. A phase transition between two distinct confined phases is weakly first order and has an emergent spontaneously broken approximate SO(2) global symmetry. The low-energy physics is described by a (2+1)-d RP(1) effective field theory, perturbed by a dangerously irrelevant SO(2)-breaking operator, which prevents the interpretation of the emergent pseudo-Goldstone boson as a dual photon. This model is an ideal candidate to be implemented in quantum simulators to study phenomena that are not accessible to Monte Carlo simulations, such as the real-time evolution of the confining string and the real-time dynamics of the pseudo-Goldstone boson.
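
For context, a U(1) quantum link Hamiltonian of the kind studied here is conventionally written as (schematic form; coupling conventions vary):

    H = -J \sum_{\Box} \left( U_{\Box} + U_{\Box}^{\dagger} \right)
        + \lambda \sum_{\Box} \left( U_{\Box} + U_{\Box}^{\dagger} \right)^{2}

where U_\Box is the product of link operators around an elementary plaquette. Because each link carries a finite-dimensional representation (e.g. a spin-1/2 raising operator in place of the classical phase), the Hilbert space per link is finite while the U(1) gauge symmetry remains exact, as stated above.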

Relevance: 90.00%

Abstract:

Photopolymerized hydrogels are commonly used for a broad range of biomedical applications. As long as the polymer volume is accessible, gels can easily be hardened using light illumination. In clinics, however, and especially in minimally invasive surgery, it becomes highly challenging to control photopolymerization. The ratio between polymerization volume and radiating surface area is several orders of magnitude higher than in ex-vivo settings, and tissue scattering occurs and influences the reaction. We developed a Monte Carlo model for photopolymerization which takes into account the solid/liquid phase changes, moving solid/liquid boundaries and refraction at these boundaries, as well as tissue scattering, in arbitrarily designable tissue cavities. The model provides a tool to tailor both the light probe and the scattering/absorption properties of the photopolymer for applications such as medical implants or tissue replacements. Based on the simulations, we have previously shown that adding scattering additives to the liquid monomer considerably increases the photopolymerized volume. In this study, we used bovine intervertebral disc cavities, as a model for spinal degeneration, to study photopolymerization in vitro. The cavity is created by enzyme digestion. Using a custom-designed probe, hydrogels were injected and photopolymerized. Magnetic resonance imaging (MRI) and visual inspection were employed to assess the photopolymerization outcomes. The results provide insights for the development of novel endoscopic light-scattering polymerization probes, paving the way for a new generation of implantable hydrogels.
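
A minimal sketch of the photon-transport core of such a model, reduced to 2D with isotropic scattering; it omits the phase changes, moving boundaries and refraction handled by the full model, and all optical coefficients are placeholders:

    import numpy as np

    rng = np.random.default_rng(2)

    mu_a, mu_s = 0.05, 2.0         # absorption/scattering coefficients [1/mm], placeholders
    mu_t = mu_a + mu_s
    nx, h = 60, 0.1                # 6 x 6 mm cavity cross-section, 0.1 mm cells
    absorbed = np.zeros((nx, nx))  # absorbed light drives the polymerization reaction

    for _ in range(20000):
        pos = np.array([3.0, 0.0])             # probe tip at the cavity entrance
        ang = rng.uniform(-0.3, 0.3)           # narrow launch cone
        d = np.array([np.sin(ang), np.cos(ang)])
        w = 1.0                                # photon-packet weight
        while w > 1e-3:
            pos = pos + d * (-np.log(rng.random()) / mu_t)  # free flight
            i, j = np.floor(pos / h).astype(int)
            if not (0 <= i < nx and 0 <= j < nx):
                break                          # packet left the cavity
            absorbed[i, j] += w * mu_a / mu_t  # deposit the absorbed fraction
            w *= mu_s / mu_t
            theta = rng.uniform(0.0, 2.0 * np.pi)  # isotropic scatter (simplified)
            d = np.array([np.cos(theta), np.sin(theta)])

Increasing mu_s in this sketch spreads the absorbed-energy map over a larger volume, which is the effect exploited above by adding scattering additives to the liquid monomer.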

Relevance: 90.00%

Abstract:

In this paper, we report on an optical tolerance analysis of the submillimeter atmospheric multi-beam limb sounder STEAMR. Physical optics and ray-tracing methods were used to quantify and separate errors in beam pointing and distortion due to reflector misalignment and primary-reflector surface deformations. Simulations were performed concurrently with the manufacturing of a multi-beam demonstrator of the relay optical system, which shapes and images the beams onto their corresponding receiver feed horns. Results from Monte Carlo simulations show that the inserts used for reflector mounting should be positioned with an overall accuracy better than 100 μm (~1/10 wavelength). Analyses of primary-reflector surface deformations show that a deviation of magnitude 100 μm can be tolerated before deployment, whereas the corresponding variations should be less than 30 μm during operation. The most sensitive optical elements in terms of misalignment are found near the focal plane. This localized sensitivity is attributed to the off-axis nature of the beams at this location. Post-assembly mechanical measurements of the reflectors in the demonstrator show that an alignment better than 50 μm could be obtained.
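
A minimal sketch of the Monte Carlo tolerancing pattern, assuming hypothetical per-insert pointing sensitivities; the sensitivities, insert count and tolerance values are illustrative, not STEAMR parameters:

    import numpy as np

    rng = np.random.default_rng(3)
    n_trials, n_inserts = 10000, 8  # number of mounting inserts is illustrative

    # Placeholder sensitivities: beam-pointing change per unit insert offset
    sens = rng.uniform(0.5, 2.0, n_inserts)  # [arcsec per um]

    for tol_um in (30.0, 50.0, 100.0):
        # Treat the tolerance as a 3-sigma bound on each insert position
        offsets = rng.normal(0.0, tol_um / 3.0, size=(n_trials, n_inserts))
        pointing = np.abs(offsets @ sens)
        print(f"tolerance {tol_um:5.0f} um -> 95th percentile pointing error "
              f"{np.percentile(pointing, 95):.1f} arcsec")

Sampling all misalignments jointly and reading off a high percentile of the resulting pointing error is what allows a statement such as "inserts must be positioned to better than 100 μm" to be made with a defined confidence.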

Relevance: 90.00%

Abstract:

31P MRS magnetization transfer (31P-MT) experiments allow the estimation of exchange rates of biochemical reactions, such as the creatine kinase equilibrium and adenosine triphosphate (ATP) synthesis. Although various 31P-MT methods have been used successfully on isolated organs or animals, their application to humans in clinical scanners poses specific challenges. This study compared two major 31P-MT methods on a clinical MR system using heteronuclear surface coils. Although saturation transfer (ST) is the most commonly used 31P-MT method, sequences such as inversion transfer (IT) with short pulses might be better suited to the specific hardware and software limitations of a clinical scanner. In addition, small NMR-undetectable metabolite pools can transfer magnetization to NMR-visible pools during long saturation pulses, which is prevented with short pulses. The 31P-MT sequences were adapted for limited pulse length, for heteronuclear transmit-receive surface coils with inhomogeneous B1, for the need for volume selection, and for the inherently low signal-to-noise ratio (SNR) of a clinical 3-T MR system. The ST and IT sequences were applied to skeletal muscle and liver in 10 healthy volunteers. Monte Carlo simulations were used to evaluate the behavior of the IT measurements with increasing imperfections. In skeletal muscle of the thigh, ATP synthesis resulted in forward reaction constants (k) of 0.074 ± 0.022 s^-1 (ST) and 0.137 ± 0.042 s^-1 (IT), whereas the creatine kinase reaction yielded 0.459 ± 0.089 s^-1 (IT). In the liver, ATP synthesis resulted in k = 0.267 ± 0.106 s^-1 (ST), whereas the IT experiment yielded no consistent results. The ST results were close to literature values; however, the IT results were much larger than the corresponding ST values and/or widely scattered. To summarize, ST and IT experiments can both be implemented on a clinical body scanner with heteronuclear transmit-receive surface coils; however, the ST results are much more robust against experimental imperfections than the current implementation of IT.
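
Both methods estimate the forward rate constant from the longitudinal Bloch-McConnell equations for a two-pool exchange A <-> B (e.g. Pi <-> gamma-ATP); in schematic form:

    \frac{dM_A}{dt} = \frac{M_A^0 - M_A}{T_{1A}} - k\,M_A + k_r\,M_B
    \frac{dM_B}{dt} = \frac{M_B^0 - M_B}{T_{1B}} + k\,M_A - k_r\,M_B

ST saturates one pool with a long pulse and extracts k from the steady-state reduction of the exchange partner, whereas IT inverts one pool with a short pulse and fits k from the coupled recovery curves; this difference in acquisition is why the two methods respond differently to the hardware imperfections examined in the Monte Carlo simulations.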

Relevance: 90.00%

Abstract:

Antihydrogen holds the promise to test, for the first time, the universality of free fall with a system composed entirely of antiparticles. The AEgIS experiment at CERN's antiproton decelerator aims to measure the gravitational interaction between matter and antimatter by measuring the deflection of a beam of antihydrogen in the Earth's gravitational field (g). The principle of the experiment is as follows: cold antihydrogen atoms are synthesized in a Penning-Malmberg trap, are Stark accelerated towards a moiré deflectometer, the classical counterpart of an atom interferometer, and annihilate on a position-sensitive detector. Crucial to the success of the experiment is the spatial precision of the position-sensitive detector. We propose a novel free-fall detector based on a hybrid of two technologies: emulsion detectors, which have an intrinsic spatial resolution of 50 nm but no temporal information, and a silicon strip / scintillating fiber tracker to provide timing and positional information. In 2012 we tested emulsion films in vacuum with antiprotons from CERN's antiproton decelerator. The annihilation vertices could be observed directly on the emulsion surface using the microscope facility available at the University of Bern. The annihilation vertices were successfully reconstructed with a resolution of 1–2 μm on the impact parameter. If such a precision can be realized in the final detector, Monte Carlo simulations suggest that of order 500 antihydrogen annihilations will be sufficient to determine g with a 1% accuracy. This paper presents current research towards the development of this technology for use in the AEgIS apparatus and prospects for the realization of the final detector.
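
Schematically, for a uniformly accelerated particle sampled at two gratings and a detector plane separated by equal distances L, the moiré fringe shift is the second difference of the trajectory (a standard result for moiré deflectometry; the symbols here are generic):

    \delta y = y(2\tau) - 2\,y(\tau) + y(0) = g\,\tau^{2}, \qquad \tau = L/v

Both the annihilation position (from the emulsion) and the time of flight tau (from the tracker) therefore enter the determination of g, which is why the proposed hybrid detector provides both.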

Relevance: 90.00%

Abstract:

This paper presents the performance of the ATLAS muon reconstruction during the LHC run with pp collisions at √s = 7–8 TeV in 2011–2012, focusing mainly on data collected in 2012. Measurements of the reconstruction efficiency and of the momentum scale and resolution, based on large reference samples of J/ψ → μμ, Z → μμ and ϒ → μμ decays, are presented and compared to Monte Carlo simulations. Corrections to the simulation, to be used in physics analysis, are provided. Over most of the covered phase space (muon |η| < 2.7 and 5 ≲ pT ≲ 100 GeV) the efficiency is above 99% and is measured with per-mille precision. The momentum resolution ranges from 1.7% at central rapidity and for transverse momentum pT ≅ 10 GeV, to 4% at large rapidity and pT ≅ 100 GeV. The momentum scale is known with an uncertainty of 0.05% to 0.2% depending on rapidity. A method for the recovery of final state radiation from the muons is also presented.

Relevance: 90.00%

Abstract:

XENON is a dark matter direct detection project, consisting of a time projection chamber (TPC) filled with liquid xenon as detection medium. The construction of the next-generation detector, XENON1T, is presently taking place at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. It aims at a sensitivity to spin-independent cross sections of 2 × 10^-47 cm^2 for WIMP masses around 50 GeV/c^2, which requires a background reduction by two orders of magnitude compared to XENON100, the current-generation detector. An active system that is able to tag muons and muon-induced backgrounds is critical for this goal. A water Cherenkov detector of ~10 m height and diameter has therefore been developed, equipped with 8-inch photomultipliers and clad with a reflective foil. We present the design and optimization study for this detector, which has been carried out with a series of Monte Carlo simulations. The muon veto will reach very high detection efficiencies for muons (>99.5%) and for showers of secondary particles from muon interactions in the rock (>70%). Similar efficiencies will be obtained for XENONnT, the upgrade of XENON1T, which will later improve the WIMP sensitivity by another order of magnitude. With the Cherenkov water shield studied here, the background from muon-induced neutrons in XENON1T is negligible.

Relevance: 90.00%

Abstract:

We study the sensitivity of large-scale xenon detectors to low-energy solar neutrinos, to coherent neutrino-nucleus scattering and to neutrinoless double beta decay. As a concrete example, we consider the xenon part of the proposed DARWIN (Dark Matter WIMP Search with Noble Liquids) experiment. We perform detailed Monte Carlo simulations of the expected backgrounds, considering realistic energy resolutions and thresholds in the detector. In a low-energy window of 2–30 keV, where the sensitivity to solar pp and 7Be neutrinos is highest, an integrated pp-neutrino rate of 5900 events can be reached in a fiducial mass of 14 tons of natural xenon, after 5 years of data. The pp-neutrino flux could thus be measured with a statistical uncertainty around 1%, reaching the precision of solar model predictions. These low-energy solar neutrinos will be the limiting background to the dark matter search channel for WIMP-nucleon cross sections below ~2 × 10^-48 cm^2 and WIMP masses around 50 GeV/c^2, for an assumed 99.5% rejection of electronic recoils due to elastic neutrino-electron scatters. Nuclear recoils from coherent scattering of solar neutrinos will limit the sensitivity to WIMP masses below ~6 GeV/c^2 to cross sections above ~4 × 10^-45 cm^2. DARWIN could reach a competitive half-life sensitivity of 5.6 × 10^26 y to the neutrinoless double beta decay of 136Xe after 5 years of data, using 6 tons of natural xenon in the central detector region.
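
The quoted statistical precision follows directly from counting statistics on the integrated pp-neutrino rate:

    \frac{\sigma_{\mathrm{stat}}}{N} \approx \frac{1}{\sqrt{N}} = \frac{1}{\sqrt{5900}} \approx 1.3\%

consistent with the "around 1%" figure stated above.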