973 results for Monte-carlo Calculations


Relevance: 90.00%

Abstract:

A measurement of the cross section for the production of isolated prompt photons in pp collisions at a center-of-mass energy √s = 7 TeV is presented. The results are based on an integrated luminosity of 4.6 fb⁻¹ collected with the ATLAS detector at the LHC. The cross section is measured as a function of photon pseudorapidity η^γ and transverse energy E_T^γ in the kinematic range 100 ≤ E_T^γ < 1000 GeV and in the regions |η^γ| < 1.37 and 1.52 ≤ |η^γ| < 2.37. The results are compared to leading-order parton-shower Monte Carlo models and next-to-leading-order perturbative QCD calculations. The next-to-leading-order perturbative QCD calculations agree well with the measured cross sections as functions of E_T^γ and η^γ.

Relevance: 90.00%

Abstract:

Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte-Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are often computationally intensive, especially when matrix decomposition approaches are used with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided, which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and are complemented by numerical experiments.
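
To make the construction concrete, here is a minimal sketch of conditioning GRF sample paths by kriging of residuals, the mechanism that the update formulae above operate on. The squared-exponential kernel, the grid, and the observation values are illustrative assumptions; the batch update at stage 2 is done here by naive re-conditioning, which is precisely the direct cost that the paper's explicit update formulae avoid.

```python
# Conditioning Gaussian random field simulations by kriging residuals:
# conditional path = unconditional path + kriging update of the residuals.
import numpy as np

def k_se(a, b, ell=0.3):
    """Squared-exponential covariance between two point sets (assumed kernel)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)                      # simulation grid
K = k_se(x, x) + 1e-10 * np.eye(x.size)             # jitter for stability
z_uncond = np.linalg.cholesky(K) @ rng.standard_normal((x.size, 50))

def condition(z, x_obs, y_obs):
    """Turn unconditional paths z into paths interpolating (x_obs, y_obs)."""
    K_oo = k_se(x_obs, x_obs) + 1e-10 * np.eye(x_obs.size)
    w = np.linalg.solve(K_oo, k_se(x, x_obs).T)     # simple-kriging weights
    idx = np.array([np.argmin(np.abs(x - xo)) for xo in x_obs])
    resid = y_obs[:, None] - z[idx, :]              # residuals, per path
    return z + w.T @ resid

# Stage 1: condition the ensemble on n = 3 observations.
x1, y1 = np.array([0.2, 0.5, 0.8]), np.array([0.1, -0.4, 0.3])
z1 = condition(z_uncond, x1, y1)

# Stage 2: a batch of q = 2 new observations arrives. Here we re-condition
# from scratch; the paper's formulae update z1 directly at lower cost.
x2, y2 = np.concatenate([x1, [0.35, 0.65]]), np.concatenate([y1, [0.0, -0.2]])
z2 = condition(z_uncond, x2, y2)
print("ensemble spread near x = 0.35 before/after update: %.3f / %.3f"
      % (z1[70].std(), z2[70].std()))
```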

Relevance: 90.00%

Abstract:

Ab initio calculations of Afρ are presented using Mie scattering theory and a Direct Simulation Monte Carlo (DSMC) dust outflow model in support of the Rosetta mission and its target 67P/Churyumov-Gerasimenko (CG). These calculations are performed for particle sizes ranging from 0.010 μm to 1.0 cm. The present status of our knowledge of various differential particle size distributions is reviewed, and a variety of particle size distributions is used to explore their effect on Afρ and the dust mass production rate ṁ. A new, simple two-parameter particle size distribution that curtails the effect of particles below 1 μm is developed. The contributions of all particle sizes are summed to get a resulting overall Afρ. The resultant Afρ could not easily be predicted a priori and turned out to be considerably more constraining regarding the mass loss rate than expected. It is found that a proper calculation of Afρ combined with a good Afρ measurement can constrain the dust/gas ratio in the coma of comets as well as other methods presently available. Phase curves of Afρ versus scattering angle are calculated and produce good agreement with observational data. The major conclusions of our calculations are:
– The original definition of A in Afρ is problematical and Afρ should be: q_sca(n,λ) × p(g) × f × ρ. Nevertheless, we keep the present nomenclature of Afρ as a measured quantity for an ensemble of coma particles.
– The ratio between Afρ and the dust mass loss rate ṁ is dominated by the particle size distribution.
– For most particle size distributions presently in use, small particles in the range from 0.10 to 1.0 μm contribute a large fraction to Afρ.
– Simplifying the calculation of Afρ by considering only large particles and approximating q_sca does not represent a realistic model. Mie scattering theory or, if necessary, more complex scattering calculations must be used.
– For the commonly used particle size distributions, dn/da ∼ a^−3.5 to a^−4, there is a natural cut-off in the Afρ contribution for both small and large particles.
– The scattering phase function must be taken into account for each particle size; otherwise the contribution of large particles can be over-estimated by a factor of 10.
– Using an imaginary index of refraction of i = 0.10 does not produce sufficient backscattering to match observational data.
– A mixture of dark particles with i ⩾ 0.10 and brighter silicate particles with i ⩽ 0.04 matches the observed phase curves quite well.
– Using current observational constraints, we find the dust/gas mass-production ratio of CG at 1.3 AU is confined to a range of 0.03–0.5, with a reasonably likely value around 0.1.
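
As a rough illustration of how the particle size distribution drives Afρ and the mass loss rate differently, the sketch below integrates per-size contributions over a power-law distribution. The scattering efficiency is a crude toy (a Rayleigh-like rise capped at the geometric limit of 2) standing in for the full Mie calculation used in the paper, and the wavelength and size limits are assumptions.

```python
# Compare where Afrho and the dust mass loss pick up their contributions
# for dn/da ~ a^-3.5, per logarithmic size interval.
import numpy as np

wavelength = 0.5e-6                       # m, visual band (assumption)

def q_sca(a):
    """Toy scattering efficiency: ~x^4 rise, geometric limit of 2."""
    x = 2.0 * np.pi * a / wavelength      # size parameter
    return np.where(x < 1.0, 2.0 * x**4, 2.0)

def dn_da(a, slope=-3.5):
    return a ** slope                     # power-law size distribution

a = np.logspace(np.log10(0.010e-6), np.log10(1.0e-2), 2000)  # 0.01 um..1 cm
afrho_w = q_sca(a) * np.pi * a**2 * dn_da(a) * a   # Afrho per log-size bin
mass_w = a**3 * dn_da(a) * a                       # mass per log-size bin

# For this slope, Afrho peaks near 0.1 um (cf. the conclusion above that
# 0.10-1.0 um particles dominate), while the mass resides in large grains.
print("peak Afrho contribution at a = %.2e m" % a[np.argmax(afrho_w)])
print("peak mass contribution at  a = %.2e m" % a[np.argmax(mass_w)])
```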

Relevance: 90.00%

Abstract:

Grand Canonical Monte Carlo simulations are used to reproduce the N₂/CO ratio ranging between 1.7 × 10⁻³ and 1.6 × 10⁻², observed in situ in the Jupiter-family comet 67P/Churyumov-Gerasimenko (67P) by the ROSINA mass spectrometer on board the Rosetta spacecraft. Assuming that this body agglomerated from clathrates in the protosolar nebula (PSN), simulations using refined interatomic potentials are developed to investigate the temperature dependence of trapping within a multiple-guest clathrate formed from a gas mixture of CO and N₂ in proportions corresponding to those expected for the PSN. Under this assumption, our calculations suggest that the cometary grains must have formed at temperatures ranging between ~31.8 and 69.9 K in the PSN to match the N₂/CO ratio measured by the ROSINA mass spectrometer. The presence of clathrates in Jupiter-family comets could then explain the potential N₂ depletion (a factor of up to ~87 compared to the protosolar value) measured in 67P/Churyumov-Gerasimenko.
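
For readers unfamiliar with the technique, the sketch below shows the skeleton of a grand canonical Monte Carlo calculation: particle insertion and deletion moves accepted with Metropolis probabilities at fixed chemical potential, volume, and temperature. It uses an ideal gas (zero interaction energy) so the result can be checked against the exact ⟨N⟩ = zV; the clathrate study above uses the same move structure with detailed multiple-guest interaction potentials.

```python
# Minimal GCMC for an ideal gas: acceptance z*V/(N+1) for insertion and
# N/(z*V) for deletion (the energy change is zero without interactions).
import random

random.seed(1)
z = 0.05        # activity exp(beta*mu)/Lambda^3 per unit volume (assumption)
V = 1000.0      # simulation volume (assumption)
N = 0
total, count = 0, 0

for step in range(200000):
    if random.random() < 0.5:                         # insertion attempt
        if random.random() < z * V / (N + 1):
            N += 1
    elif N > 0 and random.random() < N / (z * V):     # deletion attempt
        N -= 1
    if step >= 50000:                                 # discard equilibration
        total += N
        count += 1

print("simulated <N> = %.1f, exact z*V = %.1f" % (total / count, z * V))
```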

Relevance: 90.00%

Abstract:

The current standard treatment for head and neck cancer at our institution uses intensity-modulated x-ray therapy (IMRT), which improves target coverage and sparing of critical structures by delivering complex fluence patterns from a variety of beam directions to conform dose distributions to the shape of the target volume. The standard treatment for breast patients is field-in-field forward-planned IMRT, with initial tangential fields and additional reduced-weight tangents with blocking to minimize hot spots. For these treatment sites, the addition of electrons has the potential to improve target coverage and sparing of critical structures due to rapid dose falloff with depth and reduced exit dose. In this work, the use of mixed-beam therapy (MBT), i.e., combined intensity-modulated electron and x-ray beams using the x-ray multi-leaf collimator (MLC), was explored. The hypothesis of this study was that the addition of intensity-modulated electron beams to existing clinical IMRT plans would produce MBT plans superior to the original IMRT plans for at least 50% of selected head and neck cases and 50% of breast cases. Dose calculations for electron beams collimated by the MLC were performed with Monte Carlo methods. An automation system was created to facilitate communication between the dose calculation engine and the treatment planning system. Energy and intensity modulation of the electron beams was accomplished by dividing the electron beams into 2×2 cm² beamlets, which were then beam-weight optimized along with intensity-modulated x-ray beams. Treatment plans were optimized to obtain equivalent target dose coverage and then compared with the original treatment plans. MBT treatment plans were evaluated by participating physicians with respect to target coverage, normal structure dose, and overall plan quality in comparison with the original clinical plans. The physician evaluations did not support the hypothesis for either site, with MBT selected as superior in 1 of the 15 head and neck cases (p=1) and 6 of 18 breast cases (p=0.95). While MBT was not shown to be superior to IMRT, reductions were observed in doses to critical structures distal to the target along the electron beam direction and to non-target tissues, at the expense of target coverage and dose homogeneity.
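
A minimal sketch of the beam-weight optimization step is given below: given a matrix of per-beamlet dose deposition, non-negative beamlet weights are fit to a prescription by least squares. The dose matrix here is a random toy stand-in for the Monte Carlo-computed beamlet doses described above, and the uniform 60 Gy prescription is an assumption.

```python
# Non-negative least-squares fit of beamlet weights to a dose prescription.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 500, 40
D = rng.random((n_voxels, n_beamlets))     # dose per unit beamlet weight (toy)
prescription = np.full(n_voxels, 60.0)     # uniform 60 Gy target (assumption)

weights, _ = nnls(D, prescription)         # weights >= 0 by construction
dose = D @ weights
print("active beamlets: %d / %d" % (np.count_nonzero(weights), n_beamlets))
print("target dose mean +/- std: %.2f +/- %.2f Gy" % (dose.mean(), dose.std()))
```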

Relevance: 90.00%

Abstract:

External beam radiation therapy is used to treat nearly half of the more than 200,000 new cases of prostate cancer diagnosed in the United States each year. During a radiation therapy treatment, healthy tissues in the path of the therapeutic beam are exposed to high doses. In addition, the whole body is exposed to a low-dose bath of unwanted scatter radiation from the pelvis and leakage radiation from the treatment unit. As a result, survivors of radiation therapy for prostate cancer face an elevated risk of developing a radiogenic second cancer. Recently, proton therapy has been shown to reduce the dose delivered by the therapeutic beam to normal tissues during treatment compared to intensity-modulated x-ray therapy (IMXT, the current standard of care). However, the magnitude of stray radiation doses from proton therapy, and their impact on the incidence of radiogenic second cancers, was not known. The risk of a radiogenic second cancer following proton therapy for prostate cancer relative to IMXT was determined for 3 patients of large, median, and small anatomical stature. Doses delivered to healthy tissues from the therapeutic beam were obtained from treatment planning system calculations. Stray doses from IMXT were taken from the literature, while stray doses from proton therapy were simulated using a Monte Carlo model of a passive scattering treatment unit and an anthropomorphic phantom. Baseline risk models were taken from the Biological Effects of Ionizing Radiation VII report. A sensitivity analysis was conducted to characterize the sensitivity of the risk calculations to uncertainties in the risk model, the relative biological effectiveness (RBE) of neutrons for carcinogenesis, and inter-patient anatomical variations. The risk projections revealed that proton therapy carries a lower risk of radiogenic second cancer incidence following prostate irradiation than IMXT. The sensitivity analysis revealed that the results of the risk analysis depended only weakly on uncertainties in the risk model and inter-patient variations. Second cancer risks were sensitive to changes in the RBE of neutrons. However, the findings of the study were qualitatively consistent for all patient sizes and risk models considered, and for all neutron RBE values less than 100.
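
The sketch below illustrates, with entirely invented numbers, the shape of such a sensitivity analysis: the proton/IMXT risk ratio is projected while sampling the neutron RBE and a shared risk-model coefficient. Note how the shared coefficient cancels in the ratio while the RBE does not, echoing the study's finding; none of the dose values below are the study's data.

```python
# Toy sensitivity analysis of a proton/IMXT second-cancer risk ratio.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy stray organ doses (Gy); purely illustrative assumptions.
d_photon_imxt = 0.8
d_photon_proton = 0.2
d_neutron_proton = 0.005     # neutron absorbed dose from the proton unit

# Sampled inputs: neutron RBE (lognormal, median 20) and a shared
# risk-model coefficient (the same model applies to both modalities).
rbe_neutron = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=n)
risk_coeff = rng.normal(1.0, 0.3, size=n).clip(0.1)

risk_imxt = risk_coeff * d_photon_imxt
risk_proton = risk_coeff * (d_photon_proton + rbe_neutron * d_neutron_proton)
ratio = risk_proton / risk_imxt    # the shared coefficient cancels here

print("median proton/IMXT risk ratio: %.2f" % np.median(ratio))
print("samples with ratio < 1: %.1f%%" % (100 * np.mean(ratio < 1)))
```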

Relevance: 90.00%

Abstract:

A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
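
For orientation, one OSEM pass with a sparse system matrix looks like the sketch below: for each subset, forward-project the current image, take the measured/estimated count ratio, back-project it, and normalize by the subset sensitivity. The random sparse matrix and toy activity are assumptions; in the work above, the system matrix is precalculated with Monte Carlo and compressed using the scanner's symmetries.

```python
# Ordered-subsets EM with a sparse system matrix (toy data).
import numpy as np
from scipy.sparse import random as sprandom

rng = np.random.default_rng(0)
n_lors, n_voxels, n_subsets = 1200, 400, 4
A = sprandom(n_lors, n_voxels, density=0.02, random_state=0, format="csr")
x_true = rng.random(n_voxels) * 50.0            # toy activity image
y = rng.poisson(A @ x_true)                     # noisy measured counts

x = np.ones(n_voxels)                           # uniform initial image
for iteration in range(10):
    for s in range(n_subsets):
        rows = np.arange(s, n_lors, n_subsets)  # interleaved LOR subset
        As = A[rows]
        proj = As @ x                           # forward projection
        ratio = np.divide(y[rows], proj, out=np.zeros_like(proj),
                          where=proj > 0)
        sens = np.asarray(As.sum(axis=0)).ravel()
        upd = np.divide(As.T @ ratio, sens, out=np.ones_like(x),
                        where=sens > 0)         # back-project and normalize
        x *= upd

print("correlation with truth: %.3f" % np.corrcoef(x, x_true)[0, 1])
```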

Relevance: 90.00%

Abstract:

The studies carried out so far to determine the measurement quality of geodetic instruments have focused primarily on angle and distance measurements. In recent years, however, GNSS (Global Navigation Satellite System) equipment has come into widespread use in geomatic applications without an established methodology for obtaining the calibration correction and its uncertainty for this equipment. The purpose of this Thesis is to establish the requirements that a network must meet to be considered a standard network with metrological traceability, as well as the methodology for the verification and calibration of GNSS instruments in such standard networks. To this end, a technical calibration procedure for GNSS equipment has been designed and developed in which the contributions to the measurement uncertainty are defined. The procedure, which has been applied in different networks for different equipment, has allowed the expanded uncertainty of such equipment to be determined following the recommendations of the Guide to the Expression of Uncertainty in Measurement of the Joint Committee for Guides in Metrology. In addition, the three-dimensional coordinates of the bases that constitute the networks considered in the investigation have been determined by satellite observation techniques, and simulations have been developed based on different values of the experimental standard deviations of the fixed points used in the least-squares adjustment of the vectors or baselines. The results have shown the importance of knowing the experimental standard deviations when calculating the uncertainties of the three-dimensional coordinates of the bases. Based on high-quality technical studies and observations previously carried out in these networks, an exhaustive analysis has been performed to determine the conditions that a standard network must meet. In addition, technical calibration procedures have been developed to calculate the expanded measurement uncertainty of geodetic instruments that provide angles and distances obtained by electromagnetic methods, since these instruments disseminate metrological traceability to the standard networks used for the verification and calibration of GNSS equipment. In this way, it has been possible to determine the local calibration corrections of high-accuracy GNSS equipment in the standard networks. In this Thesis, the uncertainty of the calibration correction has been obtained using two different methodologies: in the first, the law of propagation of uncertainty has been applied, while in the second, the propagation of distributions using the Monte Carlo method has been applied. The analysis of the results confirms the validity of both methodologies for determining the calibration uncertainty of GNSS instruments.
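
A minimal sketch of the two uncertainty methodologies compared in the Thesis, applied to a toy calibration-correction model c = x_ref − x_gnss (all values and uncertainties below are assumptions): the GUM law of propagation of uncertainty and Monte Carlo propagation of distributions. For this linear model the two agree closely; the Monte Carlo route additionally yields a coverage interval without linearization assumptions.

```python
# GUM law of propagation vs. Monte Carlo propagation of distributions
# for a toy calibration correction c = x_ref - x_gnss.
import numpy as np

rng = np.random.default_rng(0)
x_ref, u_ref = 1000.0000, 0.0005     # reference value and std. uncertainty (m)
x_gnss, u_gnss = 1000.0042, 0.0030   # GNSS value and std. uncertainty (m)

# GUM: sensitivity coefficients are +1 and -1, so u(c) combines in quadrature.
c = x_ref - x_gnss
u_gum = np.hypot(u_ref, u_gnss)

# Monte Carlo (GUM Supplement 1): sample inputs, push through the model.
n = 1_000_000
c_mc = rng.normal(x_ref, u_ref, n) - rng.normal(x_gnss, u_gnss, n)

print("GUM: c = %.4f m, u = %.4f m" % (c, u_gum))
print("MC:  c = %.4f m, u = %.4f m" % (c_mc.mean(), c_mc.std()))
print("95%% coverage interval (MC): [%.4f, %.4f] m"
      % tuple(np.percentile(c_mc, [2.5, 97.5])))
```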

Relevance: 90.00%

Abstract:

This study characterises the abatement effect of large dams with fixed-crest spillways under extreme design flood conditions. In contrast to previous studies using specific hydrographs for flow into the reservoir and simplifications to obtain analytical solutions, an automated tool was designed for calculations based on a Monte Carlo simulation environment, which integrates models that represent the different physical processes in watersheds with areas of 150–2000 km². The tool was applied to 21 sites that were uniformly distributed throughout continental Spain, with 105 fixed-crest dam configurations. This tool allowed a set of hydrographs to be obtained as an approximation for the hydrological forcing of a dam and the characterisation of the response of the dam to this forcing. For all cases studied, we obtained a strong linear correlation between the peak flow entering the reservoir and the peak flow discharged by the dam, and a simple general procedure was proposed to characterise the peak-flow attenuation behaviour of the reservoir. Additionally, two dimensionless coefficients were defined to relate the variables governing both the generation of the flood and its abatement in the reservoir. Using these coefficients, a model was defined to allow for the estimation of the flood abatement effect of a reservoir based on the available information. This model should be useful in the hydrological design of spillways and the evaluation of the hydrological safety of dams. Finally, the proposed procedure and model were evaluated and representative applications were presented.
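
As a pared-down illustration of the dam-response component, the sketch below routes a synthetic inflow hydrograph through a reservoir with a fixed-crest spillway using level-pool routing and the weir law Q = C·L·H^1.5. The reservoir area, weir parameters, and hydrograph shape are assumptions, not the study's calibrated watershed models.

```python
# Level-pool routing through a fixed-crest spillway (explicit Euler).
import numpy as np

A_res = 2.0e7              # reservoir surface area, m^2 (assumed constant)
C, L = 2.1, 50.0           # weir coefficient (m^0.5/s) and crest length (m)
dt = 60.0                  # time step, s

t = np.arange(0.0, 48 * 3600.0, dt)
q_in = 800.0 * (t / 6.0e4) * np.exp(1.0 - t / 6.0e4)   # gamma-shaped inflow

h = 0.0                    # head over the crest, m
q_out = np.zeros_like(t)
for i in range(1, t.size):
    q_out[i] = C * L * max(h, 0.0) ** 1.5              # spillway discharge
    h += (q_in[i] - q_out[i]) * dt / A_res             # reservoir mass balance

atten = 1.0 - q_out.max() / q_in.max()
print("peak inflow %.0f m3/s -> peak outflow %.0f m3/s (attenuation %.0f%%)"
      % (q_in.max(), q_out.max(), 100 * atten))
```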

Relevance: 90.00%

Abstract:

The Monte Carlo (MC) method can accurately compute the dose produced by medical linear accelerators. However, these calculations require a reliable description of the electron and/or photon beams delivering the dose, known as the phase space (PHSP), which is not usually available. A method to derive a phase-space model from reference measurements that does not heavily rely on a detailed model of the accelerator head is presented. The iterative optimization process extracts the characteristics of the particle beams that best explain the reference dose measurements in water and air, given a set of constraints.
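
The sketch below conveys the idea at toy scale: a simple parametric depth-dose model stands in for the phase-space description, and its parameters are recovered by iterative least-squares optimization against synthetic reference measurements. The model form, the parameter values, and the noise level are all assumptions.

```python
# Fit toy beam-model parameters to synthetic reference depth-dose data.
import numpy as np
from scipy.optimize import least_squares

depth = np.linspace(0.0, 30.0, 61)        # cm

def model_pdd(params, z):
    """Toy photon depth dose: build-up term times exponential attenuation."""
    mu, zmax = params
    return (1.0 - np.exp(-z / max(zmax, 1e-3))) * np.exp(-mu * z)

true_params = (0.05, 1.5)                 # the "unknown" beam character
rng = np.random.default_rng(0)
meas = model_pdd(true_params, depth) * (1 + 0.01 * rng.standard_normal(depth.size))

fit = least_squares(lambda p: model_pdd(p, depth) - meas,
                    x0=(0.03, 1.0), bounds=([0.0, 0.1], [0.2, 5.0]))
print("recovered (mu, zmax) = (%.4f, %.3f)" % tuple(fit.x))
```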

Relevance: 90.00%

Abstract:

The advantages of fast-spectrum reactors consist not only of an efficient use of fuel through the breeding of fissile material and the use of natural or depleted uranium, but also of the potential reduction of the amount of actinides, such as americium and neptunium, contained in the irradiated fuel. The first aspect means a guaranteed future nuclear fuel supply. The second is key for high-level radioactive waste management, because these elements are mainly responsible for the long-term radioactivity of the irradiated fuel. The present study analyzes the hypothetical deployment of a Gen-IV Sodium Fast Reactor (SFR) fleet in Spain. A nuclear fleet of fast reactors would enable a fuel cycle strategy different from the open cycle currently adopted by most countries with nuclear power. A transition from the current Gen-II to a Gen-IV fleet is envisaged through an intermediate deployment of Gen-III reactors. Fuel reprocessing from the Gen-II and Gen-III Light Water Reactors (LWR) has been considered. In the so-called advanced fuel cycle, the reprocessed fuel used to produce energy will breed new fissile fuel and transmute minor actinides at the same time. A reference case scenario has been postulated, and sensitivity studies have been performed to analyze the impact of the different parameters on the required reactor fleet. The potential capability of Spain to supply the required fleet for the reference scenario using national resources has been verified. Finally, some consequences for the final irradiated fuel inventory are assessed. Calculations are performed with the Monte Carlo transport-coupled depletion code SERPENT together with post-processing tools.

Relevance: 90.00%

Abstract:

Fission product yields are fundamental parameters for several nuclear engineering calculations, in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past, and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices in order to produce inventory calculation results that take the complete uncertainty data into account. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of ²³⁵U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat.

Introduction: Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis, in which different sources of uncertainties are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated shallowly, because their effects were considered of second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
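
A minimal sketch of the Monte Carlo sampling step follows: fission yields are drawn from a multivariate normal distribution defined by a full covariance matrix, and the correlated samples are propagated through a linear stand-in for the decay-heat calculation. The yields, uncertainties, correlations, and response weights below are illustrative assumptions, not JEFF-3.1.2 data.

```python
# Sampling correlated fission yields and propagating to a scalar response.
import numpy as np

rng = np.random.default_rng(0)
y_mean = np.array([0.061, 0.058, 0.040])      # toy cumulative yields
rel_u = np.array([0.02, 0.03, 0.05])          # relative standard uncertainties
corr = np.array([[ 1.0, -0.6,  0.1],
                 [-0.6,  1.0,  0.2],
                 [ 0.1,  0.2,  1.0]])         # toy correlation matrix
std = y_mean * rel_u
cov = corr * np.outer(std, std)

samples = rng.multivariate_normal(y_mean, cov, size=100000)
response = samples @ np.array([1.2, 0.8, 2.1])   # toy decay-heat weights

# Anti-correlated yields shrink the spread relative to the uncorrelated
# case (var = w^T C w), which is why full covariances matter here.
print("response: mean %.4f, relative std %.2f%%"
      % (response.mean(), 100 * response.std() / response.mean()))
```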

Relevance: 90.00%

Abstract:

Cation-π interactions are important forces in molecular recognition by biological receptors, enzyme catalysis, and crystal engineering. We have harnessed these interactions in designing molecular systems with a circular arrangement of benzene units that are capable of acting as ionophores and models for biological receptors. [n]Collarenes are promising candidates with high selectivity for a specific cation, depending on n, because of their structural rigidity and well-defined cavity size. The interaction energies of [n]collarenes with cations have been evaluated by using ab initio calculations. The selectivity of these [n]collarenes in aqueous solution was revealed by using statistical perturbation theory in conjunction with Monte Carlo and molecular dynamics simulations. It has been observed that in [n]collarenes the ratio of the interaction energy of a cation with the collarene to that with its basic building unit (benzene) can be correlated with ion selectivity. We find that collarenes are excellent and efficient ionophores that bind cations through cation-π interactions. [6]Collarene is found to be a selective host for Li+ and Mg2+, [8]collarene for K+ and Sr2+, and [10]collarene for Cs+ and Ba2+. This finding indicates that [10]collarene and [8]collarene could be used for effective separation of the highly radioactive isotopes 137Cs and 90Sr, which are major constituents of nuclear wastes. More interestingly, collarenes of larger cavity size can be useful in capturing organic cations. [12]Collarene exhibits a pronounced affinity for the tetramethylammonium cation and acetylcholine, which implies that it could serve as a model for acetylcholinesterase. Thus, collarenes can prove to be novel and effective ionophores/model-receptors capable of heralding a new direction in molecular recognition and host-guest chemistry.
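
The machinery behind such statistical-perturbation selectivity estimates is the Zwanzig free-energy-perturbation relation, ΔF(A→B) = −kT ln⟨exp(−ΔU/kT)⟩_A, averaged over configurations sampled in state A. The sketch below applies it to two harmonic toy potentials, where the exact answer is known analytically; the actual calculations above use full host-guest force fields in solution.

```python
# Free-energy perturbation between two harmonic "binding" potentials,
# sampled with Metropolis Monte Carlo in state A.
import numpy as np

rng = np.random.default_rng(0)
kT = 0.593                                # kcal/mol near 298 K
U_A = lambda x: 0.5 * 1.0 * x**2          # state A (toy host 1)
U_B = lambda x: 0.5 * 2.0 * x**2          # state B (toy host 2)

x, xs = 0.0, []
for _ in range(200000):                   # Metropolis sampling of state A
    xn = x + rng.normal(0.0, 0.5)
    if rng.random() < np.exp(-(U_A(xn) - U_A(x)) / kT):
        x = xn
    xs.append(x)
xs = np.array(xs[20000:])                 # discard equilibration

dF_fep = -kT * np.log(np.mean(np.exp(-(U_B(xs) - U_A(xs)) / kT)))
dF_exact = 0.5 * kT * np.log(2.0)         # harmonic: (kT/2) ln(k_B/k_A)
print("FEP estimate %.4f vs exact %.4f kcal/mol" % (dF_fep, dF_exact))
```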

Relevance: 90.00%

Abstract:

We have carried out conformational energy calculations on alanine-based copolymers with the sequence Ac-AAAAAXAAAA-NH2 in water, where X stands for lysine or glutamine, to identify the underlying source of stability of alanine-based polypeptides containing charged or highly soluble polar residues in the absence of charge–charge interactions. The results indicate that ionizable or neutral polar residues introduced into the sequence to make them soluble sequester the water away from the CO and NH groups of the backbone, thereby enabling them to form internal hydrogen bonds. This solvation effect dictates the conformational preference and, hence, modifies the conformational propensity of alanine residues. Even though we carried out simulations for specific amino acid sequences, our results provide an understanding of some of the basic principles that govern the folding of these short sequences independently of the kind of residues introduced to make them soluble. In addition, we have investigated through simulations the effect of the bulk dielectric constant on the conformational preferences of these peptides. Extensive conformational Monte Carlo searches on terminally blocked 10-mer and 16-mer homopolymers of alanine in the absence of salt were carried out assuming values for the dielectric constant of the solvent ɛ of 80, 40, and 2. Our simulations show a clear tendency of these oligopeptides to augment the α-helix content as the bulk dielectric constant of the solvent is lowered. This behavior is due mainly to a loss of exposure of the CO and NH groups to the aqueous solvent. Experimental evidence indicates that the helical propensity of the amino acids in water shows a dramatic increase on addition of certain alcohols, such as trifluoroethanol. Our results provide a possible explanation of the mechanism by which alcohol/water mixtures affect the free energy of helical alanine oligopeptides relative to nonhelical ones.
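
A small worked example of the dielectric effect invoked above: the electrostatic energy of an idealized backbone C=O···H-N contact scales as 1/ε, so lowering the bulk dielectric constant from 80 to 2 strengthens internal hydrogen bonds and favors the helix. The partial charges and geometry below are rough illustrative values, not the potential functions used in the study.

```python
# Coulomb energy of a CO...HN contact at several bulk dielectric constants.
COULOMB = 332.06            # kcal * mol^-1 * Angstrom * e^-2

q_O, q_H = -0.42, 0.25      # rough amide partial charges (e), assumptions
r = 2.0                     # O...H distance, Angstrom (assumption)

for eps in (80, 40, 2):
    e_el = COULOMB * q_O * q_H / (eps * r)
    print("epsilon = %2d: E_elec = %6.2f kcal/mol" % (eps, e_el))
```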

Relevance: 90.00%

Abstract:

In this paper we report the results of ab initio calculations on the energetics and kinetics of oxygen-driven carbon gasification reactions using a small model cluster, with full characterisation of the stationary points on the reaction paths. We show that previously unconsidered pathways present significantly reduced barriers to reaction and must be considered as alternative viable paths. At least two electronic spin states of the model cluster must be considered for a complete description.