989 results for Noncommutative phase space
Abstract:
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed through a cumbersome multi-step procedure requiring many user interactions, so automation is needed for use in the clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework in which appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. To provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, is used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown.
In these cases, comparisons are performed between MC-calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, modules can be added, keeping the system highly flexible and efficient.
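The in-memory module chain this abstract describes (source, beam modifiers, patient, with particles passed between modules rather than through files) can be sketched as a generator pipeline. This is an illustrative toy only; every class and function name below is hypothetical, not the framework's actual API.

```python
# Illustrative sketch only: a source module yields particles, each beam-modifier
# module transports them, and the patient module consumes them, with no
# intermediate phase-space files. All names are hypothetical, not the real API.
import random
from dataclasses import dataclass

@dataclass
class Particle:
    energy: float   # MeV
    x: float        # cm, transverse position in the beam plane
    y: float        # cm
    weight: float = 1.0

def phase_space_source(n):
    """Toy source module: monoenergetic photons in a narrow Gaussian spot."""
    for _ in range(n):
        yield Particle(energy=6.0, x=random.gauss(0.0, 0.1), y=random.gauss(0.0, 0.1))

def beam_modifier(particles, attenuation=0.9):
    """Toy beam-modifier module: attenuate each particle's statistical weight."""
    for p in particles:
        p.weight *= attenuation
        yield p

def patient_dose(particles):
    """Toy patient module: accumulate weighted energy as a dose surrogate."""
    return sum(p.energy * p.weight for p in particles)

# The modules are chained as generators, so particles stay in memory end to end.
dose = patient_dose(beam_modifier(phase_space_source(1000)))
```

Chaining generators this way mirrors the file-free interface described above: adding another beam modifier, with simple or exact geometry, is just one more stage in the chain.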
Abstract:
One limitation to the widespread implementation of Monte Carlo (MC) patient dose-calculation algorithms for radiotherapy is the lack of a general and accurate source model of the accelerator radiation source. Our aim in this work is to investigate the sensitivity of the photon-beam subsource distributions in an MC source model (with target, primary collimator, and flattening filter photon subsources and an electron subsource) for 6- and 18-MV photon beams when the energy and radial distributions of the initial electrons striking a linac target change. For this purpose, phase-space data (PSD) were calculated for various mean electron energies striking the target, various normally distributed electron energy spreads, and various normally distributed electron radial intensity distributions. All PSD were analyzed in terms of energy, fluence, and energy fluence distributions, which were compared between the different parameter sets. The energy spread was found to have a negligible influence on the subsource distributions. The mean energy and radial intensity significantly changed the target subsource distribution shapes and intensities. For the primary collimator and flattening filter subsources, the distribution shapes of the fluence and energy fluence changed little for different mean electron energies striking the target; however, their relative intensity compared with the target subsource changed, which can be accounted for by a scaling factor. This study indicates that adjustments to MC source models can likely be limited to adjusting the target subsource, in conjunction with scaling the relative intensity and energy spectrum of the primary collimator, flattening filter, and electron subsources, when the energy and radial distributions of the initial electron beam change.
Abstract:
A major barrier to widespread clinical implementation of Monte Carlo dose calculation is the difficulty in characterizing the radiation source within a generalized source model. This work aims to develop a generalized three-component source model (target, primary collimator, flattening filter) for 6- and 18-MV photon beams that matches full phase-space data (PSD). Subsource-by-subsource comparison of dose distributions, using either source PSD or the source model as input, allows accurate source characterization and has the potential to ease the commissioning procedure, since it provides information about which subsource needs to be tuned. This source model is unique in that, compared to previous source models, it retains additional correlations among PS variables, which improves accuracy at nonstandard source-to-surface distances (SSDs). In our study, three-dimensional (3D) dose calculations were performed for SSDs ranging from 50 to 200 cm and for field sizes from 1 x 1 to 30 x 30 cm2, as well as a 10 x 10 cm2 field 5 cm off axis in each direction. The 3D dose distributions, using either full PSD or the source model as input, were compared in terms of dose difference and distance to agreement. With this model, over 99% of the voxels agreed within ±1% or 1 mm for the target, within ±2% or 2 mm for the primary collimator, and within ±2.5% or 2 mm for the flattening filter in all cases studied. For the combined source model, including a charged particle source, 99% of the dose voxels agreed within 1% or 1 mm with the full PSD as input. The accurate and general characterization of each photon source and knowledge of the subsource dose distributions should facilitate source model commissioning by allowing the histogram distributions representing the subsources to be scaled and tuned.
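The dose-difference / distance-to-agreement (DTA) acceptance criterion used in these comparisons can be illustrated with a minimal one-dimensional sketch. This is not the authors' implementation; the thresholds, search window, and toy dose curves are invented for illustration.

```python
# A voxel passes if its dose differs from the reference by less than dd (as a
# fraction of the maximum dose) OR if the reference reaches the test dose
# within dta_mm of the voxel. One-dimensional toy version for illustration.
import numpy as np

def passes_dd_dta(ref, test, spacing_mm=1.0, dd=0.01, dta_mm=1.0):
    """Return the fraction of voxels passing the dd / dta-mm criterion."""
    n = len(ref)
    passed = 0
    search = int(np.ceil(dta_mm / spacing_mm))
    for i in range(n):
        # Dose-difference test, normalized to the reference maximum.
        if abs(test[i] - ref[i]) <= dd * ref.max():
            passed += 1
            continue
        # DTA test: does the reference reach the test dose within dta_mm?
        lo, hi = max(0, i - search), min(n, i + search + 1)
        if ref[lo:hi].min() <= test[i] <= ref[lo:hi].max():
            passed += 1
    return passed / n

ref = np.linspace(1.0, 0.5, 100)    # toy depth-dose curve
test = ref * 1.005                  # 0.5% global offset
frac = passes_dd_dta(ref, test)     # all voxels pass at the 1% / 1 mm level
```

A 3D clinical implementation would search a spherical neighborhood and typically interpolate the dose grid, but the pass/fail logic per voxel is the same.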
Abstract:
A multiple source model (MSM) for the 6 MV beam of a Varian Clinac 2300 C/D was developed by simulating radiation transport through the accelerator head for a set of square fields using the GEANT Monte Carlo (MC) code. The corresponding phase space (PS) data enabled the characterization of 12 sources representing the main components of the beam defining system. By parametrizing the source characteristics and by evaluating the dependence of the parameters on field size, it was possible to extend the validity of the model to arbitrary rectangular fields which include the central 3 x 3 cm2 field without additional precalculated PS data. Finally, a sampling procedure was developed in order to reproduce the PS data. To validate the MSM, the fluence, energy fluence and mean energy distributions determined from the original and the reproduced PS data were compared and showed very good agreement. In addition, the MC calculated primary energy spectrum was verified by an energy spectrum derived from transmission measurements. Comparisons of MC calculated depth dose curves and profiles, using original and PS data reproduced by the MSM, agree within 1% and 1 mm. Deviations from measured dose distributions are within 1.5% and 1 mm. However, the real beam leads to some larger deviations outside the geometrical beam area for large fields. Calculated output factors in 10 cm water depth agree within 1.5% with experimentally determined data. In conclusion, the MSM produces accurate PS data for MC photon dose calculations for the rectangular fields specified.
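The sampling procedure that reproduces PS data from the parametrized source characteristics can be illustrated, assuming a simple histogram parametrization of one subsource's energy distribution, by inverse-transform sampling. The histogram below is invented for illustration and is not the paper's parametrization.

```python
# Inverse-transform sampling from a tabulated energy histogram, uniform within
# each bin. Bin edges and counts are illustrative, not fitted source data.
import numpy as np

rng = np.random.default_rng(0)

edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # MeV
counts = np.array([5.0, 20.0, 30.0, 25.0, 15.0, 5.0])   # relative intensity

def sample_energies(n):
    """Draw n energies: pick a bin from the CDF, then a point inside the bin."""
    cdf = np.cumsum(counts) / counts.sum()
    bins = np.searchsorted(cdf, rng.random(n))
    lo, hi = edges[bins], edges[bins + 1]
    return lo + (hi - lo) * rng.random(n)

energies = sample_energies(100_000)
mean_e = energies.mean()   # close to the histogram mean of 2.9 MeV
```

In a full multiple source model, each of the sources would carry such tables (with field-size dependent parameters) for energy, position, and direction, and correlated quantities would be sampled jointly.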
Abstract:
Monte Carlo (GEANT code) produced 6 and 15 MV phase space (PS) data were used to define several simple photon beam models. For creating the PS data, the energy of the electrons hitting the target was tuned to obtain depth dose data in agreement with measurements. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation from the accelerator head; scattered radiation outside the boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four different models were investigated, which involved different ways to determine the energies and locations of beam particles in the output plane. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5 x 5 to 40 x 40 cm2 and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in this plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. A constant fluence is used again in model 3, but the photon energy spectrum depends upon the off-axis position. Model 4, finally, uses both the spatial particle fluence distribution and off-axis dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10 x 10 cm2 were not model sensitive. Good agreement between measured and calculated depth dose curves and profiles for all field sizes was reached for model 4, whereas increasing deviations were found with increasing field size for models 1-3. Large deviations resulted for the profiles of models 2 and 3, because these models overestimate or underestimate the energy fluence at large off-axis distances. Relative output factors consistent with measurements resulted only for model 4.
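The distinguishing feature of model 4, energy spectra that depend on the off-axis position, can be sketched in isolation. The softening law, the exponential spectrum, and all numbers below are illustrative assumptions, not the paper's parametrization.

```python
# Toy version of an off-axis dependent spectrum: the mean photon energy
# decreases with radius (off-axis beam softening), so a sample drawn far from
# the axis is systematically softer than one drawn on the axis. Models with a
# single radius-independent spectrum cannot reproduce this.
import numpy as np

rng = np.random.default_rng(1)

def off_axis_mean_energy(r_cm, e0=1.8, softening=0.01):
    """Assumed mean photon energy (MeV), decreasing with off-axis radius."""
    return e0 / (1.0 + softening * r_cm)

def sample_photon(r_cm):
    """Draw one photon energy from an exponential spectrum whose mean
    depends on the off-axis radius, as in model 4."""
    return rng.exponential(off_axis_mean_energy(r_cm))

central = np.mean([sample_photon(0.0) for _ in range(20_000)])
edge = np.mean([sample_photon(20.0) for _ in range(20_000)])
# The mean at 20 cm off axis is systematically lower than on the axis.
```

Combining such a radius-dependent spectrum with a tabulated spatial fluence distribution gives the two ingredients the abstract identifies as necessary for accurate large-field profiles and output factors.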
Abstract:
BEAMnrc, a code for simulating medical linear accelerators based on EGSnrc, has been benchmarked and used extensively in the scientific literature and is therefore often considered the gold standard for Monte Carlo simulations in radiotherapy applications. However, its long computation times make it too slow for the clinical routine, and often even for research purposes without a large investment in computing resources. VMC++ is a much faster code thanks to the intensive use of variance reduction techniques and a much faster implementation of the condensed history technique for charged particle transport. A research version of this code is also capable of simulating the full head of linear accelerators operated in photon mode (excluding multileaf collimators and hard and dynamic wedges). In this work, a validation of the full head simulation at 6 and 18 MV is performed, simulating with VMC++ and BEAMnrc the addition of one head component at a time and comparing the resulting phase space files. For the comparison, photon and electron fluence, photon energy fluence, mean energy, and photon spectra are considered. The largest absolute differences are found in the energy fluences. For all the simulations of the different head components, very good agreement (differences in energy fluences between VMC++ and BEAMnrc <1%) is obtained; only one particular case at 6 MV shows a somewhat larger energy fluence difference of 1.4%. Dosimetrically, these phase space differences imply agreement between both codes at the <1% level, making the VMC++ head module suitable for full head simulations with a considerable gain in efficiency and no loss of accuracy.
Abstract:
This dissertation presents a detailed study exploring quantum correlations of light in macroscopic environments. We have explored quantum correlations of single photons, weak coherent states, and polarization-correlated/polarization-entangled photons in macroscopic environments. These included macroscopic mirrors, macroscopic photon numbers, spatially separated observers, noisy photon sources, and propagation media with loss or disturbances. We proposed a measurement scheme for observing quantum correlations and entanglement in the spatial properties of two macroscopic mirrors using single-photon spatial compass states. We explored the phase-space distribution features of spatial compass states, such as the chessboard pattern, by using the Wigner function. The displacement and tilt correlations of the two mirrors were manifested through the propensities of the compass states. This technique can be used to extract Einstein-Podolsky-Rosen (EPR) correlations of the two mirrors. We then formulated the discrete-like property of the propensity Pb(m,n), which can be used to explore environmentally perturbed quantum jumps of the EPR correlations in phase space. With single-photon spatial compass states, the variances in position and momentum are much smaller than the standard quantum limit of a Gaussian TEM00 beam. We observed intrinsic quantum correlations of weak coherent states between two parties through balanced homodyne detection. Our scheme can be used as a supplement to the decoy-state BB84 protocol and the differential phase-shift QKD protocol. We prepared four types of bipartite correlations ±cos2(θ12) that are shared between two parties. We also demonstrated bit correlations between two parties separated by 10 km of optical fiber. The bit information is protected by the large quantum phase fluctuation of weak coherent states, adding another physical layer of security to these quantum key distribution protocols.
Using 10 m of highly nonlinear fiber (HNLF) at 77 K, we observed a coincidence to accidental-coincidence ratio of 130±5 for correlated photon pairs and a two-photon interference visibility >98% for entangled photon pairs. We also verified the non-local behavior of polarization-entangled photon pairs by violating the Clauser-Horne-Shimony-Holt Bell inequality by more than 12 standard deviations. With the HNLF at 300 K (77 K), a photon-pair production rate about a factor of 3 (2) higher than in a 300 m dispersion-shifted fiber is observed. We then studied quantum correlation and interference of photon pairs, with one photon of the pair experiencing multiple scattering in a random medium. We observed that depolarization noise photons in multiple scattering degrade the purity of the photon pair, and that Raman noise photons in a photon-pair source contribute to this depolarization effect. We found that the quantum correlation of polarization-entangled photon pairs is better preserved than that of polarization-correlated photon pairs when one photon of the pair is scattered through a random medium. Our findings show that high-purity polarization-entangled photon pairs are the better candidate for long-distance quantum key distribution.
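For reference, a compass state in one standard form (a superposition of four Gaussian wave packets, two displaced in position and two in momentum; the spatial single-photon version used in this work is analogous, with x and p the transverse position and momentum) and the Wigner function that reveals its chessboard interference pattern can be written as:

```latex
% A compass-state wave function: Gaussians displaced to +/- x_0 in position
% and boosted to +/- p_0 in momentum (normalization omitted).
\[
  \psi_{\mathrm{compass}}(x) \propto
    e^{-\frac{(x-x_0)^2}{2\sigma^2}} + e^{-\frac{(x+x_0)^2}{2\sigma^2}}
    + e^{-\frac{x^2}{2\sigma^2}}
      \left( e^{\,i p_0 x/\hbar} + e^{-i p_0 x/\hbar} \right)
\]
% The Wigner function in its standard definition:
\[
  W(x,p) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty}
    \psi^{*}(x+y)\,\psi(x-y)\, e^{\,2 i p y/\hbar}\, dy
\]
```

The interference between the four packets gives W(x,p) a central chessboard pattern with phase-space structure on the sub-Planck scale of order ℏ²/(x₀p₀), which is what makes such states sensitive probes of small displacements and tilts.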
Abstract:
The physics program of the NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) experiment at the CERN SPS consists of three subjects. In the first stage of data taking (2007-2009), measurements of hadron production in hadron-nucleus interactions needed for neutrino (T2K) and cosmic-ray (Pierre Auger and KASCADE) experiments will be performed. In the second stage (2009-2010), hadron production in proton-proton and proton-nucleus interactions needed as reference data for a better understanding of nucleus-nucleus reactions will be studied. In the third stage (2009-2013), the energy dependence of hadron production properties will be measured in p+p and p+Pb interactions and nucleus-nucleus collisions, with the aim of identifying the properties of the onset of deconfinement and finding evidence for the critical point of strongly interacting matter. The NA61 experiment was approved at CERN in June 2007. The first pilot run was performed during October 2007. Calibrations of all detector components have been performed successfully and preliminary uncorrected spectra have been obtained. High quality of track reconstruction and particle identification, similar to NA49, has been achieved. The data and new detailed simulations confirm that the NA61 detector acceptance and particle identification capabilities cover the phase space required by the T2K experiment. This document reports on the progress made in the calibration and analysis of the 2007 data.
Abstract:
A measurement of the ZZ production cross section in proton-proton collisions at √s = 7 TeV using data recorded by the ATLAS experiment at the Large Hadron Collider is presented. In a data sample corresponding to an integrated luminosity of 4.6 fb−1 collected in 2011, events are selected that are consistent either with two Z bosons decaying to electrons or muons or with one Z boson decaying to electrons or muons and a second Z boson decaying to neutrinos. The ZZ(*) → ℓ+ℓ−ℓ′+ℓ′− and ZZ → ℓ+ℓ−νν̄ cross sections are measured in restricted phase-space regions. These results are then used to derive the total cross section for ZZ events produced with both Z bosons in the mass range 66 to 116 GeV, σtot(ZZ) = 6.7 ± 0.7 (stat.) +0.4/−0.3 (syst.) ± 0.3 (lumi.) pb, which is consistent with the Standard Model prediction of 5.89 +0.22/−0.18 pb calculated at next-to-leading order in QCD. The normalized differential cross sections in bins of various kinematic variables are presented. Finally, the differential event yield as a function of the transverse momentum of the leading Z boson is used to set limits on anomalous neutral triple gauge boson couplings in ZZ production.
Abstract:
Measurements of fiducial and differential cross sections of Higgs boson production in the H → ZZ* → 4ℓ decay channel are presented. The cross sections are determined within a fiducial phase space and corrected for detection efficiency and resolution effects. They are based on 20.3 fb−1 of pp collision data, produced at √s = 8 TeV centre-of-mass energy at the LHC and recorded by the ATLAS detector. The differential measurements are performed in bins of transverse momentum and rapidity of the four-lepton system, the invariant mass of the subleading lepton pair and the decay angle of the leading lepton pair with respect to the beam line in the four-lepton rest frame, as well as the number of jets and the transverse momentum of the leading jet. The measured cross sections are compared to selected theoretical calculations of the Standard Model expectations. No significant deviation from any of the tested predictions is found.
Abstract:
Additional jet activity in dijet events is measured using pp collisions at ATLAS at a centre-of-mass energy of 7 TeV, for jets reconstructed using the anti-kt algorithm with radius parameter R=0.6. This is done using variables such as the fraction of dijet events without an additional jet in the rapidity interval bounded by the dijet subsystem and correlations between the azimuthal angles of the dijets. They are presented, both with and without a veto on additional jet activity in the rapidity interval, as a function of the mean transverse momentum of the dijets and of the rapidity interval size. The double differential dijet cross section is also measured as a function of the interval size and the azimuthal angle between the dijets. These variables probe differences in the approach to resummation of large logarithms when performing QCD calculations. The data are compared to POWHEG, interfaced to the PYTHIA 8 and HERWIG parton shower generators, as well as to HEJ with and without interfacing it to the ARIADNE parton shower generator. None of the theoretical predictions agree with the data across the full phase space considered; however, POWHEG+PYTHIA 8 and HEJ+ARIADNE are found to provide the best agreement with the data. These measurements use the full data sample collected with the ATLAS detector in 7 TeV pp collisions at the LHC and correspond to integrated luminosities of 36.1 pb−1 and 4.5 fb−1 for data collected during 2010 and 2011, respectively.
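The first of these variables, the fraction of dijet events with no additional jet in the rapidity interval bounded by the dijet subsystem (often called the gap fraction), can be sketched with toy event records. This is illustrative only, not the analysis code, and the veto scale and event data are invented.

```python
# Gap fraction: fraction of dijet events with no additional jet above a veto
# transverse-momentum scale inside the rapidity interval spanned by the two
# leading jets. Event records here are toy tuples, not real data.
def gap_fraction(events, veto_pt=20.0):
    """events: list of (y1, y2, extra_jets), extra_jets = [(pt, y), ...]."""
    gap = 0
    for y1, y2, extra in events:
        lo, hi = min(y1, y2), max(y1, y2)
        vetoed = any(pt > veto_pt and lo < y < hi for pt, y in extra)
        if not vetoed:
            gap += 1
    return gap / len(events)

events = [
    (-1.0, 1.0, [(30.0, 0.2)]),   # hard additional jet inside the gap: vetoed
    (-1.0, 1.0, [(10.0, 0.0)]),   # soft jet below the veto scale: survives
    (-2.0, 2.0, [(50.0, 3.0)]),   # hard jet outside the interval: survives
]
f = gap_fraction(events)          # 2 of 3 events survive the veto
```

Measuring this fraction as a function of the mean dijet transverse momentum and of the interval size is what probes the resummation of the large logarithms mentioned above.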
Abstract:
Measurements of fiducial and differential cross sections are presented for Higgs boson production in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV. The analysis is performed in the H → γγ decay channel using 20.3 fb−1 of data recorded by the ATLAS experiment at the CERN Large Hadron Collider. The signal is extracted using a fit to the diphoton invariant mass spectrum assuming that the width of the resonance is much smaller than the experimental resolution. The signal yields are corrected for the effects of detector inefficiency and resolution. The pp → H → γγ fiducial cross section is measured to be 43.2 ± 9.4 (stat.) +3.2/−2.9 (syst.) ± 1.2 (lumi.) fb for a Higgs boson of mass 125.4 GeV decaying to two isolated photons that have transverse momentum greater than 35% and 25% of the diphoton invariant mass and each with absolute pseudorapidity less than 2.37. Four additional fiducial cross sections and two cross-section limits are presented in phase space regions that test the theoretical modelling of different Higgs boson production mechanisms, or are sensitive to physics beyond the Standard Model. Differential cross sections are also presented, as a function of variables related to the diphoton kinematics and the jet activity produced in the Higgs boson events. The observed spectra are statistically limited but broadly in line with the theoretical expectations.
Abstract:
This paper presents the performance of the ATLAS muon reconstruction during the LHC run with pp collisions at √s = 7–8 TeV in 2011–2012, focusing mainly on data collected in 2012. Measurements of the reconstruction efficiency and of the momentum scale and resolution, based on large reference samples of J/ψ → μμ, Z → μμ and ϒ → μμ decays, are presented and compared to Monte Carlo simulations. Corrections to the simulation, to be used in physics analysis, are provided. Over most of the covered phase space (muon |η| < 2.7 and 5 ≲ pT ≲ 100 GeV) the efficiency is above 99% and is measured with per-mille precision. The momentum resolution ranges from 1.7% at central rapidity and for transverse momentum pT ≅ 10 GeV, to 4% at large rapidity and pT ≅ 100 GeV. The momentum scale is known with an uncertainty of 0.05% to 0.2% depending on rapidity. A method for the recovery of final state radiation from the muons is also presented.
Abstract:
The next-generation neutrino observatory proposed by the LBNO collaboration will address fundamental questions in particle and astroparticle physics. The experiment consists of a far detector, in its first stage a 20 kt LAr double-phase TPC and a magnetised iron calorimeter, situated 2300 km from CERN, and a near detector based on a high-pressure argon gas TPC. The long baseline provides a unique opportunity to study neutrino flavour oscillations over their 1st and 2nd oscillation maxima, exploring the L/E behaviour and distinguishing effects arising from δCP and matter. In this paper we have reevaluated the physics potential of this setup for determining the mass hierarchy (MH) and discovering CP violation (CPV), using a conventional neutrino beam from the CERN SPS with a power of 750 kW. We use conservative assumptions on the knowledge of oscillation parameter priors and systematic uncertainties. The impact of each systematic error and of the precision of the oscillation priors is shown. We demonstrate that the first stage of LBNO can determine the MH unambiguously at > 5σ C.L. over the whole phase space. We show that the statistical treatment of the experiment is of very high importance, leading to the conclusion that LBNO has ~100% probability to determine the MH in at most 4-5 years of running. Since knowledge of the MH is indispensable for extracting δCP from the data, the first LBNO phase can convincingly give evidence for CPV at the 3σ C.L. using today's knowledge of oscillation parameters and realistic assumptions on the systematic uncertainties.
Abstract:
Transition state theory is a central cornerstone in reaction dynamics. Its key step is the identification of a dividing surface that is crossed only once by all reactive trajectories. This assumption is often badly violated, especially when the reactive system is coupled to an environment. The calculations made in this way then overestimate the reaction rate and the results depend critically on the choice of the dividing surface. In this Communication, we study the phase space of a stochastically driven system close to an energetic barrier in order to identify the geometric structure unambiguously determining the reactive trajectories, which is then incorporated in a simple rate formula for reactions in condensed phase that is both independent of the dividing surface and exact.
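The recrossing problem described here, a trajectory crossing a fixed dividing surface more than once so that a surface-crossing count overestimates the rate, can be illustrated with a toy overdamped Langevin simulation near a parabolic barrier. The parameters are arbitrary and this is not the rate formula derived in the Communication.

```python
# A stochastically driven particle on an inverted parabola V(x) = -w_b^2 x^2 / 2
# can wander back and forth across the naive dividing surface x = 0 before it
# commits to either side; each extra crossing inflates a TST-style flux count.
import random

random.seed(2)

def count_crossings(steps=20_000, dt=1e-3, omega_b=1.0, gamma=2.0, kT=1.0):
    """Overdamped Langevin dynamics; returns the number of x = 0 crossings."""
    x = -0.01                                      # start just on the reactant side
    crossings = 0
    noise_sigma = (2.0 * kT * dt / gamma) ** 0.5   # fluctuation-dissipation scale
    for _ in range(steps):
        force = omega_b**2 * x                     # -dV/dx for the inverted parabola
        x_new = x + force * dt / gamma + random.gauss(0.0, noise_sigma)
        if (x < 0) != (x_new < 0):
            crossings += 1
        x = x_new
        if abs(x) > 1.0:                           # trajectory has committed to a side
            break
    return crossings

n = count_crossings()
# Noise-dominated motion near the barrier top typically produces multiple
# crossings before commitment, which is exactly the recrossing overcounting.
```

The paper's point is that a geometric structure in the stochastic phase space (rather than any fixed surface such as x = 0) unambiguously separates reactive from nonreactive trajectories, yielding a rate independent of the dividing-surface choice.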