989 results for Phase-space Methods
Abstract:
The semileptonic decay K± → π0 μ± ν is a well-suited channel for determining the CKM matrix element |V_us|. The hadronic matrix element of this decay is described by two dimensionless form factors f_±(t), which depend on the momentum transfer t = (p_K − p_π)^2 to the lepton pair. For the determination of |V_us| the form factors are important parameters in the calculation of the phase-space integral of this decay. A precise measurement of the form factors is additionally motivated by the fact that the NA48 result deviates from the other measurements by the KLOE, KTeV and ISTRA+ experiments. Data from a 2004 NA48/2 run period with an open trigger were analyzed, from which I selected 1.8 million K±_μ3 decay candidates with a background fraction below 0.1%. The form factors were determined from the two-dimensional Dalitz distribution of the data, after it had been corrected for detector acceptance and radiative effects. The theoretical, parameter-dependent function was fitted to this distribution with a chi-square method. For the quadratic, pole and dispersive parametrizations the resulting form factors are:
λ_0 = (14.82 ± 1.67_stat ± 0.62_sys) × 10^-3
λ'_+ = (25.53 ± 3.51_stat ± 1.90_sys) × 10^-3
λ''_+ = (1.40 ± 1.30_stat ± 0.48_sys) × 10^-3
m_S = (1204.8 ± 32.0_stat ± 11.4_sys) MeV/c^2
m_V = (877.4 ± 11.1_stat ± 11.2_sys) MeV/c^2
ln C = 0.1871 ± 0.0088_stat ± 0.0031_sys ± 0.0056_ext
Λ_+ = (25.42 ± 0.73_stat ± 0.73_sys ± 1.52_ext) × 10^-3
The results agree well with the measurements of the KLOE, KTeV and ISTRA+ experiments and allow an improved global fit of the form factors. Using the dispersive parametrization of the form factors together with the Callan-Treiman theorem, it is possible to determine a value for f_+(0). The result is:
f_+(0) = 0.987 ± 0.011_{NA48/2} ± 0.008_ext
The value obtained for f_+(0) agrees well within errors with the previous measurements by KTeV, KLOE and ISTRA+, but deviates by almost two standard deviations from the theoretical prediction.
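For reference, the parametrizations quoted above are the standard K_μ3 conventions, which the abstract leaves implicit (m_π is the charged-pion mass; the exact convention details are my reading, not spelled out in the abstract):

```latex
% Quadratic and linear parametrizations of the vector and scalar form factors:
f_+(t) = f_+(0)\left[1 + \lambda'_+\,\frac{t}{m_\pi^2}
        + \frac{\lambda''_+}{2}\left(\frac{t}{m_\pi^2}\right)^{2}\right],
\qquad
f_0(t) = f_0(0)\left[1 + \lambda_0\,\frac{t}{m_\pi^2}\right].

% Pole parametrization with effective vector/scalar resonance masses:
f_{+,0}(t) = \frac{f_{+,0}(0)\,m_{V,S}^{2}}{m_{V,S}^{2}-t}.

% Callan-Treiman theorem: the scalar form factor at
% \Delta_{K\pi} = m_K^2 - m_\pi^2 is fixed by f_K/f_\pi (up to small
% corrections), so the dispersive parameter C = f_0(\Delta_{K\pi})/f_+(0)
% determines
f_+(0) \simeq \frac{f_K}{f_\pi}\,\frac{1}{C},
\qquad \ln C \ \text{as quoted above}.
```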
Abstract:
Topological constraints influence the properties of polymers. In this work, computer simulations are used to investigate in detail to what extent the static properties of collapsed polymer rings, polymer rings in concentrated solutions, and brushes built from polymer rings differ between systems with and without topological constraints. Furthermore, the influence of geometric confinement on the topological properties of single polymer chains is analyzed. The first part of the work concerns the influence of topology on the properties of single polymer chains in various situations. Since the efficient Monte Carlo simulation of collapsed polymer chains is particularly challenging, three bridging Monte Carlo moves are first transferred from lattice to continuum models. A measurement of the efficiency of these moves yields a speed-up factor of up to 100 compared with the conventional slithering-snake algorithm. This is followed by the analysis of a single coarse-grained polystyrene chain in spherical confinement with respect to entanglements and knots. It is shown that significant knotting of the polystyrene chain only sets in once the radius of the surrounding capsid is smaller than the gyration radius of the chain. Furthermore, both Monte Carlo and molecular dynamics simulations of very large rings with up to one million monomers in the collapsed state are performed. While the configurations from the Monte Carlo simulations are strongly knotted owing to the use of the bridging moves, the configurations from the molecular dynamics simulations remain unknotted. Significant differences appear in both the local and the global structure of the ring polymers. The second part of the work examines the scaling behavior of the gyration radius of individual polymer rings in a concentrated solution of fully flexible polymer rings in the continuum. The onset of the asymptotic scaling regime, which is consistent with the "fractal globule" model, is reached. In the concluding third part of this work, the behavior of brushes made of linear polymers is compared with that of ring-polymer brushes. It turns out that the structure and the scaling behavior of the two systems with identical density profiles parallel to the substrate deviate clearly from each other, although the properties of both systems agree in the direction perpendicular to the substrate. A comparison of the relaxation behavior of single chains in conventional polymer brushes and in ring brushes reveals no major differences. However, it also emerges that the explanations used so far for the relaxation behavior of conventional brushes are insufficient, since they only account for the initial decay of the correlation function. The study of the dynamics of individual monomers in a conventional brush of open chains, from the substrate to the free end, shows that the monomers in the middle of the chain exhibit the slowest relaxation, although their mean displacement is distinctly smaller than that of the free end monomers.
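As a minimal illustration of the central observable above (this is an illustrative sketch, not the thesis code; the ideal-ring toy construction is my own), here is how the gyration radius of a ring is computed, against which the collapsed-state scaling R_g ∝ N^(1/3) is compared:

```python
import numpy as np

def radius_of_gyration(r):
    """Gyration radius of a chain or ring given an (N, 3) array of positions."""
    com = r.mean(axis=0)
    return np.sqrt(((r - com) ** 2).sum(axis=1).mean())

# Toy example: an ideal (random-walk) ring, closed by construction.
rng = np.random.default_rng(1)
N = 1024
steps = rng.normal(size=(N, 3))
steps -= steps.mean(axis=0)        # zero net displacement -> closed ring
ring = np.cumsum(steps, axis=0)
print(radius_of_gyration(ring))    # ideal ring scales as N^(1/2);
                                   # a collapsed globule scales as N^(1/3)
```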
Abstract:
In the present thesis, we study quantization of classical systems with non-trivial phase spaces using the group-theoretical quantization technique proposed by Isham. Our main goal is a better understanding of global and topological aspects of quantum theory. In practice, the group-theoretical approach enables direct quantization of systems subject to constraints and boundary conditions in a natural and physically transparent manner -- cases for which the canonical quantization method of Dirac fails. First, we provide a clarification of the quantization formalism. In contrast to prior treatments, we introduce a sharp distinction between the two group structures that are involved and explain their physical meaning. The benefit is a consistent and conceptually much clearer construction of the Canonical Group. In particular, we shed light upon the 'pathological' case for which the Canonical Group must be defined via a central Lie algebra extension and emphasise the role of the central extension in general. In addition, we study direct quantization of a particle restricted to a half-line with 'hard wall' boundary condition. Despite the apparent simplicity of this example, we show that a naive quantization attempt based on the cotangent bundle over the half-line as classical phase space leads to an incomplete quantum theory; the reflection which is a characteristic aspect of the 'hard wall' is not reproduced. Instead, we propose a different phase space that realises the necessary boundary condition as a topological feature and demonstrate that quantization yields a suitable quantum theory for the half-line model. The insights gained in the present special case improve our understanding of the relation between classical and quantum theory and illustrate how contact interactions may be incorporated.
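As a pointer to what "a different Canonical Group" means in practice (a standard textbook fact in the group-theoretical quantization literature, not a quote from the thesis): on the half-line the Heisenberg pair fails because the momentum operator admits no self-adjoint extension, and the affine algebra is quantized instead:

```latex
% On Q = \mathbb{R}_+ one replaces [\hat{x},\hat{p}] = i\hbar by the
% affine commutation relation, with d = x\,p the classical dilation:
[\hat{x}, \hat{d}\,] = i\hbar\,\hat{x}, \qquad \hat{x} > 0 .
```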
Abstract:
The subject of this work is the numerical evaluation of loop integrals which occur at higher orders of perturbation theory. In analogy to real emission, subtraction terms can also be introduced in the virtual contributions, which remove the collinear and soft divergences of the loop integral. The phase-space integration and the loop integration can then be carried out in a single Monte Carlo integration. In this work we show how such a numerical integration can be performed with the help of a contour deformation. We also show how the required integrands can be computed with recursion formulas.
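The abstract does not spell the method out, so here is a hedged, self-contained sketch of the contour-deformation idea on a one-dimensional Feynman-parameter integral (a stand-in example, much simpler than a full loop/phase-space integrand): the contour is deformed into the complex plane in the direction dictated by the -iε prescription, with the endpoints held fixed.

```python
import numpy as np
from scipy.integrate import quad

# I(s) = \int_0^1 dx / (m^2 - s x(1-x) - i eps), the one-loop "bubble" in
# Feynman-parameter form, above threshold (s > 4 m^2). Deform
# x -> z(x) = x - i*lam*x*(1-x)*F'(x) with F(x) = m^2 - s x(1-x):
# this gives Im F(z) <= 0 (consistent with -i eps) and keeps z(0)=0, z(1)=1.
m2, s, lam = 1.0, 5.0, 0.1

def integrand(x):
    Fp = -s * (1.0 - 2.0 * x)                       # F'(x)
    z = x - 1j * lam * x * (1.0 - x) * Fp           # deformed contour
    dz = 1.0 - 1j * lam * ((1.0 - 2.0 * x) * Fp + 2.0 * s * x * (1.0 - x))
    return dz / (m2 - s * z * (1.0 - z))            # Jacobian times integrand

re = quad(lambda x: integrand(x).real, 0.0, 1.0)[0]
im = quad(lambda x: integrand(x).imag, 0.0, 1.0)[0]

beta = np.sqrt(1.0 - 4.0 * m2 / s)                  # analytic cross-check
exact = (2.0 * np.log((1.0 + beta) / (1.0 - beta)) - 2j * np.pi) / (s * beta)
print(re + 1j * im, exact)                          # agree to quad precision
```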
Abstract:
Measurements of the self-coupling between bosons are important tests of the electroweak sector of the Standard Model (SM). The production of pairs of Z bosons through the s-channel is forbidden in the SM. The presence of physics beyond the SM could lead to a deviation of the expected production cross section of pairs of Z bosons due to so-called anomalous Triple Gauge Couplings (aTGC). Proton-proton collision data recorded at the Large Hadron Collider (LHC) by the ATLAS detector at a center-of-mass energy of 8 TeV were analyzed, corresponding to an integrated luminosity of 20.3 fb-1. Pairs of Z bosons decaying into two electron-positron pairs are searched for in the data sample. The effect of including detector regions corresponding to high values of the pseudorapidity was studied in order to enlarge the phase space available for the measurement of the ZZ production. The number of ZZ candidates was determined and the ZZ production cross section was measured to be 7.3 ± 1.0(stat.) ± 0.4(sys.) ± 0.2(lumi.) pb, which is consistent with the SM expectation of 7.2 ± 0.3 pb. Limits on the aTGCs were derived using the observed yield; they are twice as stringent as previous limits obtained by ATLAS at a center-of-mass energy of 7 TeV.
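For orientation, a quoted value of this kind follows from the standard counting-experiment master formula; the symbols below are generic placeholders, not necessarily the notation of this analysis:

```latex
% Generic master formula for a counting measurement:
\sigma_{ZZ} \;=\; \frac{N_{\text{obs}} - N_{\text{bkg}}}
                       {C_{ZZ}\,\mathcal{L}_{\text{int}}},
% with C_{ZZ} the acceptance-times-efficiency correction (including the
% decay branching fraction) and \mathcal{L}_{\text{int}} = 20.3\,
% \text{fb}^{-1} the integrated luminosity.
```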
Abstract:
In this thesis I present a new coarse-grained model suitable to investigate the phase behavior of rod-coil block copolymers on mesoscopic length scales. In this model the rods are represented by hard spherocylinders, whereas the coil block consists of interconnected beads. The interactions between the constituents are based on local densities. This facilitates an efficient Monte-Carlo sampling of the phase space. I verify the applicability of the model and the simulation approach by means of several examples. I treat pure rod systems and mixtures of rod and coil polymers. Then I append coils to the rods and investigate the role of the different model parameters. Furthermore, I compare different implementations of the model. I prove the capability of the rod-coil block copolymers in our model to exhibit typical micro-phase separated configurations as well as extraordinary phases, such as the wavy lamellar state, percolating structures and clusters. Additionally, I demonstrate the metastability of the observed zigzag phase in our model. A central point of this thesis is the examination of the phase behavior of the rod-coil block copolymers in dependence of different chain lengths and interaction strengths between rods and coil. The observations of these studies are summarized in a phase diagram for rod-coil block copolymers. Furthermore, I validate a stabilization of the smectic phase with increasing coil fraction. In the second part of this work I present a side project in which I derive a model permitting the simulation of tetrapods with and without grafted semiconducting block copolymers. The effect of these polymers is added in an implicit manner by effective interactions between the tetrapods. While the depletion interaction is described in an approximate manner within the Asakura-Oosawa model, the free energy penalty for the brush compression is calculated within the Alexander-de Gennes model. Recent experiments with CdSe tetrapods show that grafted tetrapods are clearly much better dispersed in the polymer matrix than bare tetrapods. My simulations confirm that bare tetrapods tend to aggregate in the matrix of excess polymers, while clustering is significantly reduced after grafting polymer chains to the tetrapods. Finally, I propose a possible extension enabling the simulation of a system with fluctuating volume and demonstrate its basic functionality. This study originated in a cooperation with an experimental group, with the goal of analyzing the morphology of these systems in order to find the ideal morphology for hybrid solar cells.
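For context, the standard Asakura-Oosawa form referred to above models the depletion attraction between two hard spheres of radius R at center distance r, immersed in ideal depletants of radius δ and number density n, as the depletant osmotic pressure times the overlap volume of the exclusion zones (the sphere-sphere expression; the tetrapod treatment in the thesis applies it in an approximate manner):

```latex
% Asakura-Oosawa depletion potential, with R_e = R + \delta,
% valid for 2R \le r \le 2R_e (and zero for r > 2R_e):
U_{\mathrm{AO}}(r) = -\,n\,k_B T\;\frac{4\pi}{3}R_e^{3}
\left[1-\frac{3r}{4R_e}+\frac{r^{3}}{16\,R_e^{3}}\right].
```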
Abstract:
Monte Carlo (MC) based dose calculations can compute dose distributions with an accuracy surpassing that of conventional algorithms used in radiotherapy, especially in regions of tissue inhomogeneities and surface discontinuities. The Swiss Monte Carlo Plan (SMCP) is a GUI-based framework for photon MC treatment planning (MCTP) interfaced to the Eclipse treatment planning system (TPS). As with any dose calculation algorithm, the MCTP needs to be commissioned and validated before the algorithm is used for clinical cases. The aim of this study is the investigation of a 6 MV beam for clinical situations within the framework of the SMCP. In this respect, all parts, i.e., open fields and all clinically available beam modifiers, have to be configured so that the calculated dose distributions match the corresponding measurements. Dose distributions for the 6 MV beam were simulated in a water phantom using a phase space source above the beam modifiers. The VMC++ code was used for the radiation transport through the beam modifiers (jaws, wedges, block and multileaf collimator (MLC)) as well as for the calculation of the dose distributions within the phantom. The voxel size of the dose distributions was 2 mm in all directions. The statistical uncertainty of the calculated dose distributions was below 0.4%. Simulated depth dose curves and dose profiles in terms of [Gy/MU] for static and dynamic fields were compared with the corresponding measurements using dose difference and γ analysis. For the dose difference criterion of ±1% of D(max) and the distance-to-agreement criterion of ±1 mm, the γ analysis showed an excellent agreement between measurements and simulations for all static open and MLC fields. Tuning the density and thickness of all hard wedges led to an agreement with the corresponding measurements within 1% or 1 mm. Similar results were achieved for the block. For the validation of the tuned hard wedges, a very good agreement between calculated and measured dose distributions was achieved using a 1%/1 mm criterion for the γ analysis. The calculated dose distributions of the enhanced dynamic wedges (10°, 15°, 20°, 25°, 30°, 45° and 60°) met the 1%/1 mm criterion when compared with the measurements for all situations considered. For the IMRT fields, all compared measured dose values agreed with the calculated dose values within a 2% dose difference or within 1 mm distance. The SMCP has been successfully validated for static and dynamic 6 MV photon beams, resulting in accurate dose calculations suitable for application to clinical cases.
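The γ analysis used for these comparisons is a published, well-defined test (Low et al.); below is a minimal 1D sketch of it with the 1%/1 mm criteria quoted above. The toy profiles are my own illustration, not the study's data:

```python
import numpy as np

def gamma_index(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
    """1D gamma analysis: dd = dose criterion (fraction of D_max),
    dta = distance-to-agreement criterion in mm. Brute-force search."""
    dd_abs = dd * d_ref.max()                 # e.g. 1% of D(max)
    gam = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2    # distance term
        dose2 = ((d_eval - dr) / dd_abs) ** 2 # dose-difference term
        gam[i] = np.sqrt((dist2 + dose2).min())
    return gam                                # a point passes if gamma <= 1

# Toy profiles: "measurement" vs. a calculation shifted by 0.5 mm.
x = np.linspace(0.0, 100.0, 501)              # position in mm
meas = np.exp(-0.5 * ((x - 50.0) / 15.0) ** 2)
calc = np.exp(-0.5 * ((x - 50.5) / 15.0) ** 2)
g = gamma_index(x, meas, x, calc)
print((g <= 1).mean())                        # fraction passing 1%/1 mm
```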
Abstract:
When high-energy single-hadron production takes place inside an identified jet, there are important correlations between the fragmentation and phase-space cuts. For example, when one-hadron yields are measured in on-resonance B-factory data, a cut on the thrust event shape T is required to remove the large b-quark contribution. This leads to a dijet final-state restriction for the light-quark fragmentation process. Here, we complete our analysis of unpolarized fragmentation of (light) quarks and gluons to a light hadron h with energy fraction z in e+e−→dijet+h at the center-of-mass energy Q=10.58 GeV. In addition to the next-to-next-to-leading order resummation of the logarithms of 1−T, we include the next-to-leading order nonsingular contributions.
Abstract:
One limitation to the widespread implementation of Monte Carlo (MC) patient dose-calculation algorithms for radiotherapy is the lack of a general and accurate source model of the accelerator radiation source. Our aim in this work is to investigate the sensitivity of the photon-beam subsource distributions in a MC source model (with target, primary collimator, and flattening filter photon subsources and an electron subsource) for 6- and 18-MV photon beams when the energy and radial distributions of initial electrons striking a linac target change. For this purpose, phase-space data (PSD) was calculated for various mean electron energies striking the target, various normally distributed electron energy spreads, and various normally distributed electron radial intensity distributions. All PSD was analyzed in terms of energy, fluence, and energy fluence distributions, which were compared between the different parameter sets. The energy spread was found to have a negligible influence on the subsource distributions. The mean energy and radial intensity significantly changed the target subsource distribution shapes and intensities. For the primary collimator and flattening filter subsources, the distribution shapes of the fluence and energy fluence changed little for different mean electron energies striking the target; however, their relative intensity compared with the target subsource changes, which can be accounted for by a scaling factor. This study indicates that adjustments to MC source models can likely be limited to adjusting the target subsource in conjunction with scaling the relative intensity and energy spectrum of the primary collimator, flattening filter, and electron subsources when the energy and radial distributions of the initial electron beam change.
Abstract:
A major barrier to widespread clinical implementation of Monte Carlo dose calculation is the difficulty in characterizing the radiation source within a generalized source model. This work aims to develop a generalized three-component source model (target, primary collimator, flattening filter) for 6- and 18-MV photon beams that matches full phase-space data (PSD). Subsource by subsource comparison of dose distributions, using either source PSD or the source model as input, allows accurate source characterization and has the potential to ease the commissioning procedure, since it is possible to obtain information about which subsource needs to be tuned. This source model is unique in that, compared to previous source models, it retains additional correlations among PS variables, which improves accuracy at nonstandard source-to-surface distances (SSDs). In our study, three-dimensional (3D) dose calculations were performed for SSDs ranging from 50 to 200 cm and for field sizes from 1 x 1 to 30 x 30 cm2 as well as a 10 x 10 cm2 field 5 cm off axis in each direction. The 3D dose distributions, using either full PSD or the source model as input, were compared in terms of dose-difference and distance-to-agreement. With this model, over 99% of the voxels agreed within +/-1% or 1 mm for the target, within 2% or 2 mm for the primary collimator, and within +/-2.5% or 2 mm for the flattening filter in all cases studied. For the dose distributions, 99% of the dose voxels agreed within 1% or 1 mm when the combined source model (including a charged particle source) was compared with the full PSD as input. The accurate and general characterization of each photon source and knowledge of the subsource dose distributions should facilitate source model commissioning procedures by allowing the histogram distributions representing the subsources to be scaled and tuned.
Abstract:
A multiple source model (MSM) for the 6 MV beam of a Varian Clinac 2300 C/D was developed by simulating radiation transport through the accelerator head for a set of square fields using the GEANT Monte Carlo (MC) code. The corresponding phase space (PS) data enabled the characterization of 12 sources representing the main components of the beam defining system. By parametrizing the source characteristics and by evaluating the dependence of the parameters on field size, it was possible to extend the validity of the model to arbitrary rectangular fields which include the central 3 x 3 cm2 field without additional precalculated PS data. Finally, a sampling procedure was developed in order to reproduce the PS data. To validate the MSM, the fluence, energy fluence and mean energy distributions determined from the original and the reproduced PS data were compared and showed very good agreement. In addition, the MC calculated primary energy spectrum was verified by an energy spectrum derived from transmission measurements. Comparisons of MC calculated depth dose curves and profiles, using original and PS data reproduced by the MSM, agree within 1% and 1 mm. Deviations from measured dose distributions are within 1.5% and 1 mm. However, the real beam leads to some larger deviations outside the geometrical beam area for large fields. Calculated output factors in 10 cm water depth agree within 1.5% with experimentally determined data. In conclusion, the MSM produces accurate PS data for MC photon dose calculations for the rectangular fields specified.
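The generic building block of such a sampling procedure is drawing particle variables from binned (histogram) distributions. Below is a hedged Python sketch of that step using inverse-CDF sampling; the bin layout and toy spectrum are hypothetical, not the MSM's actual parametrization:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_histogram(counts, edges, n):
    """Draw n samples from a binned distribution: inverse-CDF over the bin
    probabilities, then uniform placement within the chosen bin."""
    cdf = np.cumsum(counts).astype(float)
    cdf /= cdf[-1]
    idx = np.searchsorted(cdf, rng.random(n))   # pick bins
    lo, hi = edges[idx], edges[idx + 1]
    return lo + (hi - lo) * rng.random(n)       # place within bin

# Toy energy spectrum for one subsource (hypothetical bins, not Varian data).
edges = np.linspace(0.0, 6.0, 61)               # MeV
mids = 0.5 * (edges[:-1] + edges[1:])
counts = np.exp(-0.5 * ((mids - 1.5) / 1.0) ** 2)
energies = sample_from_histogram(counts, edges, 100_000)
print(energies.mean())
```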
Abstract:
Monte Carlo (GEANT) generated 6 and 15 MV phase space (PS) data were used to define several simple photon beam models. For creating the PS data, the energy of the starting electrons hitting the target was tuned to reproduce measured depth dose data. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation of the accelerator head; scattered radiation outside the boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four different models were investigated, which involved different ways of determining the energies and locations of beam particles in the output plane. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5 x 5 to 40 x 40 cm2 and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in this plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. A constant fluence is used again in model 3, but the photon energy spectrum depends upon the off-axis position. Model 4, finally, uses both the spatial particle fluence distribution and off-axis dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10 x 10 cm2 were not model sensitive. Good agreement between measured and calculated depth dose curves and profiles for all field sizes was reached for model 4. However, increasing deviations were found with increasing field size for models 1-3, with large deviations in the profiles of models 2 and 3. This is due to the fact that these models overestimate or underestimate the energy fluence at large off-axis distances. Relative output factors consistent with measurements resulted only for model 4.
Abstract:
BACKGROUND: During paravertebral block, the anterolateral limit of the paravertebral space, which consists of the pleura, should preferably not be perforated. Also it is possible that, during the block, the constant superior costotransverse ligament can be missed in the loss-of-resistance technique. We therefore aimed to develop a new technique for an ultrasound-guided puncture of the paravertebral space. METHODS: We performed 20 punctures and catheter placements in 10 human cadavers. A sonographic view showing the pleura and the superior costotransverse ligament was obtained with a slightly oblique scan using a curved array transducer. Using an in-line approach, injection of 10 ml of normal saline confirmed the correct position of the needle tip, distended the space, and enabled catheter insertion. The spread of contrast dye injected through the catheters was assessed by CT scans. RESULTS: The superior costotransverse ligament and the paravertebral space were easy to identify. The needle tip reached the paravertebral space without problems under visualization. In contrast, the introduction of the catheter was difficult. The CT scan revealed a correct paravertebral spread of contrast in 11 cases. Of the remaining nine, one catheter was found in the pleural space, six showed an epidural spread, and two a prevertebral spread of contrast dye. CONCLUSIONS: We successfully developed a technique for an accurate ultrasound-guided puncture of the paravertebral space. We also showed that when a catheter is introduced through the needle with the tip lying in the paravertebral space, there is a high probability of catheter misplacement into the epidural, mediastinal, or pleural spaces.
Abstract:
BEAMnrc, a code for simulating medical linear accelerators based on EGSnrc, has been benchmarked and used extensively in the scientific literature and is therefore often considered to be the gold standard for Monte Carlo simulations for radiotherapy applications. However, its long computation times make it too slow for the clinical routine and often even for research purposes without a large investment in computing resources. VMC++ is a much faster code thanks to the intensive use of variance reduction techniques and a much faster implementation of the condensed history technique for charged particle transport. A research version of this code is also capable of simulating the full head of linear accelerators operated in photon mode (excluding multileaf collimators, hard and dynamic wedges). In this work, a validation of the full head simulation at 6 and 18 MV is performed, simulating with VMC++ and BEAMnrc the addition of one head component at a time and comparing the resulting phase space files. For the comparison, photon and electron fluence, photon energy fluence, mean energy, and photon spectra are considered. The largest absolute differences are found in the energy fluences. For all the simulations of the different head components, a very good agreement (differences in energy fluences between VMC++ and BEAMnrc <1%) is obtained. Only a particular case at 6 MV shows a somewhat larger energy fluence difference of 1.4%. Dosimetrically, these phase space differences imply an agreement between both codes at the <1% level, making the VMC++ head module suitable for full head simulations with considerable gain in efficiency and without loss of accuracy.
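The comparison machinery itself is simple once particle records are extracted from the two phase-space files. A hedged sketch of the energy-fluence comparison is shown below; the particle arrays here are synthetic stand-ins (I do not reproduce the actual phase-space file formats or readers):

```python
import numpy as np

def energy_fluence_hist(E, w, edges):
    """Normalized energy fluence per bin from particle energies E (MeV)
    and statistical weights w."""
    h, _ = np.histogram(E, bins=edges, weights=w * E)
    return h / h.sum()

# Hypothetical particle records standing in for the two codes' output.
rng = np.random.default_rng(0)
E_a, w_a = rng.gamma(2.0, 1.0, 10**6), np.ones(10**6)
E_b, w_b = rng.gamma(2.0, 1.0, 10**6), np.ones(10**6)

edges = np.linspace(0.0, 6.0, 61)
fa = energy_fluence_hist(E_a, w_a, edges)
fb = energy_fluence_hist(E_b, w_b, edges)
mask = fa > 0
print(np.max(np.abs(fb[mask] - fa[mask]) / fa[mask]))  # max relative difference
```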
Abstract:
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods have recently been proposed as being well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) to assess reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and the Knox-Thompson phase recovery methods is also compared. As an outcome of this work it can be concluded that speckle-imaging techniques are robust to the variation in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum. The quality of images reconstructed using the Knox-Thompson and bispectrum methods is also found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
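To make the Knox-Thompson idea concrete, here is a hedged 1D toy sketch (my own simplified construction, not the thesis implementation): the phase of the averaged cross-spectrum at a small frequency offset retains the object's phase differences, which are then integrated; the modulus comes from the averaged power spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 500
x = np.arange(N)
obj = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)     # true object (shifted Gaussian)
O = np.fft.fft(obj)

# Short-exposure frames: object spectrum times a random, spatially
# correlated pure-phase "atmospheric" screen.
frames = [O * np.exp(1j * np.cumsum(rng.normal(0.0, 0.3, N))) for _ in range(K)]

# Knox-Thompson: the averaged cross-spectrum at offset 1 keeps the object
# phase differences, because the random screen differences average to a
# real positive factor.
cross = np.mean([F[:-1] * np.conj(F[1:]) for F in frames], axis=0)
dphi = np.angle(cross)                          # ~ phi(u) - phi(u+1)
phi = np.concatenate(([0.0], -np.cumsum(dphi))) # integrate the differences

# Modulus from the averaged power spectrum (speckle interferometry step).
mod = np.sqrt(np.mean([np.abs(F) ** 2 for F in frames], axis=0))

rec = np.fft.ifft(mod * np.exp(1j * phi)).real    # recovered object
naive = np.fft.ifft(np.mean(frames, axis=0)).real # plain averaging fails
print(np.abs(rec - obj).max(), np.abs(naive - obj).max())
```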