982 results for Direct Simulation Monte Carlo Method


Relevance:

100.00%

Publisher:

Abstract:

At the Mainz Microtron (MAMI), Lambda hypernuclei can be produced in (e,e'K^+) reactions. Detecting the produced kaon in the KAOS spectrometer tags reactions in which a hyperon was created. The spectroscopy of charged pions from two-body weak decays of light hypernuclei allows the binding energy of the hyperon in the nucleus to be determined with high precision. Besides the direct production of hypernuclei, production through the fragmentation of a highly excited continuum state is also possible, so that different hypernuclei can be studied within a single experiment. High-resolution magnetic spectrometers are available for the spectroscopy of the decay pions. To reconstruct the ground-state mass of a hypernucleus from the pion momentum, the hyperfragment must be stopped in the target before it decays. Based on the known cross section of elementary kaon photoproduction, the expected event rate was calculated. A Monte Carlo simulation was developed that includes the fragmentation process and the stopping of the hyperfragments in the target; it uses a statistical break-up model to describe the fragmentation. This approach yields a prediction of the expected decay-pion count rate for hydrogen-4-Lambda hypernuclei. In a pilot experiment in 2011, the detection of hadrons with the KAOS spectrometer at a scattering angle of 0° was demonstrated for the first time at MAMI, with pions detected in coincidence. It turned out that, owing to the high positron background rates in KAOS, an unambiguous identification of hypernuclei was not possible in this configuration. Based on these findings, the KAOS spectrometer was modified to act as a dedicated kaon tagger: a lead absorber was mounted inside the spectrometer, in which positrons are stopped through shower formation. The effect of such an absorber was studied in a beam test. A Geant4-based simulation was developed, with which the arrangement of absorber and detectors was optimized and which provided predictions of the impact on data quality. In addition, the simulation was used to generate individual back-tracking matrices for kaons, pions, and protons that include the interaction of the particles with the lead wall and thus allow its effects to be corrected for. With the improved setup, a production beam time was carried out in 2012, in which kaons at a 0° scattering angle were successfully detected in coincidence with pions from weak decays. An excess was observed in the momentum spectrum of the decay pions with a significance corresponding to a p-value of 2.5 × 10^-4. Based on their momentum, these events can be attributed to decays of hydrogen-4-Lambda hypernuclei, and the number of detected pions is consistent with the calculated yield.
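
The mass determination sketched above rests on standard two-body decay kinematics (not spelled out in the abstract): once the hyperfragment is stopped, it decays at rest, the two daughters emerge back to back with a common momentum magnitude, and the measured pion momentum fixes the hypernuclear ground-state mass. Schematically, for the pionic decay of hydrogen-4-Lambda:

```latex
^{4}_{\Lambda}\mathrm{H} \;\to\; {}^{4}\mathrm{He} + \pi^- , \qquad
M\!\left(^{4}_{\Lambda}\mathrm{H}\right) \;=\; \sqrt{m_\pi^2 + p_\pi^2} \;+\; \sqrt{M\!\left({}^{4}\mathrm{He}\right)^2 + p_\pi^2}
```

so a high-resolution measurement of p_π directly yields the Lambda binding energy via B_Λ = M(³H) + M_Λ − M(⁴_ΛH).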

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, we develop high precision tools for the simulation of slepton pair production processes at hadron colliders and apply them to phenomenological studies at the LHC. Our approach is based on the POWHEG method for the matching of next-to-leading order results in perturbation theory to parton showers. We calculate matrix elements for slepton pair production and for the production of a slepton pair in association with a jet perturbatively at next-to-leading order in supersymmetric quantum chromodynamics. Both processes are subsequently implemented in the POWHEG BOX, a publicly available software tool that contains general parts of the POWHEG matching scheme. We investigate phenomenological consequences of our calculations in several setups that respect experimental exclusion limits for supersymmetric particles and provide precise predictions for slepton signatures at the LHC. The inclusion of QCD emissions in the partonic matrix elements allows for an accurate description of hard jets. Interfacing our codes to the multi-purpose Monte-Carlo event generator PYTHIA, we simulate parton showers and slepton decays in fully exclusive events. Advanced kinematical variables and specific search strategies are examined as means for slepton discovery in experimentally challenging setups.
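
For context, the POWHEG matching referred to above generates the hardest emission according to the schematic master formula below; this is the generic form from the POWHEG literature, written here only indicatively and not as a slepton-specific result:

```latex
d\sigma \;=\; \bar{B}(\Phi_B)\, d\Phi_B
  \left[ \Delta\!\left(p_T^{\min}\right)
  \;+\; \Delta(k_T)\, \frac{R(\Phi_B,\Phi_{\mathrm{rad}})}{B(\Phi_B)}\, d\Phi_{\mathrm{rad}} \right],
\qquad
\bar{B}(\Phi_B) \;=\; B(\Phi_B) + V(\Phi_B)
  \;+\; \int d\Phi_{\mathrm{rad}} \left[ R(\Phi_B,\Phi_{\mathrm{rad}}) - C(\Phi_B,\Phi_{\mathrm{rad}}) \right]
```

where B, V, R, and C denote the Born, virtual, real-emission, and subtraction contributions, and Δ is the POWHEG Sudakov form factor.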

Relevance:

100.00%

Publisher:

Abstract:

In this thesis I present a new coarse-grained model suitable to investigate the phase behavior of rod-coil block copolymers on mesoscopic length scales. In this model the rods are represented by hard spherocylinders, whereas the coil block consists of interconnected beads. The interactions between the constituents are based on local densities, which facilitates an efficient Monte Carlo sampling of the phase space. I verify the applicability of the model and the simulation approach by means of several examples. I treat pure rod systems and mixtures of rod and coil polymers. Then I append coils to the rods and investigate the role of the different model parameters. Furthermore, I compare different implementations of the model. I show that the rod-coil block copolymers in our model exhibit typical micro-phase separated configurations as well as extraordinary phases, such as the wavy lamellar state, percolating structures, and clusters. Additionally, I demonstrate the metastability of the observed zigzag phase in our model. A central point of this thesis is the examination of the phase behavior of the rod-coil block copolymers as a function of chain length and of the interaction strength between rods and coils. The observations of these studies are summarized in a phase diagram for rod-coil block copolymers. Furthermore, I confirm a stabilization of the smectic phase with increasing coil fraction. In the second part of this work I present a side project in which I derive a model permitting the simulation of tetrapods with and without grafted semiconducting block copolymers. The effect of these polymers is added in an implicit manner through effective interactions between the tetrapods. While the depletion interaction is described in an approximate manner within the Asakura-Oosawa model, the free-energy penalty for the brush compression is calculated within the Alexander-de Gennes model. Recent experiments with CdSe tetrapods show that grafted tetrapods are clearly much better dispersed in the polymer matrix than bare tetrapods. My simulations confirm that bare tetrapods tend to aggregate in the matrix of excess polymers, while clustering is significantly reduced after grafting polymer chains to the tetrapods. Finally, I propose a possible extension enabling the simulation of a system with fluctuating volume and demonstrate its basic functionality. This study originated in a cooperation with an experimental group, with the goal of analyzing the morphology of these systems in order to find the ideal morphology for hybrid solar cells.
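
As a purely illustrative sketch of the sampling idea described above (interactions evaluated from local densities on a grid, sampled with single-particle Metropolis moves), and not the actual spherocylinder/bead model of the thesis, one might write:

```python
import numpy as np

rng = np.random.default_rng(0)
BOX, CELL = 10.0, 1.0
NCELL = int(BOX / CELL)

def local_density_energy(positions, kappa=1.0):
    """Toy density-based energy: bin beads into grid cells and apply a quadratic
    penalty on the cell occupancies (a crude stand-in for soft, local-density
    interactions; the real model uses spherocylinders plus coil beads)."""
    ix = (positions[:, 0] // CELL).astype(int) % NCELL
    iy = (positions[:, 1] // CELL).astype(int) % NCELL
    counts = np.bincount(ix * NCELL + iy, minlength=NCELL * NCELL)
    return 0.5 * kappa * np.sum(counts.astype(float) ** 2)

def metropolis_step(positions, beta=1.0, step=0.3):
    """Displace one randomly chosen bead (periodic box) and accept with the
    Metropolis rule; the full energy is recomputed for clarity, not speed."""
    i = rng.integers(len(positions))
    trial = positions.copy()
    trial[i] = (trial[i] + rng.uniform(-step, step, size=2)) % BOX
    dE = local_density_energy(trial) - local_density_energy(positions)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        return trial
    return positions

positions = rng.uniform(0.0, BOX, size=(200, 2))
for _ in range(5000):
    positions = metropolis_step(positions)
print(local_density_energy(positions))
```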

Relevance:

100.00%

Publisher:

Abstract:

Today we know that ordinary matter accounts for only a small fraction of the total mass content of the Universe. The hypothesis of Dark Matter, a new type of matter that interacts only gravitationally and, possibly, through the weak force, is supported by numerous observations on both galactic and cosmological scales. Efforts devoted to the search for so-called WIMPs (Weakly Interacting Massive Particles), the generic name given to Dark Matter particles, have multiplied in recent years. The XENON1T experiment, currently under construction at the Laboratori Nazionali del Gran Sasso (LNGS) and expected to start taking data by the end of 2015, will mark a significant step forward in the direct search for Dark Matter, which is based on the detection of elastic collisions on target nuclei. XENON1T is the current phase of the XENON project, which has already built the XENON10 (2005) and XENON100 (2008, still in operation) experiments and foresees a further upgrade called XENONnT. The XENON1T detector uses about 3 tonnes of liquid xenon (LXe) and is based on a dual-phase Time Projection Chamber (TPC). Detailed Monte Carlo simulations of the detector geometry, together with dedicated measurements of the radioactivity of the materials and estimates of the purity of the xenon, have allowed an accurate prediction of the expected background. In this thesis we present the study of the expected sensitivity of XENON1T carried out with the Profile Likelihood (PL) Ratio statistical method, which, within a frequentist approach, allows a proper treatment of the systematic uncertainties. As a first step, the sensitivity was estimated with the simplified Likelihood Ratio method, which does not include any systematics; this made it possible to assess the impact of the main systematic uncertainty for XENON1T, namely that on the scintillation-light yield of xenon for low-energy nuclear recoils. The final results obtained with the PL method indicate that XENON1T will be able to improve significantly on the current WIMP exclusion limits; the maximum sensitivity reaches a cross section σ = 1.2 × 10^-47 cm² for a WIMP mass of 50 GeV/c² and a nominal exposure of 2 tonne·years. These results are in line with XENON1T's ambitious goal of lowering the current limits on the WIMP cross section σ by two orders of magnitude. With this performance, and assuming 1 tonne of LXe as fiducial mass, XENON1T will be able to surpass the current limits (LUX experiment, 2013) after only 5 days of data taking.
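
The profile-likelihood construction mentioned above has the standard frequentist form (written generically here, not with the XENON1T-specific likelihood terms); systematic uncertainties enter as nuisance parameters θ that are profiled out:

```latex
q(\sigma) \;=\; -2 \ln \lambda(\sigma),
\qquad
\lambda(\sigma) \;=\;
\frac{L\!\left(\sigma,\hat{\hat{\theta}}(\sigma)\right)}{L\!\left(\hat{\sigma},\hat{\theta}\right)}
```

where σ is the WIMP-nucleon cross section, the double-hatted θ maximises L for fixed σ, the single-hatted pair is the global maximum, and exclusion limits follow from the distribution of q(σ).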

Relevance:

100.00%

Publisher:

Abstract:

The available results of deep imaging searches for planetary companions around nearby stars provide useful constraints on the frequency of giant planets in very wide orbits. Here we present some preliminary results of a Monte Carlo simulation that compares the published detection limits with generated planetary masses and orbital parameters. This allows us to consider the implications of the null detection from the direct imaging technique for the distributions of mass and semimajor axis derived from the results of the other search techniques, and also to check the agreement of the observations with the available planetary formation models.
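
A minimal sketch of this kind of comparison, assuming toy power-law distributions and a single step-function detection limit (the actual study uses the published per-star limits), could look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def powerlaw(lo, hi, exponent, n):
    """Inverse-CDF sampling from p(x) proportional to x**exponent on [lo, hi]."""
    g = exponent + 1.0
    u = rng.random(n)
    return (lo**g + u * (hi**g - lo**g)) ** (1.0 / g)

def detectable_fraction(n=100_000, d_pc=20.0):
    """Draw planet masses (M_Jup) and semi-major axes (AU) from assumed power
    laws, project to an angular separation, and apply a toy detection limit.
    All numbers here are illustrative placeholders, not the survey's limits."""
    mass = powerlaw(1.0, 15.0, -1.3, n)           # dN/dM ~ M^-1.3 (assumed)
    sma = powerlaw(5.0, 500.0, -0.6, n)           # dN/da ~ a^-0.6 (assumed)
    cos_i = rng.uniform(-1.0, 1.0, n)             # random orbit orientation
    phase = rng.uniform(0.0, 2.0 * np.pi, n)
    r_proj = sma * np.sqrt(np.cos(phase) ** 2 + (np.sin(phase) * cos_i) ** 2)
    sep_arcsec = r_proj / d_pc                    # projected separation on sky
    detected = (mass > 5.0) & (sep_arcsec > 0.5)  # toy contrast/separation cut
    return detected.mean()

print(detectable_fraction())
```

Comparing such detection fractions with the observed null result then constrains how steep (or truncated) the mass and semimajor-axis distributions can be.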

Relevance:

100.00%

Publisher:

Abstract:

The present study was conducted to estimate the direct losses due to Neospora caninum in Swiss dairy cattle and to assess the costs and benefits of different potential control strategies. A Monte Carlo simulation spreadsheet module was developed to estimate the direct costs caused by N. caninum, with and without control strategies, and to estimate the costs of these control strategies in a financial analysis. The control strategies considered were "testing and culling of seropositive female cattle", "discontinued breeding with offspring from seropositive cows", "chemotherapeutical treatment of female offspring" and "vaccination of all female cattle". Each parameter in the module that was considered uncertain was described by a probability distribution. The simulations were run with 20,000 iterations over a time period of 25 years. The median annual losses due to N. caninum in the Swiss dairy cow population were estimated at 9.7 million euros. All control strategies that required yearly serological testing of all cattle in the population produced high costs and thus were not financially profitable. Among the other control strategies, two showed benefit-cost ratios (BCR) >1 and positive net present values (NPV): "discontinued breeding with offspring from seropositive cows" (BCR = 1.29, NPV = 25 million euros) and "chemotherapeutical treatment of all female offspring" (BCR = 2.95, NPV = 59 million euros). In economic terms, the best control strategy currently available would therefore be "discontinued breeding with offspring from seropositive cows".
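
A toy version of such a stochastic cost-benefit calculation, with purely illustrative distributions rather than the module's actual inputs, might look like the following (20,000 iterations, 25-year horizon, discounted benefits and costs):

```python
import numpy as np

rng = np.random.default_rng(2)

def control_strategy_outcome(n_iter=20_000, years=25, discount=0.03):
    """Sample uncertain yearly avoided losses (benefits) and control costs from
    probability distributions, discount and sum them over the horizon, and
    record NPV and BCR per iteration. All values are assumed placeholders."""
    disc = 1.0 / (1.0 + discount) ** np.arange(years)
    npv = np.empty(n_iter)
    bcr = np.empty(n_iter)
    for i in range(n_iter):
        avoided_loss = rng.triangular(5e6, 9.7e6, 14e6)   # EUR/year, assumed
        control_cost = rng.triangular(2e6, 4e6, 7e6)      # EUR/year, assumed
        benefits = np.sum(avoided_loss * disc)
        costs = np.sum(control_cost * disc)
        npv[i] = benefits - costs
        bcr[i] = benefits / costs
    return np.median(npv), np.median(bcr)

print(control_strategy_outcome())
```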

Relevance:

100.00%

Publisher:

Abstract:

Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L2. We estimate these distances using cross validation. A reliable estimate of k is obtained by voting over B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validation estimate of distance approach, along with the AIC, BIC/MDL, and ICOMP approaches, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
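
A minimal sketch of the cross-validated distance criterion with voting, assuming scikit-learn's GaussianMixture and using held-out log-likelihood (which, up to an additive constant, estimates the KL distance to the true density):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def cv_kl_choice(x, k_max=6, n_splits=5, n_repeats=20, seed=0):
    """Choose the number of mixture components by cross-validated held-out
    log-likelihood, then vote over repeated random splits. A sketch only;
    the paper's exact folds, repeats, and L2 variant are not reproduced."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(k_max + 1, dtype=int)
    x = x.reshape(-1, 1)
    for b in range(n_repeats):
        kf = KFold(n_splits=n_splits, shuffle=True,
                   random_state=int(rng.integers(1_000_000)))
        scores = np.full(k_max + 1, -np.inf)
        for k in range(1, k_max + 1):
            ll = []
            for train, test in kf.split(x):
                gm = GaussianMixture(n_components=k, random_state=b).fit(x[train])
                ll.append(gm.score(x[test]))   # mean held-out log-likelihood
            scores[k] = np.mean(ll)
        votes[np.argmax(scores)] += 1          # one vote per repeat
    return int(np.argmax(votes))

# quick check on a two-component univariate normal mixture
data = np.concatenate([np.random.normal(0, 1, 300), np.random.normal(4, 1, 300)])
print(cv_kl_choice(data))
```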

Relevance:

100.00%

Publisher:

Abstract:

A simulation model adopting a health system perspective showed population-based screening with DXA, followed by alendronate treatment of persons with osteoporosis, or with anamnestic fracture and osteopenia, to be cost-effective in Swiss postmenopausal women from age 70, but not in men. INTRODUCTION: We assessed the cost-effectiveness of a population-based screen-and-treat strategy for osteoporosis (DXA followed by alendronate treatment if osteoporotic, or osteopenic in the presence of fracture), compared to no intervention, from the perspective of the Swiss health care system. METHODS: A published Markov model assessed by first-order Monte Carlo simulation was refined to reflect the diagnostic process and treatment effects. Women and men entered the model at age 50. Main screening ages were 65, 75, and 85 years. Age at bone densitometry was flexible for persons fracturing before the main screening age. Realistic assumptions were made with respect to persistence with intended 5 years of alendronate treatment. The main outcome was cost per quality-adjusted life year (QALY) gained. RESULTS: In women, costs per QALY were Swiss francs (CHF) 71,000, CHF 35,000, and CHF 28,000 for the main screening ages of 65, 75, and 85 years. The threshold of CHF 50,000 per QALY was reached between main screening ages 65 and 75 years. Population-based screening was not cost-effective in men. CONCLUSION: Population-based DXA screening, followed by alendronate treatment in the presence of osteoporosis, or of fracture and osteopenia, is a cost-effective option in Swiss postmenopausal women after age 70.
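
As an illustration of a first-order (individual-level) Monte Carlo evaluation of a Markov model, here is a toy three-state yearly microsimulation; states, transition probabilities, costs, and utilities are placeholders, not the calibrated inputs of the published model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy yearly Markov model: well -> fracture -> dead (illustrative values only).
P = {"well":     {"well": 0.96, "fracture": 0.03, "dead": 0.01},
     "fracture": {"well": 0.00, "fracture": 0.93, "dead": 0.07},
     "dead":     {"dead": 1.0}}
COST = {"well": 0.0, "fracture": 8000.0, "dead": 0.0}   # CHF per year, assumed
UTIL = {"well": 0.85, "fracture": 0.65, "dead": 0.0}    # QALYs per year, assumed

def simulate_person(years=35, discount=0.03):
    """Walk one individual through the Markov states, accumulating discounted
    costs and QALYs (first-order Monte Carlo = one random path per person)."""
    state, cost, qaly = "well", 0.0, 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        cost += d * COST[state]
        qaly += d * UTIL[state]
        nxt = list(P[state])
        state = rng.choice(nxt, p=[P[state][s] for s in nxt])
    return cost, qaly

results = np.array([simulate_person() for _ in range(10_000)])
print("mean cost:", results[:, 0].mean(), "mean QALYs:", results[:, 1].mean())
```

Running such a cohort once with and once without the screen-and-treat strategy gives the incremental cost per QALY gained that is compared against the CHF 50,000 threshold.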

Relevance:

100.00%

Publisher:

Abstract:

Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
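
In formula form, the hierarchical model and priors described above are:

```latex
\operatorname{logit}(S_i) \sim \mathcal{N}(\mu,\,\sigma^2), \qquad
\operatorname{logit}(p) \sim \mathcal{N}(0,\,1.75^2), \qquad
\mu \sim \mathcal{N}(0,\,100^2), \qquad
\tau^2 = 1/\sigma^2 \sim \operatorname{Gamma}(\alpha = 0.001,\ \beta = 0.001)
```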

Relevance:

100.00%

Publisher:

Abstract:

The aim of our study was to develop a modeling framework suitable to quantify the incidence, absolute number and economic impact of osteoporosis-attributable hip, vertebral and distal forearm fractures, with a particular focus on change over time, and with application to the situation in Switzerland from 2000 to 2020. A Markov process model was developed and analyzed by Monte Carlo simulation. A demographic scenario provided by the Swiss Federal Statistical Office and various Swiss and international data sources were used as model inputs. Demographic and epidemiologic input parameters were reproduced correctly, confirming the internal validity of the model. The proportion of the Swiss population aged 50 years or over will rise from 33.3% in 2000 to 41.3% in 2020. At the total population level, osteoporosis-attributable incidence will rise from 1.16 to 1.54 per 1,000 person-years in the case of hip fracture, from 3.28 to 4.18 per 1,000 person-years in the case of radiographic vertebral fracture, and from 0.59 to 0.70 per 1,000 person-years in the case of distal forearm fracture. Osteoporosis-attributable hip fracture numbers will rise from 8,375 to 11,353, vertebral fracture numbers will rise from 23,584 to 30,883, and distal forearm fracture numbers will rise from 4,209 to 5,186. Population-level osteoporosis-related direct medical inpatient costs per year will rise from 713.4 million Swiss francs (CHF) to CHF946.2 million. These figures correspond to 1.6% and 2.2% of Swiss health care expenditures in 2000. The modeling framework described can be applied to a wide variety of settings. It can be used to assess the impact of new prevention, diagnostic and treatment strategies. In Switzerland incidences of osteoporotic hip, vertebral and distal forearm fracture will rise by 33%, 27%, and 19%, respectively, between 2000 and 2020, if current prevention and treatment patterns are maintained. Corresponding absolute fracture numbers will rise by 36%, 31%, and 23%. Related direct medical inpatient costs are predicted to increase by 33%; however, this estimate is subject to uncertainty due to limited availability of input data.

Relevance:

100.00%

Publisher:

Abstract:

Clays and claystones are used as backfill and barrier materials in the design of waste repositories, because they act as hydraulic barriers and retain contaminants. Transport through such barriers occurs mainly by molecular diffusion. There is thus an interest in relating the diffusion properties of clays to their structural properties. In previous work, we developed a concept for up-scaling pore-scale molecular diffusion coefficients using a grid-based model of the sample pore structure. Here we present an operational algorithm which can generate such model pore structures for polymineral materials. The obtained pore maps match the rock’s mineralogical components and its macroscopic properties such as porosity, grain and pore size distributions. Representative ensembles of grains in 2D or 3D are created by a lattice Monte Carlo (MC) method, which minimizes the interfacial energy of grains starting from an initial grain distribution. Pores are generated at grain boundaries and/or within grains. The method is general and allows anisotropic structures to be generated with grains of approximately predetermined shapes, or with mixtures of different grain types. A specific focus of this study was the simulation of clay-like materials. The generated clay pore maps were then used to derive upscaled effective diffusion coefficients for non-sorbing tracers using a homogenization technique. The large number of generated maps allowed us to check the relations between micro-structural features of clays and their effective transport parameters, as is required to explain and extrapolate experimental diffusion results. As examples, we present a set of 2D and 3D simulations and investigate the effects of nanopores within particles (interlayer pores) and micropores between particles. Archie’s simple power law is followed in systems with only micropores. When nanopores are present, additional parameters are required; the data reveal that effective diffusion coefficients can be described by a sum of two power functions, related to the micro- and nanoporosity. We further used the model to investigate the relationships between particle orientation and effective transport properties of the sample.
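
In formulas, the scaling behaviour summarised above can be written as follows (prefactors and exponents are fit parameters determined from the simulated maps and are not given here):

```latex
\frac{D_{\mathrm{eff}}}{D_0} \;=\; a\,\phi_{\mathrm{micro}}^{\,m}
\quad \text{(micropores only, Archie-type power law)},
\qquad
\frac{D_{\mathrm{eff}}}{D_0} \;=\; a\,\phi_{\mathrm{micro}}^{\,m} \;+\; b\,\phi_{\mathrm{nano}}^{\,n}
\quad \text{(micro- and nanopores present)}
```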

Relevance:

100.00%

Publisher:

Abstract:

With the observation that stochasticity is important in biological systems, chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is thereby preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca²⁺ and NMDA receptors.
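
A minimal sketch of such a hybrid scheme (fixed rather than adaptive partitioning, simple Euler integration for the fast channels, Gillespie's direct method for the slow ones; all rates and species are toy values, not the models of the study):

```python
import numpy as np

rng = np.random.default_rng(4)

def hybrid_step(x, t, fast_rates, slow_rates, fast_stoich, slow_stoich, dt=1e-3):
    """One hybrid step: fast channels advance deterministically (Euler on the
    rate equations), slow channels fire stochastically via the Gillespie direct
    method. A real implementation would re-classify channels adaptively."""
    # deterministic update for the fast subsystem
    a_fast = np.array([r(x) for r in fast_rates])
    x = x + dt * fast_stoich.T @ a_fast
    # stochastic update for the slow subsystem
    a_slow = np.array([r(x) for r in slow_rates])
    a0 = a_slow.sum()
    if a0 > 0:
        tau = rng.exponential(1.0 / a0)          # time to next slow event
        if tau < dt:                             # event fires within this step
            j = rng.choice(len(a_slow), p=a_slow / a0)
            x = x + slow_stoich[j]
    return x, t + dt

# toy system: fast production/degradation of X, slow conversion X -> Y
fast_rates  = [lambda x: 50.0, lambda x: 1.0 * x[0]]
fast_stoich = np.array([[+1, 0], [-1, 0]])
slow_rates  = [lambda x: 0.01 * x[0]]
slow_stoich = np.array([[-1, +1]])

x, t = np.array([0.0, 0.0]), 0.0
for _ in range(10_000):
    x, t = hybrid_step(x, t, fast_rates, slow_rates, fast_stoich, slow_stoich)
print(x)
```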

Relevance:

100.00%

Publisher:

Abstract:

This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation conducted to identify characteristics of hypothetical cost-effectiveness data sets which might lead one CI estimation technique to outperform another, and (2) an extant data set derived from the National AIDS Demonstration Research (NADR) project, to whose characteristics the simulation results were matched. The four methods were used to calculate CIs for this data set, and the results were then compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method for higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile-method CIs, in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study. However, the Taylor method is preferred for a low CV of effect, and the percentile method is more favorable for a higher CV of effect.
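
For the percentile bootstrap, the procedure is straightforward to sketch (illustrative data, not the NADR sample): resample patient-level (cost, effect) pairs with replacement, recompute the ratio of means, and read off the empirical percentiles.

```python
import numpy as np

rng = np.random.default_rng(5)

def percentile_bootstrap_ci(costs, effects, n_boot=5000, alpha=0.05):
    """Percentile-bootstrap CI for a cost-effectiveness ratio: resample
    (cost, effect) pairs jointly, recompute the ratio of means each time,
    and take the empirical alpha/2 and 1 - alpha/2 percentiles."""
    n = len(costs)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        ratios[b] = costs[idx].mean() / effects[idx].mean()
    lo, hi = np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# illustrative correlated cost/effect data (assumed, not the study's data)
costs = rng.normal(1000.0, 200.0, 200)
effects = rng.normal(2.0, 0.5, 200) + 0.001 * (costs - 1000.0)
print(percentile_bootstrap_ci(costs, effects))
```

The bias-corrected variant differs only in how the percentiles are chosen, shifting them according to the fraction of bootstrap ratios below the point estimate.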

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE A beamlet-based direct aperture optimization (DAO) for modulated electron radiotherapy (MERT) using photon multileaf collimator (pMLC) shaped electron fields is developed and investigated. METHODS The Swiss Monte Carlo Plan (SMCP) allows the calculation of dose distributions for pMLC shaped electron beams. SMCP is interfaced with the Eclipse TPS (Varian Medical Systems, Palo Alto, CA), which can thus be included in the inverse treatment planning process for MERT. This process starts with the import of a CT scan into Eclipse, the contouring of the target and the organs at risk (OARs), and the choice of the initial electron beam directions. For each electron beam, the number of apertures, their energy, and their initial shape are defined. Furthermore, the DAO requires dose-volume constraints for the contoured structures. In order to carry out the DAO efficiently, the initial electron beams are divided into a grid of beamlets. For each of those, the dose distribution is precalculated using a modified electron beam model, resulting in a dose list for each beamlet and energy. Then the DAO is carried out, leading to a set of optimal apertures and corresponding weights. These optimal apertures are then converted into pMLC shaped segments and the dose calculation for each segment is performed. For these dose distributions, a weight optimization process is launched in order to minimize the differences between the dose distribution using the optimal apertures and that using the pMLC segments. Finally, a deliverable dose distribution for the MERT plan is obtained and loaded back into Eclipse for evaluation. For an idealized water phantom geometry, a MERT treatment plan is created and compared to the plan obtained using a previously developed forward planning strategy. Further, MERT treatment plans for three clinical situations (breast, chest wall, and parotid metastasis of a squamous cell skin carcinoma) are created using the developed inverse planning strategy. The MERT plans are compared to clinical standard treatment plans using photon beams, and the differences between the optimal and the deliverable dose distributions are determined. RESULTS For the idealized water phantom geometry, the inversely optimized MERT plan achieves the same PTV coverage, but with improved OAR sparing compared to the forwardly optimized plan. Regarding the right-sided breast case, the MERT plan reduces the lung volume receiving more than 30% of the prescribed dose and the mean lung dose compared to the standard plan. However, the standard plan leads to a better homogeneity within the CTV. The results for the left-sided chest wall are similar, and in addition the dose to the heart is reduced with MERT compared to the standard treatment plan. For the parotid case, MERT leads to lower doses for almost all OARs but to a less homogeneous dose distribution for the PTV when compared to a standard plan. For all cases, the weight optimization successfully minimized the differences between the optimal and the deliverable dose distribution. CONCLUSIONS A beamlet-based DAO using multiple beam angles is implemented and successfully tested for an idealized water phantom geometry and for clinical situations.
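
The final weight-optimization step can be illustrated with a toy non-negative least-squares fit of segment weights to the optimal dose distribution (random stand-in data; the clinical implementation works on the SMCP-calculated segment doses and uses its own optimizer):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)

# Toy weight optimization: columns of D hold the dose per unit weight of each
# pMLC segment (one value per voxel); d_optimal stands in for the dose of the
# optimal apertures. Non-negative least squares finds segment weights that
# reproduce it as closely as possible.
n_voxels, n_segments = 500, 8
D = rng.random((n_voxels, n_segments))   # per-segment dose, illustrative
w_true = rng.random(n_segments)
d_optimal = D @ w_true                   # stand-in for the optimized dose

w_fit, residual = nnls(D, d_optimal)
print("recovered weights:", np.round(w_fit, 3))
print("residual norm:", residual)
```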

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the performance of the ATLAS muon reconstruction during the LHC run with pp collisions at √s = 7–8 TeV in 2011–2012, focusing mainly on data collected in 2012. Measurements of the reconstruction efficiency and of the momentum scale and resolution, based on large reference samples of J/ψ → μμ, Z → μμ and ϒ → μμ decays, are presented and compared to Monte Carlo simulations. Corrections to the simulation, to be used in physics analysis, are provided. Over most of the covered phase space (muon |η| < 2.7 and 5 ≲ pT ≲ 100 GeV) the efficiency is above 99% and is measured with per-mille precision. The momentum resolution ranges from 1.7% at central rapidity and for transverse momentum pT ≅ 10 GeV, to 4% at large rapidity and pT ≅ 100 GeV. The momentum scale is known with an uncertainty of 0.05% to 0.2% depending on rapidity. A method for the recovery of final state radiation from the muons is also presented.