970 results for Top Quark Monte Carlo All-Hadronic Decay Mass Fit Cambridge-Aachen CMS LHC CERN


Relevance:

100.00%

Publisher:

Abstract:

Measurements of spin correlation in top quark pair production are presented using data collected with the ATLAS detector at the LHC with proton-proton collisions at a center-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.6 fb⁻¹. Events are selected in final states with two charged leptons and at least two jets and in final states with one charged lepton and at least four jets. Four different observables sensitive to different properties of the top quark pair production mechanism are used to extract the correlation between the top and antitop quark spins. Some of these observables are measured for the first time. The measurements are in good agreement with the Standard Model prediction at next-to-leading-order accuracy.
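For illustration, the sketch below computes the azimuthal opening angle between the two charged leptons, one of the variables commonly used in dilepton spin-correlation analyses, and a simple asymmetry built from it. The event sample here is a flat toy, not ATLAS data, and the four observables actually used in the measurement are not reproduced.

```python
import numpy as np

# Hypothetical arrays of lepton azimuthal angles (radians) from a Monte Carlo
# sample of dilepton ttbar events; stand-ins for real analysis inputs.
rng = np.random.default_rng(0)
phi_lep_plus = rng.uniform(-np.pi, np.pi, size=100_000)
phi_lep_minus = rng.uniform(-np.pi, np.pi, size=100_000)

# Azimuthal opening angle between the two charged leptons, folded into
# [0, pi]; its shape is sensitive to the ttbar spin correlation.
dphi = np.abs(phi_lep_plus - phi_lep_minus)
dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)

# A simple asymmetry comparing small and large opening angles.
a_dphi = (np.sum(dphi < np.pi / 2) - np.sum(dphi >= np.pi / 2)) / dphi.size
print(f"Delta-phi asymmetry (uncorrelated toy sample): {a_dphi:+.4f}")
```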

Relevance:

100.00%

Publisher:

Abstract:

Ab initio calculations of Afρ are presented using Mie scattering theory and a Direct Simulation Monte Carlo (DSMC) dust outflow model in support of the Rosetta mission and its target 67P/Churyumov-Gerasimenko (CG). These calculations are performed for particle sizes ranging from 0.010 μm to 1.0 cm. The present status of our knowledge of differential particle size distributions is reviewed, and a variety of particle size distributions is used to explore their effect on Afρ and on the dust mass production rate ṁ. A new, simple two-parameter particle size distribution that curtails the effect of particles below 1 μm is developed. The contributions of all particle sizes are summed to obtain a resulting overall Afρ. The resultant Afρ could not easily be predicted a priori and turned out to be considerably more constraining on the mass loss rate than expected. It is found that a proper calculation of Afρ combined with a good Afρ measurement can constrain the dust/gas ratio in the coma of comets as well as other methods presently available. Phase curves of Afρ versus scattering angle are calculated and produce good agreement with observational data. The major conclusions of our calculations are:
– The original definition of A in Afρ is problematical and Afρ should be q_sca(n, λ) × p(g) × f × ρ. Nevertheless, we keep the present nomenclature of Afρ as a measured quantity for an ensemble of coma particles.
– The ratio between Afρ and the dust mass loss rate ṁ is dominated by the particle size distribution.
– For most particle size distributions presently in use, small particles in the range from 0.10 to 1.0 μm contribute a large fraction of Afρ.
– Simplifying the calculation of Afρ by considering only large particles and approximating q_sca does not represent a realistic model. Mie scattering theory or, if necessary, more complex scattering calculations must be used.
– For the commonly used particle size distributions, dn/da ∼ a^(−3.5) to a^(−4), there is a natural cut-off in the Afρ contribution for both small and large particles.
– The scattering phase function must be taken into account for each particle size; otherwise the contribution of large particles can be over-estimated by a factor of 10.
– Using an imaginary index of refraction of i = 0.10 does not produce sufficient backscattering to match observational data.
– A mixture of dark particles with i ⩾ 0.10 and brighter silicate particles with i ⩽ 0.04 matches the observed phase curves quite well.
– Using current observational constraints, we find the dust/gas mass-production ratio of CG at 1.3 AU is confined to a range of 0.03–0.5, with a reasonably likely value around 0.1.
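As a rough illustration of summing the contributions of all particle sizes, the sketch below weights a power-law size distribution dn/da ∼ a^(−3.5) by a crude size-dependent scattering efficiency. The wavelength, the Rayleigh-type q_sca approximation, and the grid are assumptions made purely for illustration; as the abstract stresses, a realistic calculation must use full Mie theory.

```python
import numpy as np

# Logarithmic grid of particle radii from 0.01 micron to 1 cm (metres).
a = np.logspace(np.log10(0.01e-6), np.log10(1.0e-2), 400)

# Placeholder scattering efficiency: rises as (2*pi*a/lambda)^4 in the
# Rayleigh regime and saturates near 2 for large grains. A real calculation
# would use Mie theory for an assumed refractive index.
wavelength = 0.55e-6  # metres, visible light (assumption)
x = 2.0 * np.pi * a / wavelength
q_sca = np.minimum(2.0, x**4)

# Commonly used differential size distribution dn/da ~ a^-3.5.
dnda = a**-3.5

# Relative contribution of each size bin to Afrho: scattering cross-section
# (q_sca * pi * a^2) weighted by the number of grains in the bin.
da = np.gradient(a)
weight = q_sca * np.pi * a**2 * dnda * da
weight /= weight.sum()

frac_submicron = weight[a < 1e-6].sum()
print(f"Sub-micron fraction of Afrho (toy model): {frac_submicron:.2f}")
```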

Relevance:

100.00%

Publisher:

Abstract:

We estimate the momentum diffusion coefficient of a heavy quark within a pure SU(3) plasma at a temperature of about 1.5 T_c. Large-scale Monte Carlo simulations on a series of lattices extending up to 192³ × 48 permit us to carry out a continuum extrapolation of the so-called color-electric imaginary-time correlator. The extrapolated correlator is analyzed with the help of theoretically motivated models for the corresponding spectral function. Evidence for a nonzero transport coefficient is found and, incorporating systematic uncertainties reflecting model assumptions, we obtain κ = (1.8–3.4) T³. This implies that the "drag coefficient," characterizing the time scale at which heavy quarks adjust to hydrodynamic flow, is η_D⁻¹ = (1.8–3.4) (T_c/T)² (M/1.5 GeV) fm/c, where M is the heavy quark kinetic mass. The results apply to bottom and, with somewhat larger systematic uncertainties, to charm quarks.
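To give a feeling for the quoted numbers, the sketch below converts κ = χT³ into the drag time scale 1/η_D using the standard Langevin relation η_D = κ/(2MT). The value of T_c and the choice T = 1.5 T_c are assumptions made for illustration and are not taken from the paper.

```python
# Convert a heavy-quark momentum diffusion coefficient kappa = chi * T^3
# into the drag ("relaxation") time 1/eta_D via the Langevin relation
# eta_D = kappa / (2 M T).  T_c ~ 0.30 GeV for pure SU(3) and T = 1.5 T_c
# are illustrative assumptions.
HBARC_GEV_FM = 0.1973  # hbar*c in GeV*fm

def drag_time_fm(chi, mass_gev=1.5, t_c_gev=0.30, t_over_tc=1.5):
    temperature = t_over_tc * t_c_gev                      # GeV
    kappa = chi * temperature**3                           # GeV^3
    inv_eta_d_gev = 2.0 * mass_gev * temperature / kappa   # GeV^-1
    return inv_eta_d_gev * HBARC_GEV_FM                    # fm/c

for chi in (1.8, 3.4):
    print(f"chi = {chi}: 1/eta_D ~ {drag_time_fm(chi):.2f} fm/c")
```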

Relevance:

100.00%

Publisher:

Abstract:

The mechanics-based analysis framework predicts top-down fatigue cracking initiation time in asphalt concrete pavements by utilising fracture mechanics and mixture morphology-based properties. To reduce the level of complexity involved, traffic data were characterised and incorporated into the framework using the equivalent single axle load (ESAL) approach. There is a concern that such a simplistic traffic characterisation might result in erroneous performance predictions and pavement structural designs. This paper integrates axle load spectra and other traffic characterisation parameters into the mechanics-based analysis framework and studies the impact these parameters have on predicted fatigue cracking performance. The traffic characterisation inputs studied are traffic growth rate, axle load spectra, lateral wheel wander and volume adjustment factors. For this purpose, a traffic integration approach incorporating Monte Carlo simulation and representative traffic characterisation inputs was developed. The significance of these traffic characterisation parameters was established by evaluating a number of field pavement sections. The results show that all of the traffic characterisation parameters except truck wheel wander have a significant influence on predicted top-down fatigue cracking performance.
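A minimal sketch of this kind of Monte Carlo traffic integration is given below: axle loads are drawn from an assumed load spectrum, an annual growth rate and lateral wheel wander are applied, and damage is accumulated with a fourth-power load-equivalency law. The spectrum, growth rate, wander model and calibration constant are all hypothetical, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single-axle load spectrum (kN) and relative frequencies.
loads_kn = np.array([40, 60, 80, 100, 120, 140])
freq = np.array([0.30, 0.28, 0.20, 0.12, 0.07, 0.03])

years = 20
base_axles_per_year = 2.0e5      # assumed initial traffic volume
growth_rate = 0.04               # assumed 4% annual growth
reference_load_kn = 80.0         # standard axle
damage_exponent = 4.0            # assumed fourth-power load-damage law
calibration = 4.0e-7             # arbitrary calibration constant

def simulate_damage():
    """One Monte Carlo realisation of cumulative fatigue damage (Miner's rule)."""
    damage = 0.0
    for year in range(years):
        n_axles = rng.poisson(base_axles_per_year * (1 + growth_rate) ** year)
        counts = rng.multinomial(n_axles, freq)          # axles per load class
        # Lateral wheel wander: only a fraction of passes hit the critical path.
        hit_fraction = np.clip(rng.normal(0.35, 0.05), 0.0, 1.0)
        damage += hit_fraction * calibration * np.sum(
            counts * (loads_kn / reference_load_kn) ** damage_exponent)
    return damage

sims = np.array([simulate_damage() for _ in range(200)])
print(f"mean damage index = {sims.mean():.2f}, P(damage > 1) = {np.mean(sims > 1.0):.2f}")
```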

Relevance:

100.00%

Publisher:

Abstract:

1. Previous glucagon receptor gene (GCGR) studies have shown a Gly40Ser mutation to be more prevalent in essential hypertension and to affect glucagon binding affinity to its receptor. An Alu-repeat poly(A) polymorphism colocalized to GCGR was used in the present study to test for association and linkage in hypertension, as well as association in obesity development. 2. Using a cross-sectional approach, 85 hypertensives and 95 normotensives were genotyped using polymerase chain reaction primers flanking the Alu-repeat. Both hypertensive and normotensive populations were subdivided into lean and obese categories based on body mass index (BMI) to determine involvement of this variant in obesity. For the linkage study, 89 Australian Caucasian hypertension-affected sibships (174 sib-pairs) were genotyped and the results were analysed using GENE-HUNTER, Mapmaker Sibs, ERPA and SPLINK (all freely available from http://linkage.rockefeller.edu/soft/list.html). 3. Cross-sectional results for both hypertension and obesity were analysed using chi-squared and Monte Carlo analyses. Results did not show an association of this variant with either hypertension (χ² = 6.9, P = 0.14; Monte Carlo χ² = 7.0, P = 0.11; n = 5000) or obesity (χ² = 3.3, P = 0.35; Monte Carlo χ² = 3.26, P = 0.34; n = 5000). In addition, results from the linkage study using hypertensive sib-pairs did not indicate linkage of the poly(A) repeat with hypertension. Hence, results did not indicate a role for the Alu-repeat in either hypertension or obesity. However, as the heterozygosity of this poly(A) repeat is low (35%), a larger number of hypertensive sib-pairs may be required to draw definitive conclusions.
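The Monte Carlo chi-squared analysis mentioned above can be illustrated with a permutation test on a genotype-by-phenotype contingency table. The table counts below are invented and the code is a generic sketch, not the analysis used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2 x 3 contingency table: case/control status versus
# Alu-repeat poly(A) genotype classes (counts are illustrative only).
observed = np.array([[30, 40, 15],
                     [35, 45, 15]])

def chi2_stat(table):
    """Pearson chi-squared statistic for a contingency table."""
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return np.sum((table - expected) ** 2 / expected)

obs_stat = chi2_stat(observed)

# Monte Carlo null distribution: reshuffle genotype labels among subjects
# while keeping the row (phenotype) totals fixed.
labels = np.repeat(np.arange(observed.shape[1]), observed.sum(0))
n_cases = observed[0].sum()
n_sims = 5000
null_stats = np.empty(n_sims)
for i in range(n_sims):
    perm = rng.permutation(labels)
    case_counts = np.bincount(perm[:n_cases], minlength=observed.shape[1])
    ctrl_counts = np.bincount(perm[n_cases:], minlength=observed.shape[1])
    null_stats[i] = chi2_stat(np.vstack([case_counts, ctrl_counts]))

p_mc = np.mean(null_stats >= obs_stat)
print(f"chi2 = {obs_stat:.2f}, Monte Carlo P = {p_mc:.3f} (n = {n_sims})")
```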

Relevance:

100.00%

Publisher:

Abstract:

Objective. To estimate the burden of disease attributable to excess body weight using the body mass index (BMI), by age and sex, in South Africa in 2000. Design. World Health Organization comparative risk assessment (CRA) methodology was followed. Re-analysis of the 1998 South Africa Demographic and Health Survey data provided mean BMI estimates by age and sex. Population-attributable fractions were calculated and applied to revised burden of disease estimates. Monte Carlo simulation-modelling techniques were used for the uncertainty analysis. Setting. South Africa. Subjects. Adults aged 30 years and older. Outcome measures. Deaths and disability-adjusted life years (DALYs) from ischaemic heart disease, ischaemic stroke, hypertensive disease, osteoarthritis, type 2 diabetes mellitus, and selected cancers. Results. Overall, 87% of type 2 diabetes, 68% of hypertensive disease, 61% of endometrial cancer, 45% of ischaemic stroke, 38% of ischaemic heart disease, 31% of kidney cancer, 24% of osteoarthritis, 17% of colon cancer, and 13% of postmenopausal breast cancer were attributable to a BMI ≥ 21 kg/m². Excess body weight is estimated to have caused 36 504 deaths (95% uncertainty interval 31 018 - 38 637) or 7% (95% uncertainty interval 6.0 - 7.4%) of all deaths in 2000, and 462 338 DALYs (95% uncertainty interval 396 512 - 478 847) or 2.9% of all DALYs (95% uncertainty interval 2.4 - 3.0%). The burden in females was approximately double that in males. Conclusions. This study shows the importance of recognising excess body weight as a major risk to health, particularly among females, highlighting the need to develop, implement and evaluate comprehensive interventions to achieve lasting change in the determinants and impact of excess body weight.
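A simplified sketch of a Monte Carlo uncertainty analysis for an attributable burden estimate is shown below. The relative risk, exposure level and death count are illustrative placeholders, and the population-attributable fraction is the simple (RR − 1)/RR form rather than the full CRA calculation over the exposure distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative inputs (not the study's actual figures): relative risk per
# 5 kg/m^2 of BMI above the theoretical minimum, mean excess exposure in
# those units, and the number of deaths from one outcome.
log_rr_mean, log_rr_se = np.log(1.25), 0.05   # assumed RR and its uncertainty
excess_bmi_units = 1.2                        # assumed mean (BMI - 21)/5
outcome_deaths = 10_000                       # assumed deaths from the outcome

n_sims = 20_000
rr_per_unit = np.exp(rng.normal(log_rr_mean, log_rr_se, n_sims))
rr = rr_per_unit ** excess_bmi_units

# Simplified population attributable fraction and attributable deaths.
paf = (rr - 1.0) / rr
attrib_deaths = paf * outcome_deaths

lo, mid, hi = np.percentile(attrib_deaths, [2.5, 50, 97.5])
print(f"Attributable deaths: {mid:.0f} (95% UI {lo:.0f}-{hi:.0f})")
```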

Relevance:

100.00%

Publisher:

Abstract:

The behavior of pile foundations in non-liquefiable soil under seismic loading is considerably influenced by variability in the soil and seismic design parameters. Hence, probabilistic models for the assessment of seismic pile design are necessary. Deformation of a pile foundation in non-liquefiable soil is dominated by the inertial force from the superstructure. The present study considers a pseudo-static approach based on code-specified design response spectra. The response of the pile is determined by an equivalent cantilever approach. The soil medium is modeled as a one-dimensional random field along the depth. The variability associated with undrained shear strength, design response spectrum ordinate, and superstructure mass is taken into consideration. The Monte Carlo simulation technique is adopted to determine the probability of failure and reliability indices based on pile failure modes, namely exceedance of the lateral displacement limit and of the moment capacity. A reliability-based design approach for the free-head pile under seismic force is suggested that enables a rational choice of pile design parameters.
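The sketch below illustrates Monte Carlo estimation of the probability of failure and the corresponding reliability index for the two failure modes. The pseudo-static load model, the input distributions and the limits are all assumed for illustration and do not reproduce the study's pile model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_sims = 100_000

# Illustrative random inputs (all distributions and parameters assumed):
# undrained shear strength, spectral acceleration ordinate, superstructure mass.
s_u = rng.lognormal(mean=np.log(50.0), sigma=0.3, size=n_sims)   # kPa
sa = rng.lognormal(mean=np.log(0.25), sigma=0.4, size=n_sims)    # g
mass = rng.normal(150.0, 15.0, size=n_sims)                      # tonnes

# Toy pseudo-static response of an equivalent cantilever pile: lateral force
# from the superstructure, a displacement from a strength-dependent stiffness,
# and a moment from an assumed equivalent arm (all purely illustrative).
force = mass * 9.81 * sa                    # kN
displacement = force / (400.0 * s_u)        # metres, toy stiffness model
moment = force * 2.0                        # kN*m, assumed 2 m equivalent arm

disp_limit, moment_capacity = 0.05, 1500.0  # assumed limits
failed = (displacement > disp_limit) | (moment > moment_capacity)

p_f = failed.mean()
beta = norm.ppf(1.0 - p_f) if 0 < p_f < 1 else float("inf")
print(f"P_f = {p_f:.4f}, reliability index beta = {beta:.2f}")
```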

Relevance:

100.00%

Publisher:

Abstract:

This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, cluster computing technology, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation toolkit, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled cycle of simulation-to-data analysis. Typically, a Geant4 computer experiment is used to understand test beam measurements. Thus another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, the full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
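The ANN-based tagging idea can be illustrated with a small feed-forward classifier trained on synthetic discriminating variables. The features, their distributions and the network configuration below are assumptions, and the thesis itself used ROOT-based tools rather than scikit-learn.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n = 20_000

# Synthetic stand-ins for b-tagging discriminants (e.g. impact-parameter
# significance, secondary-vertex mass, track multiplicity); the feature
# choices and distributions are assumptions, not CMS quantities.
signal = np.column_stack([rng.normal(3.0, 1.5, n), rng.normal(1.8, 0.6, n),
                          rng.poisson(5, n)])
background = np.column_stack([rng.normal(0.5, 1.5, n), rng.normal(0.8, 0.6, n),
                              rng.poisson(3, n)])
X = np.vstack([signal, background]).astype(float)
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Small feed-forward network, in the spirit of the ANN tagger described above.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print(f"b-tag classification accuracy on the toy sample: {clf.score(X_te, y_te):.3f}")
```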

Relevance:

100.00%

Publisher:

Abstract:

We report a recent search for a charged Higgs boson in top quark decays using 2.2 fb⁻¹ of CDF data. This is the first attempt to search for a charged Higgs boson using the fully reconstructed mass, assuming H⁺ → cs̄ in the low tan β region. No evidence of a charged Higgs boson is observed in the CDF data; hence 95% C.L. upper limits are placed on B(t → H⁺b).

Relevance:

100.00%

Publisher:

Abstract:

We present the result of a search for a massive color-octet vector particle (e.g., a massive gluon) decaying to a pair of top quarks in proton-antiproton collisions with a center-of-mass energy of 1.96 TeV. This search is based on 1.9 fb$^{-1}$ of data collected using the CDF detector during Run II of the Tevatron at Fermilab. We study $t\bar{t}$ events in the lepton+jets channel with at least one $b$-tagged jet. A massive gluon is characterized by its mass, decay width, and the strength of its coupling to quarks. These parameters are determined according to the observed invariant mass distribution of top quark pairs. We set limits on the massive gluon coupling strength for masses between 400 and 800 GeV$/c^2$ and width-to-mass ratios between 0.05 and 0.50. The coupling strength of the hypothetical massive gluon to quarks is consistent with zero within the explored parameter space.
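As a toy illustration of determining a coupling strength from an invariant-mass distribution, the sketch below scans a binned Poisson likelihood over a signal-plus-background template. The templates, bin contents and the ΔNLL threshold are illustrative assumptions, not the CDF analysis.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy binned m_ttbar templates (events per bin): Standard Model background
# and the shape added by a hypothetical massive-gluon signal of unit coupling.
background = np.array([400., 300., 200., 120., 60., 25., 10.])
signal_shape = np.array([5., 10., 25., 40., 25., 10., 4.])

true_coupling = 0.0
observed = rng.poisson(background + true_coupling * signal_shape)

def nll(coupling):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    mu = background + coupling * signal_shape
    return np.sum(mu - observed * np.log(mu))

couplings = np.linspace(0.0, 2.0, 201)
nll_values = np.array([nll(c) for c in couplings])
best = couplings[np.argmin(nll_values)]
# Crude interval endpoint from the profile of the likelihood-ratio curve.
upper = couplings[nll_values - nll_values.min() < 1.92].max()
print(f"best-fit coupling = {best:.2f}, crude upper bound = {upper:.2f}")
```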

Relevance:

100.00%

Publisher:

Abstract:

We present results for the QCD spectrum and the matrix elements of scalar and axial-vector densities at β = 6/g² = 5.4, 5.5, 5.6. The lattice update was done using the hybrid Monte Carlo algorithm to include two flavors of dynamical Wilson fermions. We have explored quark masses in the range m_s ≤ m_q ≤ 3 m_s. The results for the spectrum are similar to quenched simulations, and mass ratios are consistent with phenomenological heavy-quark models. The results for matrix elements of the scalar density show that the contribution of sea quarks is comparable to that of the valence quarks. This has important implications for the pion-nucleon σ term.
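The hybrid Monte Carlo update mentioned above alternates molecular-dynamics (leapfrog) evolution with a Metropolis accept/reject step. The sketch below applies it to a trivial Gaussian toy action rather than lattice QCD with Wilson fermions, purely to show the structure of one trajectory.

```python
import numpy as np

rng = np.random.default_rng(2)

def action(phi):
    """Toy action: a free Gaussian 'field' with unit mass (stand-in for QCD)."""
    return 0.5 * np.sum(phi**2)

def grad_action(phi):
    return phi

def hmc_step(phi, n_md=20, eps=0.1):
    """One hybrid Monte Carlo trajectory: leapfrog evolution + Metropolis test."""
    pi = rng.normal(size=phi.shape)                  # refresh momenta
    h_old = action(phi) + 0.5 * np.sum(pi**2)
    phi_new, pi_new = phi.copy(), pi.copy()
    pi_new -= 0.5 * eps * grad_action(phi_new)       # initial half step
    for _ in range(n_md - 1):
        phi_new += eps * pi_new
        pi_new -= eps * grad_action(phi_new)
    phi_new += eps * pi_new
    pi_new -= 0.5 * eps * grad_action(phi_new)       # final half step
    h_new = action(phi_new) + 0.5 * np.sum(pi_new**2)
    if rng.random() < np.exp(h_old - h_new):         # Metropolis accept/reject
        return phi_new, True
    return phi, False

phi = np.zeros(100)
accepted, samples = 0, []
for sweep in range(2000):
    phi, ok = hmc_step(phi)
    accepted += ok
    samples.append(np.mean(phi**2))

print(f"acceptance = {accepted/2000:.2f}, <phi^2> = {np.mean(samples[500:]):.3f} (exact: 1)")
```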

Relevance:

100.00%

Publisher:

Abstract:

The physics potential of e⁺e⁻ linear colliders is summarized in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model: the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and the matter particles. High-precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, i.e. compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e⁺e⁻ linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics program complementary to hadron machines. (C) 1998 Elsevier Science B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk)-edge graph; this takes Õ(m + n^{5/2} k · min{k^{1/2}, n^{1/6}}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^{5/6}). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc, for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem: given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
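For small graphs the quantity being computed can be illustrated directly: the (s, t) edge connectivity of an undirected unweighted graph equals the maximum number of edge-disjoint s-t paths, obtainable by unit-capacity max flow. The brute-force sketch below does exactly that; it is not the Õ(m + nk^3) algorithm described above.

```python
from collections import deque

def edge_connectivity(adj, s, t):
    """(s, t) edge connectivity of an undirected unweighted graph = maximum
    number of edge-disjoint s-t paths, found by BFS augmenting paths on a
    unit-capacity flow network (brute force, not the paper's algorithm)."""
    # Residual capacities: each undirected edge becomes two unit arcs.
    cap = {u: dict.fromkeys(vs, 1) for u, vs in adj.items()}
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:           # push one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] = cap[v].get(u, 0) + 1
            v = u
        flow += 1

# Toy graph: two triangles joined by a single bridge edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(edge_connectivity(adj, 0, 5))   # -> 1 (only the bridge separates them)
print(edge_connectivity(adj, 0, 1))   # -> 2
```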

Relevance:

100.00%

Publisher:

Abstract:

The polarisation of top quarks produced in high-energy processes can be a very sensitive probe of physics beyond the Standard Model. The kinematical distributions of the decay products of the top quark can provide clean information on the polarisation of the produced top and can thus probe new-physics effects in the top quark sector. We study some of the recently proposed polarisation observables involving the decay products of the top quark in the context of H⁻t and Wt production. We show that the effect of the top polarisation on the decay-lepton azimuthal angle distribution, studied recently for these processes at leading order in QCD, is robust with respect to the inclusion of next-to-leading-order and parton-shower corrections. We also consider the leptonic polar angle, as well as recently proposed energy-related distributions of the top decay products. We construct asymmetry parameters from these observables, which can be used to distinguish the new-physics signal from the Wt background and to discriminate between different values of tan β and m_{H⁻} in a general type-II two-Higgs-doublet model. Finally, we show that similar observables may be useful in separating a Standard Model Wt signal from the much larger QCD-induced top pair production background.
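A simple example of an asymmetry parameter built from the decay-lepton azimuthal distribution is sketched below. The two toy samples are invented stand-ins for polarised and unpolarised tops, and the observable is only a generic illustration of the kind of asymmetry discussed, not one of the paper's specific observables.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy samples of the decay-lepton azimuthal angle phi_l (radians) for a
# hypothetical polarised sample and an unpolarised one (both invented).
unpolarised = rng.uniform(0.0, 2.0 * np.pi, 200_000)
polarised = np.mod(rng.normal(0.0, 1.2, 200_000), 2.0 * np.pi)

def azimuthal_asymmetry(phi):
    """A_phi = [N(cos phi > 0) - N(cos phi < 0)] / N, a simple asymmetry
    parameter built from an angular distribution."""
    c = np.cos(phi)
    return (np.sum(c > 0) - np.sum(c < 0)) / c.size

for name, phi in [("unpolarised", unpolarised), ("polarised", polarised)]:
    print(f"{name:12s} A_phi = {azimuthal_asymmetry(phi):+.3f}")
```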

Relevance:

100.00%

Publisher:

Abstract:

We consider the Randall-Sundrum (RS) setup to be a theory of flavor, as an alternative to Froggatt-Nielsen models, instead of as a solution to the hierarchy problem. The RS framework is modified by taking the low-energy brane to be at the grand unified theory (GUT) scale. This also alleviates constraints from flavor physics. Fermion masses and mixing angles are fit at the GUT scale. The ranges of the bulk mass parameters are determined using a χ² fit, taking into consideration the variation in the O(1) parameters. In the hadronic sector, the heavy top quark requires large bulk mass parameters localizing the right-handed top quark close to the IR brane. Two cases of neutrino masses are considered: (a) Planck-scale lepton number violation and (b) Dirac neutrino masses. Contrary to the case of weak-scale RS models, both these cases give reasonable fits to the data, with the Planck-scale lepton number violation fitting slightly better than the Dirac case. In the supersymmetric version, the fits are not significantly different except for the variation in tan β. If the Higgs superfields and the supersymmetry-breaking spurion are localized on the same brane, then the structure of the sfermion masses is determined by the profiles of the zero modes of the hypermultiplets in the bulk. Trilinear terms have the same structure as the Yukawa matrices. The resultant squark spectrum is around ∼2–3 TeV, as required for the light Higgs mass to be around 125 GeV and to satisfy the flavor-violating constraints.
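The χ² fit over bulk-mass-like parameters can be illustrated with a toy exponential-overlap mass formula. The targets, uncertainties and the m ∼ e^(−c) ansatz below are stand-ins for the actual 5D zero-mode profiles and O(1) coefficients, included only to show the structure of such a fit.

```python
import numpy as np
from scipy.optimize import minimize

# Toy targets: three hierarchical mass ratios (illustrative numbers only).
target = np.array([3.0e-4, 6.0e-2, 1.0])       # (m1, m2, m3) / m3, assumed
sigma = 0.3 * target                           # assumed 30% theory uncertainty

def masses(c):
    """Toy RS-like overlap formula: mass_i ~ exp(-c_i), mimicking how bulk
    mass parameters control zero-mode localisation (a stand-in, not the
    actual 5D profile formula used in the paper)."""
    return np.exp(-np.asarray(c))

def chi2(c):
    return np.sum(((masses(c) - target) / sigma) ** 2)

result = minimize(chi2, x0=np.array([5.0, 2.0, 0.0]), method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 5000})
print("best-fit bulk-like parameters:", np.round(result.x, 3))
print(f"chi^2 at minimum: {result.fun:.3e}")
```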