1000 results for Heavy quark theory


Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electroproduction and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moment, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of two spin-independent (scalar) and four spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams (100 one-loop diagrams need to be calculated), several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulation of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for the algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present results for the differential cross sections and the generalized polarizabilities of VCS off the proton.

Relevance:

30.00%

Publisher:

Abstract:

This work addresses the electronic properties of the superconductors UPd2Al3 and UNi2Al3 on the basis of thin film experiments. These isostructural compounds are ideal candidates for studying the interplay of magnetism and superconductivity, owing to the differences between their magnetically ordered states as well as the experimental evidence for a magnetic pairing mechanism in UPd2Al3. Epitaxial thin film samples of UPd2Al3 and UNi2Al3 were prepared using UHV molecular beam epitaxy (MBE). For UPd2Al3, a change of the growth direction from the intrinsic (001) to epitaxial (100) was predicted and successfully demonstrated using LaAlO3 substrates cut in the (110) direction. With optimized deposition process parameters for UPd2Al3 (100) on LaAlO3 (110), superconducting samples with critical temperatures up to Tc = 1.75 K were obtained. UPd2Al3-AlOx-Ag mesa junctions with a superconducting base electrode were prepared and shown to be in the tunneling regime. However, no signatures of a superconducting density of states were observed in the tunneling spectra. The resistive superconducting transition was probed for a possible dependence on the current direction. In contrast to UNi2Al3, the existence of such a feature was excluded in UPd2Al3 (100) thin films. The second focus of this work is the dependence of the resistive transition in UNi2Al3 (100) thin films on the current direction. The experimental fact that the resistive transition occurs at slightly higher temperatures for I║a than for I║c can be explained within a model of two weakly coupled superconducting bands. Evidence is presented for the key assumption of the two-band model, namely that transport in and out of the ab-plane is generated on different, weakly coupled parts of the Fermi surface. The main indications are the angle dependence of the superconducting transition and the dependence of the upper critical field Bc2 on current and field orientation.
Additionally, several possible alternative explanations for the directional splitting of the transition are excluded in this work. An origin due to scattering on crystal defects or impurities is ruled out, as is a relation to ohmic heating or vortex dynamics. The shift of the transition temperature as a function of the current density was found to behave as predicted by the Ginzburg-Landau theory for critical-current depairing, which plays a significant role in the two-band model. In conclusion, the directional splitting of the resistive transition has, at present, to be regarded as an intrinsic property unique to UNi2Al3. UNi2Al3 is therefore proposed as a model system for weakly coupled multiband superconductivity. The magnetoresistance in the normal-conducting state was measured for UPd2Al3 and UNi2Al3. For UNi2Al3, a negative contribution was observed close to the antiferromagnetic ordering temperature TN only for I║a, which can be associated with reduced spin-disorder scattering. In agreement with previous results, it is concluded that the magnetic moments have to be attributed to the same part of the Fermi surface that generates transport in the ab-plane.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation demonstrates and improves the predictive power of coupled-cluster theory with regard to the highly accurate calculation of molecular properties. The demonstration proceeds via extrapolation and additivity techniques in single-reference coupled-cluster theory, with the help of which the existence and structure of previously unknown molecules containing heavy main-group elements are predicted. The example of cyclic SiS_2, a triatomic molecule with 16 valence electrons, shows particularly clearly that the predictive power of the theory is nowadays on a par with experiment: theoretical considerations initiated an experimental search for this molecule, which ultimately led to its detection and characterization by rotational spectroscopy. The predictive power of coupled-cluster theory is improved by developing a multireference coupled-cluster method for the calculation of first-order spin-orbit splittings in 2^Pi states. The focus here is on Mukherjee's variant of multireference coupled-cluster theory, but in principle the proposed computational scheme is applicable to all variants. The target accuracy is 10 cm^-1. It is reached with the new method when one- and two-electron effects and, for heavy elements, also scalar-relativistic effects are taken into account. In combination with coupled-cluster-based extrapolation and additivity schemes, the method is therefore suitable for computing highly accurate thermochemical data.

Relevance:

30.00%

Publisher:

Abstract:

One of the fundamental interactions in the Standard Model of particle physics is the strong force, which can be formulated as a non-abelian gauge theory called Quantum Chromodynamics (QCD). In the low-energy regime, where the QCD coupling becomes strong and quarks and gluons are confined to hadrons, a perturbative expansion in the coupling constant is not possible. However, the introduction of a four-dimensional Euclidean space-time lattice allows for an ab initio treatment of QCD and provides a powerful tool to study the low-energy dynamics of hadrons. Some hadronic matrix elements of interest receive contributions from diagrams including quark-disconnected loops, i.e. disconnected quark lines from one lattice point back to the same point. The calculation of such quark loops is computationally very demanding, because it requires knowledge of the all-to-all propagator. In this thesis we use stochastic sources and a hopping parameter expansion to estimate such propagators. We apply this technique to study two problems which rely crucially on the calculation of quark-disconnected diagrams, namely the scalar form factor of the pion and the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon. The scalar form factor of the pion describes the coupling of a charged pion to a scalar particle. We calculate the connected and the disconnected contribution to the scalar form factor for three different momentum transfers. The scalar radius of the pion is extracted from the momentum dependence of the form factor. The use of several different pion masses and lattice spacings allows for an extrapolation to the physical point. The chiral extrapolation is done using chiral perturbation theory (χPT).
We find that our pion mass dependence of the scalar radius is consistent with χPT at next-to-leading order. Additionally, we are able to extract the low-energy constant ℓ_4 from the extrapolation, and our result is in agreement with results from other lattice determinations. Furthermore, our result for the scalar pion radius at the physical point is consistent with a value that was extracted from ππ-scattering data. The hadronic vacuum polarization (HVP) is the leading-order hadronic contribution to the anomalous magnetic moment a_μ of the muon. The HVP can be estimated from the correlation of two vector currents in the time-momentum representation. We explicitly calculate the corresponding disconnected contribution to the vector correlator. We find that the disconnected contribution is consistent with zero within its statistical errors. This result can be converted into an upper limit for the maximum contribution of the disconnected diagram to a_μ by using the expected time dependence of the correlator and comparing it to the corresponding connected contribution. We find the disconnected contribution to be smaller than ≈ 5% of the connected one. This value can be used as an estimate for the systematic error that arises from neglecting the disconnected contribution.

Relevance:

30.00%

Publisher:

Abstract:

Time series of geocenter coordinates were determined with data of two global navigation satellite systems (GNSSs), namely the U.S. GPS (Global Positioning System) and the Russian GLONASS (Global’naya Nawigatsionnaya Sputnikowaya Sistema). The data was recorded in the years 2008–2011 by a global network of 92 permanently observing GPS/GLONASS receivers. Two types of daily solutions were generated independently for each GNSS, one including the estimation of geocenter coordinates and one without these parameters. Fair agreement between GPS and GLONASS was found for the geocenter x- and y-coordinate series. Our tests, however, clearly reveal artifacts in the z-component determined with the GLONASS data. Large periodic excursions in the GLONASS geocenter z-coordinates of about 40 cm peak-to-peak are related to the maximum elevation angles of the Sun above/below the orbital planes of the satellite system and thus have a period of about 4 months (a third of a year). A detailed analysis revealed that the artifacts are almost uniquely governed by the differences of the estimates of direct solar radiation pressure (SRP) in the two solution series (with and without geocenter estimation). A simple formula is derived, describing the relation between the geocenter z-coordinate and the corresponding parameter of the SRP. The effect can be explained by first-order perturbation theory of celestial mechanics. The theory also predicts a heavy impact on the GNSS-derived geocenter if once-per-revolution SRP parameters are estimated in the direction of the satellite’s solar panel axis. Specific experiments using GPS observations revealed that this is indeed the case. Although the main focus of this article is on GNSS, the theory developed is applicable to all satellite observing techniques. We applied the theory to satellite laser ranging (SLR) solutions using LAGEOS. It turns out that the correlation between geocenter and SRP parameters is not a critical issue for the SLR solutions.
The reasons are threefold: the direct SRP is about a factor of 30–40 smaller for typical geodetic SLR satellites than for GNSS satellites, so that in most cases one does not need to solve for SRP parameters at all (which rules out any correlation between these parameters and the geocenter coordinates); the orbital arc length of 7 days typically used in SLR analysis contains more than 50 revolutions of the LAGEOS satellites, compared to about two revolutions of GNSS satellites in the daily arcs used in GNSS analysis; and the orbit geometry is less critical for LAGEOS than for GNSS satellites, because the elevation angle of the Sun w.r.t. the orbital plane usually changes significantly over 7 days.

Relevance:

30.00%

Publisher:

Abstract:

We review the failure of lowest-order chiral SU(3)_L × SU(3)_R perturbation theory χPT3 to account for amplitudes involving the f0(500) resonance and O(m_K) extrapolations in momenta. We summarize our proposal to replace χPT3 with a new effective theory χPTσ based on a low-energy expansion about an infrared fixed point in 3-flavour QCD. At the fixed point, the quark condensate ⟨q̄q⟩_vac ≠ 0 induces nine Nambu-Goldstone bosons: π, K, η and a QCD dilaton σ which we identify with the f0(500) resonance. We discuss the construction of the χPTσ Lagrangian and its implications for meson phenomenology at low energies. Our main results include a simple explanation for the ΔI = 1/2 rule in K-decays and an estimate for the Drell-Yan ratio in the infrared limit.

Relevance:

30.00%

Publisher:

Abstract:

Among resummation techniques for perturbative QCD in the context of collider and flavor physics, soft-collinear effective theory (SCET) has emerged as both a powerful and versatile tool, having been applied to a large variety of processes, from B-meson decays to jet production at the LHC. This book provides a concise, pedagogical introduction to this technique. It discusses the expansion of Feynman diagrams around the high-energy limit, followed by the explicit construction of the effective Lagrangian - first for a scalar theory, then for QCD. The underlying concepts are illustrated with the quark vector form factor at large momentum transfer, and the formalism is applied to compute soft-gluon resummation and to perform transverse-momentum resummation for the Drell-Yan process utilizing renormalization group evolution in SCET. Finally, the infrared structure of n-point gauge-theory amplitudes is analyzed by relating them to effective-theory operators. This text is suitable for graduate students and non-specialist researchers alike as it requires only basic knowledge of perturbative QCD.

Relevance:

30.00%

Publisher:

Abstract:

We determine the mass of the bottom quark from high moments of the b b̄ production cross section in e+e− annihilation, which are dominated by the threshold region. On the theory side, next-to-next-to-next-to-leading order (NNNLO) calculations both for the resonances and for the continuum cross section are used for the first time. We find m_b^PS(2 GeV) = 4.532 +0.013/−0.039 GeV for the potential-subtracted mass and m_b^MS(m_b^MS) = 4.193 +0.022/−0.035 GeV for the MS-bar bottom-quark mass.

Relevance:

30.00%

Publisher:

Abstract:

Heavy-ion collisions are a powerful tool to study hot and dense QCD matter, the so-called Quark-Gluon Plasma (QGP). Since heavy quarks (charm and beauty) are dominantly produced in the early stages of the collision, they experience the complete evolution of the system. Measurements of electrons from heavy-flavour hadron decays are one possible way to study the interaction of these particles with the QGP. With ALICE at the LHC, electrons can be identified with high efficiency and purity. A strong suppression of heavy-flavour decay electrons has been observed at high p_T in Pb-Pb collisions at 2.76 TeV. Measurements in p-Pb collisions are crucial to understand cold nuclear matter effects on heavy-flavour production in heavy-ion collisions. The spectrum of electrons from the decays of hadrons containing charm and beauty was measured in p-Pb collisions at √s_NN = 5.02 TeV. The heavy-flavour decay electrons were measured using the Time Projection Chamber (TPC) and the Electromagnetic Calorimeter (EMCal) of ALICE in the transverse-momentum range 2 < p_T < 20 GeV/c. The measurements were done with two different data sets: minimum-bias collisions and EMCal-triggered data. The background of electrons not originating from heavy flavour was removed using an invariant-mass method. The results are compatible with unity (R_pPb ≈ 1), indicating that cold nuclear matter effects in p-Pb collisions are small for electrons from heavy-flavour hadron decays.

Relevance:

30.00%

Publisher:

Abstract:

This paper tests the existence of ‘reference dependence’ and ‘loss aversion’ in students’ academic performance. Accordingly, achieving a worse than expected academic performance would have a much stronger effect on students’ (dis)satisfaction than obtaining a better than expected grade. Although loss aversion is a well-established finding, some authors have demonstrated that it can be moderated (diminished, to be precise). Within this line of research, we also examine whether the students’ emotional response (satisfaction/dissatisfaction) to their performance can be moderated by different musical stimuli. We design an experiment through which we test loss aversion in students’ performance under three conditions: ‘classical music’, ‘heavy music’ and ‘no music’. The empirical application supports the reference-dependence and loss-aversion hypotheses (significant at p < 0.05), and the musical stimuli do have an influence on the students’ satisfaction with their grades (at p < 0.05). Analyzing students’ perceptions is vital to understanding how they process information. In particular, it is fundamental to identify the elements that can favour not only students’ academic performance but also their attitude towards certain results. This study demonstrates that musical stimuli can modify the perception of a given academic result: the effects of ‘positive’ and ‘negative’ surprises are higher or lower not only as a function of the size of these surprises, but also according to the musical stimulus received.

Relevance:

30.00%

Publisher:

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for the estimation of rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
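
As a concrete, purely illustrative instance of the idea (not the paper's own example), the sketch below estimates a Pareto tail probability: the change of variables Y = α·log X maps the heavy-tailed problem to a light-tailed exponential one, after which a standard exponential change of measure is applied. The Pareto model, the threshold and the tilting rate 1/c are all assumptions of this demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed target: X ~ Pareto(alpha) on [1, inf), P(X > x) = x**-alpha.
alpha, gamma = 2.0, 1e6          # estimate P(X > gamma) = gamma**-alpha = 1e-12

# TLR step: Y = alpha*log(X) ~ Exp(1) is light-tailed, and
# {X > gamma} = {Y > c} with c = alpha*log(gamma).
c = alpha * np.log(gamma)

# Importance sampling on the transformed problem with an exponential
# change of measure; rate 1/c puts the sampling mass near the threshold.
theta = 1.0 / c
n = 100_000
y = rng.exponential(1.0 / theta, size=n)   # draws from Exp(theta)
w = np.exp(-(1.0 - theta) * y) / theta     # likelihood ratio f(y)/g(y)
est = (w * (y > c)).mean()

print(est)   # should be close to gamma**-alpha = 1e-12
```

Plain Monte Carlo would need on the order of 10^14 samples to see this event even once; after the transformation, the tilted estimator reaches a few percent relative error with 10^5 samples.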

Relevance:

30.00%

Publisher:

Abstract:

Mineral processing plants rely on two main processes: comminution and separation. The objective of comminution is to break complex particles consisting of numerous minerals into smaller, simpler particles in which individual particles consist primarily of only one mineral. The process by which the mineral composition distribution in particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing non-valuable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. In order to effectively optimize a circuit through simulation, it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can be estimated using a probability method, the probability being defined as the probability that a feed particle of a given size and composition will form a product particle of a given size and composition. The model is based on maximizing the entropy of this probability subject to mass constraints and a composition constraint. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to real plant ore are presented. A laboratory ball mill was used to break the particles. The results from this experiment were used to estimate the kernel which represents the relationship between parent and progeny particles.
A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground in the same mill. The results from the first experiment were used to predict the product of the second experiment. The agreement between the predicted and actual results is very good. It is nevertheless recommended that more extensive validation be carried out to fully evaluate the substance of the method. (C) 2003 Elsevier Ltd. All rights reserved.
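
To make the entropy-maximization step concrete, here is a deliberately simplified toy (a one-dimensional discretization invented for illustration, not the paper's full size-and-composition kernel): among all distributions over product composition classes that preserve the parent particle's mean composition, the maximum-entropy one has exponential form, with a multiplier fixed by the constraint.

```python
import numpy as np

# Product composition classes (mineral mass fraction) and a parent composition.
c = np.linspace(0.0, 1.0, 11)   # hypothetical discretization
c_parent = 0.30                  # hypothetical parent particle composition

# Maximizing entropy subject to sum(p) = 1 and sum(c*p) = c_parent gives
# p_j proportional to exp(lam * c_j); solve for lam by bisection.
def mean_comp(lam):
    w = np.exp(lam * c)
    return (c * w).sum() / w.sum()

lo, hi = -50.0, 50.0             # mean_comp is increasing in lam
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_comp(mid) < c_parent:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = np.exp(lam * c)
p /= p.sum()
print(p.round(4))                # max-entropy kernel column for this parent
```

The exponential form is the standard Lagrange-multiplier solution for entropy maximization under linear constraints; the full model repeats this with one constraint per conserved quantity and per parent class.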

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of estimating P(Y_1 + ... + Y_n > x) by importance sampling when the Y_i are i.i.d. and heavy-tailed. The idea is to exploit the cross-entropy method as a tool for choosing good parameters in the importance sampling distribution; in doing so, we use the asymptotic description that, given {Y_1 + ... + Y_n > x}, n − 1 of the Y_i have distribution F and one has the conditional distribution of Y given Y > x. We show in some specific parametric examples (Pareto and Weibull) how this leads to precise answers which, as demonstrated numerically, are close to being variance-minimal within the parametric class under consideration. Related problems for M/G/1 and GI/G/1 queues are also discussed.
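
A minimal sketch of this program in the Pareto case (all parameters and sample sizes are illustrative assumptions): a pilot run from a deliberately heavier-tailed Pareto locates the rare event, a single cross-entropy update gives a weighted maximum-likelihood estimate of the tilted Pareto parameter, and a final run returns the importance-sampling estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate P(Y_1 + ... + Y_n > x) for i.i.d. Y_i ~ Pareto(alpha) on [1, inf).
alpha, n, x = 2.0, 5, 1e4        # one-big-jump asymptote: n * x**-alpha = 5e-8

def pareto(beta, size):          # P(Y > y) = y**-beta for y >= 1
    return rng.pareto(beta, size=size) + 1.0

def log_w(y, beta):              # log likelihood ratio f(y)/g_beta(y), per sample
    return n * np.log(alpha / beta) - (alpha - beta) * np.log(y).sum(axis=1)

m = 200_000

# Pilot run from a heavier tail (beta0 < alpha) so the event is hit often.
beta0 = 0.5
y = pareto(beta0, (m, n))
hit = y.sum(axis=1) > x
v = np.exp(log_w(y, beta0)) * hit          # weighted indicator of the event
# Cross-entropy update: weighted MLE of the Pareto parameter on the hits.
beta1 = n * v.sum() / (v * np.log(y).sum(axis=1)).sum()

# Final run with the CE-optimized parameter.
y = pareto(beta1, (m, n))
est = (np.exp(log_w(y, beta1)) * (y.sum(axis=1) > x)).mean()
print(beta1, est)
```

The CE update lands where the asymptotic description quoted above suggests: n − 1 coordinates stay close to the original distribution while one carries the large jump, so the fitted parameter sits well below alpha.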

Relevance:

30.00%

Publisher:

Abstract:

The estimation of P(S_n > u) by simulation, where S_n is the sum of independent, identically distributed random variables Y_1, ..., Y_n, is of importance in many applications. We propose two simulation estimators based upon the identity P(S_n > u) = n P(S_n > u, M_n = Y_n), where M_n = max(Y_1, ..., Y_n). One estimator uses importance sampling (for Y_n only), and the other uses conditional Monte Carlo, conditioning upon Y_1, ..., Y_{n-1}. Properties of the relative error of the estimators are derived, and a numerical study is given in terms of the M/G/1 queue, in which n is replaced by an independent geometric random variable N. The conclusion is that the new estimators compare extremely favorably with previous ones. In particular, the conditional Monte Carlo estimator is the first heavy-tailed example of an estimator with bounded relative error. Further improvements are obtained in the random-N case by incorporating control variates and stratification techniques into the new estimation procedures.
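
The conditional Monte Carlo estimator admits a very compact implementation. Conditioning on Y_1, ..., Y_{n-1}, the event {S_n > u, M_n = Y_n} occurs iff Y_n exceeds both M_{n-1} and u − S_{n-1}, so each sample contributes n·F̄(max(M_{n-1}, u − S_{n-1})). The Pareto summands and the numbers below are an assumed example, not the paper's numerical study.

```python
import numpy as np

rng = np.random.default_rng(2)

# P(S_n > u) for i.i.d. Pareto(alpha) summands via the identity
#   P(S_n > u) = n * P(S_n > u, M_n = Y_n),  M_n = max(Y_1, ..., Y_n).
alpha, n, u = 2.0, 5, 1e4
sf = lambda y: np.minimum(1.0, y ** -alpha)    # survival function P(Y > y)

m = 100_000
y = rng.pareto(alpha, size=(m, n - 1)) + 1.0   # sample only Y_1, ..., Y_{n-1}
s, mx = y.sum(axis=1), y.max(axis=1)
# Given Y_1..Y_{n-1}, the event requires Y_n > max(M_{n-1}, u - S_{n-1}).
z = n * sf(np.maximum(mx, u - s))
est = z.mean()
rel_err = z.std() / (est * np.sqrt(m))
print(est, rel_err)   # est near the asymptote n * u**-alpha = 5e-8
```

Because max(M_{n-1}, u − S_{n-1}) ≥ u/n always holds, each term z is bounded by n·F̄(u/n), which is the mechanism behind the bounded relative error noted in the abstract.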

Relevance:

30.00%

Publisher:

Abstract:

In this thesis work we develop a new generative model of social networks belonging to the family of Time-Varying Networks. Correctly modelling the mechanisms shaping the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks, it is possible to forecast the outcome of an epidemic process, optimize an immunization campaign, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has made it possible to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted a switch of paradigm from a static representation of graphs to a time-varying one. In this work we exploit the activity-driven paradigm (a modeling tool belonging to the family of Time-Varying Networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping social networks' topology and temporal structure: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of the inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets and develop a data-driven model, solving it analytically. We check the results against numerical simulations and test our predictions with real-world datasets, finding good agreement between the two.
Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of Time-Varying Networks. This model is inspired by Kauffman's theory of the adjacent possible and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
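
For context, the baseline mechanism that the thesis's model extends can be sketched in a few lines: in the plain activity-driven model, each node carries a heavy-tailed activity rate, fires in a given time step with probability equal to it, and connects to m partners chosen uniformly at random, with all edges deleted at the next step. Memory (social capital allocation) and burstiness are precisely the ingredients this baseline lacks; all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

N, m, steps = 1000, 2, 200       # nodes, edges per activation, time steps
eps, gamma = 1e-3, 2.1           # activity cutoff and power-law exponent

# Heavy-tailed activities a_i ~ x**-gamma on [eps, 1], via inverse-CDF sampling.
u = rng.random(N)
a = (eps ** (1 - gamma) + u * (1.0 - eps ** (1 - gamma))) ** (1 / (1 - gamma))

contacts = []                                     # (t, i, j) temporal edges
for t in range(steps):
    active = np.flatnonzero(rng.random(N) < a)    # node i fires w.p. a_i
    for i in active:
        for j in rng.choice(N, size=m, replace=False):
            if j != i:                            # discard rare self-links
                contacts.append((t, int(i), int(j)))
print(len(contacts))                              # sparse, instantaneous events
```

From such a contact list one can already measure the time-aggregated degree distribution, which inherits the heavy tail of the activities; the thesis's model replaces the uniform partner choice with a memory kernel and the per-step Bernoulli firing with bursty inter-event times.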