928 results for heavy-quark effective theory
Abstract:
Heavy-ion collisions are a powerful tool to study hot and dense QCD matter, the so-called Quark-Gluon Plasma (QGP). Since heavy quarks (charm and beauty) are dominantly produced in the early stages of the collision, they experience the complete evolution of the system. Measuring electrons from heavy-flavour hadron decays is one possible way to study the interaction of these particles with the QGP. With ALICE at the LHC, electrons can be identified with high efficiency and purity. A strong suppression of heavy-flavour decay electrons has been observed at high $p_{\mathrm{T}}$ in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV. Measurements in p-Pb collisions are crucial to understand cold-nuclear-matter effects on heavy-flavour production in heavy-ion collisions. The spectrum of electrons from the decays of hadrons containing charm and beauty was measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV. The heavy-flavour decay electrons were measured using the Time Projection Chamber (TPC) and the Electromagnetic Calorimeter (EMCal) of ALICE in the transverse-momentum range $2 < p_{\mathrm{T}} < 20$ GeV/c. The measurements were done on two different data sets: minimum-bias collisions and EMCal-triggered data. The non-heavy-flavour electron background was removed using an invariant-mass method. The results are compatible with unity ($R_{\mathrm{pPb}} \approx 1$), indicating that cold-nuclear-matter effects in p-Pb collisions are small for electrons from heavy-flavour hadron decays.
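The invariant-mass rejection of the photonic background can be sketched as follows (a toy illustration; the cut value and momenta are made up and this is not the ALICE analysis code):

```python
# Minimal sketch of the invariant-mass method for tagging photonic electrons
# (conversions and pi0 Dalitz decays); cut value and four-vectors illustrative.
import numpy as np

M_E = 0.000511  # electron mass in GeV/c^2

def invariant_mass(p1, p2):
    """Invariant mass of an e+e- pair from two momentum 3-vectors (GeV/c)."""
    e1 = np.sqrt(np.dot(p1, p1) + M_E**2)
    e2 = np.sqrt(np.dot(p2, p2) + M_E**2)
    p = p1 + p2
    m2 = (e1 + e2) ** 2 - np.dot(p, p)
    return np.sqrt(max(m2, 0.0))

def is_photonic(electron, partners, m_cut=0.14):
    """Tag an electron as photonic if any unlike-sign partner gives a
    pair mass below the cut (GeV/c^2, an assumed typical value)."""
    return any(invariant_mass(electron, q) < m_cut for q in partners)

# Toy usage: one electron candidate and two opposite-charge partner tracks.
ele = np.array([1.2, 0.3, 0.1])
partners = [np.array([1.1, 0.35, 0.08]), np.array([-2.0, 1.0, 0.5])]
print(is_photonic(ele, partners))
```

Electrons surviving the veto are attributed to heavy-flavour decays after the remaining background subtraction.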
Abstract:
A density-functional theory of ferromagnetism in heterostructures of compound semiconductors doped with magnetic impurities is presented. The variable functions in the density-functional theory are the charge and spin densities of the itinerant carriers and the charge and localized-spin densities of the impurities. The theory is applied to study the Curie temperature of planar heterostructures of III-V semiconductors doped with manganese atoms. The mean-field, virtual-crystal, and effective-mass approximations are adopted to calculate the electronic structure, including the spin-orbit interaction, and the magnetic susceptibilities, which yield the Curie temperature. With these results, we attempt to understand how the observed Curie temperature of planar δ-doped ferromagnetic structures depends on variations of their properties. We predict a large increase of the Curie temperature when the holes in a Mn δ-doped layer are additionally confined by a quantum well.
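As a schematic orientation (not the paper's full density-functional result), the mean-field Curie temperature follows from the divergence of the coupled impurity-carrier response,

$$k_B T_C = \frac{S(S+1)}{3}\, n_{\mathrm{Mn}}\, J_{pd}^{2}\, \chi_c,$$

where $n_{\mathrm{Mn}}$ is the density of magnetic impurities of spin $S$, $J_{pd}$ is the carrier-impurity exchange coupling, and $\chi_c$ is the itinerant-carrier spin susceptibility; confining the holes at the Mn layer enhances $\chi_c$ there and hence $T_C$.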
Abstract:
We propose cotunneling as the microscopic mechanism that makes possible inelastic electron tunneling spectroscopy of magnetic atoms on surfaces for a wide range of systems, including single magnetic adatoms, molecules, and molecular stacks. We describe electronic transport between the scanning tip and the conducting surface through the magnetic system (MS) with a generalized Anderson model, without making use of effective spin models. Transport and spin dynamics are described with an effective cotunneling Hamiltonian in which the correlations in the magnetic system are calculated exactly and the coupling to the electrodes is included up to second order in the tip-MS and MS-substrate couplings. In the appropriate limit our approach is equivalent to the phenomenological Kondo exchange model that successfully describes the experiments. We apply our method to study in detail inelastic transport in two systems: stacks of cobalt phthalocyanines and a single Mn atom on Cu2N. Our method accounts for both the large contribution of the inelastic spin-exchange events to the conductance and the observed conductance asymmetry.
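For orientation, the second-order (cotunneling) limit invoked here is of Schrieffer-Wolff type; schematically, integrating out the charged states of the magnetic system yields an exchange coupling of the form

$$H_{\mathrm{eff}} \simeq \sum_{\alpha\beta} J_{\alpha\beta}\, \vec{S}\cdot c^{\dagger}_{\alpha}\,\frac{\vec{\sigma}}{2}\,c_{\beta}, \qquad J_{\alpha\beta} \sim t_{\alpha}\, t_{\beta} \left(\frac{1}{E_{N+1}-E_{N}} + \frac{1}{E_{N}-E_{N-1}}\right),$$

where $t_{\alpha}$ are the tip/substrate tunneling amplitudes and $E_{N\pm 1}-E_{N}$ are the charge addition/removal energies of the MS (the notation here is illustrative, not the paper's).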
Abstract:
This paper tests the existence of ‘reference dependence’ and ‘loss aversion’ in students’ academic performance. Accordingly, achieving a worse-than-expected academic performance would have a much stronger effect on students’ (dis)satisfaction than obtaining a better-than-expected grade. Although loss aversion is a well-established finding, some authors have demonstrated that it can be moderated (diminished, to be precise). Within this line of research, we also examine whether the students’ emotional response (satisfaction/dissatisfaction) to their performance can be moderated by different musical stimuli. We design an experiment through which we test loss aversion in students’ performance under three conditions: ‘classical music’, ‘heavy music’, and ‘no music’. The empirical application supports the reference-dependence and loss-aversion hypotheses (significant at p < 0.05), and the musical stimuli do have an influence on the students’ satisfaction with their grades (at p < 0.05). Analyzing students’ perceptions is vital to understanding how they process information. In particular, it is fundamental to know which elements can favour not only students’ academic performance but also their attitude towards certain results. This study demonstrates that musical stimuli can modify the perception of a given academic result: the effects of ‘positive’ and ‘negative’ surprises are larger or smaller not only as a function of the size of these surprises, but also according to the musical stimulus received.
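Reference dependence and loss aversion are conventionally formalized with the Kahneman-Tversky value function over deviations $x$ of the obtained grade from the expected one,

$$v(x) = \begin{cases} x^{\alpha}, & x \ge 0, \\ -\lambda\,(-x)^{\beta}, & x < 0, \end{cases} \qquad \lambda > 1,$$

where $\lambda > 1$ makes a worse-than-expected grade weigh more than an equally sized positive surprise; in these terms, the moderation hypothesis is that the musical condition shifts the effective $\lambda$.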
Abstract:
A united-atom force field is empirically derived by minimizing the difference between experimental and simulated crystal cells and melting temperatures for eight compounds representative of organic electronic materials used in OLEDs and other devices: biphenyl, carbazole, fluorene, 1,3-bis(N-carbazolyl)benzene (mCP, i.e. 9,9′-(1,3-phenylene)bis(9H-carbazole)), 4,4′-bis(N-carbazolyl)-1,1′-biphenyl (pCBP), phenazine, phenylcarbazole, and triphenylamine. The force field is verified against dispersion-corrected DFT calculations and shown to also successfully reproduce the crystal structure of two larger compounds employed as hosts in phosphorescent and thermally activated delayed fluorescence OLEDs: N,N′-di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine (NPD) and 1,3,5-tris(1-phenyl-1H-benzo[d]imidazol-2-yl)benzene (TPBI). The good performance of the force field, coupled with the large computational savings granted by the united-atom approximation, makes it an ideal choice for simulating the morphology of emissive layers of OLED materials in crystalline or glassy phases.
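For context, a united-atom force field of this kind typically has the standard functional form (shown schematically; the paper's specific parameterization is not reproduced here)

$$U = \sum_{\mathrm{bonds}} k_b (r-r_0)^2 + \sum_{\mathrm{angles}} k_\theta (\theta-\theta_0)^2 + \sum_{\mathrm{dihedrals}} \sum_n \frac{V_n}{2}\bigl[1+\cos(n\phi-\gamma)\bigr] + \sum_{i<j} 4\epsilon_{ij}\Bigl[\Bigl(\tfrac{\sigma_{ij}}{r_{ij}}\Bigr)^{12} - \Bigl(\tfrac{\sigma_{ij}}{r_{ij}}\Bigr)^{6}\Bigr] + \sum_{i<j}\frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}},$$

with aromatic CH groups collapsed into single interaction sites, so that the empirical fit to crystal cells and melting points mainly adjusts the nonbonded $\epsilon_{ij}$, $\sigma_{ij}$ and torsional parameters.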
Abstract:
Although the recycling of municipal wastewater can play an important role in water supply security and ecosystem protection, the percentage of wastewater recycled is generally low and strikingly variable. Previous research has employed detailed case studies to examine the factors that contribute to recycling success but usually lacks a comparative perspective across cases. In this study, 25 water utilities in New South Wales, Australia, were compared using fuzzy-set Qualitative Comparative Analysis (fsQCA). This research method applies binary logic and set theory to identify the minimal combinations of conditions that are necessary and/or sufficient for an outcome to occur within the set of cases analyzed. The influence of six factors (rainfall, population density, coastal or inland location, proximity to users, cost recovery, and revenue for water supply services) was examined for two outcomes: agricultural use and "heavy" (i.e., commercial/municipal/industrial) use. Each outcome was explained by two different pathways, illustrating that different combinations of conditions can be associated with the same outcome. Generally, while economic factors are crucial for heavy use, factors relating to water stress and geographical proximity matter most for agricultural reuse. These results suggest that policies to promote wastewater reuse may be most effective if they target uses that are most feasible for utilities and correspond to the local context. This work also makes a methodological contribution by illustrating the potential utility of fsQCA for understanding the complex drivers of performance in water recycling.
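In fsQCA, the sufficiency of a condition (or combination of conditions) X for an outcome Y is assessed with the standard consistency and coverage measures over the fuzzy membership scores $x_i, y_i \in [0,1]$ of the cases,

$$\mathrm{consistency}(X \Rightarrow Y) = \frac{\sum_i \min(x_i, y_i)}{\sum_i x_i}, \qquad \mathrm{coverage}(X \Rightarrow Y) = \frac{\sum_i \min(x_i, y_i)}{\sum_i y_i},$$

with a pathway retained when its consistency exceeds a chosen threshold (conventionally around 0.8).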
Abstract:
Includes bibliographical references (p. 58-59)
Abstract:
"Supported in part by the National Science Foundation under grant no. NSF GJ 28289."
Abstract:
Thesis (D.M.A.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood-ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
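As a minimal sketch of the idea on a toy Pareto example (illustrative parameters, without the paper's cross-entropy optimization): if $P(X > x) = x^{-\alpha}$ for $x \ge 1$, the change of variables $X = e^{Y/\alpha}$ makes $Y$ a unit exponential, so the heavy-tailed event $\{X > \gamma\}$ becomes the light-tailed event $\{Y > \alpha \ln \gamma\}$, which a standard exponential change of measure handles:

```python
# Toy transform likelihood ratio (TLR) sketch: estimate P(X > gamma) for a
# Pareto(alpha) variable by mapping it to an exponential and applying
# importance sampling with an exponential change of measure.
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma, n = 1.5, 1e6, 100_000
c = alpha * np.log(gamma)          # transformed threshold: {X > gamma} = {Y > c}

# Importance density: Exp(rate=theta) with mean c, so the event is not rare.
theta = 1.0 / c
y = rng.exponential(scale=1.0 / theta, size=n)

# Likelihood ratio f(y)/g(y) for nominal f = Exp(1) and sampling g = Exp(theta).
w = np.exp(-(1.0 - theta) * y) / theta
est = np.mean((y > c) * w)

print(f"TLR estimate : {est:.3e}")
print(f"exact value  : {gamma ** -alpha:.3e}")  # P(X > gamma) = gamma^(-alpha)
```

Here a fixed tilt with mean equal to the threshold stands in for the cross-entropy-optimized parameter vector described in the abstract.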
Abstract:
Mineral processing plants use two main processes: comminution and separation. The objective of comminution is to break complex particles consisting of numerous minerals into smaller, simpler particles in which each particle consists primarily of only one mineral. The process in which the mineral composition distribution of the particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing non-valuable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. To effectively optimize a circuit through simulation, it is necessary to predict how the mineral composition distributions change due to comminution; such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can instead be estimated using a probabilistic method, with the kernel defined as the probability that a feed particle of a particular size and composition will form a product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to a real plant ore are presented. A laboratory ball mill was used to break particles, and the results of this experiment were used to estimate the kernel that relates parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill, and the results of the first experiment were used to predict the product of the second. The agreement between the predicted and actual results is very good; more extensive validation is nonetheless recommended to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
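Schematically (in simplified notation, not the paper's), the kernel $p(g' \mid g)$, the probability that a feed particle of grade $g$ yields a progeny particle of grade $g'$, is the maximizer of the entropy subject to normalization and a mean-composition constraint,

$$\max_{p}\; -\sum_{g'} p(g' \mid g)\,\ln p(g' \mid g) \quad \text{s.t.} \quad \sum_{g'} p(g' \mid g) = 1, \qquad \sum_{g'} g'\,p(g' \mid g) = g,$$

whose Lagrangian solution is the exponential family $p(g' \mid g) \propto e^{-\lambda(g)\,g'}$, with the multiplier $\lambda(g)$ fixed by the mean-grade constraint.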
Abstract:
The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end-user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end-user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics, and each of the three provided excellent predictions of end-user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end-user query complexity.
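To illustrate the kind of measurement involved (a deliberately simplified tokenizer; the paper's exact operator/operand classification for SQL may differ), Halstead's metrics are built from counts of distinct and total operators and operands:

```python
# Rough Halstead metrics for a SQL query: classify tokens as operators
# (keywords, symbols) or operands (identifiers, literals), then compute
# vocabulary n, length N, and volume V = N * log2(n).
import math
import re

SQL_KEYWORDS = {"SELECT", "FROM", "WHERE", "JOIN", "ON", "AND", "OR",
                "GROUP", "BY", "ORDER", "HAVING", "AS", "IN"}

def halstead(query: str):
    tokens = re.findall(r"[A-Za-z_]\w*|\d+(?:\.\d+)?|[<>=!]+|[(),.*+-/]", query)
    operators, operands = [], []
    for tok in tokens:
        if tok.upper() in SQL_KEYWORDS or not re.match(r"[A-Za-z_0-9]", tok):
            operators.append(tok.upper())
        else:
            operands.append(tok)
    n1, n2 = len(set(operators)), len(set(operands))   # distinct counts
    N1, N2 = len(operators), len(operands)             # total counts
    n, N = n1 + n2, N1 + N2
    volume = N * math.log2(n) if n > 1 else 0.0
    return {"vocabulary": n, "length": N, "volume": volume}

print(halstead("SELECT name, salary FROM emp WHERE dept = 10 AND salary > 50000"))
```

A schema whose representative queries have lower average volume would, on this theory, be the more desirable one for end users.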
Abstract:
Recent work on the numerical solution of stochastic differential equations (SDEs) has focused on the development of numerical methods with good stability and order properties. These implementations have used a fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. In the deterministic case it has been necessary to consider the best choice of an initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be done in the stochastic case. In this paper, proportional-integral (PI) control is applied to a variable-stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable-stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers, viewed from a digital-filter-theory standpoint, to PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained using these control-theory approaches compared with the regular stepsize-change strategy. (C) 2004 Elsevier B.V. All rights reserved.
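The PI stepsize rule referred to here has a standard form; a minimal sketch with generic embedded-pair error estimates and illustrative gains (the function and variable names are assumptions, not the paper's code):

```python
# Minimal PI stepsize controller of the kind used with embedded pairs:
# the new step depends on the current error ratio (integral part) and on
# the trend between consecutive errors (proportional part).
# Illustrative gains; kI = 0.3/p, kP = 0.4/p with p the method order is common.

def pi_step(h, err, err_prev, tol, p, kI=0.3, kP=0.4, fac_min=0.2, fac_max=5.0):
    """Return (accepted, h_new) given the current local error estimate."""
    factor = (tol / err) ** (kI / p) * (err_prev / err) ** (kP / p)
    factor = min(fac_max, max(fac_min, 0.9 * factor))  # safety factor + clamping
    return err <= tol, h * factor

# Toy usage inside an integration-loop skeleton:
h, err_prev, tol, order = 1e-2, 1e-6, 1e-6, 2
for err in [5e-7, 2e-6, 8e-7]:          # stand-ins for embedded-pair estimates
    accepted, h = pi_step(h, err, err_prev, tol, order)
    if accepted:
        err_prev = err                  # advance the error history only on accept
    print(accepted, h)
```

The PID variant adds a third factor in the previous-to-previous error, damping stepsize oscillations further.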
Abstract:
Density functional theory (DFT) is a powerful approach to electronic-structure calculations in extended systems, but it currently suffers from inadequate incorporation of long-range dispersion, or Van der Waals (VdW), interactions. VdW-corrected DFT is tested for interactions involving molecular hydrogen, graphite, single-walled carbon nanotubes (SWCNTs), and SWCNT bundles. The energy correction, based on an empirical London dispersion term with a damping function at short range, yields a reasonable physisorption energy and equilibrium distance for H₂ on a model graphite surface. The VdW-corrected DFT calculation for an (8,8) nanotube bundle accurately reproduces the experimental lattice constant. For H₂ inside and outside an (8,8) SWCNT, we find binding energies respectively higher and lower than that on a graphite surface, correctly predicting the well-known curvature effect. We conclude that the VdW correction is a very effective way of augmenting DFT calculations, allowing a reliable description of both short-range chemical bonding and long-range dispersive interactions. The method will find powerful applications in areas of SWCNT research where empirical potential functions either have not been developed or do not capture the necessary range of both dispersion and bonding interactions.
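The correction described is of the familiar damped-London (DFT-D-style) form; a minimal sketch with made-up pair parameters (all values illustrative, not those fitted in the paper):

```python
# Damped London dispersion correction of the DFT-D type:
#   E_disp = -sum_{i<j} s6 * f_damp(r_ij) * C6_ij / r_ij^6,
# with a Fermi damping function switching the term off at bonding distances.
import numpy as np

def e_dispersion(coords, c6, r0, s6=1.0, d=20.0):
    """coords: (N,3) positions (Angstrom); c6[i][j], r0[i][j]: pair parameters."""
    n = len(coords)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0[i][j] - 1.0)))
            e -= s6 * f_damp * c6[i][j] / r ** 6
    return e

# Toy usage: two carbon-like sites 3.4 Angstrom apart (illustrative parameters).
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.4]])
c6 = [[0.0, 30.0], [30.0, 0.0]]   # C6 coefficients (energy * Angstrom^6), made up
r0 = [[0.0, 3.0], [3.0, 0.0]]     # sums of vdW radii (Angstrom), made up
print(e_dispersion(coords, c6, r0))
```

The damping keeps the correction from double-counting short-range binding already described by the functional, which is what allows both bonding and dispersion regimes to be treated consistently.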