964 results for Minimal Supersymmetric Standard Model (MSSM)
Abstract:
We present the differential rates and branching ratios of the radiative decays τ→lννγ, with l = e or μ, and μ→eννγ in the Standard Model at next-to-leading order. Radiative corrections are computed taking into account the full dependence on the mass m_l of the final charged leptons, which is necessary for the correct determination of the branching ratios. Only partial agreement is found with previous calculations performed in the m_l → 0 limit. Our results agree with the measurements of the branching ratios B(μ→eννγ) and B(τ→μννγ) for a minimum photon energy of 10 MeV in the μ and τ rest frames, respectively. BaBar's recent precise measurement of the branching ratio B(τ→eννγ), for the same photon energy threshold, differs from our prediction by 3.5 standard deviations.
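For readers unfamiliar with the convention, the quoted branching ratios carry an explicit photon-energy cut. A minimal sketch of that definition follows; the 10 MeV threshold is the one stated in the abstract, the rest is standard notation rather than anything taken from the paper.

```latex
% Branching ratio of the radiative decay with a photon-energy threshold,
% evaluated in the rest frame of the decaying lepton (here the tau):
B(\tau \to l\,\nu\,\bar{\nu}\,\gamma;\; E_\gamma > E_{\min})
  = \frac{1}{\Gamma_\tau}\int_{E_{\min}}^{E_\gamma^{\max}}
    \frac{d\Gamma(\tau \to l\,\nu\,\bar{\nu}\,\gamma)}{dE_\gamma}\,dE_\gamma,
\qquad E_{\min} = 10~\mathrm{MeV}.
```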
Abstract:
Very recently, the ATLAS and CMS Collaborations reported diboson and dijet excesses above standard model expectations in the invariant mass region of 1.8–2.0 TeV. Interpreting the diboson excess of events in a model independent fashion suggests that the vector boson pair production searches are best described by WZ or ZZ topologies, because states decaying into W+W− pairs are strongly constrained by semileptonic searches. Under the assumption of a low string scale, we show that both the diboson and dijet excesses can be steered by an anomalous U(1) field with very small coupling to leptons. The Drell–Yan bounds are then readily avoided because of the leptophobic nature of the massive Z′ gauge boson. The non-negligible decay into ZZ required to accommodate the data is a characteristic footprint of intersecting D-brane models, wherein the Landau–Yang theorem can be evaded by anomaly-induced operators involving a longitudinal Z. The model presented herein can be viewed purely field-theoretically, although it is particularly well motivated from string theory. Should the excesses become statistically significant at the LHC13, the associated Zγ topology would become a signature consistent only with a stringy origin.
Abstract:
Purpose: Leadership positions are still stereotyped as male, especially in male-dominated fields such as STEM. Therefore, women in such positions run the risk of being evaluated less favorably than men. Our study investigates how female and male leaders in existing teams (engineering project) are evaluated, and how these evaluations change over time. Design/Methodology: Participants worked in 45 teams to develop specific engineering projects. Evaluations of 45 leaders (33% women) by 258 team members (39% women) were analyzed, that is, leaders' self-evaluations and their evaluations by team members. Results: Although female and male leaders did not differ in their self-evaluations at the beginning of the project, female leaders' self-evaluations became more favorable over time. However, team members evaluated female leaders better than male leaders at the beginning of the project. These gender differences disappeared over time. Limitations: The study should be replicated in a non-student sample. Implications: The results show that female leaders entering a male-dominated field (engineering) are evaluated better by team members than male leaders at the beginning of the team work, in line with the 'shifting standards model' (Biernat & Fuegen, 2001). While the initial impression formation of female and male leaders is influenced by category membership, its impact decreases over time as a consequence of individualization (Fiske & Neuberg, 1990); this results in similar evaluations over time. Originality: To our knowledge, this is the first study to systematically test change over time in the evaluation of female and male leaders in a natural setting.
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have been intrigued for a long time by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
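The one-shot trade failure described for Chapter 1's benchmark can be made concrete with a small numerical illustration. The sketch below uses made-up valuations, costs, and a quality probability (none of which come from the thesis) and only shows why a single take-it-or-leave-it offer cannot profitably target the high-quality good; it does not model the repeated-offer mechanism with time costs that the chapter actually studies.

```python
# Toy one-shot lemons illustration (illustrative parameters, not from the thesis).
# The seller privately knows quality; the buyer makes a single price offer.
Q_HIGH = 0.5                 # assumed probability the good is high quality
V_LOW, V_HIGH = 4.0, 10.0    # assumed buyer valuations
C_LOW, C_HIGH = 2.0, 9.0     # assumed seller costs (reservation prices)

def buyer_expected_payoff(price: float) -> float:
    """Expected buyer payoff when each seller type accepts iff price >= its cost."""
    payoff = 0.0
    if price >= C_LOW:    # low-quality seller accepts
        payoff += (1 - Q_HIGH) * (V_LOW - price)
    if price >= C_HIGH:   # high-quality seller accepts
        payoff += Q_HIGH * (V_HIGH - price)
    return payoff

if __name__ == "__main__":
    for p in (C_LOW, C_HIGH):
        print(f"offer {p:>4}: expected buyer payoff = {buyer_expected_payoff(p):+.2f}")
    # With these numbers the low offer yields +1.00 and the high offer -2.00,
    # so only the low-quality good trades: the mutually beneficial high-quality
    # trade fails, which is the inefficiency the repeated-offer mechanism targets.
```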
Abstract:
The largest uncertainties in the Standard Model calculation of the anomalous magnetic moment of the muon (g − 2)μ come from hadronic contributions. In particular, it can be expected that in a few years the subleading hadronic light-by-light (HLbL) contribution will dominate the theory uncertainty. We present a dispersive description of the HLbL tensor, which is based on unitarity, analyticity, crossing symmetry, and gauge invariance. Such a model-independent approach opens up an avenue towards a data-driven determination of the HLbL contribution to (g − 2)μ.
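As context for the quantity under discussion, the standard bookkeeping is sketched below; this is textbook material rather than anything specific to this paper.

```latex
% Anomalous magnetic moment of the muon and its usual Standard Model decomposition;
% the hadronic light-by-light (HLbL) piece is the one addressed dispersively here.
a_\mu \equiv \frac{g_\mu - 2}{2},
\qquad
a_\mu^{\mathrm{SM}} = a_\mu^{\mathrm{QED}} + a_\mu^{\mathrm{EW}}
                    + a_\mu^{\mathrm{HVP}} + a_\mu^{\mathrm{HLbL}}.
```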
Abstract:
This article gives an overview of the status of experimental searches for dark matter at the end of 2014. The main focus is on direct searches for weakly interacting massive particles (WIMPs) using underground-based low-background detectors, especially on the new results published in 2014. WIMPs are excellent dark matter candidates, predicted by many theories beyond the standard model of particle physics, and are expected to interact with the target nuclei either via spin-independent (scalar) or spin-dependent (axial-vector) couplings. Non-WIMP dark matter candidates, especially axions and axion-like particles, are also briefly discussed.
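As a reminder of why heavy target nuclei are favoured in these searches, the usual zero-momentum-transfer scaling of the spin-independent cross section is sketched below; this is standard WIMP phenomenology assuming equal couplings to protons and neutrons, not a result of this review.

```latex
% Coherent A^2 enhancement of the spin-independent WIMP-nucleus cross section
% (mu_A and mu_p are the WIMP-nucleus and WIMP-nucleon reduced masses):
\sigma_{\mathrm{SI}}^{A} \simeq \sigma_{\mathrm{SI}}^{p}\,
   \frac{\mu_A^2}{\mu_p^2}\, A^2,
\qquad \mu_X = \frac{m_\chi m_X}{m_\chi + m_X}.
```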
Abstract:
The issue of bias-motivated crimes has attracted considerable attention in recent years. In this paper, we develop an economic framework to analyze penalty enhancements for bias-motivated crimes. We extend the standard model by introducing two different groups of potential victims of crime, and assume that a potential offender's benefits from a crime depend on the group to which the victim belongs. We begin with the assumption that the harm to an individual victim from a bias-motivated crime is identical to that from an equivalent non-hate crime. Nonetheless, we derive the result that a pattern of crimes disproportionately targeting an identifiable group leads to greater social harm. This conclusion follows both from a model where disparities in groups' victimization probabilities lead to social losses due to fairness concerns, and from a model where potential victims have the opportunity to undertake socially costly victimization avoidance activities. In particular, penalty enhancements can reduce the incentives for avoidance activity, and thereby protect the networks of profitable interactions that link members of different groups. We also argue that those groups that are covered by hate crime statutes tend to be those whose characteristics make it especially likely that penalty enhancement is socially optimal. Finally, we consider a number of other issues related to hate crimes, including the choice of sanctions from behind a Rawlsian 'veil of ignorance' concerning group identity.
Abstract:
Global, near-surface temperature data sets and their derivations are discussed, and differences between the Jones and Intergovernmental Panel on Climate Change data sets are explained. Global-mean temperature changes are then interpreted in terms of anthropogenic forcing influences and natural variability. The inclusion of aerosol forcing improves the fit between modeled and observed changes but does not improve the agreement between the implied climate sensitivity value and the standard model-based range of 1.5–4.5°C equilibrium warming for a CO2 doubling. The implied sensitivity goes from below the model-based range of estimates to substantially above this range. The addition of a solar forcing effect further improves the fit and brings the best-fit sensitivity into the middle of the model-based range. Consistency is further improved when internally generated changes are considered. This consistency, however, hides many uncertainties that surround observed data/model comparisons. These uncertainties make it impossible currently to use observed global-scale temperature changes to narrow the uncertainty range in the climate sensitivity below that estimated directly from climate models.
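For orientation, the sensitivity being constrained is conventionally tied to radiative forcing through a simple equilibrium energy-balance relation, sketched below; this is a standard textbook approximation, not the statistical fit used in the article, and the commonly quoted doubling forcing of about 3.7 W m^-2 is an assumption here.

```latex
% Equilibrium energy-balance link between a sustained radiative forcing \Delta F
% and warming, expressed through the climate sensitivity S for a CO2 doubling:
\Delta T_{\mathrm{eq}} \approx \frac{S}{F_{2\times}}\,\Delta F,
\qquad F_{2\times} \approx 3.7~\mathrm{W\,m^{-2}},
\qquad S \in [1.5,\,4.5]\,^{\circ}\mathrm{C}\ \text{(the model-based range quoted above)}.
```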
Abstract:
In this article, a new methodology is presented to obtain representation models for an a priori relation z = u(x1, x2, …, xn) (1), given a known experimental dataset {zi; x1i, x2i, …, xni}, i = 1, 2, …, p. In this methodology, a potential energy is initially defined over each possible model for the relationship (1), which allows the application of Lagrangian mechanics to the derived system. The solution of the Euler–Lagrange equations of this system yields the optimal model according to the principle of least action. The defined Lagrangian corresponds to a continuous medium to which an n-dimensional finite element model is applied, so the problem can be solved through a compatible, determined, symmetric linear system of equations. The computational implementation of the methodology improves the process of obtaining the representation models previously published by the authors.
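The general idea of fitting a representation model by minimizing an energy and ending up with a symmetric linear system can be illustrated in one dimension. The sketch below is a generic penalized least-squares fit on a nodal grid (a data-misfit energy plus a smoothness energy), not the authors' formulation; all function names and parameters are illustrative.

```python
import numpy as np

def fit_representation_model(x, z, n_nodes=25, smooth=1e-6):
    """Fit u(x) on a 1-D nodal grid by minimizing
    E(u) = sum_i (u(x_i) - z_i)^2 + smooth * ||D u||^2,
    which leads to a symmetric (positive-definite) linear system K u = b."""
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    h = nodes[1] - nodes[0]

    # Hat-function (piecewise-linear finite element) interpolation matrix A:
    # row i holds the weights that evaluate the nodal model at data point x_i.
    A = np.zeros((len(x), n_nodes))
    idx = np.clip(((x - nodes[0]) / h).astype(int), 0, n_nodes - 2)
    t = (x - nodes[idx]) / h
    A[np.arange(len(x)), idx] = 1.0 - t
    A[np.arange(len(x)), idx + 1] = t

    # Second-difference operator D acts as a discrete smoothness penalty.
    D = (np.diag(np.full(n_nodes, -2.0))
         + np.diag(np.ones(n_nodes - 1), 1)
         + np.diag(np.ones(n_nodes - 1), -1))[1:-1] / h**2

    K = A.T @ A + smooth * (D.T @ D)   # symmetric system matrix
    b = A.T @ z
    u = np.linalg.solve(K, b)          # nodal values of the fitted model
    return nodes, u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 200))
    z = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
    nodes, u = fit_representation_model(x, z)
    print("max |fit - true| at nodes:", np.abs(u - np.sin(2 * np.pi * nodes)).max())
```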
Abstract:
A decade into the European Neighbourhood Policy (ENP), the standard model of business as usual remains. Is there a reluctance to question whether the prevailing development paradigm, based on economic growth, is suitable as a motor for development? Most ENP resources and most tangible results remain within a financial framework, with a concentration on market-driven reforms in relation to economic and social change. On this basis, the current atmosphere represents a historic opportunity for rethinking the EU's development paradigm fostered in the region. Drawing on extensive fieldwork in Morocco and Tunisia, this policy brief highlights the limitations and contradictions of the EU's socio-economic development policies.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
What resources are universal for quantum computation? In the standard model of a quantum computer, a computation consists of a sequence of unitary gates acting coherently on the qubits making up the computer. This requirement for coherent unitary dynamical operations is widely believed to be the critical element of quantum computation. Here we show that a very different model involving only projective measurements and quantum memory is also universal for quantum computation. In particular, no coherent unitary dynamics are involved in the computation. (C) 2003 Elsevier Science B.V. All rights reserved.
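The primitive this abstract relies on, a projective measurement that moves quantum information around without any coherent unitary being applied to the data, can be illustrated with standard state teleportation. The numpy sketch below is only a toy check of that primitive (a Bell-basis projective measurement leaving the third qubit in the input state up to a known Pauli); it is not the paper's universality construction, and all names in it are illustrative.

```python
import numpy as np

# Single-qubit Paulis and a random input state |psi>.
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]], complex)
Z = np.diag([1, -1]).astype(complex); Y = 1j * X @ Z
rng = np.random.default_rng(1)
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)

# Three-qubit state: data qubit |psi> on qubit 1, Bell pair |Phi+> on qubits 2,3.
bell_plus = np.array([1, 0, 0, 1], complex) / np.sqrt(2)
state = np.kron(psi, bell_plus)

# The four Bell states form the projective-measurement basis on qubits 1 and 2.
s = 1 / np.sqrt(2)
bell_basis = {
    "Phi+": s * np.array([1, 0, 0, 1], complex),
    "Phi-": s * np.array([1, 0, 0, -1], complex),
    "Psi+": s * np.array([0, 1, 1, 0], complex),
    "Psi-": s * np.array([0, 1, -1, 0], complex),
}

for name, b in bell_basis.items():
    # Project qubits (1,2) onto |b><b|; the reshape groups qubits (1,2) vs qubit 3.
    amp = state.reshape(4, 2)
    chi = b.conj() @ amp                 # unnormalized post-measurement state of qubit 3
    prob = np.vdot(chi, chi).real
    chi = chi / np.sqrt(prob)
    # Qubit 3 must equal P|psi> (up to phase) for some Pauli P.
    overlaps = {p: abs(np.vdot(M @ psi, chi)) for p, M in
                {"I": I2, "X": X, "Y": Y, "Z": Z}.items()}
    best = max(overlaps, key=overlaps.get)
    print(f"outcome {name}: prob={prob:.2f}, qubit 3 = {best}|psi> "
          f"(overlap {overlaps[best]:.3f})")
```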
Abstract:
The standard model for the migration of the monarch butterfly in western North America has hitherto been movement in the autumn to overwintering sites in coastal California, followed by a return inland by most individuals in the spring. This model is based largely on observational and limited tagging and recovery data. In this paper we test the model by plotting many years of museum and collection records on a monthly basis on a map of the region. Our plots suggest a movement of Oregon, Washington and other north-western populations of summer butterflies to California in the autumn, but movement of more north-easterly populations (e.g. from Idaho and Montana) along two pathways through Nevada, Utah and Arizona to Mexico. The more westerly of these two pathways may follow the Colorado River south as indicated by museum records and seasonal temperature data. The eastern pathway may enter northern Utah along the western scarp of the Wasatch Mountains and run south through Utah and Arizona. Further analysis of distributions suggests that monarch butterflies in the American West occur primarily along rivers, and there are observations indicating that autumn migrants often follow riparian corridors. More data are needed to test our new model; we suggest the nature of the data required. (c) 2005 The Linnean Society of London.
Abstract:
The demand for palliative care is increasing, yet there are few data on the best models of care and few well-validated interventions that translate current evidence into clinical practice. Supporting multidisciplinary patient-centered palliative care while successfully conducting a large clinical trial is a challenge. The Palliative Care Trial (PCT) is a pragmatic 2 x 2 x 2 factorial cluster randomized controlled trial that tests the ability of educational outreach visiting and case conferencing to improve patient-based outcomes such as performance status and pain intensity. Four hundred sixty-one consenting patients and their general practitioners (GPs) were randomized to the following: (1) GP educational outreach visiting versus usual care, (2) structured patient and caregiver educational outreach visiting versus usual care, and (3) a coordinated palliative care model of case conferencing versus the standard model of palliative care in Adelaide, South Australia (3:1 randomization). Main outcome measures included patient functional status over time, pain intensity, and resource utilization. Participants were followed longitudinally until death or November 30, 2004. The interventions are aimed at translating current evidence into clinical practice, and particular attention was paid in the trial's design to addressing common pitfalls for clinical studies in palliative care. Given the need for evidence about optimal interventions and service delivery models that improve the care of people with life-limiting illness, the results of this rigorous, high-quality clinical trial will inform practice. Initial results are expected in mid-2005. (c) 2005 Elsevier Inc. All rights reserved.
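To make the design easier to picture, here is a minimal sketch of how clusters (GPs) could be allocated in a 2 x 2 x 2 factorial scheme with 3:1 randomization on the third factor. It is an illustration of the design described above, not the trial's actual allocation procedure; the cluster count, seed, and the direction of the 3:1 ratio are assumptions.

```python
import random

def allocate_clusters(cluster_ids, seed=2004):
    """Illustrative 2 x 2 x 2 factorial cluster allocation: factors 1 and 2
    are assigned 1:1, factor 3 (case conferencing vs standard palliative care)
    is assigned 3:1.  Real trials would typically use blocked or stratified
    randomization rather than independent coin flips."""
    rng = random.Random(seed)
    allocation = {}
    for cid in cluster_ids:
        gp_outreach = rng.random() < 0.5          # factor 1, 1:1
        patient_outreach = rng.random() < 0.5     # factor 2, 1:1
        case_conferencing = rng.random() < 0.75   # factor 3, 3:1 (direction assumed)
        allocation[cid] = (gp_outreach, patient_outreach, case_conferencing)
    return allocation

if __name__ == "__main__":
    cells = allocate_clusters([f"GP{i:03d}" for i in range(1, 61)])
    n_conf = sum(v[2] for v in cells.values())
    print(f"{n_conf}/{len(cells)} clusters allocated to case conferencing")
```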