38 results for Eigenvalue Bounds
Abstract:
A systematic analysis of New Physics impacts on the rare decays KL→π0ℓ+ℓ− (ℓ = e, μ) is performed. Thanks to their different sensitivities to flavor-changing local effective interactions, these two modes could provide valuable information on the nature of the possible New Physics at play. In particular, a combined measurement of both modes could disentangle scalar/pseudoscalar from vector or axial-vector contributions. For the latter, model-independent bounds are derived. Finally, the KL→π0μ+μ− forward-backward CP-asymmetry is considered and shown to give interesting complementary information.
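For orientation only (not taken from the abstract, and the precise convention used for KL→π0μ+μ− is an assumption here), a forward-backward asymmetry of this kind is generically defined with respect to an angle θ, e.g. between the μ− momentum and the π0 momentum in the dimuon rest frame:
\[
A_{\mathrm{FB}} \;=\; \frac{\Gamma(\cos\theta>0)-\Gamma(\cos\theta<0)}{\Gamma(\cos\theta>0)+\Gamma(\cos\theta<0)} .
\]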
Abstract:
An important problem in unsupervised data clustering is how to determine the number of clusters. Here we investigate how this can be achieved in an automated way by using interrelation matrices of multivariate time series. Two nonparametric and purely data-driven algorithms are expounded and compared. The first exploits the eigenvalue spectra of surrogate data, while the second employs the eigenvector components of the interrelation matrix. Compared to the first algorithm, the second approach is computationally faster and not limited to linear interrelation measures.
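As an illustration of the first (surrogate-spectrum) idea, the following sketch counts eigenvalues of the correlation matrix that exceed what time-shuffled surrogates can produce. The shuffling scheme, the use of Pearson correlation, and all parameter values are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def estimate_n_clusters(X, n_surrogates=200, quantile=0.95, rng=None):
    """Estimate the number of clusters in multivariate time series X
    (shape: n_series x n_samples) by comparing the eigenvalue spectrum of
    its correlation matrix with the spectra of shuffled surrogates."""
    rng = np.random.default_rng(rng)
    eigvals = np.linalg.eigvalsh(np.corrcoef(X))

    # Surrogates: shuffle each series independently in time, destroying
    # cross-correlations while keeping each series' amplitude distribution.
    max_surr = np.empty(n_surrogates)
    for s in range(n_surrogates):
        Xs = np.array([rng.permutation(row) for row in X])
        max_surr[s] = np.linalg.eigvalsh(np.corrcoef(Xs)).max()

    threshold = np.quantile(max_surr, quantile)
    # Eigenvalues exceeding what uncorrelated surrogates can produce are
    # taken to indicate genuine cluster structure.
    return int(np.sum(eigvals > threshold))

# Toy usage: two groups of correlated series plus one independent series.
rng = np.random.default_rng(0)
base1, base2 = rng.normal(size=(2, 500))
X = np.vstack([base1 + 0.3 * rng.normal(size=500) for _ in range(4)]
              + [base2 + 0.3 * rng.normal(size=500) for _ in range(4)]
              + [rng.normal(size=500)])
print(estimate_n_clusters(X))  # expected to report 2 clusters
```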
Abstract:
Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing out too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. In this paper we describe the technical details of our approach, and we present benchmarks that demonstrate that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly smaller than with other approaches.
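A toy Python sketch of the central idea only, far removed from the paper's virtual-machine-level implementation: history is stored alongside the object itself, so when the object becomes unreachable its recorded past states become collectable together with it.

```python
class Tracked:
    """Hypothetical wrapper: keeps past states next to the live object."""

    def __init__(self, value):
        self._history = []   # past states live in the same object graph
        self._value = value

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._history.append(self._value)  # remember the previous state
        self._value = new_value

    def past_states(self):
        return list(self._history)

account = Tracked(100)
account.value = 75
account.value = 40
print(account.past_states())   # [100, 75] -- inspectable "back in time"
del account                    # object and its history are reclaimed together
```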
Abstract:
The execution of a project requires resources that are generally scarce. Classical approaches to resource allocation assume that the usage of these resources by an individual project activity is constant during the execution of that activity; in practice, however, the project manager may vary resource usage over time within prescribed bounds. This variation gives rise to the project scheduling problem, which consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated to each activity equals its prescribed work content, and various work-content-related constraints are met. We formulate this problem for the first time as a mixed-integer linear program. Our computational results for a standard test set from the literature indicate that this model outperforms the state-of-the-art solution methods for this problem.
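A minimal sketch of this problem class as a small mixed-integer program using the PuLP library; the time discretization, the omission of precedence and non-preemption constraints, and all data are illustrative assumptions, not the paper's formulation.

```python
import pulp

T = 8                                   # number of discrete periods (assumption)
activities = ["A", "B"]
work = {"A": 6.0, "B": 4.0}             # prescribed work content (resource-periods)
lo = {"A": 1.0, "B": 1.0}               # per-period usage lower bound while active
hi = {"A": 3.0, "B": 2.0}               # per-period usage upper bound while active
capacity = 4.0                          # resource units available per period

m = pulp.LpProblem("flexible_resource_profile", pulp.LpMinimize)

# y[j, t] = 1 if activity j is in progress in period t
y = {(j, t): pulp.LpVariable(f"y_{j}_{t}", cat="Binary")
     for j in activities for t in range(T)}
# r[j, t] = resource usage of activity j in period t (varies within [lo, hi])
r = {(j, t): pulp.LpVariable(f"r_{j}_{t}", lowBound=0)
     for j in activities for t in range(T)}
makespan = pulp.LpVariable("makespan", lowBound=0)

m += makespan                           # objective: minimize project duration

for j in activities:
    # total allocated resource must equal the prescribed work content
    m += pulp.lpSum(r[j, t] for t in range(T)) == work[j]
    for t in range(T):
        m += r[j, t] >= lo[j] * y[j, t]          # usage within bounds when active
        m += r[j, t] <= hi[j] * y[j, t]
        m += makespan >= (t + 1) * y[j, t]       # makespan covers every active period

for t in range(T):
    m += pulp.lpSum(r[j, t] for j in activities) <= capacity  # resource capacity

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("project duration:", pulp.value(makespan))
```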
Abstract:
In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝd, d = 2, 3. For the a posteriori analysis, computable upper and lower bounds on the error are derived in terms of a natural energy norm, which are explicit in the local mesh size and local polynomial degree of the approximating finite element method. A series of numerical experiments illustrates the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
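Schematically, "computable upper and lower bounds in a natural energy norm" refers to an a posteriori indicator η that is both reliable and efficient. The generic template below is a sketch of that notion under the usual assumptions (up to data-oscillation terms), not the paper's theorem:
\[
c\,\eta \;\le\; \|u - u_{hp}\|_{E} \;\le\; C\,\eta,
\]
with constants c, C independent of the local mesh size; their dependence on the local polynomial degree is a technical point such analyses must address.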
Abstract:
We introduce and analyze hp-version discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems in three-dimensional polyhedral domains. To resolve possible corner-, edge- and corner-edge singularities, we consider hexahedral meshes that are geometrically and anisotropically refined toward the corresponding neighborhoods. Similarly, the local polynomial degrees are increased linearly and possibly anisotropically away from singularities. We design interior penalty hp-dG methods and prove that they are well-defined for problems with singular solutions and stable under the proposed hp-refinements. We establish (abstract) error bounds that will allow us to prove exponential rates of convergence in the second part of this work.
Abstract:
In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tanβ (the ratio of vacuum expectation values) and the “nonholomorphic” Yukawa couplings ϵfij (f = u, d, ℓ). In our analysis we constrain the elements ϵfij in various ways: In a first step we give order of magnitude constraints on ϵfij from ’t Hooft’s naturalness criterion, finding that all ϵfij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (Bs,d→μ+μ−, KL→μ+μ−, D̄0→μ+μ−, ΔF=2 processes, τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ϵu32,31 and ϵu23,13, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes [b→s(d)γ, Bs,d mixing, K−K̄ mixing and μ→eγ], finding that also ϵu13 and ϵu23 must be very small, while the bounds on ϵu31 and ϵu32 are especially weak. Furthermore, considering the constraints from electric dipole moments we obtain constraints on some parameters ϵu,ℓij. Taking into account the constraints from FCNC processes we study the size of possible effects in the tauonic B decays (B→τν, B→Dτν and B→D∗τν) as well as in D(s)→τν, D(s)→μν, K(π)→eν, K(π)→μν and τ→K(π)ν, which are all sensitive to tree-level charged Higgs exchange. Interestingly, the unconstrained ϵu32,31 are just the elements which directly enter the branching ratios for B→τν, B→Dτν and B→D∗τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning. Furthermore, B→τν, B→Dτν and B→D∗τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton flavor-violating neutral B meson decays (Bs,d→μe, Bs,d→τe and Bs,d→τμ) and correlate the radiative lepton decays (τ→μγ, τ→eγ and μ→eγ) to the corresponding neutral current lepton decays (τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−). A detailed Appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.
Abstract:
A generic search for anomalous production of events with at least three charged leptons is presented. The search uses a pp-collision data sample at a center-of-mass energy of √s = 7 TeV corresponding to 4.6 fb⁻¹ of integrated luminosity collected in 2011 by the ATLAS detector at the CERN Large Hadron Collider. Events are required to contain at least two electrons or muons, while the third lepton may either be an additional electron or muon, or a hadronically decaying tau lepton. Events are categorized by the presence or absence of a reconstructed tau-lepton or Z-boson candidate decaying to leptons. No significant excess above backgrounds expected from Standard Model processes is observed. Results are presented as upper limits on event yields from non-Standard-Model processes producing at least three prompt, isolated leptons, given as functions of lower bounds on several kinematic variables. Fiducial efficiencies for model testing are also provided. The use of the results is illustrated by setting upper limits on the production of doubly charged Higgs bosons decaying to same-sign lepton pairs.
Abstract:
The ATLAS detector at the Large Hadron Collider is used to search for excited electrons and excited muons in the channel pp → ℓℓ* → ℓℓγ, assuming that excited leptons are produced via contact interactions. The analysis is based on 13 fb⁻¹ of pp collisions at a centre-of-mass energy of 8 TeV. No evidence for excited leptons is found, and a limit is set at the 95% credibility level on the cross section times branching ratio as a function of the excited-lepton mass mℓ*. For mℓ* ≥ 0.8 TeV, the respective upper limits on σB(ℓ* → ℓγ) are 0.75 and 0.90 fb for the e* and μ* searches. Limits on σB are converted into lower bounds on the compositeness scale Λ. In the special case where Λ = mℓ*, excited-electron and excited-muon masses below 2.2 TeV are excluded.
Abstract:
Compared to μ→eγ and μ→eee, the process μ→e conversion in nuclei receives enhanced contributions from Higgs-induced lepton flavor violation. Upcoming μ→e conversion experiments with drastically increased sensitivity will be able to put extremely stringent bounds on Higgs-mediated μ→e transitions. We point out that the theoretical uncertainties associated with these Higgs effects, encoded in the couplings of quark scalar operators to the nucleon, can be accurately assessed using our recently developed approach based on SU(2) chiral perturbation theory that cleanly separates two- and three-flavor observables. We emphasize that with input from lattice QCD for the coupling to strangeness fNs, hadronic uncertainties are appreciably reduced compared to the traditional approach where fNs is determined from the pion-nucleon σ term by means of an SU(3) relation. We illustrate this point by considering Higgs-mediated lepton flavor violation in the standard model supplemented with higher-dimensional operators, the two-Higgs-doublet model with generic Yukawa couplings, and the minimal supersymmetric standard model. Furthermore, we compare bounds from present and future μ→e conversion and μ→eγ experiments.
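The nucleon scalar couplings referred to here are conventionally defined as follows (standard notation, assumed rather than quoted from the paper):
\[
f_q^N \;\equiv\; \frac{\langle N|\, m_q\, \bar q q\, |N\rangle}{m_N}, \qquad q = u, d, s,
\]
so that fNs measures how strongly a Higgs-mediated scalar interaction with strange quarks couples to the nucleon in μ→e conversion.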
Abstract:
We consider an effective field theory for a gauge singlet Dirac dark matter particle interacting with the standard model fields via effective operators suppressed by the scale Λ≳1 TeV. We perform a systematic analysis of the leading loop contributions to spin-independent Dirac dark matter–nucleon scattering using renormalization group evolution between Λ and the low-energy scale probed by direct detection experiments. We find that electroweak interactions induce operator mixings such that operators that are naively velocity suppressed and spin dependent can actually contribute to spin-independent scattering. This allows us to put novel constraints on Wilson coefficients that were so far poorly bounded by direct detection. Constraints from current searches are already significantly stronger than LHC bounds, and will improve in the near future. Interestingly, the loop contribution we find is isospin violating even if the underlying theory is isospin conserving.
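The operator mixing under renormalization-group evolution can be summarized, at leading order and in a generic convention (an assumption, since the abstract does not fix one), by
\[
\mu \frac{d C_i(\mu)}{d\mu} = \frac{1}{(4\pi)^2}\,\gamma_{ji}\, C_j(\mu)
\quad\Longrightarrow\quad
C_i(\mu_{\text{low}}) \;\simeq\; C_i(\Lambda) + \frac{\gamma_{ji}}{(4\pi)^2}\, C_j(\Lambda)\, \ln\frac{\mu_{\text{low}}}{\Lambda},
\]
so a nonzero off-diagonal entry γ_{ji} lets an operator that is velocity suppressed or spin dependent at the scale Λ feed into a spin-independent operator at the scale probed by direct detection.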
Abstract:
In this article we calculate the one-loop supersymmetric QCD (SQCD) corrections to the decay u˜1→cχ˜01 in the minimal supersymmetric standard model with generic flavor structure. This decay mode is phenomenologically important if the mass difference between the lightest squark u˜1 (which is assumed to be mainly stoplike) and the neutralino lightest supersymmetric particle χ˜01 is smaller than the top mass. In such a scenario, u˜1→tχ˜01 is kinematically forbidden and searches for u˜1→Wbχ˜01 and u˜1→cχ˜01 are performed. A large decay rate for u˜1→cχ˜01 can weaken the LHC bounds from u˜1→Wbχ˜01, which are usually obtained under the assumption Br[u˜1→Wbχ˜01]=100%. We find that the SQCD corrections enhance Γ[u˜1→cχ˜01] by approximately 10% if the flavor violation originates from bilinear terms. If the flavor violation originates from trilinear terms, the effect can be ±50% or more, depending on the sign of At. We note that, when connecting a theory of supersymmetry breaking to LHC observables, the shift from the DR̄ mass to the on-shell mass is numerically very important for light stop decays.
Abstract:
In these proceedings we review the flavour phenomenology of 2HDMs with generic Yukawa structures [1]. We first consider the quark sector and find that, despite the stringent constraints from FCNC processes, large effects in tauonic B decays are still possible. We then consider lepton flavour observables, show correlations between μ→eγ and μ−→e−e+e− in the 2HDM of type III, and give upper bounds on the lepton-flavour-violating B decay Bd→μe.
Abstract:
We review various inequalities for Mills' ratio (1 − Φ)/φ, where φ and Φ denote the standard Gaussian density and distribution function, respectively. Elementary considerations involving finite continued fractions lead to a general approximation scheme which implies and refines several known bounds.
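Two classical statements in this vein, recalled here only to fix notation and not reproduced from the paper: for x > 0,
\[
\frac{x}{x^2+1} \;\le\; \frac{1-\Phi(x)}{\varphi(x)} \;\le\; \frac{1}{x},
\]
and Laplace's continued fraction, whose finite truncations yield alternating upper and lower bounds on Mills' ratio,
\[
\frac{1-\Phi(x)}{\varphi(x)} \;=\; \cfrac{1}{x+\cfrac{1}{x+\cfrac{2}{x+\cfrac{3}{x+\cdots}}}} .
\]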