14 results for Unification of Bulgaria
at Indian Institute of Science - Bangalore - India
Abstract:
We investigate the effects of new physics scenarios containing a high-mass vector resonance on top pair production at the LHC, using the polarization of the produced top. In particular, we use kinematic distributions of the secondary lepton coming from top decay, which depend on the top polarization; it has been shown that the angular distribution of the decay lepton is insensitive to the anomalous tbW vertex and hence is a pure probe of new physics in top quark production. Spin-sensitive variables involving the decay lepton are used to reconstruct the top polarization. Some sensitivity is found for the new couplings of the top.
Abstract:
A recent article on the unified theory of Elementary Particle Forces by Howard Georgi and Sheldon Glashow (September 1980, page 30) points out that the unification of the strong, weak and electromagnetic interactions involves the appearance of particles having almost macroscopic masses of about a nanogram (~10^14 GeV). Such superheavy particles seem to be an inevitable feature of most grand unified theories. Gravitation is still, however, left out of these various schemes.
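As a back-of-envelope check of the nanogram figure quoted above, one can convert 10^14 GeV into grams; the sketch below uses the standard GeV-to-gram conversion factor, which is not taken from the article itself.

```python
# Convert the GUT-scale mass ~10^14 GeV into grams to check the
# "almost macroscopic" nanogram figure quoted in the abstract.
GEV_PER_C2_IN_GRAMS = 1.783e-24  # 1 GeV/c^2 expressed in grams (standard value)

mass_gev = 1e14
mass_grams = mass_gev * GEV_PER_C2_IN_GRAMS
print(f"{mass_grams:.2e} g")  # ~1.8e-10 g, i.e. a fraction of a nanogram
```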
Abstract:
Current scientific research is characterized by increasing specialization, accumulating knowledge at high speed due to parallel advances in a multitude of sub-disciplines. Recent estimates suggest that human knowledge doubles every two to three years – and with the advances in information and communication technologies, this wide body of scientific knowledge is available to anyone, anywhere, anytime. This may also be referred to as ambient intelligence – an environment characterized by plentiful and available knowledge. The bottleneck in utilizing this knowledge for specific applications is not accessing but assimilating the information and transforming it to suit the needs of a specific application. The increasingly specialized areas of scientific research often have the common goal of converting data into insight, allowing the identification of solutions to scientific problems. Due to this common goal, there are strong parallels between different areas of application that can be exploited and used to cross-fertilize different disciplines. For example, the same fundamental statistical methods are used extensively in speech and language processing, in materials science applications, in visual processing and in biomedicine. Each sub-discipline has found its own specialized methodologies that make these statistical methods successful for the given application. The unification of specialized areas is possible because many different problems share strong analogies, making the theories developed for one problem applicable to other areas of research. It is the goal of this paper to demonstrate the utility of merging two disparate areas of application to advance scientific research. The merging process requires cross-disciplinary collaboration to allow maximal exploitation of advances in one sub-discipline by another. We will demonstrate this general concept with the specific example of merging language technologies and computational biology.
Abstract:
Schemes that can be proven to be unconditionally stable in the linear context can yield unstable solutions when used to solve nonlinear dynamical problems. Hence, the formulation of numerical strategies for nonlinear dynamical problems can be particularly challenging. In this work, we show that time finite element methods, because of their inherent energy-momentum conserving property (in the case of linear and nonlinear elastodynamics), provide a robust time-stepping method for nonlinear dynamic equations (including chaotic systems). We also show that most of the existing schemes that are known to be robust for parabolic or hyperbolic problems can be derived within the time finite element framework; thus, the time finite element provides a unification of time-stepping schemes used in diverse disciplines. We demonstrate the robust performance of the time finite element method on several challenging examples from the literature where the solution behavior is known to be chaotic. (C) 2015 Elsevier Inc. All rights reserved.
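To illustrate the energy behaviour the abstract refers to: the lowest-order time finite element scheme for such problems coincides with the implicit midpoint rule, which is a standard fact, though the model problem below (a nonlinear pendulum, with fixed-point iteration standing in for a proper Newton solve) is an illustrative sketch and not an example from the paper.

```python
import math

def midpoint_step(q, p, dt, force, iters=50):
    # Implicit midpoint rule, the lowest-order time finite element scheme.
    # The midpoint state is found by fixed-point iteration, which is assumed
    # adequate for small dt; a production code would use Newton's method.
    qm, pm = q, p
    for _ in range(iters):
        qm = q + 0.5 * dt * pm
        pm = p + 0.5 * dt * force(qm)
    return 2 * qm - q, 2 * pm - p

# Nonlinear pendulum: H(q, p) = p^2/2 - cos(q)
force = lambda q: -math.sin(q)
energy = lambda q, p: 0.5 * p * p - math.cos(q)

q, p = 1.0, 0.0
e0 = energy(q, p)
for _ in range(10000):  # integrate to t = 100
    q, p = midpoint_step(q, p, 0.01, force)
print(abs(energy(q, p) - e0))  # energy drift stays small over long times
```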
Abstract:
A general derivation of the coupling constant relations which result on embedding a non-simple group like SU(2)_L x U(1) in a larger simple group (or graded Lie group) is given. It is shown that such relations depend only on (i) the requirement that the multiplet of vector fields form an irreducible representation of the unifying algebra and (ii) the transformation properties of the fermions under SU(2)_L. This point is illustrated in two ways: first by constructing two different unification groups containing the same fermions, which therefore have the same Weinberg angle; second by putting different SU(2)_L structures on the same fermions, which consequently have different Weinberg angles. In particular, the value sin^2(theta_W) = 3/8 is characteristic of the sequential doublet models or models which invoke a large number of additional leptons, like E_6, while the addition of extra charged fermion singlets can reduce the value of sin^2(theta_W) to 1/4. We point out that at the present time the models of grand unification are far from unique.
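The value 3/8 mentioned above can be recovered from the standard trace formula, a textbook computation (not spelled out in the abstract) under the assumption that one complete generation of left-handed fermion fields fills out representations of the unifying group:

```latex
% sin^2(theta_W) from the trace formula, summed over one generation of
% left-handed fermion fields (u, d, u^c, d^c, nu, e, e^c):
\[
\sin^2\theta_W
  = \frac{\operatorname{Tr} T_3^2}{\operatorname{Tr} Q^2}
  = \frac{4 \cdot 2 \cdot \tfrac{1}{4}}
         {3\cdot\tfrac{4}{9} + 3\cdot\tfrac{1}{9}
          + 3\cdot\tfrac{4}{9} + 3\cdot\tfrac{1}{9} + 1 + 1}
  = \frac{2}{16/3} = \frac{3}{8},
\]
% numerator: four SU(2)_L doublets (three coloured quark doublets plus the
% lepton doublet), each contributing two components with T_3^2 = 1/4;
% denominator: the squared electric charges of all fifteen states.
```

Adding charged fermion singlets enlarges Tr Q^2 without changing Tr T_3^2, which is how the ratio can be driven down towards the 1/4 quoted in the abstract.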
Abstract:
In this brief addendum, we clarify a point which we left unaddressed in a previous publication [Phys. Rev. D 78, 066006 (2008)]. In particular, we show that a specific vacuum configuration constructed in one of our models satisfies the condition D=0. In the previous publication, we only showed F=0. Both D=0 and F=0 are necessary to ensure that supersymmetry survives to the weak scale.
Abstract:
A study of radio intensity variations at seven frequencies in the range 0.3 to 90 GHz for compact extragalactic radio sources classified as BL Lacs and high- and low-optical-polarization quasars (HPQs and LPQs) is presented. This includes the results of flux-density monitoring of 33 compact sources for three years at 327 MHz with the Ooty Synthesis Radio Telescope. The degrees of 'short-term' (tau less than about 1 yr) variability for the three optical types are found to be indistinguishable at low frequencies (less than 1 GHz), pointing to an extrinsic origin for the low-frequency variability. At high frequencies, a distinct dependence on optical type is present, the variability increasing from LPQs, through HPQs, to BL Lacs. This trend persists even when only sources with ultra-flat radio spectra (alpha greater than -0.2) are considered. Implications of this for the phenomenon of high-frequency variability and the proposed unification schemes for different optical types of active galactic nuclei are discussed.
Abstract:
A radio study of a carefully selected sample of 20 Seyfert galaxies, matched in orientation-independent parameters that measure intrinsic active galactic nucleus power and host galaxy properties, is presented to test the predictions of the unified scheme hypothesis. Our sample sources have core flux densities greater than 8 mJy at 5 GHz on arcsecond scales, owing to feasibility requirements. These simultaneous parsec-scale and kiloparsec-scale radio observations reveal (1) that Seyfert 1 and Seyfert 2 galaxies have an equal tendency to show compact radio structures on milliarcsecond scales, (2) that the distributions of parsec-scale and kiloparsec-scale radio luminosities are similar for both Seyfert 1 and Seyfert 2 galaxies, (3) that there is no evidence for relativistic beaming in Seyfert galaxies, (4) that the distributions of source spectral indices are similar, in spite of the fact that Seyferts show nuclear radio flux density variations, and (5) that the distributions of projected linear size for Seyfert 1 and Seyfert 2 galaxies are not significantly different, as would be expected in the unified scheme. The latter could be mainly due to a relatively large spread in the intrinsic sizes. We also find that a starburst alone cannot power these radio sources. Finally, an analysis of the kiloparsec-scale radio properties of the CfA Seyfert galaxy sample shows results consistent with the predictions of the unified scheme.
Abstract:
We present a sound and complete decision procedure for the bounded process cryptographic protocol insecurity problem, based on the notion of normal proofs [2] and classical unification. We also show a result about the existence of attacks with “high” normal cuts. Our proof of correctness provides an alternate proof and new insights into the fundamental result of Rusinowitch and Turuani [9] for the same setting.
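As a reminder of the classical-unification building block the decision procedure rests on, here is a minimal sketch of syntactic unification. The term encoding is illustrative and not taken from the paper: strings stand for variables, tuples for applied function symbols, and zero-argument tuples for constants.

```python
def walk(t, subst):
    # Follow variable bindings to the current value of a term.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    return t == v or (isinstance(t, tuple) and
                      any(occurs(v, a, subst) for a in t[1:]))

def unify(s, t, subst=None):
    # Return a most general unifier of s and t, or None if none exists.
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                    # s is an unbound variable
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):                    # t is an unbound variable
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):      # function-symbol clash
        return None
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Unifying an attacker's message pattern enc(X, k) with a concrete
# protocol message enc(m, k) binds the variable X to the constant m:
print(unify(('enc', 'X', ('k',)), ('enc', ('m',), ('k',))))  # {'X': ('m',)}
```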
Abstract:
Dielectric dispersion and NMRD experiments have revealed that a significant fraction of water molecules in the hydration shell of various proteins do not exhibit any slowing down of dynamics. This is usually attributed to the presence of hydrophobic residues (HBRs) on the surface, although HBRs alone cannot account for the large amplitude of the fast component. Solvation dynamics experiments, and also computer simulation studies, on the other hand repeatedly observe the presence of a non-negligible slow component. Here we show, by considering three well-known proteins (lysozyme, myoglobin and adenylate kinase), that the fast component arises partly from the response of those water molecules that are hydrogen bonded with the backbone oxygen (BBO) atoms. These are structurally and energetically less stable than those with the side chain oxygen (SCO) atoms. In addition, the electrostatic interaction energy distribution (EIED) of individual water molecules (hydrogen bonded to SCO) with side chain oxygen atoms shows a surprising two-peak character, with the lower-energy peak almost coincident with the energy distribution of water hydrogen bonded to BBO atoms. This two-peak contribution appears to be quite general, as we find it for lysozyme, myoglobin and adenylate kinase (ADK). The sharp peak of the EIED at small energy (less than 2 k(B)T) for the BBO atoms, together with the first peak of the EIED of SCO and the HBRs on the protein surface, explains why a large fraction (~80%) of water in the protein hydration layer remains almost as mobile as bulk water. Significant slowness arises only from the hydrogen bonds that populate the second peak of the EIED at larger energy (about 4 k(B)T). Thus, if we consider hydrogen bond interaction alone, only 15-20% of water molecules in the protein hydration layer can exhibit slow dynamics, resulting in an average relaxation time of about 5-10 ps.
The latter estimate assumes a time constant of 20-100 ps for the slow component. Interestingly, the relaxation of water molecules hydrogen bonded to backbone oxygen exhibits an initial component faster than the bulk, suggesting that the hydrogen bonding of these water molecules remains frustrated. This explanation of the heterogeneous and non-exponential dynamics of water in the hydration layer is quantitatively consistent with all the available experimental results, and provides unification among diverse features.
Abstract:
The presence of new matter fields charged under the Standard Model gauge group at intermediate scales below the Grand Unification scale modifies the renormalization group evolution of the gauge couplings. This can in turn significantly change the running of the Minimal Supersymmetric Standard Model parameters, in particular the gaugino and the scalar masses. In the absence of new large Yukawa couplings we can parameterise all the intermediate scale models in terms of only two parameters controlling the size of the unified gauge coupling. As a consequence of the modified running, the low energy spectrum can be strongly affected with interesting phenomenological consequences. In particular, we show that scalar over gaugino mass ratios tend to increase and the regions of the parameter space with neutralino Dark Matter compatible with cosmological observations get drastically modified. Moreover, we discuss some observables that can be used to test the intermediate scale physics at the LHC in a wide class of models.
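The modified running the abstract describes enters through the one-loop beta coefficients b_i in alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2 pi) ln(mu / M_Z); matter added at an intermediate scale shifts the b_i above that scale. The sketch below uses textbook Standard Model values as a baseline and is illustrative, not taken from the paper.

```python
import math

ALPHA_INV_MZ = (59.0, 29.6, 8.4)  # approximate alpha_i^{-1} at M_Z (GUT-normalised U(1))
B_SM = (41 / 10, -19 / 6, -7.0)   # Standard Model one-loop beta coefficients
MZ = 91.19                        # GeV

def alpha_inv(mu, b=B_SM):
    # One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2 pi) ln(mu/M_Z).
    t = math.log(mu / MZ)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(ALPHA_INV_MZ, b)]

# With SM content alone, alpha_1^{-1} and alpha_2^{-1} cross near 10^13 GeV
# while alpha_3^{-1} misses them; changing the b_i above an intermediate
# scale (new matter, or SUSY) is what can bring all three together.
print(alpha_inv(1e13))
```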
Abstract:
We revisit the issue of considering stochasticity of Grassmannian coordinates in N = 1 superspace, which was analyzed previously by Kobakhidze et al. In this stochastic supersymmetry (SUSY) framework, the soft SUSY breaking terms of the minimal supersymmetric Standard Model (MSSM), such as the bilinear Higgs mixing, the trilinear coupling, and the gaugino mass parameters, are all proportional to a single mass parameter xi, a measure of the supersymmetry breaking arising from stochasticity. While a nonvanishing trilinear coupling at the high scale is a natural outcome of the framework, and a favorable signature for obtaining the lighter Higgs boson mass m(h) at 125 GeV, the model produces sleptons that are tachyonic, or staus that turn out to be too light. The previous analyses took Lambda, the scale at which input parameters are given, to be larger than the gauge coupling unification scale M-G in order to generate acceptable scalar masses radiatively at the electroweak scale. Still, this was inadequate for obtaining m(h) at 125 GeV. We find that a Higgs at 125 GeV is highly achievable, provided we are ready to accommodate a nonvanishing scalar mass soft SUSY breaking term, similar to what is done in minimal anomaly mediated SUSY breaking (AMSB), in contrast to a pure AMSB setup. Thus, the model can easily accommodate the Higgs data, LHC limits on squark masses, WMAP data for the dark matter relic density, flavor physics constraints, and XENON100 data. In contrast to the previous analyses, we consider Lambda = M-G, thus avoiding any ambiguities of post-grand unified theory physics. The idea of stochastic superspace can easily be generalized to various scenarios beyond the MSSM. DOI: 10.1103/PhysRevD.87.035022
Abstract:
We consider supersymmetric models in which the lightest Higgs scalar can decay invisibly, consistent with the constraints on the 126 GeV state discovered at the CERN LHC. We consider the invisible decay in the minimal supersymmetric standard model (MSSM), as well as its extension containing an additional chiral singlet superfield, the so-called next-to-minimal or nonminimal supersymmetric standard model (NMSSM). We consider the case of the MSSM with both universal as well as nonuniversal gaugino masses at the grand unified scale, and find that only an E-6 grand unified model with an unnaturally large representation can give rise to sufficiently light neutralinos which can possibly lead to the invisible decay h(0) -> chi~(0)(1) chi~(0)(1). Following this, we consider the case of the NMSSM in detail, where we also find that it is not possible to have the invisible decay of the lightest Higgs scalar with universal gaugino masses at the grand unified scale. We delineate the regions of the NMSSM parameter space where it is possible for the lightest Higgs boson to have a mass of about 126 GeV, and then concentrate on the region where this Higgs can decay into light neutralinos, with the soft gaugino masses M-1 and M-2 as two independent parameters, unconstrained by grand unification. We also consider, simultaneously, the other important invisible Higgs decay channel in the NMSSM, namely the decay into the lightest CP-odd scalars, h(1) -> a(1) a(1), which is studied in detail. With the invisible Higgs branching ratio being constrained by the present LHC results, we find that mu(eff) < 170 GeV and M-1 < 80 GeV are disfavored in the NMSSM for fixed values of the other input parameters. The dependence of our results on the parameters of the NMSSM is discussed in detail.