959 results for SUPERSYMMETRIC STANDARD MODEL
Abstract:
Graduate Program in Physics - IFT
Abstract:
We propose a new CPT-even, Lorentz-violating nonminimal coupling between fermions and Abelian gauge fields involving the CPT-even tensor $(K_F)_{\mu\nu\alpha\beta}$ of the standard model extension. We then investigate its effects on the cross section of electron-positron scattering by analyzing the process $e^+ + e^- \to \mu^+ + \mu^-$. This study was performed for the parity-odd and parity-even nonbirefringent components of the Lorentz-violating $(K_F)_{\mu\nu\alpha\beta}$ tensor. Finally, using experimental data available in the literature, we impose upper bounds as tight as $10^{-12}\ (\mathrm{eV})^{-1}$ on the magnitude of the nonminimally coupled CPT-even, Lorentz-violating parameters. DOI: 10.1103/PhysRevD.86.125033
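A minimal sketch of the order-of-magnitude arithmetic behind a bound of this size. The quadratic scaling $\delta\sigma/\sigma \sim (\lambda\sqrt{s})^2$ and all numerical inputs below are illustrative assumptions, not the paper's actual analysis:

```python
# Illustrative scaling: if a nonminimal coupling lambda (units 1/eV) shifts the
# e+ e- -> mu+ mu- cross section by delta_sigma/sigma ~ (lambda * sqrt(s))^2,
# then an experimental precision eps on the cross section bounds lambda.
# Both numbers below are assumptions for illustration only.

sqrt_s_eV = 200e9   # assumed LEP-like collision energy, sqrt(s) ~ 200 GeV, in eV
eps = 0.04          # assumed ~4% experimental precision on the cross section

lambda_bound = eps**0.5 / sqrt_s_eV
print(f"lambda < {lambda_bound:.0e} eV^-1")  # ~1e-12 eV^-1, the order quoted above
```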
Abstract:
We propose an alternative, nonsingular cosmic scenario based on gravitationally induced particle production. The model is an attempt to evade the coincidence and cosmological constant problems of the standard ($\Lambda$CDM) model and to connect the early and late time accelerating stages of the Universe. Our space-time emerges from a pure initial de Sitter stage, thereby providing a natural solution to the horizon problem. Subsequently, due to an instability provoked by the production of massless particles, the Universe evolves smoothly to the standard radiation dominated era, the production of radiation then ceasing as required by conformal invariance. Next, the radiation becomes subdominant and the Universe enters the cold dark matter dominated era. Finally, the negative pressure associated with the creation of cold dark matter (CCDM model) particles accelerates the expansion and drives the Universe to a final de Sitter stage. The late time cosmic expansion history of the CCDM model is exactly like that of the standard $\Lambda$CDM model; however, there is no dark energy. The model evolves between two limiting (early and late time) de Sitter regimes. All the stages are also discussed in terms of a scalar field description. This complete scenario is fully determined by two extreme energy densities, or equivalently, by the associated de Sitter Hubble scales connected by $\rho_I/\rho_f = (H_I/H_f)^2 \sim 10^{122}$, a result that has no correlation with the cosmological constant problem. We also study the linear growth of matter perturbations at the final accelerating stage. We find that the CCDM growth index can be written as a function of the $\Lambda$ growth index, $\gamma_\Lambda \simeq 6/11$. In this framework, we also compare the observed growth rate of clustering with that predicted by the current CCDM model. Performing a $\chi^2$ statistical test, we show that the CCDM model provides growth rates that match the observed growth rate of structure sufficiently well.
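A minimal numerical check of the quoted ratio; the two Hubble scales below are illustrative assumptions (a near Planck-scale $H_I$ and $H_f \sim H_0$), not values taken from the paper:

```python
# Order-of-magnitude check of rho_I/rho_f = (H_I/H_f)^2 ~ 10^122.
# Both Hubble scales are assumed values, for illustration only.
import math

H_I = 1e43    # s^-1, assumed primordial (near Planck-scale) de Sitter rate
H_f = 1e-18   # s^-1, roughly the present rate, H_0 ~ 70 km/s/Mpc ~ 2.3e-18 s^-1

print(f"rho_I/rho_f = (H_I/H_f)^2 ~ 10^{2 * math.log10(H_I / H_f):.0f}")
```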
Abstract:
The lepton mixing angle $\theta_{13}$, the only previously unknown angle in the standard three-flavor neutrino mixing scheme, has finally been measured by the recent reactor and accelerator neutrino experiments. We perform a combined analysis of the data from the T2K, MINOS, Double Chooz, Daya Bay and RENO experiments and find $\sin^2 2\theta_{13} = 0.096 \pm 0.013\ (\pm 0.040)$ at $1\sigma$ ($3\sigma$) CL, so that the hypothesis $\theta_{13} = 0$ is now rejected at a significance level of $7.7\sigma$. We also discuss the near-future precision of the $\theta_{13}$ determination expected from these ongoing experiments.
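A minimal sketch of how such a combination works, via inverse-variance weighting. The per-experiment numbers below are placeholders, not the measurements actually used in the analysis above:

```python
# Inverse-variance combination of independent measurements of sin^2 2theta_13.
# The values below are PLACEHOLDERS for illustration, not real experiment data.

measurements = [      # (central value, 1-sigma error) -- hypothetical
    (0.09, 0.030),
    (0.11, 0.040),
    (0.10, 0.025),
]

w = [1 / s**2 for _, s in measurements]
mean = sum(wi * x for wi, (x, _) in zip(w, measurements)) / sum(w)
err = sum(w) ** -0.5
print(f"combined: {mean:.3f} +/- {err:.3f}; "
      f"theta13 = 0 rejected at {mean / err:.1f} sigma")
```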
Abstract:
Several extensions of the standard model predict the existence of new neutral spin-1 resonances associated with the electroweak symmetry breaking sector. Using the data from ATLAS (with an integrated luminosity of $L = 1.02\ \mathrm{fb}^{-1}$) and CMS (with an integrated luminosity of $L = 1.55\ \mathrm{fb}^{-1}$) on the production of $W^+W^-$ pairs through the process $pp \to \ell^+ \ell'^- + E_T^{\mathrm{miss}}$, we place model-independent bounds on the masses, couplings, and widths of these new vector resonances. Our analyses show that the present data exclude new neutral vector resonances with masses up to 1-2.3 TeV, depending on their couplings and widths. We also demonstrate how to extend our analysis framework to different models with a specific example.
Abstract:
In the framework of gauged flavour symmetries, new fermions in parity-symmetric representations of the standard model are generically needed to compensate mixed anomalies. The key point is that their masses are also protected by flavour symmetries, and some of them are expected to lie well below the flavour symmetry breaking scale(s), which has to occur many orders of magnitude above the electroweak scale to be compatible with the available data from flavour changing neutral currents and CP violation experiments. We argue that some of these fermions would plausibly get masses within the LHC range. If they are taken to be heavy quarks and leptons, in (bi)fundamental representations of the standard model symmetries, their mixings with the light fermions are strongly constrained to be very small by electroweak precision data. The alternative chosen here is to forbid such mixings exactly, by breaking the flavour symmetries down to an exact discrete symmetry, the so-called proton-hexality, originally suggested to avoid proton decay. As a consequence of the large value needed for the flavour breaking scale, these heavy particles are long-lived and thus well suited to current and future LHC searches for quasi-stable hadrons and leptons; an order-of-magnitude estimate of the resulting decay length is sketched below. In fact, the LHC experiments have already started to look for them.
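A rough sketch of why a high flavour-breaking scale implies macroscopic decay lengths. The dimension-six decay scaling $\Gamma \sim m^5/(8\pi\Lambda^4)$ and both numerical inputs are illustrative assumptions, not the model's actual decay mechanism:

```python
# If the heavy fermion can only decay through operators suppressed by the
# flavour scale Lambda, a generic estimate is Gamma ~ m^5 / (8*pi*Lambda^4).
# Mass and scale below are assumed values, for illustration only.
from math import pi

hbar_c = 1.973e-16   # GeV * m
m = 1e3              # GeV, assumed heavy-fermion mass (LHC range)
Lam = 1e7            # GeV, assumed flavour symmetry breaking scale

Gamma = m**5 / (8 * pi * Lam**4)          # GeV
print(f"c*tau ~ {hbar_c / Gamma:.2f} m")  # macroscopic: quasi-stable in a detector
```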
Abstract:
The recently announced Higgs boson discovery marks the dawn of the direct probing of the electroweak symmetry breaking sector. Sorting out the dynamics responsible for electroweak symmetry breaking now requires probing the Higgs boson interactions and searching for additional states connected to this sector. In this work, we analyze the constraints on the Higgs boson couplings to the standard model gauge bosons using the available data from the Tevatron and the LHC. We work in a model-independent framework, expressing the departures of the Higgs boson couplings to gauge bosons through dimension-six operators. This allows for independent modifications of its couplings to gluons, photons, and weak gauge bosons while still preserving Standard Model (SM) gauge invariance. Our results indicate that the best overall agreement with data is obtained if the cross section for Higgs boson production via gluon fusion is suppressed with respect to its SM value and the Higgs boson branching ratio into two photons is enhanced, while the production and decays associated with couplings to weak gauge bosons are kept close to their SM predictions.
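A minimal sketch of the kind of coupling fit described. The two-parameter $(\kappa_g, \kappa_\gamma)$ model, the quadratic signal-strength scaling, and the "measured" values are all illustrative assumptions, not the analysis actually performed:

```python
# Toy chi^2 fit of multiplicative Higgs coupling modifiers to signal strengths.
# In this simplified model mu(gg -> H -> gamma gamma) ~ kg^2 * kgam^2 and
# mu(gg -> H -> WW) ~ kg^2. The "measured" values are PLACEHOLDERS, not data.
from scipy.optimize import minimize

data = [  # (prediction as a function of (kg, kgam), measured mu, error)
    (lambda kg, kgam: kg**2 * kgam**2, 1.6, 0.4),   # diphoton channel (hypothetical)
    (lambda kg, kgam: kg**2,           0.8, 0.3),   # WW channel (hypothetical)
]

def chi2(p):
    kg, kgam = p
    return sum(((f(kg, kgam) - mu) / err) ** 2 for f, mu, err in data)

fit = minimize(chi2, x0=[1.0, 1.0])
print(f"best fit: kappa_g = {fit.x[0]:.2f}, kappa_gamma = {fit.x[1]:.2f}")
```

With these placeholder inputs the fit lands at a suppressed gluon coupling and an enhanced photon coupling, mirroring the qualitative pattern the abstract reports.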
Abstract:
Background: This study evaluated a wide range of viral load (VL) thresholds to identify a cut-point that best predicts new clinical events in children on stable highly active antiretroviral therapy (HAART). Methods: Cox proportional hazards modeling was used to assess the adjusted risk of World Health Organization stage 3 or 4 clinical events (WHO events) as a function of time-varying CD4, VL, and hemoglobin values in a cohort study of Latin American children on HAART for >= 6 months. Models were fit using different VL cut-points between 400 and 50,000 copies per milliliter, with model fit evaluated on the basis of the minimum Akaike information criterion (AIC) value, a standard model fit statistic. Results: Models were based on 67 subjects with WHO events out of 550 subjects in the study. The VL cut-points of >2600 and >32,000 copies per milliliter corresponded to the lowest AIC values and were associated with the highest hazard ratios (2.0, P = 0.015, and 2.1, P = 0.0058, respectively) for WHO events. Conclusions: In HIV-infected Latin American children on stable HAART, 2 distinct VL thresholds (>2600 and >32,000 copies/mL) were identified for predicting children at significantly increased risk of HIV-related clinical illness, after accounting for CD4 level, hemoglobin level, and other significant factors.
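A minimal sketch of the AIC-scan idea: fit one Cox model per candidate VL cut-point and keep the cut-point with the lowest AIC. The lifelines library is an assumed tool choice, and the DataFrame is synthetic placeholder data, not the study cohort:

```python
# Scan viral-load cut-points for a Cox model and pick the minimum-AIC one.
# Synthetic placeholder data; lifelines is an assumed implementation choice.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
vl = 10 ** rng.uniform(2, 5, n)            # viral load, copies/mL
df_base = pd.DataFrame({
    "time": rng.exponential(24, n),        # months of follow-up
    "event": rng.integers(0, 2, n),        # WHO stage 3/4 event indicator
    "cd4": rng.normal(25, 8, n),           # CD4 percent
})

best = None
for cut in [400, 1000, 2600, 10000, 32000, 50000]:
    df = df_base.assign(vl_high=(vl > cut).astype(int))
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    aic = -2 * cph.log_likelihood_ + 2 * len(cph.params_)  # AIC from partial likelihood
    if best is None or aic < best[0]:
        best = (aic, cut)
print(f"min-AIC cut-point: >{best[1]} copies/mL (AIC = {best[0]:.1f})")
```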
Abstract:
We analyse the interplay between the Higgs-to-diphoton rate and the constraints from electroweak precision measurements in extensions of the Standard Model with new uncolored charged fermions that do not mix with the ordinary ones. We also compute the pair production cross sections for the lightest new fermion and compare them with current bounds.
Abstract:
We have searched for sidereal variations in the rate of antineutrino interactions in the MINOS Near Detector. Using antineutrinos produced by the NuMI beam, we find no statistically significant sidereal modulation in the rate. Placing this result in the context of the Standard Model Extension theory, we are able to set upper limits on the coefficients defining the theory. These limits are used in combination with the results from an earlier analysis of MINOS neutrino data to further constrain the coefficients.
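A minimal sketch of a sidereal-modulation search: bin the event rate in local sidereal phase, fit the first two harmonics, and compare against a flat rate. The event sample below is synthetic placeholder data, not MINOS data:

```python
# Fit first and second harmonics in sidereal phase to a binned event rate
# and quantify the improvement over a flat rate. Placeholder events only.
import numpy as np

rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi, 5000)      # sidereal phase of each event
rate, edges = np.histogram(phase, bins=24)   # binned rate vs sidereal phase
centers = 0.5 * (edges[:-1] + edges[1:])

# Least-squares fit: rate = c0 + a1 sin(p) + b1 cos(p) + a2 sin(2p) + b2 cos(2p)
X = np.column_stack([np.ones_like(centers),
                     np.sin(centers), np.cos(centers),
                     np.sin(2 * centers), np.cos(2 * centers)])
coeff, *_ = np.linalg.lstsq(X, rate, rcond=None)

chi2_flat = np.sum((rate - rate.mean()) ** 2 / rate.mean())
chi2_fit = np.sum((rate - X @ coeff) ** 2 / rate.mean())
print(f"harmonic amplitudes: {coeff[1:]}")
print(f"delta chi^2 (flat vs harmonics) = {chi2_flat - chi2_fit:.1f}")
```

A small delta chi^2 means the harmonics are consistent with statistical fluctuation, i.e. no significant sidereal modulation, which is the null result reported above.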
Abstract:
The heating of the solar corona has been investigated for four decades, and several mechanisms able to produce heating have been proposed. Until now, it has not been possible to produce quantitative estimates that would establish any of these heating mechanisms as the most important in the solar corona. In order to investigate which heating mechanism is the most important, a more detailed approach is needed. In this thesis, the heating problem is approached "ab initio", using well-observed facts and including realistic physics in a 3D magnetohydrodynamic simulation of a small part of the solar atmosphere. The "engine" of the heating mechanism is the solar photospheric velocity field, which braids the magnetic field into a configuration where energy has to be dissipated. The initial magnetic field is taken from an observation of a typical magnetic active region, scaled down to fit inside the computational domain. The driving velocity field is generated by an algorithm that reproduces the statistical and geometrical fingerprints of solar granulation. Using a standard model atmosphere as the thermal initial condition, the simulation goes through a short startup phase, in which the initial thermal stratification is quickly forgotten, after which it settles into statistical equilibrium. In this state, the magnetic field is able to dissipate the same amount of energy as is estimated to be lost through radiation, the main energy loss mechanism in the solar corona. The simulation produces heating that is intermittent on the smallest resolved scales, and hot loops similar to those observed through narrow-band filters in the ultraviolet. Other observed characteristics of the heating are reproduced, as well as a coronal temperature of roughly one million K. Because of the ab initio approach, the amount of heating produced in these simulations represents a lower limit to coronal heating, and the conclusion is that such heating of the corona is unavoidable.
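A rough order-of-magnitude sketch of the energy balance invoked here; the field strength, driving speed, and loss estimate are assumed textbook-scale values, not numbers from the thesis:

```python
# Compare the magnetic energy flux injected by photospheric braiding,
# S ~ v * B^2 / mu0, with a canonical active-region radiative loss.
# All inputs are assumed values, for illustration only.
mu0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A

B = 0.01    # T (100 G), assumed field strength
v = 1e3     # m/s, assumed photospheric driving speed

S = v * B**2 / mu0               # W/m^2, injected energy flux
loss = 1e4                       # W/m^2, canonical active-region radiative loss
print(f"Poynting flux ~ {S:.0e} W/m^2 vs radiative loss ~ {loss:.0e} W/m^2")
```

With these assumed inputs the injected flux is comfortably above the loss estimate, which is the qualitative point: braiding by granular motions can supply the radiated energy.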
Abstract:
In the present study we use multivariate analysis techniques to discriminate signal from background in the fully hadronic decay channel of $t\bar{t}$ events. We give a brief introduction to the role of the top quark in the standard model and a general description of the CMS experiment at the LHC. We used the CMS computing and software infrastructure to generate and prepare the data samples used in this analysis. We tested the performance of three different classifiers applied to our data samples and used the selection obtained with the Multi-Layer Perceptron classifier to estimate the statistical and systematic uncertainties on the cross section measurement.
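A minimal sketch of a three-classifier comparison of this kind, using scikit-learn as an assumed stand-in (the thesis used CMS tools) on synthetic placeholder features:

```python
# Compare three classifiers for signal/background separation on toy features.
# Synthetic placeholder data, not CMS samples; sklearn is an assumed tool choice.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n = 4000
# Two toy event-shape variables; signal is shifted relative to background.
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)), rng.normal(0.8, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = background, 1 = signal
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (LogisticRegression(),
            RandomForestClassifier(n_estimators=100, random_state=0),
            MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)):
    auc = roc_auc_score(yte, clf.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
    print(f"{type(clf).__name__}: ROC AUC = {auc:.3f}")
```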
Abstract:
For many years, arguments have been put forward again and again ascribing a more fundamental role to discrete spaces than to continuous ones. Our approach to the discrete world is guided by recent developments in noncommutative geometry (NCG). For about 15 years there have been efforts, and also progress, in understanding physics better with the help of noncommutative geometry. One possibility among many is the reformulation of the standard model of elementary particle physics; among other things, the Higgs mechanism can be described geometrically. In NCG, the Higgs field is described as a connection on a two-element set. Several goals are achieved in this thesis: quantization of a zero-dimensional "space-time"; a consistent discretization for models in the noncommutative framework; Yang-Mills theories on a point with a deformed Higgs potential; the extension to a "true" two-point space-time; the counting of Feynman graphs in a zero-dimensional theory; and Feynman rules. Particular attention is devoted to notions that have their origin in quantum field theory. In this setting, concepts such as gauge fixing, ghost contributions, the Slavnov-Taylor identity, and renormalization can be discussed free of the complications that divergences or technical difficulties might otherwise cause. An iterative procedure for solving the Dyson-Schwinger equation, supported by computer algebra and taking the renormalization procedure into account, is presented.
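A minimal sketch of the diagram-counting idea in zero dimensions, using a quartic toy action as an assumed standard example (not necessarily the model treated in the thesis): the partition function $Z(\lambda) = \langle e^{\lambda x^4/4!} \rangle$ of a zero-dimensional $\phi^4$-type "theory" is a formal power series whose coefficients count vacuum Feynman diagrams weighted by symmetry factors, because Gaussian moments obey $\langle x^{2m} \rangle = (2m-1)!!$ (Wick's theorem):

```python
# Coefficients of Z(lambda) = sum_n lambda^n / (n! * 4!^n) * <x^{4n}>, with
# Gaussian moments <x^{2m}> = (2m-1)!!. Each coefficient is the sum of 1/S
# over order-n vacuum diagrams of a zero-dimensional phi^4 toy model.
from fractions import Fraction
from math import factorial

def double_factorial(k):
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

for n in range(4):
    coeff = Fraction(double_factorial(4 * n - 1),
                     factorial(n) * 24 ** n)
    print(f"order lambda^{n}: {coeff}")   # 1, 1/8, 35/384, ...
```

The first nontrivial coefficient, 1/8, is the symmetry factor of the single "figure-eight" vacuum diagram, which is the sense in which the series counts graphs.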
Abstract:
The standard model of elementary particle physics is superbly confirmed by experiment, but has unsatisfactory aspects on the theoretical side: on the one hand, the Higgs sector of the theory is inserted by hand, and on the other hand, the descriptions of the observed particle spectrum and of gravity differ fundamentally. Both drawbacks disappear when the standard model is formulated in the language of noncommutative geometry. The aim here is to capture the space-time of the physical theory through algebraic data. For example, the full information about a Riemannian spin manifold M is contained in the data set (A,H,D), called a spectral triple. Here A is the commutative algebra of differentiable functions on M, H is the Hilbert space of square-integrable spinors over M, and D is the Dirac operator. With the help of such a triple (for a noncommutative algebra), both gravity and the standard model can be captured with one and the same mathematical tool. In the present work, zero-dimensional spectral triples (corresponding to discrete space-times) are first classified, and a quantization of such objects is carried out in examples. One problem of spectral triples is their restriction to genuinely Riemannian metrics; approaches to solving this problem are presented. In the final chapter of the thesis, the so-called 'Feynman proof of the Maxwell equations' is generalized to noncommutative configuration spaces.
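A minimal sketch of the simplest discrete example, the two-point space (the mass scale m below is an arbitrary assumed value): take A = C ⊕ C acting diagonally on H = C² with an off-diagonal Dirac operator D; Connes' distance formula d(p,q) = sup{ |a(p) - a(q)| : ||[D,a]|| <= 1 } then gives d = 1/|m|:

```python
# Two-point spectral triple: a = diag(a1, a2), D = [[0, m], [m, 0]].
# The commutator [D, a] has operator norm |m| * |a1 - a2|, so Connes'
# distance between the two points is 1/|m|. m is an arbitrary assumption.
import numpy as np

m = 2.0
D = np.array([[0.0, m], [m, 0.0]])

def comm_norm(a1, a2):
    a = np.diag([a1, a2])
    return np.linalg.norm(D @ a - a @ D, 2)   # operator (spectral) norm

# Largest |a1 - a2| compatible with ||[D, a]|| <= 1, scanned numerically:
seps = np.linspace(0.0, 2.0, 2001)
allowed = seps[[comm_norm(s, 0.0) <= 1.0 + 1e-12 for s in seps]]
print(f"distance ~ {allowed.max():.3f}  (exact: 1/|m| = {1 / abs(m):.3f})")
```

This is the structure the abstract alludes to: the "distance" between the two points of the discrete space is set by the inverse of the Dirac operator's mass entry, which is how the Higgs field acquires a geometric meaning.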
Abstract:
In this dissertation, the rare decay K_L -> e mu is studied in detail within the framework of a generalized standard model. In this process, the lepton number belonging to a given family is not conserved. Our investigations are therefore carried out within SU(2)_L x U(1)_Y and SU(2)_R x SU(2)_L x U(1)_{B-L} models with heavy Majorana neutrinos. The most important results of this work concern the calculation of the branching ratio for the decay K_L -> e mu. In the SU(2)_L x U(1)_Y model with heavy Majorana neutrinos, a substantial enhancement of the branching ratio is found, but the results obtained lie several orders of magnitude below the current experimental limit. If one instead uses the chosen left-right-symmetric model based on the SU(2)_R x SU(2)_L x U(1)_{B-L} gauge group, the presence of left- and right-handed currents in the loop diagrams increases the value of the branching ratio considerably; values close to the current experimental limit of B(K_L -> e mu) < 4.7 x 10^{-12} can then arise. To substantiate our results, the question of gauge invariance in this decay process is treated with particular care at the one-loop level. A so-called "on-shell skeleton" renormalization scheme is used to carry out the first complete analysis of gauge invariance for the process K_L -> e mu.