770 results for GLUINO DECAYS


Relevance: 10.00%

Abstract:

The features of two popular models used to describe the observed response characteristics of typical optical oxygen sensors based on luminescence quenching are examined critically. The models are the 'two-site' model and the 'Gaussian distribution in natural lifetime, τ0' model. These models are used to characterise the response features of typical optical oxygen sensors, features which include: downward-curving Stern-Volmer plots and increasingly non-first-order luminescence decay kinetics with increasing partial pressure of oxygen, pO2. Neither model appears able to unite these features, let alone the disparate array of response features exhibited by the myriad optical oxygen sensors reported in the literature, while maintaining any level of physical plausibility. A model based on a Gaussian distribution in the quenching rate constant, kq, is developed and, although flawed by a limited breadth of distribution, ρ, does produce Stern-Volmer plots that cover the range of curvature seen with real optical oxygen sensors. A new 'log-Gaussian distribution in τ0 or kq' model is introduced, which has the advantage over a Gaussian distribution model of placing no limitation on the value of ρ. Work on the 'log-Gaussian distribution in τ0' model reveals that its Stern-Volmer quenching plots would show little curvature, even at large ρ values, and that its luminescence decays would become increasingly first order with increasing pO2. With real optical oxygen sensors the opposite is observed, and thus this model appears of little value. In contrast, a 'log-Gaussian distribution in kq' model does produce the trends observed with real optical oxygen sensors, although it is technically restricted to sensors whose luminescence decay kinetics are good first order in the absence of oxygen. The latter model gives a good fit to the major response features of sensors that show this behaviour, most notably the [Ru(dpp)3]2+(Ph4B-)2-in-cellulose optical oxygen sensors. The scope for further expanding the log-Gaussian model, and therefore its application to optical oxygen sensors, by combining a log-Gaussian distribution in kq with one in τ0 is briefly discussed.
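The downward-curving Stern-Volmer plots that these distribution models aim to reproduce are easy to illustrate numerically. The sketch below (with invented values for τ0, the median kq and ρ, not taken from the abstract) averages the steady-state Stern-Volmer response over a log-Gaussian distribution of quenching rate constants:

```python
import numpy as np

def stern_volmer_log_gaussian(pO2, tau0=5e-6, kq_med=1e6, rho=1.0, n=2001):
    """Ensemble-averaged I0/I for a log-Gaussian distribution of the
    quenching rate constant kq (all parameter values are illustrative)."""
    # Discretise ln(kq) over a Gaussian of width rho about ln(kq_med)
    x = np.linspace(-5 * rho, 5 * rho, n)
    w = np.exp(-x ** 2 / (2 * rho ** 2))
    w /= w.sum()
    kq = kq_med * np.exp(x)
    # Each sub-population obeys linear Stern-Volmer: I ~ 1/(1 + kq*tau0*pO2)
    I = np.sum(w / (1.0 + kq * tau0 * pO2))
    return 1.0 / I

# Heterogeneous quenching bends the Stern-Volmer plot downward:
# (I0/I - 1)/pO2 shrinks as pO2 grows, unlike the single-site linear law.
ratios = [stern_volmer_log_gaussian(p) for p in (0.2, 0.5, 1.0)]
```

The 'two-site' model is the discrete analogue of this: a weighted sum of just two such linear Stern-Volmer components.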

Relevance: 10.00%

Abstract:

As a diagnostic of high-intensity laser interactions (>10^19 W cm^-2), the detection of radioactive isotopes is regularly used for the characterization of proton, neutron, ion, and photon beams. This involves sample removal from the interaction chamber and time-consuming post-shot analysis using NaI coincidence counting or Ge detectors. This letter describes the use of in situ detectors to measure laser-driven (p,n) reactions in Al-27 as an almost real-time diagnostic for proton acceleration. The produced Si-27 isotope decays with a 4.16 s half-life, predominantly by beta+ emission, producing a strong 511 keV annihilation peak. (c) 2006 American Institute of Physics.
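The usefulness of the 4.16 s half-life quoted above is easy to quantify: the 511 keV annihilation signal dies away fast enough for shot-to-shot monitoring. A minimal sketch:

```python
def surviving_fraction(t, half_life=4.16):
    """Fraction of Si-27 nuclei (t1/2 = 4.16 s, as quoted in the abstract)
    remaining t seconds after the laser shot."""
    return 0.5 ** (t / half_life)

# One half-life leaves 50% of the activity; after ~30 s (about seven
# half-lives) the positron signal has fallen below 1% of its initial value.
```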

Relevance: 10.00%

Abstract:

Energy release from radioactive decays contributes significantly to supernova light curves. Previous works, which considered the energy deposited by gamma-rays and positrons produced by 56Ni, 56Co, 57Ni, 57Co, 44Ti and 44Sc, have been quite successful in explaining the light curves of both core-collapse and thermonuclear supernovae. We point out that Auger and internal conversion electrons, together with the associated X-ray cascade, constitute an additional heat source. When a supernova is transparent to gamma-rays, these electrons can contribute significantly to light curves for reasonable nucleosynthetic yields. In particular, the electrons emitted in the decay of 57Co, which are largely due to internal conversion from a fortuitously low-lying 3/2 state in the daughter 57Fe, constitute an additional significant energy-deposition channel. We show that when the heating by these electrons is accounted for, a slow-down in the light curve of SN 1998bw is naturally obtained for typical hypernova nucleosynthetic yields. Additionally, we show that for generic Type Ia supernova yields, the Auger electrons emitted in the ground-state to ground-state electron capture decay of 55Fe exceed the energy released by the 44Ti decay chain for many years after the explosion. © 2009 RAS.
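The gamma-ray and positron heating discussed above follows the two-step chain 56Ni → 56Co → 56Fe, whose intermediate abundance is given by the standard Bateman solution. A sketch using the textbook half-lives (6.08 d for 56Ni, 77.2 d for 56Co):

```python
import numpy as np

# Decay constants (per day) from the half-lives of 56Ni and 56Co
L_NI = np.log(2) / 6.08
L_CO = np.log(2) / 77.2

def n_co56(t, n_ni0=1.0):
    """Bateman solution for the 56Co abundance at time t (days) in the
    chain 56Ni -> 56Co -> 56Fe, starting from n_ni0 of pure 56Ni."""
    return n_ni0 * L_NI / (L_NI - L_CO) * (np.exp(-L_CO * t) - np.exp(-L_NI * t))

# 56Co peaks a few weeks after explosion, then decays on its ~77 d
# timescale; at still later epochs weaker channels such as the internal
# conversion and Auger electrons discussed above take over the heating.
```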

Relevance: 10.00%

Abstract:

Super-luminous supernovae that radiate more than 10^44 ergs per second at their peak luminosity have recently been discovered in faint galaxies at redshifts of 0.1-4. Some evolve slowly, resembling models of 'pair-instability' supernovae. Such models involve stars with original masses 140-260 times that of the Sun that now have carbon-oxygen cores of 65-130 solar masses. In these stars, the photons that prevent gravitational collapse are converted to electron-positron pairs, causing rapid contraction and thermonuclear explosions. Many solar masses of 56Ni are synthesized; this isotope decays to 56Fe via 56Co, powering bright light curves. Such massive progenitors are expected to have formed from metal-poor gas in the early Universe. Recently, supernova 2007bi in a galaxy at redshift 0.127 (about 12 billion years after the Big Bang) with a metallicity one-third that of the Sun was observed to look like a fading pair-instability supernova. Here we report observations of two slow-to-fade super-luminous supernovae that show relatively fast rise times and blue colours, which are incompatible with pair-instability models. Their late-time light-curve and spectral similarities to supernova 2007bi call the nature of that event into question. Our early spectra closely resemble typical fast-declining super-luminous supernovae, which are not powered by radioactivity. Modelling our observations with 10-16 solar masses of magnetar-energized ejecta demonstrates the possibility of a common explosion mechanism. The lack of unambiguous nearby pair-instability events suggests that their local rate of occurrence is less than 6 × 10^-6 times the core-collapse rate. © 2013 Macmillan Publishers Limited. All rights reserved.
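The magnetar alternative invoked above has a characteristic energy-input history: dipole spin-down delivers L(t) = L0 / (1 + t/t_m)^2, which falls off as t^-2 at late times rather than exponentially like radioactivity. A sketch with invented placeholder parameters (not the paper's fitted values):

```python
def magnetar_input(t, L0=1e45, t_m=30.0):
    """Dipole spin-down power L(t) = L0 / (1 + t/t_m)**2, with an
    illustrative initial luminosity L0 (erg/s) and spin-down time t_m
    (days); both numbers are placeholders, not the paper's fit."""
    return L0 / (1.0 + t / t_m) ** 2

# The total available rotational energy is L0 * t_m; the shallow t**-2
# tail is what lets magnetar-energised ejecta mimic a slowly fading,
# radioactivity-free light curve.
```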

Relevance: 10.00%

Abstract:

We use ground-based images of high spatial and temporal resolution to search for evidence of nanoflare activity in the solar chromosphere. Through close examination of more than 10^9 pixels in the immediate vicinity of an active region, we show that the distributions of observed intensity fluctuations have subtle asymmetries. A negative excess in the intensity fluctuations indicates that more pixels have fainter-than-average intensities compared with those that appear brighter than average. By employing Monte Carlo simulations, we reveal how the negative excess can be explained by a series of impulsive events, coupled with exponential decays, that are fractionally below the current resolving limits of low-noise equipment on high-resolution ground-based observatories. Importantly, our Monte Carlo simulations provide clear evidence that the intensity asymmetries cannot be explained by photon-counting statistics alone. A comparison to the coronal work of Terzo et al. suggests that nanoflare activity in the chromosphere is more readily occurring, with an impulsive event occurring every ~360 s in a 10,000 km^2 area of the chromosphere, some 50 times more events than in a comparably sized region of the corona. As a result, nanoflare activity in the chromosphere is likely to play an important role in providing heat energy to this layer of the solar atmosphere.

Relevance: 10.00%

Abstract:

Measurements of explosive nucleosynthesis yields in core-collapse supernovae provide tests for explosion models. We investigate constraints on explosive conditions derivable from measured amounts of nickel and iron after radioactive decays, using nucleosynthesis networks with parameterized thermodynamic trajectories. The Ni/Fe ratio is for most regimes dominated by the production ratio of Ni-58/(Fe-54 + Ni-56), which tends to grow with higher neutron excess and with higher entropy. For SN 2012ec, a supernova (SN) that produced a Ni/Fe ratio of 3.4 +/- 1.2 times solar, we find that burning of a fuel with neutron excess η ≈ 6 × 10^-3 is required. Unless the progenitor metallicity is over five times solar, the only layer in the progenitor with such a neutron excess is the silicon shell. SNe producing large amounts of stable nickel thus suggest that this deep-lying layer can be, at least partially, ejected in the explosion. We find that common spherically symmetric models of M_ZAMS ≲ 13 M_sun stars exploding with a delay time of less than one second (M_cut < 1.5 M_sun) are able to achieve such silicon-shell ejection. SNe that produce solar or subsolar Ni/Fe ratios, such as SN 1987A, must instead have burnt and ejected only oxygen-shell material, which allows a lower limit to the mass cut to be set. Finally, we find that the extreme Ni/Fe value of 60-75 times solar derived for the Crab cannot be reproduced by any realistic entropy burning outside the iron core, and neutrino-neutronization obtained in electron-capture models remains the only viable explanation.
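The quoted Ni/Fe ratios follow directly from the isotopic yields: after all decays, the stable nickel is dominated by 58Ni while the iron comes from 54Fe plus the decayed 56Ni, as the abstract states. A sketch (the solar Ni/Fe mass ratio below is an assumed round number, not taken from the paper):

```python
SOLAR_NI_FE = 0.057  # assumed solar Ni/Fe mass ratio, for illustration only

def ni_fe_relative_to_solar(m_ni58, m_fe54, m_ni56):
    """Ni/Fe mass ratio after radioactive decays, in solar units:
    stable Ni ~ 58Ni, stable Fe ~ 54Fe + 56Fe (from decayed 56Ni)."""
    return (m_ni58 / (m_fe54 + m_ni56)) / SOLAR_NI_FE

# A yield in which m(58Ni) is a few tenths of m(54Fe) + m(56Ni) already
# gives a strongly supersolar ratio, pointing at silicon-shell burning.
```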

Relevance: 10.00%

Abstract:

Context. Although the question of progenitor systems and detailed explosion mechanisms still remains a matter of discussion, it is commonly believed that Type Ia supernovae (SNe Ia) are production sites of large amounts of radioactive nuclei. Even though the gamma-ray emission due to radioactive decays is responsible for powering the light curves of SNe Ia, gamma rays themselves are of particular interest as a diagnostic tool because they directly lead to deeper insight into the nucleosynthesis and the kinematics of these explosion events. Aims: We study the evolution of gamma-ray line and continuum emission of SNe Ia with the objective of analyzing the relevance of observations in this energy range. We seek to investigate the chances for the success of future MeV missions regarding their capabilities for constraining the intrinsic properties and the physical processes of SNe Ia. Methods: Focusing on two of the most broadly discussed SN Ia progenitor scenarios - a delayed detonation in a Chandrasekhar-mass white dwarf (WD) and a violent merger of two WDs - we used three-dimensional explosion models and performed radiative transfer simulations to obtain synthetic gamma-ray spectra. Both chosen models produce the same mass of 56Ni and have similar optical properties that are in reasonable agreement with the recently observed supernova SN 2011fe. We examine the gamma-ray spectra with respect to their distinct features and draw connections to certain characteristics of the explosion models. Applying diagnostics, such as line and hardness ratios, the detection prospects for future gamma-ray missions with higher sensitivities in the MeV energy range are discussed. Results: In contrast to the optical regime, the gamma-ray emission of our two chosen models proves to be quite different. 
The almost direct connection of the emission of gamma rays to fundamental physical processes occurring in SNe Ia permits additional constraints on several explosion model properties that are not easily accessible in other wavelength ranges. Proposed future MeV missions such as GRIPS will resolve all spectral details only for nearby SNe Ia, but hardness ratio and light curve measurements still allow the two models to be distinguished at 10 Mpc and 16 Mpc for an exposure time of 10^6 s. The possibility of detecting the strongest line features out to the Virgo distance would offer the opportunity to build up a first sample of SN Ia detections in the gamma-ray energy range, and underlines the importance of future space observatories for MeV gamma rays.

Relevance: 10.00%

Abstract:

Configuration-average distorted-wave calculations are carried out for the electron-impact single ionization of Xe^24+. Contributions are included from direct ionization of the 3s, 3p, 3d and 4s subshells and from indirect ionization via 3s → nl, 3p → nl and 3d → nl excitations followed by autoionization. Branching ratios are found for single versus double ionization of the 3s and 3p subshells and for autoionization versus radiative decay of all 3l → nl excitations. Additional distorted-wave and R-matrix calculations find resonant-capture double-autoionization contributions to be quite small. The total ionization cross section for Xe^24+ is found to be dominated by indirect excitation-autoionization contributions, especially near the single-ionization threshold. An approximately 15% reduction in the total ionization cross section is found due to the radiative decays included in the branching ratios.
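The ~15% reduction attributed to radiative stabilisation enters through the branching ratio applied to each excitation-autoionization channel. A minimal sketch (the rates below are placeholders, not the paper's computed values):

```python
def autoionization_branching(A_auto, A_rad):
    """Branching ratio B = A_auto / (A_auto + A_rad) for autoionization
    versus radiative decay of a 3l -> nl excited configuration; the
    excitation-autoionization channel then contributes sigma_exc * B
    to the net single-ionization cross section."""
    return A_auto / (A_auto + A_rad)

# If radiative rates were ~18% of the autoionization rates, the channel
# would be scaled by ~0.85, i.e. roughly a 15% reduction.
scale = autoionization_branching(1.0, 0.18)
```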

Relevance: 10.00%

Abstract:

Flavour effects due to lepton interactions in the early Universe may have played an important role in the generation of the cosmological baryon asymmetry through leptogenesis. If the only source of high-energy CP violation comes from the left-handed leptonic sector, then it is possible to establish a bridge between flavoured leptogenesis and low-energy leptonic CP violation. We explore this connection taking into account our present knowledge of low-energy neutrino parameters and the matter-antimatter asymmetry observed in the Universe. In this framework, we find that leptogenesis favours a hierarchical light neutrino mass spectrum, while for quasi-degenerate and inverted hierarchical neutrino masses there is only a very narrow allowed window. The absolute neutrino mass scale turns out to be m ≲ 0.1 eV. (c) 2007 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

Maximum production rates and decay kinetics for the hydrated electron, the neutral indolyl radical and the indole triplet state have been obtained in the microsecond, broadband (λ > 260 nm) flash photolysis of helium-saturated, neutral aqueous solutions of indole, in the absence and in the presence of the solutes NaBr, BaCl2·2H2O and CdSO4. Fluorescence spectra and fluorescence lifetimes have also been obtained in the absence and in the presence of the above solutes. The hydrated electron is produced monophotonically and biphotonically at an apparent maximum rate which is increased by BaCl2·2H2O and decreased by NaBr and CdSO4. The neutral indolyl radical may be produced monophotonically and biphotonically, or strictly monophotonically, at an apparent maximum rate which is increased by NaBr and CdSO4 and unaffected by BaCl2·2H2O. The indole triplet state is produced monophotonically at a maximum rate which is increased by all solutes. The hydrated electron decays by pseudo-first-order processes, the neutral indolyl radical decays by second-order recombination, and the indole triplet state decays by combined first- and second-order processes. Hydrated electrons are shown to react with H+, H2O, indole, Na+ and Cd2+. No evidence has been found for the reaction of hydrated electrons with Ba2+. The specific rate of second-order neutral indolyl radical recombination is unaffected by NaBr and BaCl2·2H2O, and is increased by CdSO4. Specific rates for both first- and second-order triplet-state decay processes are increased by all solutes. While NaBr greatly reduced the fluorescence lifetime and emission band intensity, BaCl2·2H2O and CdSO4 had no effect on these parameters. It is suggested that in solute-free solutions and in those containing BaCl2·2H2O and CdSO4, direct excitation occurs to CTTS states as well as to first excited singlet states. It is further suggested that in solutions containing NaBr, direct excitation to first excited singlet states predominates. This difference serves to explain the increased indole triplet-state production (by ISC from CTTS states) and unchanged fluorescence lifetimes and emission band intensities in the presence of BaCl2·2H2O and CdSO4, and the increased indole triplet-state production (by ISC from S1 states) and decreased fluorescence lifetime and emission band intensity in the presence of NaBr. Evidence is presented for (a) very rapid (t1/2 ≲ 1 μs) processes involving reactions of the hydrated electron with Na+ and Cd2+ which compete with the re-formation of indole by hydrated electron-indole radical cation recombination, and (b) first- and second-order indole triplet decay processes involving the conversion of first excited triplet states to vibrationally excited ground singlet states.
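The 'combined first and second order' triplet decay described above has a closed-form solution: for dC/dt = -k1·C - k2·C², integration gives C(t) = k1·C0·e^(-k1·t) / (k1 + k2·C0·(1 - e^(-k1·t))). A sketch with illustrative rate constants (not the thesis's fitted values):

```python
import math

def mixed_order_decay(t, c0=1.0, k1=0.1, k2=0.05):
    """Analytic solution of dC/dt = -k1*C - k2*C**2, i.e. combined
    first- and second-order decay as for the indole triplet state;
    c0, k1 and k2 are illustrative placeholders."""
    e = math.exp(-k1 * t)
    return k1 * c0 * e / (k1 + k2 * c0 * (1.0 - e))

# The second-order term makes the decay faster than the pure first-order
# curve c0*exp(-k1*t), most noticeably at early times when C is large.
```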

Relevance: 10.00%

Abstract:

We report the results of a study of the charmless semileptonic decays B^+ --> êta^(') l^+ v and B^0 --> pi^- l^+ v, measured by the BABAR detector with a sample of approximately 464 million BBbar meson pairs produced in e^+ e^- collisions at the Upsilon(4S) resonance. The analysis reconstructs events with a loose neutrino-reconstruction technique. We obtain the partial branching fractions for the decays B^+ --> êta l^+ v and B^0 --> pi^- l^+ v in three and twelve intervals of q^2, respectively, from which we extract the form factors f_+(q^2) and the total branching fractions B(B^+ --> êta l^+ v) = (3.39 +/- 0.46_stat +/- 0.47_syst) x 10^-5 and B(B^0 --> pi^- l^+ v) = (1.42 +/- 0.05_stat +/- 0.08_syst) x 10^-4. We also measure B(B^+ --> êta' l^+ v) = (2.43 +/- 0.80_stat +/- 0.34_syst) x 10^-5. We obtain values of the magnitude of the CKM matrix element |V_ub| using three different QCD calculations.
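The statistical and systematic uncertainties quoted separately above are conventionally combined in quadrature when a single total error is needed:

```python
import math

def combine_in_quadrature(stat, syst):
    """Total uncertainty sqrt(stat**2 + syst**2) from independent
    statistical and systematic components (a common convention; the
    abstract itself quotes the two separately)."""
    return math.sqrt(stat ** 2 + syst ** 2)

# B(B0 -> pi- l+ nu) = (1.42 +/- 0.05_stat +/- 0.08_syst) x 10^-4
total = combine_in_quadrature(0.05, 0.08)  # in units of 10^-4
```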

Relevance: 10.00%

Abstract:

This master's thesis presents the application of the diagrammatic decomposition method to decays of B mesons into three charmless pseudoscalar mesons. The diagrammatic decomposition of the decays B → Kππ, B → KKK̄, B → KK̄π and B → πππ is carried out systematically. It is shown that when exchange and annihilation diagrams, whose contributions are estimated to be small, are neglected, new relations appear among the amplitudes. These relations are new tests of the Standard Model that can only be obtained with the diagrammatic method. When the necessary data are available, we verify these relations and find good agreement with experiment. We also show that the B → Kππ sector can be used to measure the weak phase γ with a theoretical uncertainty that we estimate to be of the order of 5%. The other decay sectors allow weak phases to be extracted only by invoking approximations of unknown precision.

Relevance: 10.00%

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.

Relevance: 10.00%

Abstract:

This master's thesis presents a search for fourth-generation heavy leptons with the data taken by the ATLAS detector at the LHC in pp collisions at $\sqrt{s}$ = 7 TeV, with an integrated luminosity of 1.02 fb$^{-1}$. The process studied is the single production of a fourth-generation heavy neutral lepton (N) through the charged current, followed by its decay into an electron and a W boson: $ pp \to W \to N e \to e W e \to e e \nu_{\ell} \ell $ ($\ell$ = $e$ or $\mu$); it depends on a mixing parameter $\xi^{2}$ with a light lepton. The analysis proceeds in several steps, using FeynRules to build the model and then MadGraph 5.1.2.4 to generate events. As a reference hypothesis, a mass of 100 GeV was chosen for the heavy neutral lepton, with $\xi_{Ne}^2$ = 0.19, giving a cross-section of 0.312 pb at a centre-of-mass energy of 7 TeV. Since the signal was generated privately in Montreal rather than by the ATLAS collaboration, the results cannot be recognized officially. Based on the simulation, with data corresponding to 1 fb$^{-1}$, the expected upper limit at the $95\%$ confidence level on the signal cross-section is 0.145 pb, with 0.294 pb at one standard deviation ($\sigma$) and 0.519 pb at 2$\sigma$. The expected upper limit at the $95\%$ confidence level on $\xi_{Ne}^{2}$ is 0.09 for a mass of 100 GeV.
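The step from the cross-section limit to the ξ² limit is a simple rescaling, assuming the signal rate scales linearly with ξ² as for single production through mixing: the reference point σ = 0.312 pb at ξ² = 0.19 maps the 0.145 pb expected limit onto ξ² ≲ 0.09, matching the quoted value. A sketch:

```python
def xi2_upper_limit(sigma_limit, sigma_ref=0.312, xi2_ref=0.19):
    """Translate an expected cross-section limit (pb) into a limit on
    the mixing parameter xi^2, assuming sigma is proportional to xi^2;
    the reference numbers are taken from the abstract (m_N = 100 GeV)."""
    return xi2_ref * sigma_limit / sigma_ref

# 0.145 pb -> xi^2 < ~0.09, reproducing the abstract's quoted limit
limit = xi2_upper_limit(0.145)
```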

Relevance: 10.00%

Abstract:

Software is in constant evolution, requiring continuous maintenance and development. Systems undergo changes throughout their lives, whether while adding new features or while fixing bugs in the code. As software evolves, its architecture tends to degrade over time and becomes less adaptable to new user requirements; it becomes more complex and harder to maintain. In some cases, developers prefer to redesign these architectures from scratch rather than extend their lifetimes, which leads to a significant increase in development and maintenance costs. Developers therefore need to understand the factors that lead to architectural degradation, in order to take proactive measures that ease future changes and slow the degradation. Architectural degradation occurs when developers who do not understand the original design of the software make changes to it. On the one hand, making changes without understanding their impact can lead to the introduction of bugs and to the premature retirement of the software. On the other hand, developers who lack knowledge and/or experience in solving a design problem can introduce design defects, which make software harder to maintain and evolve. Consequently, developers need mechanisms for understanding the impact of a change on the rest of the software, and tools for detecting design defects so that they can be corrected. In this thesis, we propose three main contributions. The first contribution concerns the assessment of software architecture degradation. This assessment uses a diagram-matching technique on, for example, class diagrams to identify structural changes between several versions of a software architecture. This step requires identifying class renamings; the first stage of our approach therefore identifies class renamings during the evolution of the software architecture. The second stage matches several versions of an architecture to identify its stable parts and those that are degrading; we propose bit-vector and clustering algorithms to analyse the correspondence between versions of an architecture. The third stage measures the degradation of the architecture during the evolution of the software; we propose a set of metrics over the stable parts of the software to evaluate this degradation. The second contribution relates to change impact analysis. In this context, we present a new metaphor inspired by seismology to identify the impact of changes. Our approach treats a change to a class as an earthquake that propagates through the software along a long chain of intermediary classes. It combines the analysis of the structural dependencies between classes with the analysis of their history (co-change relationships) to measure the extent of change propagation in the software, i.e., how a change propagates from the modified class to other classes of the software. The third contribution concerns the detection of design defects. We propose a metaphor inspired by the natural immune system: like any living creature, a system's design is exposed to diseases, which are design defects. Detection approaches are defence mechanisms for system designs. A natural immune system can detect similar pathogens with good precision, and this precision has inspired a family of classification algorithms called artificial immune systems (AIS), which we use to detect design defects. The various contributions were evaluated on open-source object-oriented systems, and the results allow us to draw the following conclusions:
• The Tunnel Triplets Metric (TTM) and Common Triplets Metric (CTM) give developers good indications of architectural degradation. A decrease in TTM indicates that the original design of the architecture has degraded; a stable TTM indicates the stability of the original design, meaning that the system is adapting to new user requirements.
• Seismology is a useful metaphor for change impact analysis. Changes propagate through systems like earthquakes: the impact of a change is greatest around the changed class and diminishes progressively with distance from that class. Our approach helps developers identify the impact of a change.
• The immune system is a useful metaphor for the detection of design defects. Experimental results showed that the precision and recall of our approach are comparable to or better than those of existing approaches.
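The seismology metaphor in the second contribution can be sketched as a propagation algorithm: impact starts at the changed class and attenuates along structural dependencies, weighted by historical co-change. This is an illustrative reconstruction of the idea, not the thesis's actual algorithm:

```python
from collections import deque

def change_impact(deps, cochange, changed, attenuation=0.5):
    """Propagate the impact of a change like a seismic wave: full impact
    at the changed class, attenuated at each structural-dependency hop
    and scaled by the historical co-change weight (in [0, 1]) of the edge."""
    impact = {changed: 1.0}
    queue = deque([changed])
    while queue:
        cls = queue.popleft()
        for nbr in deps.get(cls, ()):
            score = impact[cls] * attenuation * cochange.get((cls, nbr), 0.0)
            if score > impact.get(nbr, 0.0):
                impact[nbr] = score
                queue.append(nbr)
    return impact

# Hypothetical three-class dependency chain A -> B -> C
deps = {"A": ["B"], "B": ["C"], "C": []}
cochange = {("A", "B"): 1.0, ("B", "C"): 1.0}
impacts = change_impact(deps, cochange, "A")
# Impact diminishes with distance from the epicentre: A=1.0, B=0.5, C=0.25
```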