1000 results for TEV system


Relevance: 30.00%

Abstract:

A search for direct chargino production in anomaly-mediated supersymmetry breaking scenarios is performed in pp collisions at √s = 7 TeV using 4.7 fb⁻¹ of data collected with the ATLAS experiment at the LHC. In these models, the lightest chargino is predicted to have a lifetime long enough to be detected in the tracking detectors of collider experiments. This analysis explores such models by searching for chargino decays that result in tracks with few associated hits in the outer region of the tracking system. The transverse-momentum spectrum of candidate tracks is found to be consistent with the expectation from Standard Model background processes, and constraints on chargino properties are obtained.


The results of a search for pair production of light top squarks are presented, using 4.7 fb⁻¹ of √s = 7 TeV proton-proton collisions collected with the ATLAS detector at the Large Hadron Collider. This search targets top squarks with masses similar to, or lighter than, the top quark mass. Final states containing exclusively one or two leptons (e, μ), large missing transverse momentum, light-flavour jets and b-jets are used to reconstruct the top squark pair system. Event-based mass scale variables are used to separate the signal from a large tt̄ background. No excess over the Standard Model expectations is found. The results are interpreted in the framework of the Minimal Supersymmetric Standard Model, assuming the top squark decays exclusively to a chargino and a b-quark, while requiring different mass relationships between the supersymmetric particles in the decay chain. Light top squarks with masses between 123 and 167 GeV are excluded for neutralino masses around 55 GeV.


Measurements of inclusive jet suppression in heavy-ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at √s_NN = 2.76 TeV corresponding to an integrated luminosity of approximately 7 μb⁻¹, ATLAS has measured jets with a calorimeter system over the pseudorapidity interval |η| < 2.1 and over the transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the anti-kt algorithm with values of the distance parameter, which determines the nominal jet radius, of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of the jet yield is characterized by the jet "central-to-peripheral ratio", R_CP. Jet production is found to be suppressed by approximately a factor of two in the 10% most central collisions relative to peripheral collisions. R_CP varies smoothly with centrality as characterized by the number of participating nucleons. The observed suppression is only weakly dependent on jet radius and transverse momentum. These results provide the first direct measurement of inclusive jet suppression in heavy-ion collisions and complement previous measurements of dijet transverse energy imbalance at the LHC.
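The quoted factor-of-two suppression can be made concrete with a minimal sketch of the R_CP definition; all yields and ⟨N_coll⟩ values below are invented for illustration, not ATLAS data.

```python
# Central-to-peripheral ratio R_CP: the per-event jet yield in a central
# centrality bin over the peripheral one, each scaled by the mean number
# of binary nucleon-nucleon collisions <N_coll> for that class.

def r_cp(yield_central, ncoll_central, yield_peripheral, ncoll_peripheral):
    """R_CP = (yield_c / <N_coll>_c) / (yield_p / <N_coll>_p)."""
    return (yield_central / ncoll_central) / (yield_peripheral / ncoll_peripheral)

# A factor-of-two suppression in central collisions shows up as R_CP = 0.5
# (illustrative numbers):
suppressed = r_cp(yield_central=750.0, ncoll_central=1500.0,
                  yield_peripheral=10.0, ncoll_peripheral=10.0)
```

An unsuppressed, binary-collision-scaled yield would give R_CP = 1 by construction.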


Measurements are presented of differential cross-sections for top quark pair production in pp collisions at √s = 7 TeV, relative to the total inclusive top quark pair production cross-section. A data sample of 2.05 fb⁻¹ recorded by the ATLAS detector at the Large Hadron Collider is used. Relative differential cross-sections are derived as a function of the invariant mass, the transverse momentum and the rapidity of the top quark pair system. Events are selected in the lepton (electron or muon) + jets channel. The background-subtracted differential distributions are corrected for detector effects, normalized to the total inclusive top quark pair production cross-section and compared to theoretical predictions. The measurement uncertainties typically range between 10% and 20% and are generally dominated by systematic effects. No significant deviations from the Standard Model expectations are observed.
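A relative differential cross-section of this kind is just the unfolded distribution divided by the bin width and by the total, so it integrates to one. A minimal sketch with invented bin contents:

```python
def relative_diff_xsec(counts, bin_widths):
    """(1/sigma) * dsigma/dX per bin: divide each background-subtracted,
    unfolded bin content by its width and by the total yield."""
    total = sum(counts)
    return [c / (w * total) for c, w in zip(counts, bin_widths)]

# Invented bins of increasing width (e.g. an invariant-mass spectrum);
# the normalized distribution integrates to unity.
widths = [50.0, 50.0, 100.0, 200.0]
vals = relative_diff_xsec([40.0, 30.0, 20.0, 10.0], widths)
integral = sum(v * w for v, w in zip(vals, widths))
```

Normalizing this way cancels the luminosity uncertainty, which is one reason relative measurements are quoted.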


A search for new particles that decay into top quark pairs (tt̄) is performed with the ATLAS experiment at the LHC using an integrated luminosity of 4.7 fb⁻¹ of proton-proton (pp) collision data collected at a center-of-mass energy √s = 7 TeV. In the tt̄ → WbWb decay, the lepton plus jets final state is used, where one W boson decays leptonically and the other hadronically. The tt̄ system is reconstructed using both small-radius and large-radius jets, the latter being supplemented by a jet substructure analysis. A search for local excesses in the number of data events compared to the Standard Model expectation in the tt̄ invariant mass spectrum is performed. No evidence for a tt̄ resonance is found, and 95% credibility-level limits on the production rate are determined for massive states predicted in two benchmark models. The upper limits on the cross section times branching ratio of a narrow Z' resonance range from 5.1 pb for a boson mass of 0.5 TeV to 0.03 pb for a mass of 3 TeV. A narrow leptophobic topcolor Z' resonance with a mass below 1.74 TeV is excluded. Limits are also derived for a broad color-octet resonance with Γ/m = 15.3%. A Kaluza-Klein excitation of the gluon in a Randall-Sundrum model is excluded for masses below 2.07 TeV.


In order to study further the long-range correlations ("ridge") observed recently in p+Pb collisions at √s_NN = 5.02 TeV, the second-order azimuthal anisotropy parameter of charged particles, v_2, has been measured with the cumulant method using the ATLAS detector at the LHC. In a data sample corresponding to an integrated luminosity of approximately 1 μb⁻¹, the parameter v_2 has been obtained using two- and four-particle cumulants over the pseudorapidity range |η| < 2.5. The results are presented as a function of transverse momentum and of the event activity, defined in terms of the transverse energy summed over 3.1 < η < 4.9 in the direction of the Pb beam. Despite the much smaller size of the p+Pb system, the large magnitude of v_2 and its similarity to hydrodynamic predictions provide additional evidence for the importance of final-state effects in p+Pb reactions.
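In the cumulant method, the two-particle cumulant can be built from the flow vector Q_2 = Σ exp(2iφ), with v_2{2} = √⟨⟨2⟩⟩; the four-particle version, v_2{4} = (2⟨⟨2⟩⟩² − ⟨⟨4⟩⟩)^{1/4}, suppresses non-flow correlations. A toy sketch of the two-particle case (illustrative code, not the ATLAS analysis):

```python
import cmath
import math
import random

def c2_two(phis):
    """Single-event two-particle cumulant <2> = (|Q_2|^2 - M) / (M*(M-1))."""
    m = len(phis)
    q2 = sum(cmath.exp(2j * phi) for phi in phis)
    return (abs(q2) ** 2 - m) / (m * (m - 1))

def toy_event(mult, v2, rng):
    """Accept-reject sampling of dN/dphi proportional to 1 + 2*v2*cos(2*phi)."""
    phis = []
    while len(phis) < mult:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            phis.append(phi)
    return phis

# Average <2> over many toy events; the estimate converges to the input v2 = 0.10.
rng = random.Random(42)
events = [toy_event(500, 0.10, rng) for _ in range(200)]
v2_est = math.sqrt(sum(c2_two(e) for e in events) / len(events))
```

Averaging over events before taking the square root is what makes the estimator statistically well behaved.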


A search is presented for direct chargino production based on a disappearing-track signature, using 20.3 fb⁻¹ of proton-proton collisions at √s = 8 TeV collected with the ATLAS experiment at the LHC. In anomaly-mediated supersymmetry breaking (AMSB) models, the lightest chargino is nearly mass degenerate with the lightest neutralino and its lifetime is long enough for it to be detected in the tracking detectors, by identifying decays that result in tracks with no associated hits in the outer region of the tracking system. Some models with supersymmetry also predict charginos with a significant lifetime. This analysis attains sensitivity for charginos with a lifetime between 0.1 and 10 ns, and significantly surpasses the reach of the LEP experiments. No significant excess above the background expectation is observed for candidate tracks with large transverse momentum, and constraints on chargino properties are obtained. In the AMSB scenarios, a chargino mass below 270 GeV is excluded at 95% confidence level.
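The quoted lifetime sensitivity relates to tracker geometry through the lab-frame decay length βγcτ. A back-of-the-envelope sketch; the 500 mm "outer tracker" radius and the kinematic values are invented illustrations, not the ATLAS geometry:

```python
import math

C_MM_PER_NS = 299.792458  # speed of light in mm per ns

def decay_prob_before(radius_mm, tau_ns, beta_gamma):
    """Probability that a particle with proper lifetime tau_ns and boost
    beta*gamma decays before reaching radius_mm (exponential decay law)."""
    mean_decay_length = beta_gamma * C_MM_PER_NS * tau_ns
    return 1.0 - math.exp(-radius_mm / mean_decay_length)

# tau = 1 ns with beta*gamma = 1 gives a mean decay length of ~300 mm, so such
# a chargino usually decays inside the tracking volume, leaving a short
# "disappearing" track.
p = decay_prob_before(radius_mm=500.0, tau_ns=1.0, beta_gamma=1.0)
```

Much shorter lifetimes decay before leaving a reconstructable track, and much longer ones exit the tracker, which is why the sensitivity peaks in the ~1 ns range.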


This paper presents a measurement of the top quark pair (tt̄) production charge asymmetry A_C using 4.7 fb⁻¹ of proton-proton collisions at a centre-of-mass energy √s = 7 TeV collected by the ATLAS detector at the LHC. A tt̄-enriched sample of events with a single lepton (electron or muon), missing transverse momentum and at least four high transverse momentum jets, of which at least one is tagged as coming from a b-quark, is selected. A likelihood fit is used to reconstruct the event kinematics. A Bayesian unfolding procedure is employed to estimate A_C at the parton level. The measured value of the production charge asymmetry is A_C = 0.006 ± 0.010, where the uncertainty includes both the statistical and the systematic components. Differential A_C measurements as a function of the invariant mass, the rapidity and the transverse momentum of the tt̄ system are also presented. In addition, A_C is measured for a subset of events with a large tt̄ velocity, where physics beyond the Standard Model could contribute. All measurements are consistent with the Standard Model predictions.
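The charge asymmetry is commonly defined from the per-event difference of absolute rapidities, Δ|y| = |y_t| − |y_t̄|. A minimal sketch, with an invented toy sample tuned to reproduce the measured central value:

```python
def charge_asymmetry(delta_abs_y):
    """A_C = (N(d|y| > 0) - N(d|y| < 0)) / (N(d|y| > 0) + N(d|y| < 0)),
    where d|y| = |y_top| - |y_antitop| for each event."""
    n_pos = sum(1 for d in delta_abs_y if d > 0)
    n_neg = sum(1 for d in delta_abs_y if d < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# Toy: 5030 events with positive and 4970 with negative d|y| give A_C = 0.006.
toy = [0.4] * 5030 + [-0.4] * 4970
a_c = charge_asymmetry(toy)
```

In the real measurement the counts are taken after background subtraction and unfolding, so A_C refers to the parton level rather than raw event counts.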


Measurements of fiducial and differential cross sections of Higgs boson production in the H →ZZ* → 4ℓ decay channel are presented. The cross sections are determined within a fiducial phase space and corrected for detection efficiency and resolution effects. They are based on 20.3 fb−1 of pp collision data, produced at √s = 8 TeV centre-of-mass energy at the LHC and recorded by the ATLAS detector. The differential measurements are performed in bins of transverse momentum and rapidity of the four-lepton system, the invariant mass of the subleading lepton pair and the decay angle of the leading lepton pair with respect to the beam line in the four-lepton rest frame, as well as the number of jets and the transverse momentum of the leading jet. The measured cross sections are compared to selected theoretical calculations of the Standard Model expectations. No significant deviation from any of the tested predictions is found.
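Schematically, a fiducial cross-section of this kind is the background-subtracted yield, corrected by a factor C for detection efficiency and resolution, divided by the integrated luminosity. A sketch with invented numbers (not the ATLAS values):

```python
def fiducial_xsec_fb(n_obs, n_bkg, c_factor, lumi_fb):
    """sigma_fid = (N_obs - N_bkg) / (C * L), in fb for L in fb^-1.
    C corrects the reconstructed yield back to the fiducial phase space."""
    return (n_obs - n_bkg) / (c_factor * lumi_fb)

# Illustrative: 37 observed events, 10 expected background, C = 0.55,
# and the quoted 20.3 fb^-1 of luminosity.
sigma = fiducial_xsec_fb(n_obs=37, n_bkg=10.0, c_factor=0.55, lumi_fb=20.3)
```

Defining the measurement in a fiducial volume keeps the extrapolation to unmeasured phase space, and hence the theory dependence, to a minimum.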


Translation of the main title: Das Tor der Umkehr


Translated, revised and edited by ... Juda Krois


Gamma-ray astronomy studies the most energetic particles arriving at the Earth from space. These γ rays are not generated by thermal processes in ordinary stars, but by particle-acceleration mechanisms in celestial objects such as active galactic nuclei, pulsars and supernovae, or by possible dark-matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the physical processes taking place in them and to develop theoretical models that describe their behaviour faithfully. The problem with observing γ rays is that they are absorbed by the upper layers of the atmosphere and do not reach the surface (otherwise, the Earth would be uninhabitable). There are therefore only two ways to observe γ rays: flying detectors on satellites, or observing the secondary effects that γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate further, progressively less energetic, secondary particles. While these particles still have enough energy to travel faster than light does in air, they produce a bluish glow known as Cherenkov radiation, lasting a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) can detect this Cherenkov radiation and even image the shape of the Cherenkov shower. From these images it is possible to determine the main characteristics of the original γ ray and, with enough rays, to deduce important characteristics of the object that emitted them, hundreds of light-years away.
Detecting Cherenkov showers produced by γ rays is, however, far from easy. Showers generated by low-energy γ photons emit few photons, and only for a few nanoseconds, while those from high-energy γ rays, although they produce more electrons and last longer, become less likely as their energy increases. This leads to two development lines for Cherenkov telescopes: observing low-energy showers requires large reflectors that collect as many as possible of the few photons these showers produce, whereas high-energy showers can be detected with small telescopes, which should nevertheless cover a large area on the ground to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was born with the goal of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, in which more than 27 countries participate, intends to build an observatory in each hemisphere, each equipped with 4 large telescopes (LSTs), around 30 medium-sized ones (MSTs) and up to 70 small ones (SSTs). Such an array achieves two goals. First, by drastically increasing the collection area with respect to current IACTs, more γ rays will be detected in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analysed far more precisely thanks to stereoscopic techniques. This thesis gathers several technical developments contributed to the medium-sized and large telescopes of CTA, specifically to the trigger system.
Because Cherenkov showers are so brief, the systems that digitize and read out the data from each pixel have to run at very high frequencies (≈ 1 GHz), which makes continuous operation unfeasible, since the amount of stored data would be unmanageable. Instead, the analogue signals are sampled and the samples kept in a circular buffer a few µs deep. While the signals remain in the buffer, the trigger system performs a fast analysis of the incoming signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or whether it can be ignored, allowing the buffer to be overwritten. The decision is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, unlike NSB (night-sky background) photons, which arrive randomly. To detect large showers it suffices to check that more than a certain number of pixels in a region have detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is better to take into account how many photons have been detected in each pixel (a technique known as sumtrigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it analogically sums the signals received by each pixel in a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The system designed allows trigger regions of selectable size, 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any excess of light in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse.
In the most basic version of the trigger system, this pulse is distributed across the whole camera through a carefully designed distribution system, so that all clusters are read out at the same time regardless of their position in the camera. The trigger system thus stores a complete image of the camera every time the number of photons set as threshold is exceeded in a trigger region. This way of operating has two main drawbacks, however. First, the shower almost always occupies only a small area of the camera, so many pixels carrying no information at all are stored. With many telescopes, as will be the case for CTA, the amount of useless information stored for this reason can be considerable. Second, each trigger stores only a few nanoseconds around the trigger instant, whereas large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is an event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, and for a longer time. This avoids storing information from empty pixels, and the still images of the showers can become short "videos" representing the temporal development of the shower. This new scheme is called COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5. An important problem affecting sumtrigger schemes such as the one presented in this thesis is that, for the signals coming from each pixel to be summed correctly, they must all take the same time to reach the adder.
The photomultipliers used in each pixel introduce different delays that must be compensated for the sums to be performed correctly. The effect of these delays has been studied, and a system to compensate them has been developed. Finally, the next level of the trigger system, needed to distinguish Cherenkov showers effectively from the NSB, consists of looking for simultaneous (or very nearly simultaneous) triggers in neighbouring telescopes. A system called the Trigger Interface Board (TIB) has been developed with this function, along with other inter-system interface duties. It consists of a module mounted on the camera of each LST or MST and connected by optical fibres to the neighbouring telescopes. When a telescope produces a local trigger, it is sent to all connected neighbours and vice versa, so that each telescope knows whether its neighbours have triggered. Once the delay differences due to propagation along the optical fibres, and of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, coincidences are sought and, if the trigger condition is met, the camera in question is read out, synchronized with the local trigger. Although the whole trigger system is the fruit of a collaboration between several groups, principally IFAE, CIEMAT, ICC-UB and UCM in Spain, with help from French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author was the lead engineer. For this reason, this thesis includes extensive technical information on these systems.
There are currently important future development lines concerning both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will lead to interesting improvements on the current designs over the coming years and will hopefully benefit the whole scientific community participating in CTA. ABSTRACT: γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle-acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or as a result of dark-matter annihilation processes. The γ rays coming from these objects and their characteristics provide scientists with valuable information with which they try to understand the underlying physics of these objects, as well as to develop theoretical models able to describe them accurately. The problem with observing γ rays is that they are absorbed in the upper layers of the atmosphere, so they do not reach the Earth's surface (otherwise the planet would be uninhabitable). There are therefore only two ways to observe γ rays: using detectors on board satellites, or observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles generate in turn more particles, each time with less energy. While these particles are still energetic enough to travel faster than light does in air, they produce a bluish radiation known as Cherenkov light, lasting a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers.
From these images it is possible to determine the main parameters of the original γ ray and, with enough γ rays, to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. Showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although they contain more photons and last longer, are much more unlikely. This results in two clearly differentiated development lines for IACTs: detecting low-energy showers requires big reflectors to collect as many as possible of the few photons these showers produce; on the contrary, small telescopes can detect high-energy showers, but a large area on the ground should be covered to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was created with the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large size telescopes (LSTs), around 30 middle size telescopes (MSTs) and up to 70 small size telescopes (SSTs). With such an array, two targets are achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Second, when a Cherenkov shower is observed by several telescopes at the same time, it can be analysed much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium and large size telescopes of CTA. As Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈ 1 GHz).
This makes it unfeasible to read out data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the samples in a ring buffer a few µs deep. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or whether it can be ignored, allowing the buffer to be overwritten. The decision to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels at close times, in contrast to the random arrival of NSB photons. Checking that more than a certain number of pixels in a trigger region have detected more than a certain number of photons within a certain time window is enough to detect large showers. However, to optimize the sensitivity to low-energy showers it is more convenient to also take into account how many photons have been detected in each pixel (the sumtrigger technique). The trigger system presented in this thesis is intended to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received by each pixel in the trigger region and compares the sum with a threshold which can be directly expressed as a number of detected photons (photoelectrons). The system allows trigger regions of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each) to be selected, with extensive overlap between them. In this way, any light excess inside a compact region of 14, 21 or 28 pixels is detected and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera by means of a complex distribution system, in such a way that all the clusters are read out at the same time, independently of their position in the camera.
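The sum-trigger decision described above can be sketched in a few lines. This is a toy software model, not the analog electronics: signals are summed per overlapping trigger region and compared with a photoelectron threshold, and all pixel values are invented.

```python
def sum_trigger(pixel_pe, regions, threshold_pe):
    """Return the indices of trigger regions whose summed signal
    (in photoelectrons) exceeds the threshold."""
    return [i for i, region in enumerate(regions)
            if sum(pixel_pe[p] for p in region) > threshold_pe]

# Toy 8-pixel camera with two overlapping 4-pixel trigger regions:
pixel_pe = [0.2, 0.1, 3.0, 4.0, 5.0, 0.3, 0.1, 0.0]
regions = [(0, 1, 2, 3), (2, 3, 4, 5)]
fired = sum_trigger(pixel_pe, regions, threshold_pe=10.0)  # only region 1 fires
```

Because the regions overlap, a compact light excess straddling a cluster boundary still ends up fully contained in at least one region.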
Thus, the readout saves a complete camera image whenever the number of photoelectrons set as threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case for CTA, the amount of useless stored information can be very high. Second, each trigger stores only a few nanoseconds of information around the trigger time, whereas large showers can last considerably longer, losing information to the temporal cut. To solve both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, for a longer time. In this way, information from empty pixels is not stored, and the fixed images of the showers become short "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it has been described in depth in chapter 5. An important problem affecting sumtrigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must all arrive at the adder at the same time. The photomultipliers used in each pixel introduce different delays which must be compensated to perform the additions properly. The effect of these delays has been analysed, and a delay-compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with others related to interfacing between systems, has been implemented in a system named the Trigger Interface Board (TIB).
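The two-threshold idea can be sketched as follows; the thresholds and region sums are invented toy values. The high threshold gates the camera trigger, and only regions above the low threshold are then read out.

```python
def colibri_regions_to_read(region_sums, high_thr, low_thr):
    """If any region exceeds the high threshold, the camera triggers and
    every region above the low threshold is read out; otherwise nothing is."""
    if not any(s > high_thr for s in region_sums):
        return []
    return [i for i, s in enumerate(region_sums) if s > low_thr]

# Region 2 fires the camera; regions 1, 2 and 3 are above the low threshold:
to_read = colibri_regions_to_read([1.0, 6.0, 25.0, 8.0, 0.5],
                                  high_thr=20.0, low_thr=5.0)
```

The low threshold thus delimits the spatial (and, with a longer window, temporal) extent of what is stored, rather than deciding whether anything is stored at all.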
This system comprises one module, placed inside each LST and MST camera, connected to the neighbouring telescopes through optical fibres. When a telescope produces a local trigger, it is sent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibres, and of the Cherenkov photons in the air depending on the pointing direction, have been compensated, the TIB looks for coincidences and, if the trigger condition is met, the camera is read out a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, CIEMAT, ICC-UB and UCM in Spain, with some help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems designed by the author. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements on the current designs during the following years, benefiting the whole scientific community participating in CTA.
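The stereo-coincidence logic of the TIB can be sketched as a delay-corrected window comparison. All times, delays and the window width below are invented toy values; the real board implements this on nanosecond-scale signals in firmware.

```python
def stereo_coincidence(local_t_ns, neighbour_ts_ns, delays_ns, window_ns):
    """True if any neighbour trigger, after subtracting its known fibre and
    light-path delay, lands within the coincidence window of the local one."""
    return any(abs((t - d) - local_t_ns) <= window_ns
               for t, d in zip(neighbour_ts_ns, delays_ns))

# A neighbour fires 52 ns after the local trigger, 50 ns of which is known
# propagation delay, so it falls inside a 5 ns coincidence window:
coinc = stereo_coincidence(100.0, [152.0, 400.0], [50.0, 50.0], window_ns=5.0)
```

Requiring such inter-telescope coincidences strongly suppresses accidental triggers from NSB fluctuations, which are uncorrelated between telescopes.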


Heavy-ion collisions are a powerful tool to study hot and dense QCD matter, the so-called Quark-Gluon Plasma (QGP). Since heavy quarks (charm and beauty) are dominantly produced in the early stages of the collision, they experience the complete evolution of the system. Measuring electrons from heavy-flavour hadron decays is one way to study the interaction of these particles with the QGP. With ALICE at the LHC, electrons can be identified with high efficiency and purity. A strong suppression of heavy-flavour decay electrons has been observed at high p_T in Pb-Pb collisions at 2.76 TeV. Measurements in p-Pb collisions are crucial to understand cold-nuclear-matter effects on heavy-flavour production in heavy-ion collisions. The spectrum of electrons from the decays of hadrons containing charm and beauty was measured in p-Pb collisions at √s_NN = 5.02 TeV. The heavy-flavour decay electrons were measured using the Time Projection Chamber (TPC) and the Electromagnetic Calorimeter (EMCal) of ALICE in the transverse-momentum range 2 < p_T < 20 GeV/c. The measurements were performed on two different data sets: minimum-bias collisions and EMCal-triggered data. The non-heavy-flavour electron background was removed using an invariant-mass method. The results are compatible with unity (R_pPb ≈ 1), indicating that cold-nuclear-matter effects on electrons from heavy-flavour hadron decays are small in p-Pb collisions.
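The invariant-mass rejection of "photonic" electrons (from photon conversions and Dalitz decays) pairs each candidate with opposite-charge partners and cuts on low pair mass. A sketch of the idea; the 0.15 GeV cut and all four-vectors are illustrative, not the ALICE selection values.

```python
import math

def inv_mass(p1, p2):
    """Invariant mass of a pair from (E, px, py, pz) four-vectors (GeV)."""
    e = p1[0] + p2[0]
    px, py, pz = p1[1] + p2[1], p1[2] + p2[2], p1[3] + p2[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def is_photonic(candidate, opposite_charge_partners, m_cut=0.15):
    """Tag the electron as background if any opposite-charge pair is light."""
    return any(inv_mass(candidate, p) < m_cut for p in opposite_charge_partners)

# Nearly collinear pair (conversion-like) versus a wide-angle pair:
conv_partner = (1.0, 0.05, 0.0, math.sqrt(1.0 - 0.05 ** 2))
hard_partner = (1.0, 0.0, 0.0, -1.0)
photonic = is_photonic((2.0, 0.0, 0.0, 2.0), [conv_partner])      # True
nonphotonic = is_photonic((2.0, 0.0, 0.0, 2.0), [hard_partner])   # False
```

Conversion and Dalitz pairs are nearly massless and collinear, so a low-mass opposite-charge pairing tags them with little loss of genuine heavy-flavour electrons.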


In the first part of this thesis we search for physics beyond the Standard Model through the search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at a center-of-mass energy √s = 8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb⁻¹.

In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s = 8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process with a two-photon final state.

The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found in Standard Model production of the Higgs boson.
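For reference, the razor variables pair the event's visible objects into two "megajets" and build a mass scale M_R together with a ratio R² that is small for QCD-like topologies and large when genuine missing momentum recoils against the visible system. A sketch of the standard definitions with invented toy inputs:

```python
import math

def razor_variables(j1, j2, met_xy):
    """Razor M_R and R^2 from two megajet momenta (px, py, pz) and the
    missing transverse momentum vector (mex, mey), for massless megajets:

      M_R   = sqrt((|p1| + |p2|)^2 - (p1z + p2z)^2)
      M_T^R = sqrt((MET*(pT1 + pT2) - MET_vec . (pT1_vec + pT2_vec)) / 2)
      R^2   = (M_T^R / M_R)^2
    """
    mag1 = math.sqrt(j1[0] ** 2 + j1[1] ** 2 + j1[2] ** 2)
    mag2 = math.sqrt(j2[0] ** 2 + j2[1] ** 2 + j2[2] ** 2)
    m_r = math.sqrt((mag1 + mag2) ** 2 - (j1[2] + j2[2]) ** 2)
    met = math.hypot(met_xy[0], met_xy[1])
    pt1 = math.hypot(j1[0], j1[1])
    pt2 = math.hypot(j2[0], j2[1])
    dot = met_xy[0] * (j1[0] + j2[0]) + met_xy[1] * (j1[1] + j2[1])
    m_tr = math.sqrt((met * (pt1 + pt2) - dot) / 2.0)
    return m_r, (m_tr / m_r) ** 2

# Back-to-back 100 GeV megajets with 50 GeV of MET perpendicular to them:
m_r, r2 = razor_variables((100.0, 0.0, 0.0), (-100.0, 0.0, 0.0), (0.0, 50.0))
```

QCD dijet events cluster at small R², so cutting in the (M_R, R²) plane isolates the atypical topologies the search targets.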

We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation that has a p-value of 10⁻³ and a local significance of 3.35σ. This background prediction comprises 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of estimating the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
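The significance of 5 observed events on a background of 0.54 can be cross-checked with a Poisson tail probability. Note the quoted p-value of 10⁻³ also folds in the ±0.28 background uncertainty; the simplified sketch below uses only the central value, so it gives a smaller number.

```python
import math

def poisson_tail(n_obs, mean_bkg):
    """p-value: P(N >= n_obs) for a Poisson background with the given mean
    (background uncertainty ignored in this simplified sketch)."""
    cdf = sum(math.exp(-mean_bkg) * mean_bkg ** k / math.factorial(k)
              for k in range(n_obs))
    return 1.0 - cdf

# A few times 1e-4 for the central background value of 0.54:
p_central = poisson_tail(5, 0.54)
```

Marginalizing over the background uncertainty (e.g. with a Gaussian or log-normal prior on the mean) inflates this tail probability toward the quoted value.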

In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, whose ground state is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and thereby train the classifier. We find that we are able to do this successfully in fewer than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with more complicated machine-learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
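The mapping from training to a ground-state search can be illustrated with a tiny QUBO of the kind used in such quantum-annealing classification studies: selection bits s_i choose weak classifiers, and the energy H(s) = Σ_ij C_ij s_i s_j − 2 Σ_i C_iy s_i + λ Σ_i s_i trades correlations among classifiers (C_ij) against their correlation with the labels (C_iy). In this sketch an exhaustive search stands in for the annealer, and all weak-classifier outputs are invented.

```python
import itertools

def energy(s, c_ij, c_iy, lam):
    """QUBO energy for a 0/1 selection vector s."""
    n = len(s)
    e = sum(s[i] * s[j] * c_ij[i][j] for i in range(n) for j in range(n))
    e -= 2.0 * sum(s[i] * c_iy[i] for i in range(n))
    return e + lam * sum(s)

def ground_state(c_ij, c_iy, lam):
    """Exhaustive stand-in for the annealer: minimize over all 2^n states."""
    n = len(c_iy)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda s: energy(s, c_ij, c_iy, lam))

# Toy: three weak classifiers evaluated on 4 events with labels y in {-1, +1}.
y = [1, 1, -1, -1]
weak = [[1, 1, -1, -1],    # perfectly correlated with y
        [1, -1, -1, 1],    # uncorrelated
        [-1, -1, 1, 1]]    # anti-correlated
c_ij = [[sum(a * b for a, b in zip(ci, cj)) for cj in weak] for ci in weak]
c_iy = [sum(c, ) if False else sum(c * t for c, t in zip(ci, y)) for ci in weak]
best = ground_state(c_ij, c_iy, lam=0.5)  # selects only the good classifier
```

The ground state keeps the label-correlated classifier and drops the redundant and anti-correlated ones; on an annealer, low-lying excited states provide the "additional solutions" mentioned above.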