994 results for LARGE HADRON COLLIDER


Relevance:

100.00%

Publisher:

Abstract:

The production of a W boson in association with a single charm quark is studied using 4.6 fb−1 of pp collision data at √s = 7 TeV collected with the ATLAS detector at the Large Hadron Collider. In events in which a W boson decays to an electron or muon, the charm quark is tagged either by its semileptonic decay to a muon or by the presence of a charmed meson. The integrated and differential cross sections as a function of the pseudorapidity of the lepton from the W-boson decay are measured. Results are compared to the predictions of next-to-leading-order QCD calculations obtained from various parton distribution function parameterisations. The ratio of the strange-to-down sea-quark distributions is determined to be 0.96+0.26−0.30 at Q2 = 1.9 GeV2, which supports the hypothesis of an SU(3)-symmetric composition of the light-quark sea. Additionally, the cross-section ratio σ(W+ + c)/σ(W− + c) is compared to the predictions obtained using parton distribution function parameterisations with different assumptions about the s–s̄ quark asymmetry.

Relevance:

100.00%

Publisher:

Abstract:

A search for evidence of invisible-particle decay modes of a Higgs boson produced in association with a Z boson at the Large Hadron Collider is presented. No deviation from the standard model expectation is observed in 4.5 fb−1 (20.3 fb−1) of 7 (8) TeV pp collision data collected by the ATLAS experiment. Assuming the standard model rate for ZH production, an upper limit of 75% at the 95% confidence level is set on the branching ratio to invisible-particle decay modes of the Higgs boson at a mass of 125.5 GeV. The limit on the branching ratio is also interpreted as an upper limit on the allowed dark-matter–nucleon scattering cross section within a Higgs-portal dark matter scenario. Within the constraints of such a scenario, the results presented in this Letter provide the strongest available limits for low-mass dark matter candidates. Limits are also set on an additional neutral Higgs boson, in the mass range 110 < mH < 400 GeV, produced in association with a Z boson and decaying to invisible particles.

Relevance:

100.00%

Publisher:

Abstract:

A search is reported for a neutral Higgs boson in the decay channel H → Zγ, Z → ℓ+ℓ− (ℓ = e, μ), using 4.5 fb−1 of pp collisions at √s = 7 TeV and 20.3 fb−1 of pp collisions at √s = 8 TeV, recorded by the ATLAS detector at the CERN Large Hadron Collider. The observed distribution of the invariant mass of the three final-state particles, mℓℓγ, is consistent with the Standard Model hypothesis in the investigated mass range of 120–150 GeV. For a Higgs boson with a mass of 125.5 GeV, the observed upper limit at the 95% confidence level is 11 times the Standard Model expectation. Upper limits at the 95% confidence level, ranging between 0.13 and 0.5 pb for √s = 8 TeV, are set on the cross section times branching ratio of a neutral Higgs boson with mass in the range 120–150 GeV.

Relevance:

100.00%

Publisher:

Abstract:

Measurements of fiducial cross sections for the electroweak production of two jets in association with a Z boson are presented. The measurements are performed using 20.3 fb−1 of proton-proton collision data collected at a centre-of-mass energy of √s = 8 TeV by the ATLAS experiment at the Large Hadron Collider. The electroweak component is extracted by a fit to the dijet invariant mass distribution in a fiducial region chosen to enhance the electroweak contribution over the dominant background, in which the jets are produced via the strong interaction. The electroweak cross sections measured in two fiducial regions are in good agreement with the Standard Model expectations, and the background-only hypothesis is rejected with a significance above the 5σ level. The electroweak process includes the vector-boson-fusion production of a Z boson, and the data are used to place limits on anomalous triple gauge boson couplings. In addition, measurements of cross sections and differential distributions for inclusive Z-boson-plus-dijet production are performed in five fiducial regions, each with different sensitivity to the electroweak contribution. The results are corrected for detector effects and compared to predictions from the Sherpa and Powheg event generators.

Relevance:

100.00%

Publisher:

Abstract:

Using a sample of dilepton top-quark pair (tt̄) candidate events, a study is performed of the production of top-quark pairs together with heavy-flavor (HF) quarks, the sum of tt̄+b+X and tt̄+c+X, collectively referred to as tt̄ + HF. The data set used corresponds to an integrated luminosity of 4.7 fb−1 of proton-proton collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. The presence of additional HF (b or c) quarks in the tt̄ sample is inferred by looking for events with at least three b-tagged jets, where two are attributed to the b quarks from the tt̄ decays and the third to additional HF production. The dominant background to tt̄ + HF in this sample is tt̄+jet events in which a light-flavor jet is misidentified as a heavy-flavor jet. To determine the heavy- and light-flavor content of the additional b-tagged jets, a fit to the vertex mass distribution of b-tagged jets in the sample is performed. The result of the fit shows that 79 ± 14 (stat) ± 22 (syst) of the 105 selected extra b-tagged jets originate from HF quarks, 3 standard deviations away from the hypothesis of zero tt̄ + HF production. The result for extra HF production is quoted as a ratio (RHF) of the cross section for tt̄ + HF production to the cross section for tt̄ production with at least one additional jet. Both cross sections are measured in a fiducial kinematic region within the ATLAS acceptance. RHF is measured to be [6.2 ± 1.1 (stat) ± 1.8 (syst)]% for jets with pT > 25 GeV and |η| < 2.5, in agreement with the expectations from Monte Carlo generators.

Relevance:

100.00%

Publisher:

Abstract:

Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators in the Large Hadron Collider, the largest particle accelerator at CERN, where the positioning requirements and the highly radioactive operating environment are unique. The environment both forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine: step loss in the motors must be prevented, and maintenance must be foreseen in case of mechanical degradation. To make this possible, an approach is proposed for applying an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the considered case only drive-side data are available, the motor-side signals must be estimated. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model, plus separate motor-side signal estimators, a combination that is computationally less demanding. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally via implementation in a DSP-based drive. A test bench for evaluating its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved.
It is shown that the proposed method achieves position and load-torque estimates that allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. Such estimation algorithms require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when the currents are high enough to saturate the magnetic circuit. New model extensions are therefore proposed to obtain a more precise model of the motor regardless of the current level, whilst maintaining a low computational cost. A significant improvement in the model is achieved with these extensions, and their computational performance is compared to study the trade-off between model accuracy and computational cost. The applicability of the proposed model extensions is demonstrated via their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors: the collimator mechanics can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles that are gentler on the mechanics, but it requires position feedback. As mentioned already, the use of sensors in radioactive environments is very limited for reliability reasons. Sensorless control is a known option, but when the speed is very low or zero, as it is most of the time for the motors used in the LHC collimators, the loss of observability prevents its use.
To allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, enabling closed-loop control when the position sensors function correctly and open-loop control when a sensor fails. A different approach to dealing with the switched drive working over long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, owing to large oscillations in the drive-side current. The design of a stepper motor output filter that solves this problem is thus proposed. A two-stage filter, one stage dealing with the differential mode and the other with the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than that of the drive working without a long cable; the radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
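The Extended Kalman Filter loop described above can be sketched for a much-simplified sensorless drive: the state is rotor angle and speed, and the single measurement is a back-EMF-like signal that depends nonlinearly on the angle. The motion model, measurement model, pole-pair count and noise settings below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def ekf_step(x, P, z, dt, pole_pairs, Q, R):
    """One predict/update cycle of an Extended Kalman Filter tracking
    rotor angle and speed from a nonlinear measurement
    z = sin(pole_pairs * angle). Illustrative model, not the thesis drive."""
    # Predict: constant-speed motion model (angle integrates speed).
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: linearise the measurement around the predicted state.
    h = np.sin(pole_pairs * x[0])
    H = np.array([[pole_pairs * np.cos(pole_pairs * x[0]), 0.0]])
    S = H @ P @ H.T + R                 # innovation covariance (1x1)
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain (2x1)
    x = x + (K * (z - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

In the thesis the filter runs on estimated motor-side signals rather than direct measurements, with the long cable handled by the separate signal estimators described above.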

Relevance:

100.00%

Publisher:

Abstract:

The Large Hadron Collider is the world's largest and most powerful particle accelerator. The project is divided into phases: the first runs from 2009 until 2020, and the second will consist of the implementation of upgrades. One of these upgrades is to increase the collision rate, i.e. the luminosity; this is the main objective of one of the most important upgrade projects, the Hi-Lumi LHC project. The luminosity could be increased by using a new material, Nb3Sn, in the superconducting magnets placed at the interaction points, instead of the NbTi used at present. Before implementing it, many aspects must be analysed, one of which is the quality of the induced magnetic field. The tool used so far has been ROXIE, software developed at CERN by S. Russenschuck. One of the main features of the programme is its time-transient analysis, which is based on three mathematical models. These models are quite precise for fields above 1.5 T, but they are less accurate for lower fields. The aim of this project is therefore to evaluate a more accurate model, the Classical Preisach Model of Hysteresis, in order to better analyse the induced field quality in the new material Nb3Sn.

Relevance:

100.00%

Publisher:

Abstract:

ALICE is one of the four major experiments at the LHC particle accelerator, installed at the European laboratory CERN. The management committee of the LHC accelerator has recently approved an upgrade programme for this experiment. Among the upgrades planned for the coming years of the ALICE experiment are improving the resolution and tracking efficiency while maintaining the excellent particle identification ability, and increasing the read-out event rate to 100 kHz. To achieve this, the Time Projection Chamber (TPC) and Muon tracking (MCH) detectors must be updated, modifying their read-out electronics, which are not suitable for this migration. To overcome this limitation, the design, fabrication and experimental testing of a new ASIC named SAMPA has been proposed. This ASIC supports both positive and negative polarities, with 32 channels per chip and continuous data read-out, at lower power consumption than the previous versions. This work covers the design, fabrication and experimental testing of a read-out front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time and sensitivity, so that the new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a Semi-Gaussian shaper. To integrate 32 channels per chip, the front-end design requires small area and low power consumption, but at the same time low noise. A new technique for improving the noise and PSRR (Power Supply Rejection Ratio) of the CSA, with no power or area penalty, is therefore proposed in this work. The analysis and equations of the proposed circuit are presented, and were verified by electrical simulations and experimental tests of a produced chip with 5 channels of the designed front-end. The measured equivalent noise charge was below 550 electrons for a sensitivity of 30 mV/fC at an input capacitance of 18.5 pF.
The total core area of the front-end was 2300 μm × 150 μm, and the measured total power consumption was 9.1 mW per channel.
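The Semi-Gaussian shaper in the chain above has, in its idealised CR-RC^n form, the step response sketched below; the shaping order and peaking time used here are illustrative parameters, not the SAMPA design values:

```python
import numpy as np

def semi_gaussian_pulse(t, t_peak, n=4):
    """Ideal CR-RC^n semi-Gaussian shaper response to a step input from the
    charge-sensitive amplifier, normalised to unit peak amplitude:
    v(t) = (t/t_peak)^n * exp(n * (1 - t/t_peak)), peaking at t = t_peak.
    Order n and peaking time are illustrative, not the SAMPA values."""
    t = np.asarray(t, dtype=float)
    v = (t / t_peak) ** n * np.exp(n * (1.0 - t / t_peak))
    return np.where(t >= 0.0, v, 0.0)
```

A longer peaking time reduces series noise but increases pile-up at high event rates, which is the general trade-off behind making the peaking time configurable.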

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events through Monte Carlo techniques. In addition, physics analysis requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has met the technological challenges of the LHC, making its scientific programme possible. As data taking continues, and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of current computing capacity will soon be reached. One of the keys to overcoming these challenges in the coming decade, also in view of the budget constraints of the various national funding agencies, is to make efficient use of the available computing resources. This work aims to develop and evaluate tools to improve the understanding of how both production and analysis data are monitored in CMS. It therefore comprises two parts. The first, concerning distributed analysis, consists of developing a tool that quickly analyses the log files of completed job submissions, so that at the next submission the user can make better use of the computing resources. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A notable aspect of these improvements is the possibility of avoiding a high level of data aggregation at an early stage, and of collecting monitoring data at a high granularity that nevertheless allows later reprocessing and on-demand aggregation.
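The low-aggregation idea above can be sketched as storing raw per-job monitoring records and computing any grouping on demand, rather than fixing the aggregation at collection time; the record fields and values here are hypothetical, not the CMS monitoring schema:

```python
from collections import defaultdict

# Raw, fine-grained monitoring records, one per job. Keeping them
# unaggregated allows any later grouping ("on-demand" aggregation).
# Field names and values are hypothetical, not the CMS schema.
records = [
    {"site": "T2_IT_Bologna", "type": "analysis",   "exit_code": 0,  "cpu_h": 2.1},
    {"site": "T2_IT_Bologna", "type": "analysis",   "exit_code": 84, "cpu_h": 0.3},
    {"site": "T1_US_FNAL",    "type": "production", "exit_code": 0,  "cpu_h": 5.7},
]

def aggregate(records, key):
    """Group raw records by any field and compute job counts, failures
    and total CPU time, instead of pre-aggregating at collection time."""
    out = defaultdict(lambda: {"jobs": 0, "failed": 0, "cpu_h": 0.0})
    for r in records:
        g = out[r[key]]
        g["jobs"] += 1
        g["failed"] += r["exit_code"] != 0
        g["cpu_h"] += r["cpu_h"]
    return dict(out)
```

The same records can then be re-aggregated by site, by job type, or by any other field without re-collecting anything.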

Relevance:

100.00%

Publisher:

Abstract:

A measurement of the production cross sections of top-quark pairs in association with a W or Z boson is presented. The measurement uses 20.3 fb−1 of data from proton-proton collisions at √s = 8 TeV collected by the ATLAS detector at the Large Hadron Collider. Four different final states are considered: two opposite-sign leptons, two same-sign leptons, three leptons, and four leptons. The tt̄W and tt̄Z cross sections are simultaneously extracted using a maximum-likelihood fit over all the final states. The tt̄Z cross section is measured to be 176+58−52 fb, corresponding to a signal significance of 4.2σ. The tt̄W cross section is measured to be 369+100−91 fb, corresponding to a signal significance of 5.0σ. The results are consistent with next-to-leading-order calculations for the tt̄W and tt̄Z processes.
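A simultaneous extraction of two cross sections like the one above can be sketched as a two-parameter Poisson likelihood minimisation over the final-state channels; the channel yields below are invented for illustration, not the ATLAS inputs, and a simple grid scan stands in for the real minimiser:

```python
import numpy as np

def fit_two_signals(n_obs, s1, s2, bkg, grid=None):
    """Scan a 2D grid of signal-strength hypotheses (mu1, mu2) and return
    the pair minimising the Poisson negative log-likelihood summed over
    channels. Illustrative simplification: no systematics, grid scan
    instead of a gradient-based minimiser."""
    if grid is None:
        grid = np.linspace(0.0, 3.0, 121)          # step 0.025 in mu
    n_obs, s1, s2, bkg = map(np.asarray, (n_obs, s1, s2, bkg))
    mu1, mu2 = np.meshgrid(grid, grid, indexing="ij")
    # Expected yield per channel for every (mu1, mu2) hypothesis.
    lam = mu1[..., None] * s1 + mu2[..., None] * s2 + bkg
    nll = np.sum(lam - n_obs * np.log(lam), axis=-1)  # -log L up to constants
    i, j = np.unravel_index(np.argmin(nll), nll.shape)
    return grid[i], grid[j]
```

Fitting both strengths at once lets channels that are sensitive to both processes constrain them coherently, which is the point of a simultaneous fit over all final states.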

Relevance:

100.00%

Publisher:

Abstract:

A search for new heavy resonances decaying to boson pairs (WZ, WW or ZZ) using 20.3 fb−1 of proton-proton collision data at a center-of-mass energy of 8 TeV is presented. The data were recorded by the ATLAS detector at the Large Hadron Collider (LHC) in 2012. The analysis combines several search channels with leptonic, semi-leptonic and fully hadronic final states. The diboson invariant mass spectrum is studied for local excesses above the Standard Model background prediction, and no significant excess is observed in the combined analysis. 95% confidence-level limits are set on the cross section times branching ratio for three signal models: an extended gauge model with a heavy W boson, a bulk Randall-Sundrum model with a spin-2 graviton, and a simplified model with a heavy vector triplet. Among the individual search channels, the fully hadronic channel is presented in most detail; it uses a boson-tagging technique and jet-substructure cuts. Local excesses are found in the dijet mass distribution around 2 TeV, with a global significance of 2.5 standard deviations. This deviation from the Standard Model prediction has prompted many theoretical explanations, and the possibilities could be further explored using LHC Run 2 data.
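The quantification of a local excess like the 2 TeV one above can be sketched with the standard asymptotic counting-experiment formula; the event counts below are made up, and a real analysis would also account for background uncertainty and the look-elsewhere effect when quoting the global significance:

```python
import math

def local_significance(n_obs, b_exp):
    """Approximate local significance of n_obs observed events over b_exp
    expected background, using the asymptotic profile-likelihood formula
    Z = sqrt(2 * (n*ln(n/b) - (n - b))). Illustrative only: no background
    uncertainty, no look-elsewhere correction."""
    if n_obs <= b_exp:
        return 0.0
    return math.sqrt(2.0 * (n_obs * math.log(n_obs / b_exp)
                            - (n_obs - b_exp)))
```

For small excesses this reduces to the familiar (n − b)/√b estimate, but it remains accurate when the excess is a sizeable fraction of the background.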

Relevance:

100.00%

Publisher:

Abstract:

Searches for the supersymmetric partner of the top quark (stop) are motivated by natural supersymmetry, where the stop has to be light to cancel the large radiative corrections to the Higgs boson mass. This thesis presents three different searches for the stop at √s = 8 TeV and √s = 13 TeV using data from the ATLAS experiment at CERN's Large Hadron Collider. The thesis also includes a study of the primary vertex reconstruction performance in data and simulation at √s = 7 TeV using tt̄ and Z events. All stop searches presented are carried out in final states with a single lepton, four or more jets and large missing transverse energy. A search for direct stop pair production is conducted with 20.3 fb−1 of data at a center-of-mass energy of √s = 8 TeV. Several stop decay scenarios are considered, including those to a top quark and the lightest neutralino and to a bottom quark and the lightest chargino. The sensitivity of the analysis is also studied in the context of various phenomenological MSSM models in which more complex decay scenarios can be present. Two different analyses are carried out at √s = 13 TeV. The first one is a search for both gluino-mediated and direct stop pair production with 3.2 fb−1 of data, while the second one is a search for direct stop pair production with 13.2 fb−1 of data in the decay scenario to a bottom quark and the lightest chargino. The results of the analyses show no significant excess over the Standard Model predictions in the observed data. Consequently, exclusion limits are set at 95% CL on the masses of the stop and the lightest neutralino.

Relevance:

100.00%

Publisher:

Abstract:

Abstract: Heading into the 2020s, Physics and Astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will be taking a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the Universe currently eludes our physical understanding. Both the LHC and LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk will present an overview of the experiments and the data that is gathered, and outline the challenges in extracting information. The strategies employed are very similar to those of industrial data-science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry.

Speaker Biography: Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.