605 results for hadron-hadron interactions
Abstract:
A search for evidence of invisible-particle decay modes of a Higgs boson produced in association with a Z boson at the Large Hadron Collider is presented. No deviation from the standard model expectation is observed in 4.5 fb−1 (20.3 fb−1) of 7 (8) TeV pp collision data collected by the ATLAS experiment. Assuming the standard model rate for ZH production, an upper limit of 75%, at the 95% confidence level, is set on the branching ratio to invisible-particle decay modes of the Higgs boson at a mass of 125.5 GeV. The limit on the branching ratio is also interpreted in terms of an upper limit on the allowed dark matter-nucleon scattering cross section within a Higgs-portal dark matter scenario. Within the constraints of such a scenario, the results presented in this Letter provide the strongest available limits for low-mass dark matter candidates. Limits are also set on an additional neutral Higgs boson, in the mass range 110 < mH < 400 GeV, produced in association with a Z boson and decaying to invisible particles.
Abstract:
A search is reported for a neutral Higgs boson in the decay channel H → Zγ, Z → ℓ+ℓ− (ℓ = e, μ), using 4.5 fb−1 of pp collisions at √s = 7 TeV and 20.3 fb−1 of pp collisions at √s = 8 TeV, recorded by the ATLAS detector at the CERN Large Hadron Collider. The observed distribution of the invariant mass of the three final-state particles, mℓℓγ, is consistent with the Standard Model hypothesis in the investigated mass range of 120–150 GeV. For a Higgs boson with a mass of 125.5 GeV, the observed upper limit at the 95% confidence level is 11 times the Standard Model expectation. Upper limits are set on the cross section times branching ratio of a neutral Higgs boson with mass in the range 120–150 GeV, between 0.13 and 0.5 pb for √s = 8 TeV, at the 95% confidence level.
Abstract:
Measurements of fiducial cross sections for the electroweak production of two jets in association with a Z-boson are presented. The measurements are performed using 20.3 fb−1 of proton-proton collision data collected at a centre-of-mass energy of √s = 8 TeV by the ATLAS experiment at the Large Hadron Collider. The electroweak component is extracted by a fit to the dijet invariant mass distribution in a fiducial region chosen to enhance the electroweak contribution over the dominant background in which the jets are produced via the strong interaction. The electroweak cross sections measured in two fiducial regions are in good agreement with the Standard Model expectations and the background-only hypothesis is rejected with a significance above the 5σ level. The electroweak process includes the vector boson fusion production of a Z-boson and the data are used to place limits on anomalous triple gauge boson couplings. In addition, measurements of cross sections and differential distributions for inclusive Z-boson-plus-dijet production are performed in five fiducial regions, each with different sensitivity to the electroweak contribution. The results are corrected for detector effects and compared to predictions from the Sherpa and Powheg event generators.
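The key observable here is the dijet invariant mass. As a minimal illustration (not ATLAS code; jet kinematics and values are placeholders), the sketch below computes m_jj from two jets given as (pT, η, φ, m):

    import math

    def four_vector(pt, eta, phi, m):
        # Convert (pT, eta, phi, m) to Cartesian components (E, px, py, pz).
        px = pt * math.cos(phi)
        py = pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        e = math.sqrt(m * m + px * px + py * py + pz * pz)
        return e, px, py, pz

    def dijet_mass(jet1, jet2):
        # Invariant mass of the two-jet system: m_jj^2 = E^2 - |p|^2.
        e1, px1, py1, pz1 = four_vector(*jet1)
        e2, px2, py2, pz2 = four_vector(*jet2)
        e, px, py, pz = e1 + e2, px1 + px2, py1 + py2, pz1 + pz2
        return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

    # Two widely separated jets (illustrative values in GeV) give a large m_jj,
    # the region where the electroweak Zjj component is enhanced.
    print(dijet_mass((80.0, 2.5, 0.1, 10.0), (60.0, -2.0, 3.0, 8.0)))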
Abstract:
Using a sample of dilepton top-quark pair (tt̄) candidate events, a study is performed of the production of top-quark pairs together with heavy-flavor (HF) quarks, the sum of tt̄+b+X and tt̄+c+X, collectively referred to as tt̄+HF. The data set used corresponds to an integrated luminosity of 4.7 fb−1 of proton-proton collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. The presence of additional HF (b or c) quarks in the tt̄ sample is inferred by looking for events with at least three b-tagged jets, where two are attributed to the b quarks from the tt̄ decays and the third to additional HF production. The dominant background to tt̄+HF in this sample is tt̄+jet events in which a light-flavor jet is misidentified as a heavy-flavor jet. To determine the heavy- and light-flavor content of the additional b-tagged jets, a fit to the vertex mass distribution of b-tagged jets in the sample is performed. The result of the fit shows that 79 ± 14 (stat) ± 22 (syst) of the 105 selected extra b-tagged jets originate from HF quarks, 3 standard deviations away from the hypothesis of zero tt̄+HF production. The result for extra HF production is quoted as a ratio (RHF) of the cross section for tt̄+HF production to the cross section for tt̄ production with at least one additional jet. Both cross sections are measured in a fiducial kinematic region within the ATLAS acceptance. RHF is measured to be [6.2 ± 1.1 (stat) ± 1.8 (syst)]% for jets with pT > 25 GeV and |η| < 2.5, in agreement with the expectations from Monte Carlo generators.
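In the notation of this abstract, the quoted ratio is simply

    R_{\mathrm{HF}} = \frac{\sigma_{\mathrm{fid}}(t\bar{t}+\mathrm{HF})}{\sigma_{\mathrm{fid}}(t\bar{t}+\geq 1\ \mathrm{jet})} = [6.2 \pm 1.1\,(\mathrm{stat}) \pm 1.8\,(\mathrm{syst})]\,\%,

with both fiducial cross sections defined for jets with pT > 25 GeV and |η| < 2.5.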
Abstract:
The OPERA experiment is searching for νμ → ντ oscillations in appearance mode, i.e., via the direct detection of τ leptons in ντ charged-current interactions. The evidence of νμ → ντ appearance has been previously reported with three ντ candidate events using a sub-sample of data from the 2008–2012 runs. We report here a fourth ντ candidate event, with the τ decaying into a hadron, found after adding the 2012 run events without any muon in the final state to the data sample. Given the number of analyzed events and the low background, νμ → ντ oscillations are established with a significance of 4.2σ.
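The quoted significance follows from asking how often the background alone would produce at least the observed number of candidates. A minimal sketch of that Poisson counting argument, with purely illustrative numbers (the actual OPERA event counts, backgrounds and per-channel treatment are in the paper):

    from scipy.stats import norm, poisson

    def counting_significance(n_observed, b_expected):
        # One-sided p-value for seeing >= n_observed events from background b_expected,
        # converted into an equivalent number of Gaussian standard deviations.
        p_value = poisson.sf(n_observed - 1, b_expected)  # P(N >= n_observed | b)
        return norm.isf(p_value)

    # Illustrative values only: a handful of candidates over a small expected background.
    print(counting_significance(4, 0.25))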
Abstract:
NA61/SHINE (SPS Heavy Ion and Neutrino Experiment) is a multi-purpose experimental facility to study hadron production in hadron-proton, hadron-nucleus and nucleus-nucleus collisions at the CERN Super Proton Synchrotron. It recorded the first physics data with hadron beams in 2009 and with ion beams (secondary 7Be beams) in 2011. NA61/SHINE has greatly profited from the long development of the CERN proton and ion sources and the accelerator chain as well as the H2 beamline of the CERN North Area. The latter has recently been modified to also serve as a fragment separator as needed to produce the Be beams for NA61/SHINE. Numerous components of the NA61/SHINE set-up were inherited from its predecessors, in particular, the last one, the NA49 experiment. Important new detectors and upgrades of the legacy equipment were introduced by the NA61/SHINE Collaboration. This paper describes the state of the NA61/SHINE facility — the beams and the detector system — before the CERN Long Shutdown I, which started in March 2013.
Abstract:
We present experimental results on inclusive spectra and mean multiplicities of negatively charged pions produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c (√s = 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively). The measurements were performed using the large acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. Two-dimensional spectra are determined in terms of rapidity and transverse momentum. Their properties, such as the width of the rapidity distributions and the inverse slope parameter of the transverse mass spectra, are extracted and their collision energy dependences are presented. The results on inelastic p+p interactions are compared with the corresponding data on central Pb+Pb collisions measured by the NA49 experiment at the CERN SPS. The results presented in this paper are part of the NA61/SHINE ion program devoted to the study of the properties of the onset of deconfinement and the search for the critical point of strongly interacting matter. They are required for the interpretation of results on nucleus–nucleus and proton–nucleus collisions.
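For reference, the inverse slope parameter T quoted above is conventionally obtained from an exponential fit to the transverse mass spectrum at a given rapidity, schematically

    \frac{1}{m_T}\,\frac{d^2 n}{dm_T\,dy} \propto \exp\!\left(-\frac{m_T}{T}\right), \qquad m_T = \sqrt{p_T^2 + m_\pi^2},

so that T is the inverse of the slope of the logarithmic mT spectrum.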
Abstract:
We analyze transverse thrust in the framework of Soft Collinear Effective Theory and obtain a factorized expression for the cross section that permits resummation of terms enhanced in the dijet limit to arbitrary accuracy. The factorization theorem for this hadron-collider event-shape variable involves collinear emissions at different virtualities and suffers from a collinear anomaly. We compute all its ingredients at one-loop order and show that the two-loop input needed for next-to-next-to-leading logarithmic accuracy can be extracted numerically from existing fixed-order codes.
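For orientation, transverse thrust is built from the transverse momenta q⊥,i of the final-state particles as the maximum over transverse axes n̂ of Σᵢ|q⊥,i·n̂| / Σᵢ|q⊥,i|, so the dijet limit corresponds to T⊥ → 1 (equivalently τ⊥ = 1 − T⊥ → 0). A brute-force numerical sketch of this definition (illustrative only, not taken from the paper):

    import math

    def transverse_thrust(pts, phis, n_scan=3600):
        # T_perp = max over unit axes n of sum_i |q_perp,i . n| / sum_i |q_perp,i|.
        # A simple scan over axis angles is enough for an illustration.
        denom = sum(pts)
        best = 0.0
        for k in range(n_scan):
            alpha = math.pi * k / n_scan  # the thrust axis is defined only up to a sign
            num = sum(pt * abs(math.cos(phi - alpha)) for pt, phi in zip(pts, phis))
            best = max(best, num / denom)
        return best

    # Two back-to-back transverse momenta: a pencil-like dijet configuration, T_perp -> 1.
    print(transverse_thrust([50.0, 50.0], [0.0, math.pi]))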
Abstract:
The Tokai-to-Kamioka (T2K) neutrino experiment measures neutrino oscillations using an almost pure muon neutrino beam produced at the J-PARC accelerator facility. The T2K muon monitor was installed to measure the direction and stability of the muon beam which is produced together with the muon neutrino beam. The systematic error in the muon beam direction measurement was estimated, using data and MC simulation, to be 0.28 mrad. During beam operation, the proton beam has been controlled using measurements from the muon monitor, and the direction of the neutrino beam has been tuned to within 0.3 mrad with respect to the designed beam axis. In order to understand the muon beam properties, a measurement of the absolute muon yield at the muon monitor was conducted with an emulsion detector. The number of muon tracks was measured to be (4.06 ± 0.05) × 10⁴ cm⁻², normalized to 4 × 10¹¹ protons on target with 250 kA horn operation. The result is in agreement with the prediction, which is corrected using hadron production data.
Abstract:
Ion beam therapy is a valuable method for the treatment of deep-seated and radio-resistant tumors thanks to the favorable depth-dose distribution characterized by the Bragg peak. Hadrontherapy facilities take advantage of the specific ion range, resulting in a highly conformal dose in the target volume, while the dose in critical organs is reduced compared to photon therapy. The necessity to monitor the delivery precision, i.e. the ion range, is unquestionable, and different approaches have therefore been investigated, such as the detection of prompt photons or of the annihilation photons of positron-emitting nuclei created during the therapeutic treatment. Based on the measurement of the induced β+ activity, our group has developed various in-beam PET prototypes: the one under test is composed of two planar detector heads, each consisting of four modules with a total active area of 10 × 10 cm². A single detector module is made of a LYSO crystal matrix coupled to a position-sensitive photomultiplier and is read out by dedicated front-end electronics. A preliminary data taking was performed at the Italian National Centre for Oncological Hadron Therapy (CNAO, Pavia), using proton beams in the energy range of 93–112 MeV impinging on a plastic phantom. The measured activity profiles are presented and compared with simulations based on the FLUKA Monte Carlo package.
Abstract:
Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators of the Large Hadron Collider, the largest particle accelerator at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and also prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring step loss in the motors to be prevented and maintenance to be foreseen in case of mechanical degradation. To make this possible, an approach is proposed for applying an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltages are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the considered case only drive-side data are available, the motor-side signals must be estimated. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model and separate motor-side signal estimators, a combination which is less demanding computationally. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally via implementation in a DSP-based drive. A test bench used to evaluate its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved. It is shown that the proposed method provides position and load-torque estimates that allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. Such estimation algorithms require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when the currents are high enough to saturate the magnetic circuit. New model extensions are proposed to obtain a more precise model of the motor independently of the current level, while maintaining a low computational cost. It is shown that these extensions significantly improve the model fit, and their computational performance is compared in order to weigh model accuracy against computational cost. The applicability of the proposed model extensions is demonstrated through their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors: the mechanics of the collimators can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles, gentler on the mechanics, to be applied, but it requires position feedback. As mentioned above, the use of sensors in radioactive environments is very limited for reliability reasons.
Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use. To allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, so that closed-loop control is used when the position sensors function correctly and open-loop control when there is a sensor failure. A different approach to dealing with a switched-mode drive working over long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, due to the large oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is therefore proposed. A two-stage filter, one stage devoted to the differential mode and the other to the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than with the drive working without a long cable, while the radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
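As a generic illustration of the estimation strategy described above (not the author's implementation; state vector, models and noise matrices are placeholders), the predict/update cycle of an Extended Kalman Filter can be written as:

    import numpy as np

    def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
        # One Extended Kalman Filter cycle.
        # x, P: previous state estimate and covariance; u: control input; z: measurement.
        # f, h: nonlinear state-transition and measurement functions; F_jac, H_jac: their Jacobians.
        x_pred = f(x, u)                      # predict the state with the motor model
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q              # propagate the covariance
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))  # correct with the measurement residual
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

In the scheme proposed here, f would be the (extended) electrical and mechanical motor model, x would collect quantities such as the phase currents, speed, rotor position and load torque, and z would be the motor-side signals reconstructed by the separate estimators from drive-side measurements.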
Abstract:
The Large Hadron Collider is the world's largest and most powerful particle accelerator. The project is divided into phases; the first runs from 2009 until 2020, after which a series of upgrades will be implemented. One of these upgrades is to increase the collision rate, i.e. the luminosity. This is the main objective of the Hi-Lumi LHC project, one of the most important projects carrying out the upgrades. The luminosity increase relies on a new superconductor, Nb3Sn, replacing the NbTi currently used in the magnets at the interaction points. Before it can be implemented, many aspects must be analysed, one of them being the quality of the induced magnetic field. The tool used so far has been ROXIE, software developed at CERN by S. Russenschuck. One of its main features is a time-transient analysis based on three magnetization models, which are quite precise for fields above 1.5 T but not very accurate for lower fields. The aim of this project is therefore to evaluate a more accurate model, the classical Preisach model of hysteresis, in order to better analyse the quality of the field induced in the new Nb3Sn magnets.
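For context, the classical Preisach model represents the magnetization as a weighted superposition of elementary relay hysterons, each switching up at a threshold α and down at a threshold β ≤ α. A minimal discrete sketch (uniform hysteron density, illustrative values, unrelated to the ROXIE implementation):

    import numpy as np

    class PreisachModel:
        # Classical Preisach hysteresis: M = sum over hysterons of weight * state, where each
        # relay switches to +1 when the field exceeds alpha and to -1 when it drops below beta.
        def __init__(self, n=50, h_max=1.0):
            alpha, beta = np.meshgrid(np.linspace(-h_max, h_max, n),
                                      np.linspace(-h_max, h_max, n))
            mask = alpha >= beta                       # only the Preisach half-plane is populated
            self.alpha, self.beta = alpha[mask], beta[mask]
            self.weight = np.full(self.alpha.shape, 1.0 / mask.sum())  # uniform density (illustrative)
            self.state = -np.ones_like(self.alpha)                     # start in negative saturation

        def apply_field(self, h):
            # Update every hysteron for the new field value, then return the magnetization.
            self.state[h >= self.alpha] = 1.0
            self.state[h <= self.beta] = -1.0
            return float(np.dot(self.weight, self.state))

    model = PreisachModel()
    up = [model.apply_field(h) for h in np.linspace(-1.0, 1.0, 11)]
    down = [model.apply_field(h) for h in np.linspace(1.0, -1.0, 11)]
    print(up)    # ascending branch
    print(down)  # descending branch differs: hysteresis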
Abstract:
With the advent of the new extragalactic deuterium observations, Big Bang nucleosynthesis (BBN) is on the verge of undergoing a transformation. In the past, the emphasis has been on demonstrating the concordance of the BBN model with the abundances of the light isotopes extrapolated back to their primordial values by using stellar and galactic evolution theories. As a direct measure of primordial deuterium is converged upon, the nature of the field will shift to using the much more precise primordial D/H to constrain the more flexible stellar and galactic evolution models (although the question of potential systematic error in the ⁴He abundance determinations remains open). The remarkable success of the theory to date in establishing the concordance has led to the very robust conclusion of BBN regarding the baryon density. This robustness remains even through major model variations such as an assumed first-order quark-hadron phase transition. The BBN constraints on the cosmological baryon density are reviewed; they demonstrate that the bulk of the baryons are dark and also that the bulk of the matter in the universe is nonbaryonic. Comparisons are made with baryon-density estimates from Lyman-α clouds, x-ray gas in clusters, and the microwave anisotropy.
Abstract:
Electromagnetic energy injected into the universe above a few hundred TeV is expected to pile up as γ radiation in a relatively narrow energy interval below 100 TeV due to its interaction with the 2.7 K background radiation. We present an upper limit (90% C.L.) on the ratio of primary γ rays to charged cosmic rays in the energy interval 65–160 TeV (80–200 TeV) of 10.3 × 10⁻³ (7.8 × 10⁻³). Data from the HEGRA cosmic-ray detector complex, consisting of a wide-angle Čerenkov array (AIROBICC) measuring the lateral distribution of air Čerenkov light and a scintillator array, were used with a novel method to discriminate γ-ray and hadron induced air showers. If the presently unmeasured universal far-infrared background radiation is not too intense, the result rules out a topological-defect origin of ultrahigh-energy cosmic rays for masses of the X particle released by the defects equal to or larger than about 10¹⁶ GeV.
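The "few hundred TeV" scale quoted above is set by the threshold for pair production on the microwave background. As a rough check, taking a typical 2.7 K photon energy ε ≈ 6 × 10⁻⁴ eV and a head-on γγ → e⁺e⁻ collision,

    E_\gamma^{\mathrm{thr}} \simeq \frac{(m_e c^2)^2}{\varepsilon} = \frac{(0.511\ \mathrm{MeV})^2}{6\times 10^{-4}\ \mathrm{eV}} \approx 4\times 10^{14}\ \mathrm{eV} \approx 400\ \mathrm{TeV},

while photons well below this threshold interact far less with the 2.7 K background, which is why the reprocessed energy accumulates below roughly 100 TeV.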
Abstract:
ALICE is one of the four major experiments at the LHC particle accelerator, installed at the European laboratory CERN. The LHC management committee has recently approved an upgrade program for this experiment. Among the upgrades planned for the coming years of the ALICE experiment are an improvement of the resolution and tracking efficiency, while maintaining the excellent particle identification ability, and an increase of the read-out event rate to 100 kHz. To achieve this, it is necessary to upgrade the Time Projection Chamber (TPC) and the Muon Tracking (MCH) detectors by replacing their read-out electronics, which are not suitable for this migration. To overcome this limitation, the design, fabrication and experimental testing of a new ASIC named SAMPA have been proposed. This ASIC will support both positive and negative polarities, with 32 channels per chip and continuous data readout, with lower power consumption than the previous versions. This work covers the design, fabrication and experimental testing of a readout front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time and sensitivity. The new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a semi-Gaussian shaper. In order to integrate 32 channels per chip, the design of the proposed front-end requires small area and low power consumption, but at the same time low noise. In this sense, a new technique for improving the noise and PSRR (Power Supply Rejection Ratio) of the CSA without any power or area penalty is proposed in this work. The analysis and equations of the proposed circuit are presented and were verified by electrical simulations and experimental tests of a produced chip containing 5 channels of the designed front-end. The measured equivalent noise charge was <550 e− for a sensitivity of 30 mV/fC at an input capacitance of 18.5 pF. The total core area of the front-end was 2300 μm × 150 μm, and the measured total power consumption was 9.1 mW per channel.
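As a generic illustration of the signal chain described above (an ideal charge-sensitive amplifier followed by a CR-RC^n semi-Gaussian shaper; the shaper order and all component values are placeholders, not SAMPA design values), the sketch below computes the shaped pulse produced by an input charge Q:

    import numpy as np
    from scipy import signal

    def csa_shaper_pulse(q_fc=10.0, c_f_pf=1.0, tau_ns=160.0, n=4, t_max_ns=2000.0):
        # Ideal CSA: an input charge Q appears as a voltage step of height Q / C_f (fC / pF = mV).
        v_step_mv = q_fc / c_f_pf
        # CR-RC^n semi-Gaussian shaper: H(s) = s*tau / (1 + s*tau)^(n+1); its step response
        # peaks at t = n*tau, which sets the peaking time.
        num = [tau_ns, 0.0]
        den = np.polynomial.polynomial.polypow([1.0, tau_ns], n + 1)[::-1]
        t = np.linspace(0.0, t_max_ns, 4000)
        t, y = signal.step(signal.TransferFunction(num, den), T=t)
        return t, v_step_mv * y

    t, v = csa_shaper_pulse()
    print("peaking time ~ %.0f ns, peak amplitude ~ %.2f mV" % (t[np.argmax(v)], v.max()))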