866 results for LHC


Relevance:

10.00%

Abstract:

A measurement of the cross section for the production of isolated prompt photons in pp collisions at a center-of-mass energy √s = 7 TeV is presented. The results are based on an integrated luminosity of 4.6 fb⁻¹ collected with the ATLAS detector at the LHC. The cross section is measured as a function of photon pseudorapidity η^γ and transverse energy E_T^γ in the kinematic range 100 ≤ E_T^γ < 1000 GeV and in the regions |η^γ| < 1.37 and 1.52 ≤ |η^γ| < 2.37. The results are compared to leading-order parton-shower Monte Carlo models and next-to-leading-order perturbative QCD calculations. The next-to-leading-order perturbative QCD calculations agree well with the measured cross sections as a function of E_T^γ and η^γ.

Relevance:

10.00%

Abstract:

A measurement of the mass difference between top and anti-top quarks is presented. In a 4.7 fb⁻¹ data sample of proton–proton collisions at √s = 7 TeV recorded with the ATLAS detector at the LHC, events consistent with tt̄ production and decay into a single-charged-lepton final state are reconstructed. For each event, the mass difference between the top and anti-top quark candidates is calculated. A two-b-tag requirement is used in order to reduce the background contribution. A maximum-likelihood fit to these per-event mass differences yields Δm ≡ m_t − m_t̄ = 0.67 ± 0.61 (stat) ± 0.41 (syst) GeV, consistent with CPT invariance.
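As an illustrative check of the quoted consistency with CPT invariance (not part of the original analysis), the statistical and systematic uncertainties can be combined in quadrature to see how far the central value of Δm lies from zero:

```python
import math

def significance(value, stat, syst):
    """Combine stat and syst uncertainties in quadrature and
    return (total uncertainty, |value| / total uncertainty)."""
    total = math.hypot(stat, syst)
    return total, abs(value) / total

total, n_sigma = significance(0.67, 0.61, 0.41)
print(f"total uncertainty = {total:.2f} GeV, deviation = {n_sigma:.2f} sigma")
```

The deviation comes out below one standard deviation, which is what "consistent with CPT invariance" means here.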

Relevance:

10.00%

Abstract:

Among resummation techniques for perturbative QCD in the context of collider and flavor physics, soft-collinear effective theory (SCET) has emerged as both a powerful and versatile tool, having been applied to a large variety of processes, from B-meson decays to jet production at the LHC. This book provides a concise, pedagogical introduction to this technique. It discusses the expansion of Feynman diagrams around the high-energy limit, followed by the explicit construction of the effective Lagrangian - first for a scalar theory, then for QCD. The underlying concepts are illustrated with the quark vector form factor at large momentum transfer, and the formalism is applied to compute soft-gluon resummation and to perform transverse-momentum resummation for the Drell-Yan process utilizing renormalization group evolution in SCET. Finally, the infrared structure of n-point gauge-theory amplitudes is analyzed by relating them to effective-theory operators. This text is suitable for graduate students and non-specialist researchers alike as it requires only basic knowledge of perturbative QCD.

Relevance:

10.00%

Abstract:

We present a precise theoretical prediction for the signal–background interference process gg (→ h*) → ZZ, which is useful for constraining the Higgs boson decay width and for measuring the Higgs couplings to the SM particles. The approximate NNLO K-factor lies in the range 2.05–2.45 (1.85–2.25), depending on M_ZZ, at the 8 (13) TeV LHC, and soft-gluon resummation increases the approximate NNLO result by about 10% at both the 8 TeV and 13 TeV LHC. The theoretical uncertainties, including the scale, the uncalculated multi-loop amplitudes of the background, and PDF+αs, are roughly O(10%) at NNLL′. We also confirm that the approximate K-factors in the interference and the pure-signal processes are the same.
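To illustrate how the quoted numbers combine, the sketch below scales a placeholder leading-order interference cross section by the approximate NNLO K-factor range and the ~10% resummation enhancement; the LO value is arbitrary and not a result from the paper:

```python
def nnlo_estimate(sigma_lo, k_factor, resummation_boost=0.10):
    """Scale an LO cross section by an approximate NNLO K-factor,
    then apply the soft-gluon-resummation enhancement (~10%)."""
    sigma_nnlo = sigma_lo * k_factor
    return sigma_nnlo * (1.0 + resummation_boost)

sigma_lo = 1.0  # placeholder LO interference cross section (arbitrary units)
low = nnlo_estimate(sigma_lo, 2.05)   # lower end of the 8 TeV K-factor range
high = nnlo_estimate(sigma_lo, 2.45)  # upper end of the 8 TeV K-factor range
print(f"8 TeV resummed estimate spans {low:.3f} to {high:.3f} (arbitrary units)")
```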

Relevance:

10.00%

Abstract:

Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators in the Large Hadron Collider, the largest particle accelerator at CERN, where the positioning requirements and the highly radioactive operating environment are unique. The radioactive environment forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and also prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring step loss in the motors to be prevented and maintenance to be planned in case of mechanical degradation. To make this possible, an approach is proposed for applying an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the considered case only drive-side data are available, the motor-side signals must be estimated. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model and separate motor-side signal estimators, a combination which is computationally less demanding. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally via implementation in a DSP-based drive. A test bench to evaluate its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved.
It is shown that the proposed method achieves position and load-torque estimates that allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. These estimation algorithms often require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when the currents are high enough to saturate the magnetic circuit. New model extensions are proposed in order to obtain a more precise model of the motor independently of the current level, whilst maintaining a low computational cost. It is shown that a significant improvement in the model is achieved with these extensions, and their computational performance is compared to study the trade-off between model accuracy and computational cost. The applicability of the proposed model extensions is demonstrated through their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors: the mechanics of the collimators can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles, gentler on the mechanics, to be applied, but requires position feedback. As already mentioned, the use of sensors in radioactive environments is very limited for reliability reasons. Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use.
To allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, allowing closed-loop control when the position sensors function correctly and open-loop control when there is a sensor failure. A different approach to dealing with the switched drive working with long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, due to the high oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is therefore proposed. A two-stage filter, one stage devoted to the differential mode and the other to the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than that of the drive working without a long cable; the radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
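The estimator described above follows the standard Extended Kalman Filter predict/update cycle, applied to the motor model and fed with estimated motor-side signals. The sketch below shows that generic cycle only; the motor model, noise covariances, and signal estimators developed in the thesis are not reproduced here:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One Extended Kalman Filter iteration.
    x, P : state estimate and covariance
    u, z : control input and measurement (here, estimated motor-side signals)
    f, F : process model and its Jacobian
    h, H : measurement model and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state through the nonlinear process model.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct the prediction with the measurement.
    H_k = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

In the thesis' scheme the full cable model is kept out of this loop: the motor-side estimators supply `z`, which keeps the per-iteration cost low enough for an embedded real-time platform.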

Relevance:

10.00%

Abstract:

The Large Hadron Collider is the world's largest and most powerful particle accelerator. The project is divided into phases: the first runs from 2009 to 2020, and the second will consist of the implementation of upgrades. One of these upgrades is to increase the collision rate, i.e. the luminosity. This is the main objective of one of the most important projects carrying out the upgrades: the Hi-Lumi LHC project. The luminosity could be increased by using a new material, Nb3Sn, in the superconducting magnets placed at the interaction points, instead of the NbTi used at present. Before implementing it, many aspects must be analysed; one of them is the magnetic field quality. The tool used so far has been ROXIE, software developed at CERN by S. Russenschuck. One of the main features of the programme is the time-transient analysis, which is based on three mathematical models. These are quite precise for fields above 1.5 T, but not very accurate for lower fields. The aim of this project is therefore to evaluate a more accurate model, the Classical Preisach Model of Hysteresis, in order to better analyse the induced field quality in the new material Nb3Sn. 
Resumen: The Large Hadron Collider is the largest circular particle accelerator in the world and one of the largest research projects. The first operating phase runs from 2009 to 2020, when the next phase will begin. During the first period, improvements have been devised for implementation in the second phase. One of them is the increase of the proton collision rate per crossing, which is the main objective of one of the projects carrying out the upgrades to be implemented in 2020: Hi-Lumi LHC. The NbTi superconducting magnets in the two main interaction regions will be replaced by Nb3Sn magnets, a substitution that requires a thorough preliminary study. Among other factors, one to be analysed is the quality of the magnetic field. The tool used is ROXIE, the software developed by S. Russenschuck at CERN. It is based on three magnetization models, which are precise for fields above 1.5 T but less so for lower fields. This project aims to evaluate the implementation of a fourth model, the classical Preisach model of hysteresis, enabling a better analysis of the quality of the field induced by the material to be used in some of the future magnets.

Relevance:

10.00%

Abstract:

Over time, and driven by technological development, the number and complexity of the experiments carried out at the well-known particle-physics laboratory CERN have reached unprecedented levels, and this evolution accelerates with each new discovery. Proof of this eagerness to unveil the inner workings and secrets of the universe is the 13 TeV collision that took place last May. It not only set the complex's expectations for this new operating cycle unambiguously, but also fired the starting gun for the race that culminated in the discovery of the pentaquarks. From an engineering standpoint, this improvement in the capabilities of the complex implies sharply harder requirements on the systems employed. Consequently and inevitably, the conditions inside the accelerator migrate toward ever more drastic levels. So much so that the radiation levels currently reached notably restrict personnel access to the accelerator, which translates into longer maintenance and repair times. At present these delays are mitigated by using remotely operated mobile robots, most notably the one known by the acronym TIM (Train for RP Survey and visual inspection in LHC). This train, made up of five wagons, travels along the particle accelerator measuring radiation and oxygen levels while providing visual feedback. This project proposes improving inspection and maintenance tasks by integrating a 6-degree-of-freedom robotic manipulator into one of the wagons of this train.
This yields a system able to move to any point of the accelerator in record time and perform a wide range of maintenance tasks, from simple visual inspections to complex jobs such as unscrewing or extracting damaged components. In addition, a second development is proposed to support the design: the construction of a high-fidelity robotic simulator based on ROS and Gazebo. This software tool also serves complementary purposes: it is a springboard for future developments aimed at improving the mechanical system; it provides a low-cost tool with which to analyse the integration of new robotic milestones; and, finally, it allows evaluating the adoption of a new programming paradigm in which ROS is embedded.

Relevance:

10.00%

Abstract:

Previous studies of photosynthetic acclimation to elevated CO2 have focused on the most recently expanded, sunlit leaves in the canopy. We examined acclimation in a vertical profile of leaves through a canopy of wheat (Triticum aestivum L.). The crop was grown at an elevated CO2 partial pressure of 55 Pa within a replicated field experiment using free-air CO2 enrichment. Gas exchange was used to estimate in vivo carboxylation capacity and the maximum rate of ribulose-1,5-bisphosphate-limited photosynthesis. Net photosynthetic CO2 uptake was measured for leaves in situ within the canopy. Leaf contents of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco), light-harvesting-complex (LHC) proteins, and total N were determined. Elevated CO2 did not affect carboxylation capacity in the most recently expanded leaves but led to a decrease in lower, shaded leaves during grain development. Despite this acclimation, in situ photosynthetic CO2 uptake remained higher under elevated CO2. Acclimation at elevated CO2 was accompanied by decreases in both Rubisco and total leaf N contents and an increase in LHC content. Elevated CO2 led to a larger increase in LHC/Rubisco in lower canopy leaves than in the uppermost leaf. Acclimation of leaf photosynthesis to elevated CO2 therefore depended on both vertical position within the canopy and the developmental stage.

Relevance:

10.00%

Abstract:

In the dinoflagellate Amphidinium carterae, photoadaptation involves changes in the transcription of genes encoding both of the major classes of light-harvesting proteins, the peridinin chlorophyll a proteins (PCPs) and the major a/c-containing intrinsic light-harvesting proteins (LHCs). PCP and LHC transcript levels were increased up to 86- and 6-fold higher, respectively, under low-light conditions relative to cells grown at high illumination. These increases in transcript abundance were accompanied by decreases in the extent of methylation of CpG and CpNpG motifs within or near PCP- and LHC-coding regions. Cytosine methylation levels in A. carterae are therefore nonstatic and may vary with environmental conditions in a manner suggestive of involvement in the regulation of gene expression. However, chemically induced undermethylation was insufficient to activate transcription, because treatment with two methylation inhibitors had no effect on PCP mRNA or protein levels. Regulation of gene activity through changes in DNA methylation has traditionally been assumed to be restricted to higher eukaryotes (deuterostomes and green plants); however, the atypically large genomes of dinoflagellates may have generated the requirement for systems of this type in a relatively "primitive" organism. Dinoflagellates may therefore provide a unique perspective on the evolution of eukaryotic DNA-methylation systems.

Relevance:

10.00%

Abstract:

The azimuthal asymmetry in the risetime of signals in Auger surface detector stations is a source of information on shower development. The azimuthal asymmetry is due to a combination of the longitudinal evolution of the shower and geometrical effects related to the angles of incidence of the particles into the detectors. The magnitude of the effect depends upon the zenith angle and state of development of the shower and thus provides a novel observable, (sec θ)_max, sensitive to the mass composition of cosmic rays above 3 × 10¹⁸ eV. By comparing measurements with predictions from shower simulations, we find for both of our adopted models of hadronic physics (QGSJETII-04 and EPOS-LHC) an indication that the mean cosmic-ray mass increases slowly with energy, as has been inferred from other studies. However, the mass estimates are dependent on the shower model and on the range of distance from the shower core selected. Thus the method has uncovered further deficiencies in our understanding of shower modeling that must be resolved before the mass composition can be inferred from (sec θ)_max.
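Illustratively, an observable like (sec θ)_max can be extracted by fitting the asymmetry magnitude as a function of sec θ and taking the position of the maximum of the fitted curve. The sketch below does this with a simple parabola on synthetic data; the functional form and numbers are purely illustrative, not those of the Auger analysis:

```python
import numpy as np

def sec_theta_max(sec_theta, asymmetry):
    """Fit a quadratic to asymmetry vs sec(theta) and return the
    position of its maximum, i.e. (sec theta)_max."""
    a, b, _c = np.polyfit(sec_theta, asymmetry, 2)
    return -b / (2.0 * a)  # vertex of the fitted parabola

# Synthetic asymmetry values peaking at sec(theta) = 1.5 (illustration only).
sec_theta = np.linspace(1.0, 2.0, 11)
asym = 0.3 - 0.4 * (sec_theta - 1.5) ** 2
print(sec_theta_max(sec_theta, asym))
```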

Relevance:

10.00%

Abstract:

ALICE is one of the four major experiments at the LHC particle accelerator, installed in the European laboratory CERN. The LHC management committee has recently approved an upgrade programme for this experiment. The upgrades planned for the coming years of the ALICE experiment include improving the resolution and tracking efficiency while maintaining the excellent particle-identification ability, and increasing the read-out event rate to 100 kHz. To achieve this, the Time Projection Chamber (TPC) and Muon tracking (MCH) detectors must be updated by modifying their read-out electronics, which are not suitable for this migration. To overcome this limitation, the design, fabrication and experimental testing of a new ASIC named SAMPA has been proposed. This ASIC supports both positive and negative polarities, with 32 channels per chip and continuous data read-out, at a smaller power consumption than the previous versions. This work covers the design, fabrication and experimental testing of a read-out front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time and sensitivity. The new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a semi-Gaussian shaper. To integrate 32 channels per chip, the front-end design requires small area and low power consumption, but at the same time low noise. A new technique for improving the noise and PSRR (Power Supply Rejection Ratio) of the CSA, without any power or area penalty, is therefore proposed in this work. The analysis and equations of the proposed circuit are presented and were verified by electrical simulations and by experimental tests of a fabricated chip containing 5 channels of the designed front-end. The measured equivalent noise charge was < 550 e⁻ for a sensitivity of 30 mV/fC at an input capacitance of 18.5 pF.
The total core area of the front-end was 2300 μm × 150 μm, and the measured total power consumption was 9.1 mW per channel.
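A CSA followed by a semi-Gaussian shaper is a standard front-end chain, and the shaper's impulse response is commonly modeled as h(t) ∝ (t/τ)^n · exp(−n(t/τ − 1)), which peaks at the peaking time t = τ. The sketch below evaluates that textbook response; the shaper order n and the peaking time used are illustrative assumptions, not the SAMPA design values:

```python
import math

def semi_gaussian(t, tau, n=4, peak=1.0):
    """CR-(RC)^n semi-Gaussian shaper response to an impulse of charge,
    normalized so the output equals `peak` at the peaking time t = tau."""
    if t <= 0.0:
        return 0.0
    x = t / tau
    return peak * (x ** n) * math.exp(n * (1.0 - x))

tau = 160e-9  # illustrative 160 ns peaking time (assumption)
print(semi_gaussian(tau, tau))       # response at the peak
print(semi_gaussian(0.5 * tau, tau)) # response before the peak (smaller)
```

Making n, τ and the gain configurable is what "configurable polarity, peaking time and sensitivity" amounts to at the behavioral level.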


Relevance:

10.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

10.00%

Abstract:

For the first time, the Z0 boson angular distribution in the center-of-momentum frame is measured in proton-proton collisions at √s = 7 TeV at the CERN LHC. The data sample, recorded with the CMS detector, corresponds to an integrated luminosity of approximately 36 pb⁻¹. Events in which there is a Z0 and at least one jet, with a jet transverse momentum threshold of 20 GeV and absolute jet rapidity less than 2.4, are selected for the analysis. Only the Z0's muon decay channel is studied. Within experimental and theoretical uncertainties, the measured angular distribution is in agreement with next-to-leading order perturbative QCD predictions.

Relevance:

10.00%

Abstract:

At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events through Monte Carlo techniques. In addition, physics analysis requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has met the technological challenges of the LHC, making its scientific programme possible. With data taking continuing and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of the current computing capacity will soon be reached. One of the keys to overcoming these challenges in the coming decade, also in the light of the budget constraints of the various national funding agencies, is to use the available computing resources efficiently. This work aims to develop and evaluate tools to improve the understanding of how both production and analysis data are monitored in CMS. It therefore comprises two parts. The first, concerning distributed analysis, is the development of a tool that quickly analyses the log files of finished job submissions so that, at the next submission, the user can make better use of the computing resources. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A noteworthy aspect of these improvements is the possibility of avoiding a high level of data aggregation at an early stage, and of collecting monitoring data at high granularity while still allowing later reprocessing and on-demand aggregation.
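The on-demand aggregation idea in the second part can be sketched as follows: keep the raw per-job monitoring records and group them only when a summary is requested. The field names and values below are invented for illustration and are not the CMS monitoring schema:

```python
from collections import defaultdict

def aggregate(records, keys=("site", "status")):
    """On-demand aggregation: group raw per-job monitoring records
    by the requested keys, counting jobs and summing wall-clock time."""
    summary = defaultdict(lambda: {"jobs": 0, "wall_time": 0.0})
    for rec in records:
        group = tuple(rec[k] for k in keys)
        summary[group]["jobs"] += 1
        summary[group]["wall_time"] += rec["wall_time"]
    return dict(summary)

jobs = [  # hypothetical raw records, one per finished job
    {"site": "T2_IT_Bologna", "status": "ok",     "wall_time": 3600.0},
    {"site": "T2_IT_Bologna", "status": "failed", "wall_time": 120.0},
    {"site": "T1_US_FNAL",    "status": "ok",     "wall_time": 7200.0},
]
print(aggregate(jobs))
```

Because the raw records are retained, the same data can later be re-grouped by different keys (e.g. only by `status`), which is the flexibility that early, lossy aggregation would forfeit.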