930 results for Large detector systems for particle and astroparticle physics
Abstract:
A measurement of W boson production in lead-lead collisions at √s_NN = 2.76 TeV is presented. It is based on the analysis of data collected with the ATLAS detector at the LHC in 2011, corresponding to an integrated luminosity of 0.14 nb⁻¹ and 0.15 nb⁻¹ in the muon and electron decay channels, respectively. The differential production cross-sections and lepton charge asymmetry are each measured as a function of the average number of participating nucleons ⟨Npart⟩ and the absolute pseudorapidity of the charged lepton. The results are compared to predictions based on next-to-leading-order QCD calculations. These measurements are, in principle, sensitive to possible nuclear modifications of the parton distribution functions and also provide information on the scaling of W boson production in multi-nucleon systems.
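The lepton charge asymmetry quoted above is conventionally defined from the W⁺ and W⁻ lepton yields in each pseudorapidity bin; a minimal sketch under that standard definition, with purely illustrative toy yields (not ATLAS data):

```python
# A minimal sketch of the lepton charge asymmetry used in W-boson
# measurements: A = (N+ - N-) / (N+ + N-) in each |eta| bin.
# The yields below are illustrative placeholders, not ATLAS data.

def charge_asymmetry(n_plus, n_minus):
    """Charge asymmetry from W+ and W- lepton yields in one bin."""
    return (n_plus - n_minus) / (n_plus + n_minus)

# Toy yields in three |eta| bins of the charged lepton
yields_plus = [1200, 1100, 950]
yields_minus = [1000, 980, 900]
asymmetry = [charge_asymmetry(p, m)
             for p, m in zip(yields_plus, yields_minus)]
```

By construction the asymmetry is bounded between -1 and 1, and its sign tracks the excess of W⁺ over W⁻ production in that bin.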
Abstract:
This Letter presents measurements of correlated production of nearby jets in Pb+Pb collisions at √s_NN = 2.76 TeV using the ATLAS detector at the Large Hadron Collider. The measurement was performed using 0.14 nb⁻¹ of data recorded in 2011. The production of correlated jet pairs was quantified using the rate, R_ΔR, of "neighbouring" jets that accompany "test" jets within a given range of angular distance, ΔR, in the pseudorapidity-azimuthal angle plane. The jets were measured in the ATLAS calorimeter and were reconstructed using the anti-kt algorithm with radius parameters d = 0.2, 0.3, and 0.4. R_ΔR was measured in different Pb+Pb collision centrality bins, characterized by the total transverse energy measured in the forward calorimeters. A centrality dependence of R_ΔR is observed for all three jet radii, with R_ΔR found to be lower in central collisions than in peripheral collisions. The ratios formed by the R_ΔR values in different centrality bins and the values in the 40-80% centrality bin are presented.
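The rate R_ΔR described above counts, per test jet, the neighbouring jets whose angular distance falls inside a ΔR window in the pseudorapidity-azimuth plane; a minimal sketch of that counting (illustrative only, not the ATLAS analysis code):

```python
import math

# Illustrative sketch of the neighbouring-jet rate R_DeltaR: for each
# "test" jet, count accompanying jets whose angular distance
# DeltaR = sqrt(deta^2 + dphi^2) falls in a given window, then average
# over all test jets. Jets are (eta, phi) tuples here.

def delta_r(jet_a, jet_b):
    deta = jet_a[0] - jet_b[0]
    dphi = abs(jet_a[1] - jet_b[1])
    if dphi > math.pi:                  # wrap azimuth into [0, pi]
        dphi = 2.0 * math.pi - dphi
    return math.hypot(deta, dphi)

def neighbour_rate(test_jets, all_jets, r_lo, r_hi):
    """Average number of neighbours per test jet with r_lo <= DeltaR < r_hi."""
    total = 0
    for t in test_jets:
        for j in all_jets:
            if j is t:                  # do not pair a jet with itself
                continue
            if r_lo <= delta_r(t, j) < r_hi:
                total += 1
    return total / len(test_jets)
```

In the measurement this rate is built separately in each centrality bin, so the centrality dependence appears as a change in the per-test-jet average.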
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed-pattern (FP) components and to assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting the method offers a simple and robust means of examining detector noise components as a function of detector exposure.
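The second-order polynomial model referred to above can be written σ²(K) = aK² + bK + c, where the quadratic, linear and constant terms carry the fixed-pattern, quantum and electronic noise respectively; a minimal sketch of such a fit, with synthetic data whose coefficients are chosen purely for illustration:

```python
import numpy as np

# Sketch of the second-order polynomial noise decomposition quoted in
# the European Guidelines model above:
#     variance(K) = a*K^2 + b*K + c
# where K is the detector air kerma (DAK), a*K^2 is the fixed-pattern
# term, b*K the quantum term and c the electronic term.

def decompose_noise(dak, variance, weights=None):
    """Fit variance vs DAK; return (fixed_pattern_a, quantum_b, electronic_c)."""
    a, b, c = np.polyfit(dak, variance, deg=2, w=weights)
    return a, b, c

# Synthetic data generated from known coefficients, to show recovery
# (coefficient values are illustrative, not measured):
dak = np.array([6.25, 25.0, 100.0, 400.0, 1600.0])   # target DAKs, uGy
var = 1e-6 * dak**2 + 0.05 * dak + 2.0
a, b, c = decompose_noise(dak, var)
```

The optional `weights` argument mirrors the data-weighting step discussed in the abstract: down-weighting high-exposure points keeps the low-DAK electronic term from being swamped in the fit.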
Abstract:
Position-sensitive particle detectors are needed in high-energy physics research. This thesis describes the development of fabrication processes and characterization techniques for silicon microstrip detectors used in the search for elementary particles at CERN, the European centre for nuclear research. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories carry information about the nature of the particle in the effort to reveal the structure of matter and the universe. Detectors made of semiconductors have better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector is presented. The fabrication aspects of strip detectors are discussed, starting from process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of the leakage currents and bias resistors, are presented. The beam test setups and their results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research had found that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors to see how a high radiation fluence changes them. The method and its most important results are discussed briefly.
Abstract:
Light-emitting polymers (LEPs) have drawn considerable attention because of their numerous potential applications in optoelectronic devices. To date, a large number of organic molecules and polymers have been designed and devices fabricated based on these materials. Optoelectronic devices such as polymer light-emitting diodes (PLEDs) have attracted widespread research attention owing to their flexibility, low operational power, colour tunability and suitability for large-area coatings. PLEDs can be used to fabricate flat-panel displays and to replace incandescent lamps. The internal efficiency of such LEDs depends mainly on the electroluminescent properties of the emissive polymer, such as its quantum efficiency, the luminance-voltage profile of the LED and the balanced injection of electrons and holes. Poly(p-phenylenevinylene) (PPV) and regioregular polythiophenes are interesting electro-active polymers which exhibit good electrical conductivity, electroluminescent activity and good film-forming properties. A combination of red-, green- and blue-emitting polymers is necessary for the generation of white light, which could replace energy-hungry incandescent lamps. Most of these polymers, however, show low solubility and stability and poor mechanical properties. Many light-emitting polymers are based on conjugated extended chains of alternating phenyl and vinyl units, and intra-chain or inter-chain interactions within these polymer chains can change the emitted colour. An effective way of synthesizing polymers with reduced π-stacking, high solubility, high thermal stability and high light-emitting efficiency therefore remains a challenge for chemists. New copolymers have to be designed to address these issues.
Hence, in the present work, the suitability of a few novel copolymers with very high thermal stability, excellent solubility, intense light emission (blue, cyan and green) and high glass transition temperatures has been investigated for use as emissive layers in polymer light-emitting diodes.
Abstract:
All orthogonal space-time block coding (O-STBC) schemes rest on the assumption that the channel remains static over the entire length of the codeword. Time-selective fading channels do exist, however, and in many cases conventional O-STBC detectors suffer from a large error floor at high signal-to-noise ratio (SNR). This paper addresses this issue by introducing a parallel interference cancellation (PIC) based detector for Gi-coded systems (i = 3 and 4).
Abstract:
We present a descriptive overview of the meteorology in the south-eastern subtropical Pacific (SEP) during the VOCALS-REx intensive observations campaign, which was carried out between October and November 2008. Based mainly on data from operational analyses, forecasts, reanalysis and satellite observations, we focus on spatio-temporal scales from synoptic to planetary. A climatological context is given within which the specific conditions observed during the campaign are placed, with particular reference to the relationships between the large-scale and the regional circulations. The mean circulations associated with the diurnal breeze systems are also discussed. We then provide a summary of the day-to-day synoptic-scale circulation, air-parcel trajectories, and cloud cover in the SEP during VOCALS-REx. Three meteorologically distinct periods are identified and the large-scale causes of their different character are discussed. The first period was characterised by significant variability associated with synoptic-scale systems affecting the SEP, while the two subsequent phases were affected by more slowly evolving planetary-scale disturbances. The changes between the initial and later periods can be partly explained by the regular march of the annual cycle, but contributions from subseasonal variability and its teleconnections were important. Across the two months under consideration we find a significant correlation between the depth of the inversion-capped marine boundary layer (MBL) and the amount of low cloud in the study area. We discuss this correlation and argue that, at least as a crude approximation, a typical scaling may be applied relating MBL and cloud properties to the large-scale parameters of sea surface and tropospheric temperatures. These results are consistent with previously found empirical relationships involving lower-tropospheric stability.
Abstract:
The shadowing of cosmic ray primaries by the moon and sun was observed by the MINOS far detector at a depth of 2070 mwe, using 83.54 million cosmic ray muons accumulated over 1857.91 live-days. The shadow of the moon was detected at the 5.6σ level and the shadow of the sun at the 3.8σ level using a log-likelihood search in celestial coordinates. The moon shadow was used to quantify the absolute astrophysical pointing of the detector to be 0.17 ± 0.12 degrees. Hints of interplanetary magnetic field effects were observed in both the sun and moon shadows. Published by Elsevier B.V.
Abstract:
A method is developed to search for air showers initiated by photons using data recorded by the surface detector of the Auger Observatory. The approach is based on observables sensitive to the longitudinal shower development: the signal risetime and the curvature of the shower front. Applying this method to the data, upper limits on the flux of photons of 3.8 × 10⁻³, 2.5 × 10⁻³, and 2.2 × 10⁻³ km⁻² sr⁻¹ yr⁻¹ above 10¹⁹ eV, 2 × 10¹⁹ eV, and 4 × 10¹⁹ eV are derived, with corresponding limits on the fraction of photons of 2.0%, 5.1%, and 31% (all limits at 95% c.l.). These photon limits disfavour certain exotic models of sources of cosmic rays. The results also show that the approach adopted by the Auger Observatory to calibrate the shower energy is not strongly biased by contamination from photons. © 2008 Elsevier B.V. All rights reserved.
Abstract:
The Pierre Auger Observatory is a hybrid detector for ultra-high energy cosmic rays. It combines a surface array to measure secondary particles at ground level together with a fluorescence detector to measure the development of air showers in the atmosphere above the array. The fluorescence detector comprises 24 large telescopes specialized for measuring the nitrogen fluorescence caused by charged particles of cosmic ray air showers. In this paper we describe the components of the fluorescence detector including its optical system, the design of the camera, the electronics, and the systems for relative and absolute calibration. We also discuss the operation and the monitoring of the detector. Finally, we evaluate the detector performance and precision of shower reconstructions. © 2010 Elsevier B.V. All rights reserved.
Abstract:
We apply the general principles of effective field theories to the construction of effective interactions suitable for few- and many-body calculations in a no-core shell model framework. We calculate the spectrum of systems with three and four two-component fermions in a harmonic trap. In the unitary limit, we find that three-particle results are within 10% of known semianalytical values even in small model spaces. The method is very general, and can be readily extended to other regimes, more particles, different species (e.g., protons and neutrons in nuclear physics), or more-component fermions (as well as bosons). As an illustration, we present calculations of the lowest-energy three-fermion states away from the unitary limit and find a possible inversion of parity in the ground state in the limit of trap size large compared to the scattering length. Furthermore, we investigate the lowest positive-parity states for four fermions, although we are limited by the dimensions we can currently handle in this case.
Abstract:
CMS is a general-purpose experiment designed to study the physics of pp collisions at 14 TeV at the Large Hadron Collider (LHC). It currently involves more than 2000 physicists from more than 150 institutes and 37 countries. The LHC will provide extraordinary opportunities for particle physics, based on its unprecedented collision energy and luminosity, when it begins operation in 2007. The principal aim of this report is to present the strategy of CMS to explore the rich physics programme offered by the LHC. This volume demonstrates the physics capability of the CMS experiment. The prime goals of CMS are to explore physics at the TeV scale and to study the mechanism of electroweak symmetry breaking, through the discovery of the Higgs particle or otherwise. To carry out this task, CMS must be prepared to search for new particles, such as the Higgs boson or supersymmetric partners of the Standard Model particles, from the start-up of the LHC, since new physics at the TeV scale may manifest itself with modest data samples of the order of a few fb⁻¹ or less. The analysis tools that have been developed are applied to study, in great detail and with all the methodology of an analysis on CMS data, specific benchmark processes against which to gauge the performance of CMS. These processes cover several Higgs boson decay channels, the production and decay of new particles such as Z′ bosons and supersymmetric particles, B_s production and processes in heavy-ion collisions. The simulation of these benchmark processes includes subtle effects such as possible detector miscalibration and misalignment. Beyond these benchmark processes, the physics reach of CMS is studied for a large number of signatures arising in the Standard Model and in theories beyond it, for integrated luminosities ranging from 1 fb⁻¹ to 30 fb⁻¹.
The Standard Model processes include QCD, B physics, diffraction, detailed studies of top quark properties, and electroweak physics topics such as the W and Z⁰ boson properties. The production and decay of the Higgs particle is studied for many observable decays, and the precision with which the Higgs boson properties can be derived is determined. About ten different supersymmetry benchmark points are analysed using full simulation. The CMS discovery reach is evaluated in the SUSY parameter space covering a large variety of decay signatures. Furthermore, the discovery reach for a plethora of alternative models for new physics is explored, notably extra dimensions, new high-mass vector boson states, little Higgs models, technicolour and others. Methods to discriminate between models have been investigated. This report is organized as follows. Chapter 1, the Introduction, describes the context of this document. Chapters 2-6 describe examples of full analyses, with photons, electrons, muons, jets, missing E_T, B mesons and taus, and for quarkonia in heavy-ion collisions. Chapters 7-15 describe the physics reach for Standard Model processes, Higgs discovery and searches for new physics beyond the Standard Model.
Search for New Physics with a Monojet and Missing Transverse Energy in pp Collisions at √s = 7 TeV
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We show in this Letter that the observation of the angular distribution of upward-going muons and cascade events induced by atmospheric neutrinos at the TeV energy scale, which can be performed by a kilometre-scale neutrino telescope such as the IceCube detector, can be used to probe a large neutrino mass splitting, |Δm²| ∼ (0.5-2.0) eV², implied by the LSND experiment, and to discriminate among four-neutrino mass schemes. This is because such a large mass scale can promote non-negligible νμ → νe, ντ / ν̄μ → ν̄e, ν̄τ conversions at these energies through the MSW effect as well as vacuum oscillation, unlike what is expected if all the neutrino mass splittings are small. © 2003 Elsevier Science B.V. All rights reserved.
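For orientation, the textbook two-flavour vacuum oscillation formula, P = sin²(2θ) · sin²(1.27 Δm²[eV²] L[km] / E[GeV]), indicates why an eV²-scale splitting remains relevant at TeV energies over Earth-diameter baselines; this is a deliberate simplification for illustration, not the four-neutrino, matter-effect calculation of the Letter:

```python
import math

# Textbook two-flavour vacuum oscillation probability (a simplification;
# the Letter itself treats four-neutrino schemes including the MSW
# matter effect):
#     P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])

def osc_probability(sin2_2theta, dm2_ev2, length_km, energy_gev):
    phase = 1.27 * dm2_ev2 * length_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative numbers: an LSND-scale splitting and an upward-going
# muon neutrino crossing the Earth's diameter at 1 TeV
p = osc_probability(sin2_2theta=0.1, dm2_ev2=1.0,
                    length_km=12742.0, energy_gev=1000.0)
```

For small splittings (Δm² ≪ 1 eV²) the phase at TeV energies is tiny and the probability vanishes, which is the contrast the Letter exploits.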