932 results for Read Out Driver, Data Acquisition, Electronics, FPGA, ATLAS, IBL, Pixel Detector, LHC, VME
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization have been addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how many data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and their associated phenomena are essential to understanding why transient emission models are calibration-dependent and, furthermore, how to choose training data that will result in good model generalization.
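The delay-and-lag processing described above can be sketched in code. This is a minimal illustration only: it assumes a pure transport delay plus a first-order sensor lag model (measured signal m satisfies τ·dm/dt + m = y), which is a common idealization; the paper's actual methods, and all names and parameter values below, are assumptions for the sketch.

```python
import numpy as np

def align_transient_signal(y_meas, dt, transport_delay, tau):
    """Illustrative time-alignment of a lagged, delayed transient trace.

    Assumes the sensor obeys a first-order lag  tau * dm/dt + m = y,
    so the underlying signal is recovered as  y = m + tau * dm/dt,
    after shifting out a pure transport delay.
    """
    # Remove the transport delay by shifting backwards (whole samples).
    shift = int(round(transport_delay / dt))
    if shift > 0:
        m = np.roll(y_meas, -shift)
        m[-shift:] = m[-shift - 1]  # hold the last valid value
    else:
        m = y_meas.copy()
    # Invert the first-order lag with a finite-difference derivative.
    dmdt = np.gradient(m, dt)
    return m + tau * dmdt
```

For a constant trace the correction is a no-op (zero derivative, shift of a constant), which is a quick sanity check before applying it to real emission signals.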
Search for a standard model Higgs boson in the H→ZZ→ℓ⁺ℓ⁻νν decay channel with the ATLAS detector
Abstract:
A search for a heavy standard model Higgs boson decaying via H→ZZ→ℓ⁺ℓ⁻νν, where ℓ = e, μ, is presented. It is based on proton-proton collision data at √s = 7 TeV, collected by the ATLAS experiment at the LHC in the first half of 2011 and corresponding to an integrated luminosity of 1.04 fb⁻¹. The data are compared to the expected standard model backgrounds. The data and the background expectations are found to be in agreement, and upper limits are placed on the Higgs boson production cross section over the entire mass window considered; in particular, the production of a standard model Higgs boson is excluded in the region 340
Abstract:
This article gives an overview of the methods used in the low-level analysis of gene expression data generated using DNA microarrays. This type of experiment makes it possible to determine relative levels of nucleic acid abundance in a set of tissues or cell populations for thousands of transcripts or loci simultaneously. Careful statistical design and analysis are essential to improve the efficiency and reliability of microarray experiments throughout the data acquisition and analysis process. This includes the design of probes, the experimental design, the analysis of scanned microarray images, the normalization of fluorescence intensities, the assessment of microarray data quality and the incorporation of quality information in subsequent analyses, the combination of information across arrays and across sets of experiments, the discovery and recognition of patterns in expression at the single-gene and multiple-gene levels, and the assessment of the significance of these findings, taking into account that the data contain substantial noise and therefore random features. For all of these components, access to a flexible and efficient statistical computing environment is essential.
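As a concrete instance of the normalization step mentioned above, quantile normalization is one standard way to make fluorescence-intensity distributions comparable across arrays. The sketch below is illustrative only: the function name and data layout (genes as rows, arrays as columns) are assumptions, and ties are handled naively.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize an intensity matrix (genes x arrays) so that
    every array (column) shares the same empirical distribution,
    namely the mean of each rank position across arrays."""
    # Rank of each value within its own array (column).
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    # Reference distribution: mean across arrays at each rank.
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)
    # Map each value back to the reference value at its rank.
    return mean_quantiles[ranks]
```

After normalization, the sorted values of every column are identical, which is the defining property of the method.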
Abstract:
Currently, observations of space debris are primarily performed with ground-based sensors. These sensors have a detection limit of a few centimetres diameter for objects in Low Earth Orbit (LEO) and of about two decimetres diameter for objects in Geostationary Orbit (GEO). The few space-based debris observations stem mainly from in-situ measurements and from the analysis of returned spacecraft surfaces; both provide information about mostly sub-millimetre-sized debris particles. As a consequence, the population of centimetre- and millimetre-sized debris objects remains poorly understood. The development, validation and improvement of debris reference models drive the need for measurements covering the whole diameter range. In 2003 the European Space Agency (ESA) initiated a study entitled “Space-Based Optical Observation of Space Debris”. The first tasks of the study were to define user requirements and to develop an observation strategy for a space-based instrument capable of observing uncatalogued millimetre-sized debris objects. Only passive optical observations were considered, focussing on mission concepts for the LEO and GEO regions, respectively. Starting from the requirements and the observation strategy, an instrument system architecture and an associated operations concept have been elaborated. The instrument system architecture covers the telescope, the camera and the onboard processing electronics. The proposed telescope is a folded Schmidt design, characterised by a 20 cm aperture and a large field of view of 6°. The camera design is based on either a frame-transfer charge-coupled device (CCD) or a cooled hybrid sensor with fast read-out; a four-megapixel sensor is foreseen. For the onboard processing, a scalable architecture has been selected. Performance simulations have been executed for the system as designed, focussing on the orbit determination of observed debris particles and on the analysis of the object detection algorithms.
In this paper we present some of the main results of the study. A short overview of the user requirements and observation strategy is given. The architectural design of the instrument is discussed, and the main tradeoffs are outlined. An insight into the results of the performance simulations is provided.
Abstract:
The verification possibilities of dynamically collimated treatment beams with a scanning liquid ionization chamber electronic portal imaging device (SLIC-EPID) are investigated. The ion concentration in the liquid of a SLIC-EPID, and therefore the read-out signal, is determined by the two parameters of a differential equation describing the creation and recombination of the ions. Owing to the form of this equation, the portal image detector behaves as a nonlinear dynamic system with memory. In this work, the parameters of the differential equation were experimentally determined for the particular chamber in use and for an incident open 6 MV photon beam. The mathematical description of the ion concentration was then used to predict portal images of intensity-modulated photon beams produced by a dynamic delivery technique, the sliding-window approach. Owing to the nature of the differential equation, a mathematical condition for 'reliable leaf motion verification' in the sliding-window technique can be formulated. It is shown that the time constants for both the formation and the decay of the equilibrium concentration in the chamber are of the order of seconds. In order to guarantee reliable leaf motion verification, these time constants impose a constraint on the rapidity of the image read-out for a given maximum leaf speed. For a leaf speed of 2 cm s⁻¹, a minimum image acquisition frequency of about 2 Hz is required. Current SLIC-EPID systems are usually too slow, since they need about a second to acquire a portal image. However, if the condition is fulfilled, the memory property of the system can be used to reconstruct the leaf motion. It is shown that a simple edge-detecting algorithm can be employed to determine the leaf positions. The method is also very robust against image noise.
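The creation-recombination dynamics described above can be illustrated numerically. This sketch assumes the differential equation takes the commonly used form dn/dt = S − αn² (creation rate S, recombination coefficient α); the paper determines the actual parameters experimentally, so the form and the values below are assumptions for illustration, showing the approach to the equilibrium concentration n_eq = √(S/α).

```python
import numpy as np

def simulate_ion_concentration(S, alpha, t_end, dt=1e-3, n0=0.0):
    """Euler-integrate dn/dt = S - alpha * n**2 from n(0) = n0.

    Returns the time axis and the ion concentration trace; the
    concentration relaxes towards the equilibrium sqrt(S / alpha).
    """
    steps = int(t_end / dt)
    n = np.empty(steps + 1)
    n[0] = n0
    for k in range(steps):
        n[k + 1] = n[k] + dt * (S - alpha * n[k] ** 2)
    t = np.arange(steps + 1) * dt
    return t, n
```

With S and α chosen so that the rise time is of the order of seconds, the simulated trace makes the read-out constraint tangible: an imager sampling much slower than this rise time cannot resolve the leaf motion.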
Abstract:
OBJECT: The localization of any given target in the brain has become a challenging issue because of the increased use of deep brain stimulation to treat Parkinson disease, dystonia, and nonmotor diseases (for example, Tourette syndrome, obsessive compulsive disorders, and depression). The aim of this study was to develop an automated method of adapting an atlas of the human basal ganglia to the brains of individual patients. METHODS: Magnetic resonance images of the brain specimen were obtained before extraction from the skull and histological processing. Adaptation of the atlas to individual patient anatomy was performed by reshaping the atlas MR images to the images obtained in the individual patient using a hierarchical registration applied to a region of interest centered on the basal ganglia, and then applying the reshaping matrix to the atlas surfaces. RESULTS: Results were evaluated by direct visual inspection of the structures visible on MR images and atlas anatomy, by comparison with electrophysiological intraoperative data, and with previous atlas studies in patients with Parkinson disease. The method was both robust and accurate, never failing to provide an anatomically reliable atlas to patient registration. The registration obtained did not exceed a 1-mm mismatch with the electrophysiological signatures in the region of the subthalamic nucleus. CONCLUSIONS: This registration method applied to the basal ganglia atlas forms a powerful and reliable method for determining deep brain stimulation targets within the basal ganglia of individual patients.
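The last step of the method above, applying the reshaping matrix obtained from the hierarchical registration to the atlas surfaces, can be sketched as follows. This is a minimal illustration assuming the matrix is a 4×4 affine transform in homogeneous coordinates; the registration that produces it is not shown, and all names are hypothetical.

```python
import numpy as np

def apply_reshaping_matrix(points, M):
    """Apply a 4x4 affine reshaping matrix M to an (N, 3) array of
    atlas surface vertices, using homogeneous coordinates."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    out = homo @ M.T                                           # transform rows
    return out[:, :3]                                          # drop w = 1
```

Warping the surfaces with the same matrix that maps atlas MR images onto the patient images keeps the anatomical labels and the patient anatomy in a single coordinate frame.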
Abstract:
This Letter describes a model-independent search for the production of new resonances in photon + jet events using 20 fb⁻¹ of proton-proton LHC data recorded with the ATLAS detector at a centre-of-mass energy of √s = 8 TeV. The photon + jet mass distribution is compared to a background model fit from data; no significant deviation from the background-only hypothesis is found. Limits are set at 95% credibility level on generic Gaussian-shaped signals and on two benchmark phenomena beyond the Standard Model: non-thermal quantum black holes and excited quarks. Non-thermal quantum black holes are excluded below masses of 4.6 TeV and excited quarks are excluded below masses of 3.5 TeV.
Abstract:
This Letter presents measurements of the polarization of the top quark in top-antitop quark pair events, using 4.7 fb⁻¹ of proton-proton collision data recorded with the ATLAS detector at the Large Hadron Collider at √s = 7 TeV. Final states containing one or two isolated leptons (electrons or muons) and jets are considered. Two measurements of αℓP, the product of the leptonic spin-analyzing power and the top quark polarization, are performed, assuming that the polarization is introduced by either a CP-conserving or a maximally CP-violating production process. The measurements obtained, αℓP(CPC) = −0.035 ± 0.014 (stat.) ± 0.037 (syst.) and αℓP(CPV) = 0.020 ± 0.016 (stat.) +0.013/−0.017 (syst.), are in good agreement with the standard model prediction of negligible top quark polarization.
Abstract:
The production cross-section of B+ mesons is measured as a function of transverse momentum pT and rapidity y in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV, using 2.4 fb⁻¹ of data recorded with the ATLAS detector at the Large Hadron Collider. The differential production cross-sections, determined in the range 9 GeV < pT < 120 GeV and |y| < 2.25, are compared to next-to-leading-order theoretical predictions.
Abstract:
We present a search for a light (mass < 2 GeV) boson predicted by Hidden Valley supersymmetric models that decays into a final state consisting of collimated muons or electrons, denoted "lepton-jets". The analysis uses 5 fb⁻¹ of √s = 7 TeV proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider to search for the following signatures: single lepton-jets with at least four muons; pairs of lepton-jets, each with two or more muons; and pairs of lepton-jets with two or more electrons. This study finds no statistically significant deviation from the Standard Model prediction and places 95% confidence-level exclusion limits on the production cross section times branching ratio of light bosons for several parameter sets of a Hidden Valley model.
Abstract:
A measurement of the ZZ production cross section in proton-proton collisions at √s = 7 TeV using data recorded by the ATLAS experiment at the Large Hadron Collider is presented. In a data sample corresponding to an integrated luminosity of 4.6 fb⁻¹ collected in 2011, events are selected that are consistent either with two Z bosons decaying to electrons or muons, or with one Z boson decaying to electrons or muons and a second Z boson decaying to neutrinos. The ZZ(*) → ℓ⁺ℓ⁻ℓ′⁺ℓ′⁻ and ZZ → ℓ⁺ℓ⁻νν̄ cross sections are measured in restricted phase-space regions. These results are then used to derive the total cross section for ZZ events produced with both Z bosons in the mass range 66 to 116 GeV, σZZ(tot) = 6.7 ± 0.7 (stat.) +0.4/−0.3 (syst.) ± 0.3 (lumi.) pb, which is consistent with the Standard Model prediction of 5.89 +0.22/−0.18 pb calculated at next-to-leading order in QCD. The normalized differential cross sections in bins of various kinematic variables are presented. Finally, the differential event yield as a function of the transverse momentum of the leading Z boson is used to set limits on anomalous neutral triple gauge boson couplings in ZZ production.
Abstract:
The ATLAS experiment at the LHC has measured the production cross section of events with two isolated photons in the final state, in proton-proton collisions at √s = 7 TeV. The full data set collected in 2011, corresponding to an integrated luminosity of 4.9 fb⁻¹, is used. The background, from hadronic jets and isolated electrons, is estimated with data-driven techniques and subtracted. The total cross section, for two isolated photons with transverse energies above 25 GeV and 22 GeV respectively, within the acceptance of the electromagnetic calorimeter (|η| < 1.37 and 1.52 < |η| < 2.37) and with an angular separation ΔR > 0.4, is 44.0 +3.2/−4.2 pb. The differential cross sections as a function of the di-photon invariant mass, transverse momentum, azimuthal separation, and cosine of the polar angle of the largest transverse energy photon in the Collins-Soper di-photon rest frame are also measured. The results are compared to the predictions of leading-order parton-shower generators and of next-to-leading-order and next-to-next-to-leading-order parton-level generators.
Abstract:
A measurement of jet shapes in top-quark pair events using 1.8 fb⁻¹ of √s = 7 TeV pp collision data recorded by the ATLAS detector at the LHC is presented. Samples of top-quark pair events are selected in both the single-lepton and dilepton final states. The differential and integrated shapes of the jets initiated by bottom quarks from the top-quark decays are compared with those of the jets initiated by light quarks from the hadronic W-boson decays W → qq̄′ in the single-lepton channel. The light-quark jets are found to have a narrower distribution of the momentum flow inside the jet area than the b-quark jets.
Abstract:
A search for new particles that decay into top quark pairs (tt̄) is performed with the ATLAS experiment at the LHC using an integrated luminosity of 4.7 fb⁻¹ of proton-proton (pp) collision data collected at a center-of-mass energy of √s = 7 TeV. In the tt̄ → WbWb decay, the lepton plus jets final state is used, where one W boson decays leptonically and the other hadronically. The tt̄ system is reconstructed using both small-radius and large-radius jets, the latter being supplemented by a jet substructure analysis. A search for local excesses in the number of data events compared to the Standard Model expectation in the tt̄ invariant mass spectrum is performed. No evidence for a tt̄ resonance is found, and 95% credibility-level limits on the production rate are determined for massive states predicted in two benchmark models. The upper limits on the cross section times branching ratio of a narrow Z' resonance range from 5.1 pb for a boson mass of 0.5 TeV to 0.03 pb for a mass of 3 TeV. A narrow leptophobic topcolor Z' resonance with a mass below 1.74 TeV is excluded. Limits are also derived for a broad color-octet resonance with Γ/m = 15.3%. A Kaluza-Klein excitation of the gluon in a Randall-Sundrum model is excluded for masses below 2.07 TeV.
Abstract:
This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background-contaminated events in data. Trigger-rate-based monitoring of beam-related backgrounds is presented. The correlations of backgrounds with machine conditions, such as residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches are used to obtain background-enriched data samples. These are used to identify characteristic features of beam-induced backgrounds, which are then exploited to develop dedicated background tagging tools. These tools, based on observables in the Pixel detector, the muon spectrometer and the calorimeters, are described in detail and their efficiencies are evaluated. Finally, an example of an application of these techniques to a monojet analysis is given, which demonstrates the importance of such event-cleaning techniques for some new physics searches.