972 results for Multi Measurement Mode


Relevance: 30.00%

Publisher:

Abstract:

In the present study we use multivariate analysis techniques to discriminate signal from background in the fully hadronic decay channel of ttbar events. We give a brief introduction to the role of the top quark in the Standard Model and a general description of the CMS experiment at the LHC. We used the CMS computing and software infrastructure to generate and prepare the data samples used in this analysis. We tested the performance of three different classifiers on our data samples and used the selection obtained with the Multi-Layer Perceptron classifier to estimate the statistical and systematic uncertainty on the cross-section measurement.
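As a sketch of the classifier-based selection, a minimal multi-layer perceptron trained on two invented event-shape features (nothing below corresponds to the actual CMS samples or inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the analysis inputs: two hypothetical event-shape
# features, Gaussian-distributed for signal and background.
n = 2000
sig = rng.normal([1.0, 1.0], 0.8, size=(n, 2))
bkg = rng.normal([-1.0, -1.0], 0.8, size=(n, 2))
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with sigmoid activations: a minimal multi-layer perceptron.
W1 = 0.5 * rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(500):
    h = sigm(X @ W1 + b1)                   # hidden activations
    p = sigm(h @ W2 + b2).ravel()           # predicted signal probability
    d2 = (p - y)[:, None] / len(y)          # cross-entropy gradient at output
    dh = d2 @ W2.T * h * (1 - h)            # backpropagated to hidden layer
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)

# Cut on the MLP output to define the event selection.
p = sigm(sigm(X @ W1 + b1) @ W2 + b2).ravel()
efficiency = (p[:n] > 0.5).mean()           # fraction of signal kept
rejection = (p[n:] <= 0.5).mean()           # fraction of background rejected
print(f"signal efficiency {efficiency:.2f}, background rejection {rejection:.2f}")
```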


The g-factor is a constant which connects the magnetic moment $\vec{\mu}$ of a charged particle, of charge q and mass m, with its angular momentum $\vec{J}$. Thus, the magnetic moment can be written $\vec{\mu}_J = g_J \frac{q}{2m} \vec{J}$. The g-factor of a free particle of spin s=1/2 should take the value g=2, but due to quantum-electrodynamical effects it deviates from this value by a small amount, the so-called g-factor anomaly $a_e$, which is of the order of $10^{-3}$ for the free electron. This deviation is even bigger if the electron is exposed to high electric fields. Therefore highly charged ions, where the electric field strength reaches values on the order of $10^{13}$-$10^{16}$ V/cm at the position of the bound electron, are an interesting field of investigation for testing QED calculations. In previous experiments [Häf00, Ver04] using a single hydrogen-like ion confined in a Penning trap, a relative accuracy of a few parts in $10^{9}$ was obtained. In the present work a new method for the precise measurement of the electronic g-factor of hydrogen-like ions is discussed. Due to the unavoidable magnetic field inhomogeneity in a Penning trap, an important contribution to the systematic uncertainty of the previous measurements arose from the elevated energy of the ion required for the measurement of its motional frequencies; it was then necessary to extrapolate the result to vanishing energies. In the new method the energy in the cyclotron degree of freedom is reduced to the minimum attainable energy. The method consists in measuring the reduced cyclotron frequency $\nu_+$ indirectly, by coupling the axial to the reduced cyclotron motion through irradiation at the radio frequency $\nu_{\mathrm{coup}} = \nu_+ - \nu_{\mathrm{ax}} + \delta$, where $\delta$ is, in principle, an unknown detuning that can be obtained from knowledge of the coupling process. The only unknown parameter is then the desired value of $\nu_+$.
As a test, a measurement with, for simplicity, artificially increased axial energy was performed, yielding the result $g_{\mathrm{exp}} = 2.000~047~020~8\,(24)(44)$. This is in perfect agreement with both the theoretical result $g_{\mathrm{theo}} = 2.000~047~020~2\,(6)$ and the previous experimental result $g_{\mathrm{exp1}} = 2.000~047~025~4\,(15)(44)$. In the experimental results the second error bar is due to the uncertainty in the accepted value of the electron's mass. Thus, with the new method, a higher accuracy in the g-factor could, by comparison to the theoretical value, lead to an improved value of the electron's mass.
[Häf00] H. Häffner et al., Phys. Rev. Lett. 85 (2000) 5308
[Ver04] J. Verdú et al., Phys. Rev. Lett. 92 (2004) 093002-1
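Extracting $\nu_+$ from the measured coupling frequency is a one-line rearrangement of the relation above; the numbers in this sketch are purely illustrative, not the experimental values:

```python
# Hypothetical example numbers (not the experimental values): the axial
# frequency and detuning are known, the coupling drive is what is measured.
nu_ax = 925.0e3          # axial frequency nu_ax in Hz (assumed)
delta = 12.0             # detuning delta in Hz, from the coupling model (assumed)
nu_coup = 24.0e6         # measured coupling frequency in Hz (assumed)

# Rearranging nu_coup = nu_plus - nu_ax + delta for the reduced
# cyclotron frequency:
nu_plus = nu_coup + nu_ax - delta
print(f"nu_+ = {nu_plus:.1f} Hz")
```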


This thesis presents the measurement of the neutrino velocity with the OPERA experiment in the CNGS beam, a muon neutrino beam produced at CERN. The OPERA detector observes muon neutrinos 730 km away from the source. Previous measurements of the neutrino velocity have been performed by other experiments. Since the OPERA experiment aims at the direct observation of muon neutrino oscillations into tau neutrinos, a higher-energy beam is employed. This characteristic, together with the higher number of interactions in the detector, allows for a measurement with a much smaller statistical uncertainty. Moreover, a much more sophisticated timing system (composed of cesium clocks and GPS receivers operating in "common view" mode) and a fast waveform digitizer (installed at CERN and able to measure the internal time structure of the proton pulses used for the CNGS beam) allow for a new measurement with a smaller systematic error. Theoretical models of Lorentz-violating effects can be investigated by neutrino velocity measurements with terrestrial beams. The analysis was carried out with a blind method in order to guarantee the internal consistency and the quality of each calibration measurement. The measurement is the most precise one performed with a terrestrial neutrino beam: the statistical accuracy achieved is about 10 ns and the systematic error about 20 ns.
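The scale of the measurement follows from the quoted numbers: over the 730 km baseline the expected time of flight is about 2.4 ms, so accuracies of 10-20 ns correspond to a relative precision of order 10^-5:

```python
c = 299_792_458.0        # speed of light in m/s
baseline = 730.0e3       # CERN-to-Gran Sasso baseline quoted in the text, m

tof = baseline / c                      # expected time of flight, ~2.4 ms
stat, syst = 10e-9, 20e-9               # accuracies quoted in the text, s
rel_precision = (stat**2 + syst**2) ** 0.5 / tof

print(f"TOF = {tof*1e3:.3f} ms, relative precision ~ {rel_precision:.1e}")
```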


This thesis presents three measurements of the top-antitop differential cross section at a centre-of-mass energy of 7 TeV, as a function of the transverse momentum, the mass, and the rapidity of the top-antitop system. The analysis was carried out on a data sample of about 5/fb recorded with the ATLAS detector. The events were selected with a cut-based approach in the "one lepton plus jets" channel, where the lepton can be either an electron or a muon. The most relevant backgrounds (multi-jet QCD and W+jets) were extracted with data-driven methods; the others (Z+jets, diboson, and single top) were simulated with Monte Carlo techniques. The final, background-subtracted distributions were corrected for detector and selection effects using unfolding methods. Finally, the results were compared with the theoretical predictions. The measurements are dominated by the systematic uncertainties and show no relevant deviation from the Standard Model predictions.
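As an illustration of the unfolding step, a bin-by-bin correction is the simplest variant (the thesis does not state which unfolding method it uses; all numbers below are invented):

```python
import numpy as np

# Invented inputs: truth- and reconstruction-level MC spectra in bins of a
# kinematic variable, plus observed data and its estimated background.
mc_truth = np.array([1000.0, 600.0, 250.0, 80.0])
mc_reco  = np.array([ 820.0, 540.0, 230.0, 60.0])   # after detector + selection
data     = np.array([ 900.0, 610.0, 260.0, 75.0])
background = np.array([60.0, 40.0, 20.0, 10.0])      # data-driven estimate

# Bin-by-bin correction factors from simulation: C_i = truth_i / reco_i.
correction = mc_truth / mc_reco

# Background-subtracted data, corrected back to particle level.
unfolded = (data - background) * correction
print(unfolded)
```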


Beamforming entails joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique which allows the nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and possibly calls for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal processing scheme, denoted Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
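The N^2 power gain of a phase-aligned virtual array can be checked numerically; a minimal sketch assuming unit-amplitude narrowband signals (node count and phases are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50  # number of sensor nodes (arbitrary illustration)

# Each node transmits a unit-amplitude carrier; its phase at the receiver
# depends on the node's local oscillator and position.
random_phases = rng.uniform(0, 2 * np.pi, N)

# Unaligned case: phases add incoherently, received power ~ N on average.
p_incoherent = abs(np.exp(1j * random_phases).sum()) ** 2

# After phase alignment, all contributions add coherently: power = N^2.
p_coherent = abs(np.exp(1j * np.zeros(N)).sum()) ** 2

print(f"coherent/incoherent power gain: {p_coherent / p_incoherent:.1f}x")
```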


Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation may involve sizing the pipes of the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of the WDN. In this thesis, the author analysed two different WDNs (the Anytown and Cabrera networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, GANetXL, a decision-support system generator for multi-objective optimisation developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is needed. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which gave the Pareto fronts of each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps. The main intervention was to replace these pumps with variable-speed driven pumps (VSDPs), installing inverters able to vary their speed during the day. In this way, large energy and cost savings were achieved, together with a reduction in the number of pump switches. The results are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the network of Cabrera city, a smaller WDN with a single fixed-speed pump.
The optimisation problem was the same: the minimisation of the energy consumption and, in parallel, the minimisation of TNps. The same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments covering a wide variety of configurations, using different pumps (this time keeping the fixed-speed mode), different tank levels, different pipe diameters, and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
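The Pareto fronts returned by NSGA-II are defined by non-domination; a minimal sketch of that criterion for the two objectives above, with invented candidate schedules (not thesis results):

```python
# Candidate pump schedules scored on the two thesis objectives:
# (energy cost in euro, number of pump switches). Numbers are illustrative.
candidates = [(420.0, 6), (390.0, 10), (450.0, 4), (430.0, 7), (480.0, 3)]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

# The Pareto front: solutions not dominated by any other candidate.
front = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
print(sorted(front))   # (430.0, 7) is dominated by (420.0, 6) and drops out
```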


The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high-energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics, and one searching for phenomena beyond it. The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb-1. The differential cross-section of pp -> Z/gamma + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final-state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO.
Good agreement within the uncertainties between the measured cross-sections and the Standard Model predictions is observed. Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena, several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb-1, is used. The events are separated into different categories by object multiplicity. The data-driven background method already used for the cross-section measurement was developed further, for up to five objects, to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
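The invariant mass mee used throughout is computed from the electron and positron four-momenta; a minimal sketch with an invented back-to-back pair:

```python
import math

def inv_mass(p1, p2):
    """Invariant mass of two particles given as (E, px, py, pz) in GeV."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]; py = p1[2] + p2[2]; pz = p1[3] + p2[3]
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Illustrative back-to-back e+e- pair, each carrying 60 GeV (electron mass
# neglected): the pair's invariant mass is then 120 GeV.
el = (60.0, 0.0,  30.0,  math.sqrt(60.0**2 - 30.0**2))
po = (60.0, 0.0, -30.0, -math.sqrt(60.0**2 - 30.0**2))
print(f"m_ee = {inv_mass(el, po):.1f} GeV")
```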


The main objective of this project is to experimentally demonstrate geometrical nonlinear phenomena due to large displacements during resonant vibration of composite materials, and to explain the problem associated with fatigue prediction at resonant conditions. Three different composite blades were designed and manufactured, differing in their composite layup (unidirectional, cross-ply, and angle-ply). The manual envelope bagging technique is explained as applied to the actual manufacturing of the components; the problems encountered and their solutions are detailed. Forced-response tests of the first flexural, first torsional, and second flexural modes were performed by means of a unique contactless excitation system which induced vibration using a pulsed airflow. Vibration intensity was acquired with a Polytec LDV system. The first flexural mode is found to be completely linear irrespective of the vibration amplitude. The first torsional mode exhibits a generally nonlinear softening behaviour, which is interestingly coupled with a hardening behaviour for the unidirectional layup. The second flexural mode shows a hardening nonlinear behaviour for both the unidirectional and angle-ply blades, whereas it is slightly softening for the cross-ply layup. Using the same equipment as for the forced-response analyses, free-decay tests were performed at different airflow intensities. A Discrete Fourier Transform over the entire decay and a sliding DFT were computed to visualise the nonlinear superharmonics in the decay signal and to determine when they were damped out over the decay time. Linear modes exhibit an exponential decay, while nonlinearities are associated with a dry-friction damping phenomenon which tends to increase with increasing amplitude. The damping ratio is derived from the logarithmic decrement for the exponential branch of the decay.
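The final step, deriving the damping ratio from the logarithmic decrement, can be sketched as follows (peak amplitudes are invented; the formula assumes light viscous damping):

```python
import math

# Successive peak amplitudes from a free-decay measurement (illustrative
# values, not the blade data): exponential decay implies a constant ratio.
peaks = [1.00, 0.80, 0.64, 0.512]

# Logarithmic decrement from consecutive peaks, averaged.
deltas = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
delta = sum(deltas) / len(deltas)

# Damping ratio: zeta = delta / sqrt(4*pi^2 + delta^2).
zeta = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)
print(f"delta = {delta:.4f}, zeta = {zeta:.4f}")
```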


We demonstrated all-fiber amplification of 11 ps pulses from a gain-switched laser diode at 1064 nm. The diode was driven at a repetition rate of 40 MHz and delivered 13 µW of fiber-coupled average output power. Given the low output pulse energy of 325 fJ, we designed a multi-stage core-pumped pre-amplifier to keep the contribution of undesired amplified spontaneous emission as low as possible. Using a novel time-domain approach for determining the power spectral density (PSD) ratio of signal to noise, we identified the optimal working point for our pre-amplifier. After the pre-amplifier we reduced the 40 MHz repetition rate to 1 MHz using a fiber-coupled pulse picker. The final amplification was done with a cladding-pumped Yb-doped large-mode-area fiber and a subsequent Yb-doped rod-type fiber. With this setup we reached a total gain of 73 dB, resulting in pulse energies of >5.6 µJ and peak powers of >0.5 MW. The average PSD ratio of signal to noise at the output of the final amplification stage was determined to be 18:1.
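The quoted gain and pulse energies are mutually consistent, as a rough cross-check shows (this estimate ignores the pulse shape and any inter-stage losses):

```python
e_in = 325e-15           # seed pulse energy from the text, J
gain_db = 73.0           # total gain from the text, dB
tau = 11e-12             # pulse duration from the text, s

e_out = e_in * 10 ** (gain_db / 10)     # energy after 73 dB of gain, ~6.5 uJ
p_peak = e_out / tau                    # rough peak power (ignores pulse shape)

print(f"E_out ~ {e_out*1e6:.1f} uJ, P_peak ~ {p_peak/1e6:.2f} MW")
```

Both values land above the quoted lower bounds of 5.6 µJ and 0.5 MW.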


Internal combustion engines are, and will continue to be, a primary mode of power generation for ground transportation. Challenges exist in meeting fuel consumption regulations and emission standards while upholding performance, as fuel prices rise and resource depletion and environmental impacts are of increasing concern. Diesel engines are advantageous due to their inherent efficiency advantage over spark-ignition engines; however, their NOx and soot emissions can be difficult to control and reduce due to an inherent tradeoff. Diesel combustion is spray- and mixing-controlled, providing an intrinsic link between spray and emissions and motivating detailed, fundamental studies of spray, vaporization, mixing, and combustion characteristics under engine-relevant conditions. An optical combustion vessel facility has been developed at Michigan Technological University for these studies, with detailed tests and analysis being conducted. In this combustion vessel facility a preburn procedure for thermodynamic state generation is used, and validated using chemical kinetics modeling both for the MTU vessel and for institutions comprising the Engine Combustion Network international collaborative research initiative. It is shown that the minor species produced are representative of modern diesel engines running exhaust gas recirculation and do not impact the autoignition of n-heptane. Diesel spray testing of a high-pressure (2000 bar) multi-hole injector is undertaken, including non-vaporizing, vaporizing, and combusting tests, with sprays characterized using Mie back-scatter imaging diagnostics. Liquid-phase spray parameter trends agree with the literature. Fluctuations in liquid length about a quasi-steady value are quantified, along with plume-to-plume variations. Hypotheses are developed for their causes, including fuel pressure fluctuations, nozzle cavitation, internal injector flow and geometry, chamber temperature gradients, and turbulence.
These are explored using a mixing-limited vaporization model with an equation-of-state approach for thermophysical properties. This model is also applied to single- and multi-component surrogates. Results include the development of the combustion research facility and a validated thermodynamic state generation procedure. The developed equation-of-state approach provides a means of improving surrogate fuels, both single- and multi-component, in terms of diesel spray liquid length, with knowledge of only critical fuel properties. Experimental studies are coupled with modeling incorporating improved thermodynamic non-ideal gas and fuel
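Quantifying fluctuations about a quasi-steady liquid length amounts to a mean and an RMS over the quasi-steady window; a sketch with invented samples (not the measured MTU data):

```python
import statistics

# Illustrative liquid-length samples (mm) after the spray has reached its
# quasi-steady state; real data would come from Mie back-scatter images.
liquid_length = [18.2, 18.9, 18.5, 19.1, 18.4, 18.7, 18.3, 18.8]

mean_ll = statistics.mean(liquid_length)
fluct_rms = statistics.pstdev(liquid_length)   # RMS fluctuation about the mean

print(f"quasi-steady liquid length {mean_ll:.2f} mm, "
      f"RMS fluctuation {fluct_rms:.2f} mm")
```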


RATIONALE AND OBJECTIVES: The aim of this study was to measure the radiation dose of dual-energy and single-energy multidetector computed tomographic (CT) imaging using adult liver, renal, and aortic imaging protocols. MATERIALS AND METHODS: Dual-energy CT (DECT) imaging was performed on a conventional 64-detector CT scanner using a software upgrade (Volume Dual Energy) at tube voltages of 140 and 80 kVp (with tube currents of 385 and 675 mA, respectively), with a 0.8-second gantry revolution time in axial mode. Parameters for single-energy CT (SECT) imaging were a tube voltage of 140 kVp, a tube current of 385 mA, a 0.5-second gantry revolution time, helical mode, and pitch of 1.375:1. The volume CT dose index (CTDI(vol)) value displayed on the console for each scan was recorded. Organ doses were measured using metal oxide semiconductor field-effect transistor technology. Effective dose was calculated as the sum of 20 organ doses multiplied by a weighting factor found in International Commission on Radiological Protection Publication 60. Radiation dose saving with virtual noncontrast imaging reconstruction was also determined. RESULTS: The CTDI(vol) values were 49.4 mGy for DECT imaging and 16.2 mGy for SECT imaging. Effective dose ranged from 22.5 to 36.4 mSv for DECT imaging and from 9.4 to 13.8 mSv for SECT imaging. Virtual noncontrast imaging reconstruction reduced the total effective dose of multiphase DECT imaging by 19% to 28%. CONCLUSION: Using the current Volume Dual Energy software, radiation doses with DECT imaging were higher than those with SECT imaging. Substantial radiation dose savings are possible with DECT imaging if virtual noncontrast imaging reconstruction replaces precontrast imaging.
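The effective-dose calculation described above is a weighted sum of organ doses; a sketch using a handful of ICRP Publication 60 tissue weighting factors (the organ doses are invented, and only 6 of the 20 organs are shown):

```python
# ICRP 60 tissue weighting factors for a few organs; organ doses in mSv
# are illustrative, not the measured MOSFET values.
w_T = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12,
       "liver": 0.05, "thyroid": 0.05, "skin": 0.01}
organ_dose_mSv = {"gonads": 8.0, "lung": 30.0, "stomach": 25.0,
                  "liver": 35.0, "thyroid": 5.0, "skin": 12.0}

# Effective dose: sum of organ doses multiplied by their weighting factors,
# as described in the abstract (the study sums 20 organs).
effective = sum(w_T[o] * organ_dose_mSv[o] for o in w_T)
print(f"E = {effective:.2f} mSv")
```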


Radiocarbon offers a unique possibility for unambiguous source apportionment of carbonaceous particles, since it directly distinguishes non-fossil from fossil carbon. In this work, particulate matter of different size fractions was collected at 4 sites in Switzerland to examine whether fine and coarse carbonaceous particles exhibit different fossil and contemporary sources. Elemental carbon (EC) and organic carbon (OC), as well as water-soluble OC (WSOC) and water-insoluble OC (WINSOC), were separated and determined for subsequent 14C measurement. In general, both fossil and non-fossil fractions of OC and EC were found to be more abundant in the fine than in the coarse mode. However, a substantial fraction (~20 ± 5%) of fossil EC was found in coarse particles, which could be attributed to traffic-induced non-exhaust emissions. The contribution of biomass burning to coarse-mode EC in winter was relatively high, which is likely associated with the coating of EC with organic and/or inorganic substances emitted from intensive wood burning. Further, fossil OC (i.e. from vehicle emissions) was found to be smaller than non-fossil OC, due to the presence of primary biogenic OC and/or the growth in size of wood-burning OC particles during aging processes. The 14C content in WSOC indicated that secondary organic carbon stems from non-fossil precursors for all samples. Interestingly, both fossil and non-fossil WINSOC concentrations were found to be higher in fine particles than in coarse particles in winter, which is likely due to primary wood-burning emissions and/or secondary formation of WINSOC.
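The apportionment rests on the fact that fossil carbon is 14C-free; a sketch of the mass balance with invented numbers (the non-fossil reference fraction modern is an assumed value):

```python
# Fraction-modern based source apportionment (illustrative numbers).
# f_m: measured fraction of modern carbon in an EC sample;
# f_nf_ref: reference fraction modern for purely non-fossil sources
# (slightly above 1 because of the nuclear-bomb 14C peak; assumed value).
f_m = 0.25
f_nf_ref = 1.10

non_fossil = f_m / f_nf_ref      # share of EC from non-fossil sources
fossil = 1.0 - non_fossil        # fossil sources contain no 14C

print(f"non-fossil: {non_fossil:.0%}, fossil: {fossil:.0%}")
```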


The distributions of event-by-event harmonic flow coefficients v_n for n = 2-4 are measured in sqrt(s_NN) = 2.76 TeV Pb+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using charged particles with transverse momentum pT > 0.5 GeV and in the pseudorapidity range |eta| < 2.5, in a dataset of approximately 7 ub^-1 recorded in 2010. The shapes of the v_n distributions are described by a two-dimensional Gaussian function for the underlying flow vector in central collisions for v_2, and over most of the measured centrality range for v_3 and v_4. Significant deviations from this function are observed for v_2 in mid-central and peripheral collisions, and a small deviation is observed for v_3 in mid-central collisions. It is shown that the commonly used multi-particle cumulants are insensitive to the deviations for v_2. The v_n distributions are also measured independently for charged particles with 0.5 < pT < 1 GeV and pT > 1 GeV. When these distributions are rescaled to the same mean values, the adjusted shapes are found to be nearly the same for these two pT ranges. The v_n distributions are compared with the eccentricity distributions from two models for the initial collision geometry: a Glauber model and a model that includes corrections to the initial geometry due to gluon saturation effects. Both models fail to describe the experimental data consistently over most of the measured centrality range.
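For reference, a two-dimensional Gaussian distribution of the underlying flow vector with mean flow $\bar{v}_n$ and width $\delta$ implies the standard Bessel-Gaussian form for the magnitude, $p(v_n) = \frac{v_n}{\delta^2}\,\exp\!\left(-\frac{v_n^2 + \bar{v}_n^2}{2\delta^2}\right) I_0\!\left(\frac{v_n \bar{v}_n}{\delta^2}\right)$, where $I_0$ is a modified Bessel function of the first kind; it is deviations from this form that the measurement tests.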


Using 1.8 fb^-1 of pp collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the Large Hadron Collider, we present measurements of the production cross sections of Upsilon(1S,2S,3S) mesons. Upsilon mesons are reconstructed using the dimuon decay mode. Total production cross sections for pT < 70 GeV and in the rapidity interval |y(Upsilon)| < 2.25 are measured to be 8.01 ± 0.02 ± 0.36 ± 0.31 nb, 2.05 ± 0.01 ± 0.12 ± 0.08 nb, and 0.92 ± 0.01 ± 0.07 ± 0.04 nb, respectively, with uncertainties separated into statistical, systematic, and luminosity measurement effects. In addition, differential cross sections times dimuon branching fractions for Upsilon(1S), Upsilon(2S), and Upsilon(3S) as a function of Upsilon transverse momentum pT and rapidity are presented. These cross sections are obtained assuming unpolarized production. If the production polarization is fully transverse or longitudinal with no azimuthal dependence in the helicity frame, the cross section may vary by approximately ±20%. If a nontrivial azimuthal dependence is considered, integrated cross sections may be significantly enhanced by a factor of two or more. We compare our results to several theoretical models of Upsilon meson production, finding that none provide an accurate description of our data over the full range of Upsilon transverse momenta accessible with this data set.


A measurement of splitting scales, as defined by the kT clustering algorithm, is presented for final states containing a W boson produced in proton-proton collisions at a centre-of-mass energy of 7 TeV. The measurement is based on the full 2010 data sample, corresponding to an integrated luminosity of 36 pb^-1, which was collected using the ATLAS detector at the CERN Large Hadron Collider. Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a kT cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales. Backgrounds such as multi-jet and top-quark-pair production are subtracted and the results are corrected for detector effects. Predictions from various Monte Carlo event generators at particle level are compared to the data. Overall, reasonable agreement is found with all generators, but larger deviations between the predictions and the data are evident in the soft regions of the splitting scales.
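The splitting scales follow the kT-algorithm distance measure; a sketch of the pairwise distance for two invented objects (the radius parameter R used here is an assumption, not the paper's value):

```python
import math

def kt_splitting_scale(p1, p2, R=0.6):
    """kT-algorithm distance between two objects given as (pT, y, phi).

    d_ij = min(pT_i^2, pT_j^2) * DeltaR_ij^2 / R^2; the splitting scale
    is conventionally quoted as sqrt(d_ij), in GeV.
    """
    dphi = abs(p1[2] - p2[2])
    if dphi > math.pi:                      # wrap the azimuthal difference
        dphi = 2 * math.pi - dphi
    dr2 = (p1[1] - p2[1]) ** 2 + dphi ** 2
    return math.sqrt(min(p1[0] ** 2, p2[0] ** 2) * dr2 / R ** 2)

# Two illustrative hadronic objects (pT in GeV, rapidity, azimuth):
scale = kt_splitting_scale((40.0, 0.1, 0.5), (25.0, -0.4, 1.2))
print(f"sqrt(d_ij) = {scale:.1f} GeV")
```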