983 results for current measurement
Abstract:
In the present study we use multivariate analysis techniques to discriminate signal from background in the fully hadronic decay channel of ttbar events. We give a brief introduction to the role of the top quark in the Standard Model and a general description of the CMS experiment at the LHC. We used the CMS computing and software infrastructure to generate and prepare the data samples used in this analysis. We tested the performance of three different classifiers applied to our data samples and used the selection obtained with the Multi-Layer Perceptron classifier to estimate the statistical and systematic uncertainty on the cross section measurement.
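The final step described above, turning selected event counts into a cross section with a statistical uncertainty, can be sketched as follows; all numbers are illustrative, not values from the thesis:

```python
import math

def cross_section(n_obs, n_bkg, efficiency, luminosity):
    """Cross section: sigma = (N_obs - N_bkg) / (efficiency * integrated luminosity)."""
    return (n_obs - n_bkg) / (efficiency * luminosity)

def stat_uncertainty(n_obs, efficiency, luminosity):
    """Poisson statistical uncertainty: sqrt(N_obs) scaled by the same factors."""
    return math.sqrt(n_obs) / (efficiency * luminosity)

# Hypothetical counts, selection efficiency, and luminosity (pb^-1 -> sigma in pb)
sigma = cross_section(n_obs=10000, n_bkg=6000, efficiency=0.05, luminosity=100.0)
dsigma = stat_uncertainty(10000, 0.05, 100.0)
```

Systematic uncertainties would be propagated separately, e.g. by varying the efficiency and background estimates within their uncertainties.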
Abstract:
In 1998 a pilot experiment was carried out to study the helicity dependence of photoreaction cross sections using circularly polarized real photons on longitudinally polarized deuterons in a deuterated butanol target. Knowledge of these cross sections is required to test the validity of the Gerasimov-Drell-Hearn sum rule on the deuteron and the neutron. The focus of this thesis is on the results for the differential and total cross sections of the photodisintegration reaction at photon energies in the range from 200 to 450 MeV, using data taken with the detector system DAPHNE. The current understanding of the NN interaction, as represented by the calculations of M. Schwamb, could be confirmed within the given uncertainties. In addition, the detector DAPHNE was prepared for the main experiment in 2003. The corresponding work is presented together with results of the quality-test measurements of the renewed detector components.
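The sum rule tested here is commonly written in the following standard form (a sketch from the literature, not a formula quoted from the thesis), where sigma_{3/2} and sigma_{1/2} are the helicity-dependent total photoabsorption cross sections, kappa the anomalous magnetic moment of the target, M its mass, and nu the photon energy:

```latex
% Gerasimov-Drell-Hearn sum rule (standard form)
\int_{\nu_{\mathrm{thr}}}^{\infty}
  \frac{\sigma_{3/2}(\nu) - \sigma_{1/2}(\nu)}{\nu}\,\mathrm{d}\nu
  = \frac{2\pi^2 \alpha}{M^2}\,\kappa^2
```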
Abstract:
Charmless charged two-body B decays are sensitive probes of the CKM matrix, which parameterizes CP violation in the Standard Model (SM), and have the potential to reveal the presence of New Physics. The framework of CP violation within the SM, the role of the CKM matrix with its basic formalism, and the current experimental status are presented. The theoretical tools commonly used to deal with hadronic B decays and an overview of the phenomenology of charmless two-body B decays are outlined. LHCb is one of the four main experiments operating at the Large Hadron Collider (LHC), devoted to the measurement of CP violation and rare decays of charm and beauty hadrons. The LHCb detector is described, focusing on the technologies adopted for each sub-detector and summarizing their performance. The state-of-the-art of LHCb measurements with charmless two-body B decays is then presented. Using the 37/pb of integrated luminosity collected at sqrt(s) = 7 TeV by LHCb during 2010, the direct CP asymmetries ACP(B0 -> Kpi) = -0.074 +/- 0.033 +/- 0.008 and ACP(Bs -> piK) = 0.15 +/- 0.19 +/- 0.02 are measured. Using 320/pb of integrated luminosity collected during 2011, these measurements are updated to ACP(B0 -> Kpi) = -0.088 +/- 0.011 +/- 0.008 and ACP(Bs -> piK) = 0.27 +/- 0.08 +/- 0.02. In addition, the branching ratios BR(B0 -> K+K-) = (0.13+0.06-0.05 +/- 0.07) x 10^-6 and BR(Bs -> pi+pi-) = (0.98+0.23-0.19 +/- 0.11) x 10^-6 are measured. Finally, using a sample of 370/pb of integrated luminosity collected during 2011, the relative branching ratios BR(B0 -> pi+pi-)/BR(B0 -> Kpi) = 0.262 +/- 0.009 +/- 0.017, (fs/fd)BR(Bs -> K+K-)/BR(B0 -> Kpi) = 0.316 +/- 0.009 +/- 0.019, (fs/fd)BR(Bs -> piK)/BR(B0 -> Kpi) = 0.074 +/- 0.006 +/- 0.006 and BR(Lambda_b -> ppi)/BR(Lambda_b -> pK) = 0.86 +/- 0.08 +/- 0.05 are determined.
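At its core, a direct CP asymmetry of this kind is a normalized yield difference between the CP-conjugate final states. A minimal sketch with hypothetical yields; the production- and detection-asymmetry corrections of the real analysis are omitted:

```python
import math

def raw_acp(n_fbar, n_f):
    """Raw CP asymmetry: (N(fbar) - N(f)) / (N(fbar) + N(f))."""
    return (n_fbar - n_f) / (n_fbar + n_f)

def acp_stat_error(n_fbar, n_f):
    """Binomial statistical uncertainty on the raw asymmetry."""
    n = n_fbar + n_f
    a = (n_fbar - n_f) / n
    return math.sqrt((1.0 - a * a) / n)

# Hypothetical background-subtracted yields
asym = raw_acp(5700, 4300)
asym_err = acp_stat_error(5700, 4300)
```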
Abstract:
Volatile organic compounds play a critical role in ozone formation and, together with OH radicals, drive the chemistry of the atmosphere. The simplest volatile organic compound, methane, is a climatologically important greenhouse gas, and plays a key role in regulating water vapour in the stratosphere and hydroxyl radicals in the troposphere. The OH radical is the most important atmospheric oxidant, and knowledge of the atmospheric OH sink, together with the OH source and ambient OH concentrations, is essential for understanding the oxidative capacity of the atmosphere. Oceanic emission and/or uptake of methanol, acetone, acetaldehyde, isoprene and dimethyl sulphide (DMS) was characterized as a function of photosynthetically active radiation (PAR) and a suite of biological parameters in a mesocosm experiment conducted in a Norwegian fjord. High-frequency (ca. 1 min^-1) methane measurements were performed using a gas chromatograph with flame ionization detector (GC-FID) in the boreal forests of Finland and the tropical forests of Suriname. A new on-line method (Comparative Reactivity Method - CRM) was developed to directly measure the total OH reactivity (sink) of ambient air. It was observed that under conditions of high biological activity and a PAR of ~450 μmol photons m-2 s-1, the ocean acted as a net source of acetone. However, if either of these criteria was not fulfilled, the ocean acted as a net sink of acetone. This new insight into the biogeochemical cycling of acetone at the ocean-air interface helps to resolve discrepancies between earlier works such as Jacob et al. (2002), who reported the ocean to be a net acetone source (27 Tg yr-1), and Marandino et al. (2005), who reported the ocean to be a net sink of acetone (-48 Tg yr-1). The ocean acted as a net source of isoprene, DMS and acetaldehyde but a net sink of methanol.
Based on these findings, it is recommended that compound-specific PAR and biological dependencies be used for estimating the influence of the global ocean on atmospheric VOC budgets. Methane was observed to accumulate within the nocturnal boundary layer, clearly indicating emissions from the forest ecosystems. There was a remarkable similarity in the time series of the boreal and tropical forest ecosystems. The averages of the median mixing ratios during a typical diel cycle were 1.83 μmol mol-1 and 1.74 μmol mol-1 for the boreal and tropical forest ecosystems, respectively. A flux value of (3.62 ± 0.87) x 10^11 molecules cm-2 s-1 (or 45.5 ± 11 Tg CH4 yr-1 for the global boreal forest area) was derived, which highlights the importance of the boreal forest ecosystem for the global budget of methane (~600 Tg yr-1). The newly developed CRM technique has a dynamic range of ~4 s-1 to 300 s-1 and an accuracy of ±25%. The system has been tested and calibrated with several single and mixed hydrocarbon standards, showing excellent linearity and agreement with the reactivity of the standards. Field tests at an urban and a forest site illustrate the promise of the new method. The results from this study have improved current understanding of VOC emissions and uptake by ocean and forest ecosystems. Moreover, a new technique for directly measuring the total OH reactivity of ambient air has been developed and validated, which will be a valuable addition to the existing suite of atmospheric measurement techniques.
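The total OH reactivity that the CRM technique measures is, by definition, the sum over reactive species of rate constant times concentration. A minimal sketch; the rate constants are representative room-temperature literature-style values (assumed here, not taken from the thesis) and the mixing ratios are hypothetical:

```python
# OH rate constants in cm^3 molecule^-1 s^-1 (assumed, representative values)
RATE_CONSTANTS = {
    "CO": 2.4e-13,
    "CH4": 6.4e-15,
    "isoprene": 1.0e-10,
}

AIR_DENSITY = 2.46e19  # molecules cm^-3 at ~25 C and 1 atm

def number_density(mixing_ratio_ppb):
    """Convert a mixing ratio in ppb to molecules cm^-3."""
    return mixing_ratio_ppb * 1e-9 * AIR_DENSITY

def total_oh_reactivity(mixing_ratios_ppb):
    """Total OH reactivity (sink) in s^-1: sum of k_i * [X_i]."""
    return sum(RATE_CONSTANTS[sp] * number_density(v)
               for sp, v in mixing_ratios_ppb.items())

# Hypothetical ambient mixing ratios (ppb)
reactivity = total_oh_reactivity({"CO": 150.0, "CH4": 1830.0, "isoprene": 1.0})
```

Even a single ppb of a fast-reacting species such as isoprene can dominate the total, which is why forest sites show high OH reactivity.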
Abstract:
One of the most precisely measured quantities in particle physics is the magnetic moment of the muon, which describes its coupling to an external magnetic field. It is expressed in the form of the anomalous magnetic moment of the muon a_mu = (g_mu - 2)/2 and has been determined experimentally with a precision of 0.5 parts per million. The current direct measurement and the theoretical prediction of the Standard Model differ by more than 3.5 standard deviations. On the theory side, the contributions of QED and the weak interaction to a_mu can be calculated with very high precision in a perturbative approach. At low energies, however, perturbation theory cannot be used to determine the hadronic contribution a^had_mu. Instead, a^had_mu may be derived via a dispersion relation from the sum of measured cross sections of exclusive hadronic reactions. Decreasing the experimental uncertainty on these hadronic cross sections is of utmost importance for an improved Standard Model prediction of a_mu.

In addition to traditional energy scan experiments, the method of Initial State Radiation (ISR) is used to measure hadronic cross sections. This approach allows experiments at colliders running at a fixed centre-of-mass energy to access smaller effective energies by studying events which contain a high-energy photon emitted from the initial electron or positron. Using the technique of ISR, the energy range from threshold up to 4.5 GeV can be accessed at BaBar.

The cross section e+e- -> pi+pi- contributes approximately 70% of the hadronic part of the anomalous magnetic moment of the muon a_mu^had. This important channel has been measured with a precision of better than 1%. The leading contribution to the uncertainty of a_mu^had at present therefore stems from the invariant mass region between 1 GeV and 2 GeV. In this energy range, the channels e+e- -> pi+pi-pi+pi- and e+e- -> pi+pi-pi0pi0 dominate the inclusive hadronic cross section.

The measurement of the process e+e- -> pi+pi-pi+pi- is presented in this thesis. This channel was previously measured by BaBar using 25% of the total dataset. The new analysis includes a more detailed study of the background contamination from other ISR and non-radiative background reactions. In addition, sophisticated studies of the track reconstruction as well as of the photon efficiency difference between data and the simulation of the BaBar detector are performed. With these auxiliary studies, a reduction of the systematic uncertainty from 5.0% to 2.4% in the peak region was achieved.

The pi+pi-pi+pi- final state has a rich internal structure. Hints are seen for the intermediate states rho(770)^0 f_2(1270), rho(770)^0 f_0(980), as well as a_1(1260)pi. In addition, the branching ratios BR(J/psi -> pi+pi-pi+pi-) and BR(psi(2S) -> J/psi pi+pi-) are extracted.
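The dispersion relation mentioned above is commonly written as follows (a standard textbook form, not quoted from the thesis), with K(s) a known QED kernel function and R(s) the measured ratio of hadronic to muonic cross sections:

```latex
% Leading-order hadronic contribution from measured e+e- cross sections
a_\mu^{\mathrm{had,LO}}
  = \frac{\alpha^2}{3\pi^2}
    \int_{4 m_\pi^2}^{\infty} \frac{\mathrm{d}s}{s}\, K(s)\, R(s),
\qquad
R(s) = \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)}
```

Because K(s)/s falls steeply with s, the low-energy region (and hence the pi+pi- and four-pion channels) dominates the integral.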
Abstract:
In this thesis the measurement of the effective weak mixing angle sin^2(theta_eff) in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined upon the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp -> Z/gamma* + X -> ee + X, taking a total integrated luminosity of 4.8 fb^-1 of data into account. The data were recorded at a proton-proton centre-of-mass energy of sqrt(s) = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of electromagnetism and the weak force. Higher-order corrections to sin^2(theta_eff) are related to other SM parameters such as the mass of the Higgs boson.

Because of the symmetric initial state of the colliding protons, there is no favoured forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen with respect to the longitudinal boost of the electron-positron final state. As a result, events at low absolute rapidity have a higher chance of being assigned to the opposite direction of the reference axis. This effect, called dilution, is reduced when events at higher rapidities are used. It can be studied by including electrons and positrons in the forward regions of the ATLAS calorimeters (electrons and positrons are referred to collectively as electrons in the following). To include the electrons from the forward region, the energy calibration for the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of a central and a forward electron and the previously derived central electron energy calibration. The uncertainty is shown to be dominated by the systematic variations.

The extraction of sin^2(theta_eff) is performed using chi^2 tests, comparing the measured distribution of AFB in data to a set of template distributions with varied values of sin^2(theta_eff). The templates are built with a forward-folding technique using modified generator-level samples and the official signal sample with full detector simulation and particle reconstruction and identification. The analysis is performed in two different channels: pairs of central electrons, or one central and one forward electron. The results of the two channels are in good agreement and constitute the first measurements of sin^2(theta_eff) at the Z resonance using electron final states in proton-proton collisions at sqrt(s) = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties resulting from the knowledge of the parton distribution functions (PDF) and the systematic uncertainties of the energy calibration.

The extracted results are combined and yield a value of sin^2(theta_eff) = 0.2288 +- 0.0004 (stat.) +- 0.0009 (syst.) = 0.2288 +- 0.0010 (tot.). The measurements are compared to the results of previous measurements at the Z boson resonance. The deviation with respect to the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
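The template method described above reduces to a chi^2 scan over hypothesis values: compare the measured AFB distribution to each template and pick the minimum. A minimal sketch with hypothetical template and measurement values (the real analysis uses fully simulated, forward-folded templates):

```python
def chi2(measured, template, errors):
    """Chi-square between a measured AFB distribution and one template."""
    return sum(((m - t) / e) ** 2 for m, t, e in zip(measured, template, errors))

def fit_mixing_angle(measured, errors, templates):
    """Return the assumed mixing-angle value whose template minimizes chi2."""
    return min(templates, key=lambda w: chi2(measured, templates[w], errors))

# Hypothetical AFB values in three invariant-mass bins, keyed by the assumed angle
templates = {
    0.2280: [0.010, 0.050, 0.090],
    0.2288: [0.012, 0.055, 0.095],
    0.2296: [0.014, 0.060, 0.100],
}
best = fit_mixing_angle([0.012, 0.054, 0.096], [0.004, 0.004, 0.004], templates)
```

In practice the chi^2 values near the minimum are fit with a parabola to interpolate between template points and read off the statistical uncertainty.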
Abstract:
This thesis treats the forward and inverse theory of transient eddy current problems. Transient excitation currents induce electromagnetic fields, which generate so-called eddy currents in conductive objects. In the case of slowly varying fields, this interaction can be described by the eddy current equation, an approximation to Maxwell's equations. It is a linear partial differential equation with non-smooth coefficient functions of mixed parabolic-elliptic type. The forward problem consists of determining the electric field as a distributional solution of the equation for a given excitation and given coefficient functions describing the surroundings. Conversely, the fields can be measured with measurement coils. The goal of the inverse problem is to extract from these measurements information about conductive objects, i.e. about the coefficient function that describes them. In this thesis a variational solution theory is presented and the well-posedness of the equation is discussed. Building on this, the behaviour of the solution for vanishing conductivity is studied, and the linearizability of the equation without a conductive object in the direction of the appearance of a conductive object is shown. To regularize the equation, modifications are proposed that yield a fully parabolic or a fully elliptic problem; these are verified by showing convergence of the solutions. Finally, it is shown that, under the assumption of otherwise homogeneous surrounding parameters, conductive objects can be uniquely localized from the measurements; for this, the Linear Sampling Method and the Factorization Method are applied.
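The eddy current equation referred to above is commonly written as follows (a standard form from the literature, not quoted from the thesis); it is parabolic where the conductivity sigma is positive and elliptic where sigma vanishes, which is the mixed type discussed:

```latex
% Eddy current approximation of Maxwell's equations:
% E electric field, sigma conductivity, mu permeability, J_s source current
\partial_t\bigl(\sigma E\bigr)
  + \operatorname{curl}\bigl(\mu^{-1}\operatorname{curl} E\bigr)
  = -\,\partial_t J_s
```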
Abstract:
Measurement of bladder wall thickness (BWT) using transvaginal ultrasound has previously been shown to discriminate between women with confirmed detrusor overactivity and those with urodynamic stress incontinence. The aim of the current study was to determine whether vaginally measured BWT correlates with urodynamic diagnoses in a female population.
Abstract:
Noninvasive blood flow measurements based on Doppler ultrasound are the main clinical tool for studying the cardiovascular status of fetuses at risk for circulatory compromise. Usually, qualitative analysis of peripheral arteries is used to gauge the level of compensation in a fetus; in particular clinical situations, such as severe growth restriction or volume overload, venous vessels close to the heart or flow patterns in the heart are also assessed. Quantitative assessment of the driving force of the fetal circulation, the cardiac output, however, remains an elusive goal in fetal medicine. This article reviews the methods for direct and indirect assessment of cardiac function and explains new clinical applications. Part 1 of this review describes the concepts of cardiac function and cardiac output and the techniques that have been used to quantify output. Part 2 summarizes the use of arterial and venous Doppler studies in the fetus and gives a detailed description of indirect measures of cardiac function (such as indices derived from the duration of segments of the cardiac cycle) with current examples of their application.
Abstract:
There are two main types of bone in the human body, trabecular and cortical bone. Cortical bone is primarily found on the outer surface of most bones in the body while trabecular bone is found in vertebrae and at the end of long bones (Ross 2007). Osteoporosis is a condition that compromises the structural integrity of trabecular bone, greatly reducing the ability of the bone to absorb energy from falls. The current method for diagnosing osteoporosis and predicting fracture risk is measurement of bone mineral density. Limitations of this method include dependence on the bone density measurement device and dependence on type of test and measurement location (Rubin 2005). Each year there are approximately 250,000 hip fractures in the United States due to osteoporosis (Kleerekoper 2006). Currently, the most common method for repairing a hip fracture is a hip fixation surgery. During surgery, a temporary guide wire is inserted to guide the permanent screw into place and then removed. It is believed that directly measuring this screw pullout force may result in a better assessment of bone quality than current indirect measurement techniques (T. Bowen 2008-2010, pers. comm.). The objective of this project is to design a device that can measure the force required to extract this guide wire. It is believed that this would give the surgeon a direct, quantitative measurement of bone quality at the site of the fixation. A first generation device was designed by a Bucknell Biomedical Engineering Senior Design team during the 2008-2009 academic year. The first step of this project was to examine the device, conduct a thorough design analysis, and brainstorm new concepts. The concept selected uses a translational screw to extract the guide wire. The device was fabricated and underwent validation testing to ensure that the device was functional and met the required engineering specifications.
Two tests were conducted: one to test the functionality of the device by checking whether it gave repeatable results, and the other to test the sensitivity of the device to misalignment. Guide wires were extracted from three materials - low-density polyethylene, ultra-high-molecular-weight polyethylene, and polypropylene - and the force of extraction was measured. During testing, it was discovered that the spring in the device did not have a high enough spring constant to reach the high forces necessary for extracting the wires without excessive deflection of the spring. The test procedure was modified slightly so the wires were not fully threaded into the material. The testing results indicate that there is significant variation in the screw pullout force, up to 30% of the average value. This variation was attributed to problems in the testing and data collection, and a revised set of tests was proposed to better evaluate the performance of the device. The fabricated device is a fully functioning prototype, and further refinement and testing may lead to a third-generation version capable of measuring the screw pullout force during hip fixation surgery.
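The quoted variation, up to 30% of the average value, corresponds to the largest relative deviation of an individual pullout force from the mean. A minimal sketch with hypothetical force values:

```python
def pullout_variation(forces):
    """Largest deviation from the mean, as a fraction of the mean pullout force."""
    mean = sum(forces) / len(forces)
    return max(abs(f - mean) for f in forces) / mean

# Hypothetical extraction forces (N) from repeated pulls in one material
variation = pullout_variation([70.0, 100.0, 130.0])
```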
Abstract:
This project had a threefold aim and sought to provide answers to several different questions. Kossowska first focused on the relationship between Openness to Experience and ideological variables such as authoritarianism and conservatism. The main questions here were (1) whether there are differences between the Polish and Belgian samples studied with respect to the relationship between political ideology and Openness to Experience, and (2) whether this relationship applies to all facets of Openness to Experience. The study showed significant negative correlations between Openness and right-wing ideology in both adult samples, and that Fantasy and Actions were the most robust correlates of political ideology. A second problem examined concerned the relationship between ideology and cognitive functioning. The important questions here concerned the conceptualisation and measurement of cognitive variables such as rigidity, intolerance of ambiguity, or the need for closure, which determine individuals' attitudes to politics. The results confirmed the significance of the need-for-closure construct in both samples for understanding the process of formulating and holding political beliefs. The last aspect of the study was the differences in political beliefs between the Polish and Belgian samples in relation to the social, political and economic situation in the two countries. The most important question here concerned the changes in the political mentality of Poles during the period of system transition. Kossowska expected to find differences between Poles and Belgians with respect to the level of conservatism and authoritarianism, but in fact both samples showed comparable levels of right-wing political beliefs.
Abstract:
A non-intrusive interferometric measurement technique has been successfully developed to measure fluid compressibility in both gas and liquid phases via refractive index (RI) changes. The technique, consisting of an unfocused laser beam impinging on a glass channel, can be used to separate and quantify cell deflection, fluid flow rates, and pressure variations in microchannels. Currently, in fields such as microfluidics, pressure and flow rate measurement devices are orders of magnitude larger than the channel cross-sections, making direct pressure and fluid flow rate measurements impossible. Due to the non-intrusive nature of this technique, such measurements are now possible, opening the door to a wide range of new scientific research and experimentation. The technique, adapted from the concept of Micro Interferometric Backscatter Detection (MIBD), boasts the ability to provide comparable sensitivities in a variety of channel types and provides quantification capability not previously demonstrated in backscatter detection techniques. Measurement sensitivity depends heavily on experimental parameters such as beam impingement angle, fluid volume, photodetector sensitivity, and the channel's dimensional tolerances. The current apparatus readily quantifies fluid RI changes of 10^-5 refractive index units (RIU), corresponding to pressures of approximately 14 psi and 1 psi in water and air, respectively. MIBD reports detection capability as low as 10^-9 RIU, and the newly adapted technique has the potential to meet and exceed this limit, providing quantification in place of detection. Specific device sensitivities are discussed and suggestions are provided on how the technique may be refined to provide optimal quantification capabilities based on experimental conditions.
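Using the sensitivities quoted above (an RI change of about 10^-5 RIU corresponds to roughly 14 psi in water and 1 psi in air), a pressure change can be inferred from a measured refractive-index change. A minimal sketch; the linear model is an assumption for illustration:

```python
# Sensitivities from the abstract, expressed as RIU per psi (assumed linear)
SENSITIVITY_RIU_PER_PSI = {
    "water": 1.0e-5 / 14.0,
    "air": 1.0e-5 / 1.0,
}

def pressure_from_ri_change(delta_n, fluid):
    """Pressure change (psi) inferred from a measured refractive-index change (RIU)."""
    return delta_n / SENSITIVITY_RIU_PER_PSI[fluid]

p_water = pressure_from_ri_change(1.0e-5, "water")
p_air = pressure_from_ri_change(2.0e-5, "air")
```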
Abstract:
QUESTION UNDER STUDY: The purpose was to prospectively validate the accuracy and reliability of automated oscillometric ankle-brachial index (ABI) measurement against the current gold standard of Doppler-assisted ABI determination. METHODS: Oscillometric ABI was measured in 50 consecutive patients with peripheral arterial disease (n = 100 limbs, mean age 65 +/- 6 years, 31 men, 19 diabetics) after both high and low ABI had been determined conventionally by Doppler under standardised conditions. Correlation was assessed by linear regression and the Pearson product-moment correlation. The degree of inter-modality agreement was quantified using the Bland-Altman method. RESULTS: Oscillometry was performed significantly faster than Doppler-assisted ABI (3.9 +/- 1.3 vs 11.4 +/- 3.8 minutes, P <0.001). Mean readings were 0.62 +/- 0.25, 0.70 +/- 0.22 and 0.63 +/- 0.39 for low, high and oscillometric ABI, respectively. Correlation between oscillometry and Doppler ABI was good overall (r = 0.76 for both low and high ABI) and excellent in oligo-symptomatic, non-diabetic patients (r = 0.81; 0.07 +/- 0.23); it was, however, limited in diabetic patients and in patients with critical limb ischaemia. In general, oscillometric ABI readings were slightly higher (+0.06), but linear regression analysis showed that the correlation was sustained over the whole range of measurements. CONCLUSIONS: Results of automated oscillometric ABI determination correlated well with Doppler-assisted measurements and could be obtained in a shorter time. Agreement was particularly high in oligo-symptomatic non-diabetic patients.
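The Bland-Altman analysis used above quantifies agreement by computing the mean bias between two paired measurement methods and its 95% limits of agreement (bias ± 1.96 standard deviations of the differences). A minimal sketch with hypothetical paired ABI readings:

```python
import math

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two paired methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    # sample standard deviation of the differences
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired ABI readings (oscillometric vs Doppler)
bias, lower, upper = bland_altman([1.0, 1.1, 0.9, 1.2], [0.9, 1.0, 1.0, 1.1])
```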
Abstract:
RATIONALE AND OBJECTIVES: The aim of this study was to measure the radiation dose of dual-energy and single-energy multidetector computed tomographic (CT) imaging using adult liver, renal, and aortic imaging protocols. MATERIALS AND METHODS: Dual-energy CT (DECT) imaging was performed on a conventional 64-detector CT scanner using a software upgrade (Volume Dual Energy) at tube voltages of 140 and 80 kVp (with tube currents of 385 and 675 mA, respectively), with a 0.8-second gantry revolution time in axial mode. Parameters for single-energy CT (SECT) imaging were a tube voltage of 140 kVp, a tube current of 385 mA, a 0.5-second gantry revolution time, helical mode, and pitch of 1.375:1. The volume CT dose index (CTDI(vol)) value displayed on the console for each scan was recorded. Organ doses were measured using metal oxide semiconductor field-effect transistor technology. Effective dose was calculated as the sum of 20 organ doses multiplied by a weighting factor found in International Commission on Radiological Protection Publication 60. Radiation dose saving with virtual noncontrast imaging reconstruction was also determined. RESULTS: The CTDI(vol) values were 49.4 mGy for DECT imaging and 16.2 mGy for SECT imaging. Effective dose ranged from 22.5 to 36.4 mSv for DECT imaging and from 9.4 to 13.8 mSv for SECT imaging. Virtual noncontrast imaging reconstruction reduced the total effective dose of multiphase DECT imaging by 19% to 28%. CONCLUSION: Using the current Volume Dual Energy software, radiation doses with DECT imaging were higher than those with SECT imaging. Substantial radiation dose savings are possible with DECT imaging if virtual noncontrast imaging reconstruction replaces precontrast imaging.
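The effective-dose calculation described above is a weighted sum of organ doses. A minimal sketch using a subset of ICRP Publication 60 tissue weighting factors; the study sums 20 weighted organ doses, and the organ doses below are hypothetical:

```python
# Subset of ICRP Publication 60 tissue weighting factors
ICRP60_WEIGHTS = {
    "gonads": 0.20,
    "lung": 0.12,
    "stomach": 0.12,
    "liver": 0.05,
    "thyroid": 0.05,
    "skin": 0.01,
}

def effective_dose(organ_doses_msv):
    """Effective dose (mSv): weighted sum of organ equivalent doses."""
    return sum(ICRP60_WEIGHTS[organ] * dose
               for organ, dose in organ_doses_msv.items())

# Hypothetical organ equivalent doses (mSv) from a scan
e_dose = effective_dose({"lung": 10.0, "liver": 20.0})
```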
Abstract:
Over the last two years, a discussion on reforming the public sector has emerged. At its very heart we find important concepts like ‘quality reform’, ‘democracy’, and ‘development’. Recently I presented an example of the ‘quality reform’ in SocMag, and this leads me to extend that discussion of central themes of the welfare state and democracy. Much energy is invested in arguing about management of the public sector: Do we need more competition from private companies? Do we need more control? Are more contracts concerning outcomes needed? Can we be sure about the accountability needed from politicians? How much documentation, effectiveness measurement, bureaucracy, and evidence-based policy and practice are we looking for? A number of interesting questions - but strangely enough we do not discuss the purpose of keeping a welfare state. What sort of understanding lies behind the welfare state, and what kind of democracy are we drawing upon?