902 results for "Multivariate measurement model"


Relevance: 30.00%

Abstract:

Distributions sensitive to the underlying event in QCD jet events have been measured with the ATLAS detector at the LHC, based on 37 pb⁻¹ of proton–proton collision data collected at a centre-of-mass energy of 7 TeV. Charged-particle mean pT and densities of all-particle ET and charged-particle multiplicity and pT have been measured in regions azimuthally transverse to the hardest jet in each event. These are presented both as one-dimensional distributions and with their mean values as functions of the leading-jet transverse momentum from 20 to 800 GeV. The correlation of charged-particle mean pT with charged-particle multiplicity is also studied, and the ET densities include the forward rapidity region; these features provide extra data constraints for Monte Carlo modelling of colour reconnection and beam-remnant effects respectively. For the first time, underlying event observables have been computed separately for inclusive jet and exclusive dijet event selections, allowing more detailed study of the interplay of multiple partonic scattering and QCD radiation contributions to the underlying event. Comparisons to the predictions of different Monte Carlo models show a need for further model tuning, but the standard approach is found to generally reproduce the features of the underlying event in both types of event selection.
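The region decomposition used in such underlying-event analyses can be sketched in a few lines of Python. This is a generic illustration of the standard definitions (toward |Δφ| < 60°, transverse 60°–120°, away > 120°, measured from the leading jet), not the ATLAS analysis code:

```python
import math

def ue_regions(particle_phis, particle_pts, leading_jet_phi):
    """Sum charged-particle pT in the toward/transverse/away regions
    defined by the azimuthal angle to the leading jet (illustrative
    helper, not the ATLAS analysis code)."""
    sums = {"toward": 0.0, "transverse": 0.0, "away": 0.0}
    for phi, pt in zip(particle_phis, particle_pts):
        # fold the azimuthal difference into [0, pi]
        dphi = abs(math.remainder(phi - leading_jet_phi, 2 * math.pi))
        if dphi < math.pi / 3:
            sums["toward"] += pt
        elif dphi > 2 * math.pi / 3:
            sums["away"] += pt
        else:
            sums["transverse"] += pt
    return sums
```

Dividing the transverse-region sums by the region area (here 2 × (2π/3) × Δη for a tracking acceptance Δη) then gives the pT and multiplicity densities quoted in the measurement.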

Relevance: 30.00%

Abstract:

A measurement of event-plane correlations involving two or three event planes of different order is presented as a function of centrality for 7 μb⁻¹ of Pb+Pb collision data at √s_NN = 2.76 TeV, recorded by the ATLAS experiment at the Large Hadron Collider. Fourteen correlators are measured using a standard event-plane method and a scalar-product method, and the latter method is found to give a systematically larger correlation signal. Several different trends in the centrality dependence of these correlators are observed. These trends are not reproduced by predictions based on the Glauber model, which includes only the correlations from the collision geometry in the initial state. Calculations that include the final-state collective dynamics are able to describe qualitatively, and in some cases also quantitatively, the centrality dependence of the measured correlators. These observations suggest that both the fluctuations in the initial geometry and the nonlinear mixing between different harmonics in the final state are important for creating these correlations in momentum space.
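The event-plane method estimates an angle Ψ_n per harmonic from the particle azimuths via the flow Q-vector; a minimal sketch (without the sub-event and resolution corrections a real measurement applies) is:

```python
import cmath
import math

def event_plane_angle(phis, n):
    """n-th order event-plane angle Psi_n from the flow Q-vector
    Q_n = sum_k exp(i*n*phi_k). Simplified sketch: a real measurement
    adds sub-event splitting and resolution corrections."""
    q = sum(cmath.exp(1j * n * phi) for phi in phis)
    return cmath.phase(q) / n  # Psi_n in (-pi/n, pi/n]

def two_plane_correlator(phis):
    """Raw per-event two-plane correlator cos(4*(Psi_2 - Psi_4))."""
    return math.cos(4 * (event_plane_angle(phis, 2) - event_plane_angle(phis, 4)))
```

Averaging such per-event cosines over an event ensemble, with the appropriate resolution corrections, yields correlators of the type measured in the paper.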

Relevance: 30.00%

Abstract:

Measurements of fiducial cross sections for the electroweak production of two jets in association with a Z-boson are presented. The measurements are performed using 20.3 fb⁻¹ of proton-proton collision data collected at a centre-of-mass energy of √s = 8 TeV by the ATLAS experiment at the Large Hadron Collider. The electroweak component is extracted by a fit to the dijet invariant mass distribution in a fiducial region chosen to enhance the electroweak contribution over the dominant background in which the jets are produced via the strong interaction. The electroweak cross sections measured in two fiducial regions are in good agreement with the Standard Model expectations and the background-only hypothesis is rejected with significance above the 5σ level. The electroweak process includes the vector boson fusion production of a Z-boson and the data are used to place limits on anomalous triple gauge boson couplings. In addition, measurements of cross sections and differential distributions for inclusive Z-boson-plus-dijet production are performed in five fiducial regions, each with different sensitivity to the electroweak contribution. The results are corrected for detector effects and compared to predictions from the Sherpa and Powheg event generators.
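The dijet invariant mass used in such fits is computed from the two jet four-momenta; a small illustrative helper, assuming jets are given as (pT, η, φ, m):

```python
import math

def four_vector(pt, eta, phi, m):
    """(E, px, py, pz) from standard collider kinematics (pT, eta, phi, m)."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + m**2)
    return e, px, py, pz

def dijet_mass(jet1, jet2):
    """Invariant mass of the two-jet system,
    m_jj = sqrt((E1+E2)^2 - |p1+p2|^2)."""
    e1, px1, py1, pz1 = four_vector(*jet1)
    e2, px2, py2, pz2 = four_vector(*jet2)
    m2 = (e1 + e2)**2 - (px1 + px2)**2 - (py1 + py2)**2 - (pz1 + pz2)**2
    return math.sqrt(max(m2, 0.0))
```

For two massless back-to-back jets of equal pT, m_jj is simply twice the jet energy, which is a convenient sanity check.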

Relevance: 30.00%

Abstract:

Double-differential dijet cross-sections measured in pp collisions at the LHC with a 7 TeV centre-of-mass energy are presented as functions of dijet mass and half the rapidity separation of the two highest-pT jets. These measurements are obtained using data corresponding to an integrated luminosity of 4.5 fb⁻¹, recorded by the ATLAS detector in 2011. The data are corrected for detector effects so that cross-sections are presented at the particle level. Cross-sections are measured up to 5 TeV dijet mass using jets reconstructed with the anti-kt algorithm for values of the jet radius parameter of 0.4 and 0.6. The cross-sections are compared with next-to-leading-order perturbative QCD calculations by NLOJet++ corrected to account for non-perturbative effects. Comparisons with POWHEG predictions, using a next-to-leading-order matrix element calculation interfaced to a parton-shower Monte Carlo simulation, are also shown. Electroweak effects are accounted for in both cases. The quantitative comparison of data and theoretical predictions obtained using various parameterizations of the parton distribution functions is performed using a frequentist method. In general, good agreement with data is observed for the NLOJet++ theoretical predictions when using the CT10, NNPDF2.1 and MSTW 2008 PDF sets. Disagreement is observed when using the ABM11 and HERAPDF1.5 PDF sets for some ranges of dijet mass and half the rapidity separation. An example setting a lower limit on the compositeness scale for a model of contact interactions is presented, showing that the unfolded results can be used to constrain contributions to dijet production beyond that predicted by the Standard Model.
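The second observable, half the rapidity separation y* = |y1 − y2| / 2, follows directly from the jet energies and longitudinal momenta; a minimal sketch:

```python
import math

def rapidity(e, pz):
    """Rapidity y = 0.5 * ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((e + pz) / (e - pz))

def y_star(jet1, jet2):
    """Half the rapidity separation of the two leading jets,
    y* = |y1 - y2| / 2, the second axis of the double-differential
    measurement. Illustrative sketch; jets given as (E, pz)."""
    return abs(rapidity(*jet1) - rapidity(*jet2)) / 2
```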

Relevance: 30.00%

Abstract:

The OPERA detector, designed to search for νμ → ντ oscillations in the CNGS beam, is located in the underground Gran Sasso laboratory, a privileged location to study TeV-scale cosmic rays. For the analysis presented here, the detector was used to measure the atmospheric muon charge ratio in the TeV region. OPERA collected charge-separated cosmic ray data between 2008 and 2012. More than 3 million atmospheric muon events were detected and reconstructed, among which about 110,000 multiple muon bundles. The charge ratio Rμ ≡ Nμ+/Nμ− was measured separately for single and for multiple muon events. The analysis exploited the inversion of the magnet polarity which was performed on purpose during the 2012 Run. The combination of the two data sets with opposite magnet polarities allowed minimizing systematic uncertainties and reaching an accurate determination of the muon charge ratio. Data were fitted to obtain relevant parameters on the composition of primary cosmic rays and the associated kaon production in the forward fragmentation region. In the surface energy range 1–20 TeV investigated by OPERA, Rμ is well described by a parametric model including only pion and kaon contributions to the muon flux, showing no significant contribution of the prompt component. The energy independence supports the validity of Feynman scaling in the fragmentation region up to 200 TeV/nucleon primary energy.
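The pion-plus-kaon parametric model referred to above gives each muon charge a pion term and a kaon term, suppressed above the respective critical energies of roughly 115 GeV and 850 GeV; a hedged sketch, with placeholder (not OPERA best-fit) positive-meson fractions:

```python
def muon_charge_ratio(e_cos_theta, f_pi_plus=0.552, f_k_plus=0.705):
    """Atmospheric muon charge ratio from the standard pi/K parametric
    model (Gaisser-type): each charge gets a pion and a kaon contribution
    with critical energies ~115 GeV and ~850 GeV. The default positive
    fractions f_pi_plus and f_k_plus are illustrative placeholders, not
    the OPERA fit values. e_cos_theta is E_mu * cos(theta) in GeV."""
    def flux(f_pi, f_k):
        return (f_pi / (1 + 1.1 * e_cos_theta / 115.0)
                + 0.054 * f_k / (1 + 1.1 * e_cos_theta / 850.0))
    n_plus = flux(f_pi_plus, f_k_plus)
    n_minus = flux(1 - f_pi_plus, 1 - f_k_plus)
    return n_plus / n_minus
```

Because kaon production favours the positive charge more strongly than pion production, the model predicts a ratio that rises slowly with E·cos(θ) as the kaon term gains weight, which is the behaviour fitted to the OPERA data.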

Relevance: 30.00%

Abstract:

ARGONTUBE is a liquid argon time projection chamber (LAr TPC) with a drift field generated in-situ by a Greinacher voltage multiplier circuit. We present results on the measurement of the drift-field distribution inside ARGONTUBE using straight ionization tracks generated by an intense UV laser beam. Our analysis is based on a simplified model of the charging of a multi-stage Greinacher circuit to describe the voltages on the field cage rings.
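A simplified Greinacher (Cockcroft-Walton) charging model of the kind described can be sketched as follows; the no-load stage voltages and the textbook droop formula are standard, while the linear distribution of the droop across stages is a deliberately crude assumption of this sketch, not the ARGONTUBE model:

```python
def greinacher_stage_voltages(n_stages, v_peak, load_current=0.0,
                              freq=50.0, cap=1e-6):
    """Idealized per-stage voltages of an N-stage Greinacher
    (Cockcroft-Walton) multiplier. With no load, stage k sits at
    2*k*V_peak; a steady load current I causes the textbook droop
    dV = I/(f*C) * (2N^3/3 + N^2/2 + N/6) at the last stage. The droop
    is distributed linearly over the stages here, a crude approximation
    used only for illustration."""
    n = n_stages
    total_droop = load_current / (freq * cap) * (2 * n**3 / 3 + n**2 / 2 + n / 6)
    return [2 * k * v_peak - total_droop * k / n for k in range(1, n + 1)]
```

In a field cage fed by such a circuit, each ring taps one stage, so a model of this type predicts the voltage (and hence drift-field) profile along the drift direction.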

Relevance: 30.00%

Abstract:

New data from the T2K neutrino oscillation experiment produce the most precise measurement of the neutrino mixing parameter θ₂₃. Using an off-axis neutrino beam with a peak energy of 0.6 GeV and a data set corresponding to 6.57×10²⁰ protons on target, T2K has fit the energy-dependent ν_μ oscillation probability to determine oscillation parameters. The 68% confidence limit on sin²(θ₂₃) is 0.514 +0.055/−0.056 (0.511 ± 0.055), assuming normal (inverted) mass hierarchy. The best-fit mass-squared splitting for normal hierarchy is Δm²₃₂ = (2.51 ± 0.10)×10⁻³ eV²/c⁴ (inverted hierarchy: Δm²₁₃ = (2.48 ± 0.10)×10⁻³ eV²/c⁴). Adding a model of multinucleon interactions that affect neutrino energy reconstruction is found to produce only small biases in neutrino oscillation parameter extraction at current levels of statistical uncertainty.
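The fitted ν_μ disappearance is well approximated by the standard two-flavor survival probability; evaluating it at the T2K baseline (295 km) with the quoted best-fit values illustrates the near-maximal disappearance at the 0.6 GeV flux peak:

```python
import math

def numu_survival(sin2_theta23, dm2_ev2, l_km, e_gev):
    """Two-flavor nu_mu survival probability,
    P = 1 - sin^2(2*theta_23) * sin^2(1.267 * dm^2[eV^2] * L[km] / E[GeV]),
    a standard approximation to the oscillation probability fitted by T2K."""
    sin2_2theta = 4 * sin2_theta23 * (1 - sin2_theta23)
    phase = 1.267 * dm2_ev2 * l_km / e_gev
    return 1 - sin2_2theta * math.sin(phase) ** 2

# With the quoted best-fit values, the survival probability at the flux
# peak comes out close to zero, i.e. near-maximal disappearance:
p_peak = numu_survival(0.514, 2.51e-3, 295.0, 0.6)
```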

Relevance: 30.00%

Abstract:

Stratospheric ozone is of major interest as it absorbs most harmful UV radiation from the sun, allowing life on Earth. Ground-based microwave remote sensing is the only method that allows for the measurement of ozone profiles up to the mesopause, over 24 hours and under different weather conditions with high time resolution. In this paper, a novel ground-based microwave radiometer is presented. It is called GROMOS-C (GRound based Ozone MOnitoring System for Campaigns), and it has been designed to measure the vertical profile of ozone distribution in the middle atmosphere by observing ozone emission spectra at a frequency of 110.836 GHz. The instrument is designed in a compact way which makes it transportable and suitable for outdoor use in campaigns, an advantageous feature that is lacking in present-day ozone radiometers. It is operated through remote control. GROMOS-C is a total power radiometer which uses a pre-amplified heterodyne receiver, and a digital fast Fourier transform spectrometer for the spectral analysis. Among its main new features, the incorporation of different calibration loads stands out; this includes a noise diode and a new type of blackbody target specifically designed for this instrument, based on Peltier elements. The calibration scheme does not depend on the use of liquid nitrogen; therefore GROMOS-C can be operated at remote places with no maintenance requirements. In addition, the instrument can be switched in frequency to observe the CO line at 115 GHz. A description of the main characteristics of GROMOS-C is included in this paper, as well as the results of a first campaign at the High Altitude Research Station at Jungfraujoch (HFSJ), Switzerland.
The validation is performed by comparing the retrieved profiles against equivalent profiles from MLS (Microwave Limb Sounder) satellite data and ECMWF (European Centre for Medium-Range Weather Forecasts) model data, as well as measurements from our nearby NDACC (Network for the Detection of Atmospheric Composition Change) ozone radiometer in Bern.
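The hot/cold calibration of a total-power radiometer reduces to a two-point linear map from detector voltage to noise temperature; a generic sketch of that scheme (GROMOS-C substitutes a Peltier-cooled blackbody and a noise diode for the traditional liquid-nitrogen cold load):

```python
def calibrate_antenna_temperature(v_sky, v_hot, v_cold, t_hot, t_cold):
    """Two-point (hot/cold load) calibration of a total-power radiometer:
    assuming the detector output is linear in input noise temperature,
    T_sky = T_cold + (V_sky - V_cold) / (V_hot - V_cold) * (T_hot - T_cold).
    Generic sketch of the scheme, not the GROMOS-C calibration code."""
    gain_slope = (t_hot - t_cold) / (v_hot - v_cold)
    return t_cold + (v_sky - v_cold) * gain_slope
```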

Relevance: 30.00%

Abstract:

The Microwave Emission Model of Layered Snowpacks (MEMLS) was originally developed for microwave emissions of snowpacks in the frequency range 5–100 GHz. It is based on six-flux theory to describe radiative transfer in snow including absorption, multiple volume scattering, radiation trapping due to internal reflection and a combination of coherent and incoherent superposition of reflections between horizontal layer interfaces. Here we introduce MEMLS3&a, an extension of MEMLS, which includes a backscatter model for active microwave remote sensing of snow. The reflectivity is decomposed into diffuse and specular components. Slight undulations of the snow surface are taken into account. The treatment of like- and cross-polarization is accomplished by an empirical splitting parameter q. MEMLS3&a (as well as MEMLS) is set up in a way that snow input parameters can be derived by objective measurement methods which avoid fitting procedures of the scattering efficiency of snow, required by several other models. For the validation of the model we have used a combination of active and passive measurements from the NoSREx (Nordic Snow Radar Experiment) campaign in Sodankylä, Finland. We find a reasonable agreement between the measurements and simulations, subject to uncertainties in hitherto unmeasured input parameters of the backscatter model. The model is written in Matlab and the code is publicly available for download through the following website: http://www.iapmw.unibe.ch/research/projects/snowtools/memls.html.

Relevance: 30.00%

Abstract:

Clinical oncologists and cancer researchers benefit from information on the vascularization or non-vascularization of solid tumors because of blood flow's influence on three popular treatment types: hyperthermia therapy, radiotherapy, and chemotherapy. The objective of this research is the development of a clinically useful tumor blood flow measurement technique. The designed technique is sensitive, has good spatial resolution, is non-invasive, and presents no risk to the patient beyond his usual treatment (measurements will be subsequent only to normal patient treatment).

Tumor blood flow was determined by measuring the washout of positron emitting isotopes created through neutron therapy treatment. In order to do this, several technical and scientific questions were addressed first. These questions were: (1) What isotopes are created in tumor tissue when it is irradiated in a neutron therapy beam and how much of each isotope is expected? (2) What are the chemical states of the isotopes that are potentially useful for blood flow measurements and will those chemical states allow these or other isotopes to be washed out of the tumor? (3) How should isotope washout by blood flow be modeled in order to most effectively use the data? These questions have been answered through both theoretical calculation and measurement.

The first question was answered through the measurement of macroscopic cross sections for the predominant nuclear reactions in the body. These results correlate well with an independent mathematical prediction of tissue activation and measurements of mouse spleen neutron activation. The second question was addressed by performing cell suspension and protein precipitation techniques on neutron activated mouse spleens. The third and final question was answered by using first physical principles to develop a model mimicking the blood flow system and measurement technique.

In a final set of experiments, the above were applied to flow models and animals. The ultimate aim of this project is to apply its methodology to neutron therapy patients.
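The washout-modeling idea can be illustrated with a single-compartment sketch in which physical decay and blood-flow washout add as rate constants; this is a hypothetical illustration of the concept, not the dissertation's model:

```python
import math

def activity(t, a0, lam_phys, lam_flow):
    """Single-compartment washout sketch: the measured activity decays
    with an effective constant combining physical decay and blood-flow
    washout, A(t) = A0 * exp(-(lambda_phys + lambda_flow) * t)."""
    return a0 * math.exp(-(lam_phys + lam_flow) * t)

def flow_constant_from_half_lives(t_half_measured, t_half_phys):
    """Recover the washout (flow) constant by subtracting the known
    physical decay constant from the measured effective constant."""
    lam_eff = math.log(2) / t_half_measured
    lam_phys = math.log(2) / t_half_phys
    return lam_eff - lam_phys
```

Because the physical half-life of each positron emitter is known, any faster-than-physical disappearance of activity from the tumor can be attributed to washout and hence used as a blood-flow index.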

Relevance: 30.00%

Abstract:

Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumor. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded a coefficient of variation of 6.7% and 29% respectively, implying that the calculated rCBF value is far more precise for gray matter than white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and provide a quantitative comparison of several existing perfusion models. This study guided the selection of the optimum perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6 and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. 
Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
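The bootstrap estimate of precision described above resamples the data with replacement and looks at the spread of the resampled statistic; a generic sketch of a bootstrap coefficient of variation for the mean (illustrative, not the study's pipeline):

```python
import random
import statistics

def bootstrap_cv(samples, n_resamples=2000, seed=0):
    """Bootstrap coefficient of variation of the sample mean: resample
    with replacement, record each resample's mean, then report
    std(means) / mean(means). Generic sketch of the analysis style, not
    the study's rCBF pipeline."""
    rng = random.Random(seed)
    n = len(samples)
    means = [statistics.fmean(rng.choices(samples, k=n))
             for _ in range(n_resamples)]
    return statistics.stdev(means) / statistics.fmean(means)
```

Applied to voxel rCBF values, a small CV (as reported for gray matter) indicates a precise estimate, while a large CV (as for white matter) flags an unreliable one.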

Relevance: 30.00%

Abstract:

Lovell and Rouse (LR) have recently proposed a modification of the standard DEA model that overcomes the infeasibility problem often encountered in computing super-efficiency. In the LR procedure one appropriately scales up the observed input vector (scale down the output vector) of the relevant super-efficient firm thereby usually creating its inefficient surrogate. An alternative procedure proposed in this paper uses the directional distance function introduced by Chambers, Chung, and Färe and the resulting Nerlove-Luenberger (NL) measure of super-efficiency. The fact that the directional distance function combines features of both an input-oriented and an output-oriented model, generally leads to a more complete ranking of the observations than either of the oriented models. An added advantage of this approach is that the NL super-efficiency measure is unique and does not depend on any arbitrary choice of a scaling parameter. A data set on international airlines from Coelli, Perelman, and Griffel-Tatje (2002) is utilized in an illustrative empirical application.
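The NL super-efficiency computation is a linear program: for the evaluated firm, maximize β such that the frontier spanned by the remaining firms dominates the point (x_o − β·x_o, y_o + β·y_o), with direction g = (x_o, y_o). A sketch under constant returns to scale using scipy; this is an illustrative implementation of the idea, not the paper's code:

```python
import numpy as np
from scipy.optimize import linprog

def nl_super_efficiency_beta(X, Y, o):
    """Directional-distance super-efficiency (CRS sketch): for firm o,
    maximize beta s.t. sum_j lambda_j x_j <= (1 - beta) x_o and
    sum_j lambda_j y_j >= (1 + beta) y_o over the OTHER firms only.
    Negative beta marks a super-efficient firm; the Nerlove-Luenberger
    measure is derived from this beta. X: (n_firms, n_inputs),
    Y: (n_firms, n_outputs)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    others = [j for j in range(len(X)) if j != o]
    n = len(others)
    # decision vector: [lambda_1..lambda_n, beta]; minimize -beta
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # inputs:  sum(lambda_j x_j) + beta * x_o <= x_o
    a_in = np.hstack([X[others].T, X[o][:, None]])
    # outputs: -sum(lambda_j y_j) + beta * y_o <= -y_o
    a_out = np.hstack([-Y[others].T, Y[o][:, None]])
    res = linprog(c,
                  A_ub=np.vstack([a_in, a_out]),
                  b_ub=np.concatenate([X[o], -Y[o]]),
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[-1] if res.success else None
```

Because β is unbounded below, the program stays feasible even for firms outside the reduced frontier, which is exactly the infeasibility problem of the standard radial super-efficiency model that the directional formulation avoids.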

Relevance: 30.00%

Abstract:

With substance abuse treatment expanding in prisons and jails, understanding how behavior change interacts with a restricted setting becomes more essential. The Transtheoretical Model (TTM) has been used to understand intentional behavior change in unrestricted settings; however, evidence indicates restrictive settings can affect the measurement and structure of the TTM constructs. The present study examined data from problem drinkers at baseline and end-of-treatment from three studies: (1) Project CARE (n = 187) recruited inmates from a large county jail; (2) Project Check-In (n = 116) recruited inmates from a state prison; (3) Project MATCH, a large multi-site alcohol study, had two recruitment arms, aftercare (n = 724 pre-treatment and 650 post-treatment) and outpatient (n = 912 pre-treatment and 844 post-treatment). The analyses were conducted using cross-sectional data to test for non-invariance of measures of the TTM constructs: readiness, confidence, temptation, and processes of change (Structural Equation Modeling, SEM) across restricted and unrestricted settings. Two restricted (jail and aftercare) and one unrestricted group (outpatient) entering treatment and one restricted (prison) and two unrestricted groups (aftercare and outpatient) at end-of-treatment were contrasted. In addition, TTM end-of-treatment profiles were tested as predictors of 12-month drinking outcomes (Profile Analysis). Although SEM did not indicate structural differences in the overall TTM construct model across setting types, there were factor structure differences on the confidence and temptation constructs at pre-treatment and in the factor structure of the behavioral processes at the end-of-treatment. For pre-treatment temptation and confidence, differences were found in the social situations factor loadings and in the variance for the confidence and temptation latent factors.
For the end-of-treatment behavioral processes, differences across the restricted and unrestricted settings were identified in the counter-conditioning and stimulus control factor loadings. The TTM end-of-treatment profiles were not predictive of drinking outcomes in the prison sample. Both pre- and post-treatment differences in structure across setting types involved constructs operationalized with behaviors that are limited for those in restricted settings. These studies suggest the TTM is a viable model for explicating addictive behavior change in restricted settings but call for modification of subscale items that refer to specific behaviors and caution in interpreting the mean differences across setting types for problem drinkers.

Relevance: 30.00%

Abstract:

The airliner cabin environment and its effects on occupant health have not been fully characterized. This dissertation comprises: (1) A review of airliner environmental control systems (ECSs) that modulate the ventilation, temperature, relative humidity (RH), and barometric pressure (PB) of the cabin environment, variables related to occupant comfort and health. (2) A review and assessment of the methods and findings of key cabin air quality (CAQ) investigations. Several significant deficiencies impede the drawing of inferences about CAQ, e.g., lack of detail about investigative methods, differences in methods between investigations, limited assessment of CAQ variables, small sample sizes, and technological deficiencies of data collection. (3) A comprehensive evaluation of the methods used in the subsequent NIOSH-FAA Airliner CAQ Exposure Assessment Feasibility Study (STUDY) in which this author participated. A number of problems were identified which limit the usefulness of the data. (4) An analysis of the reliable 10-flight STUDY data. Univariate and multivariate methods applied to CO2 (a surrogate for air contaminants), temperature, RH, and PB, in association with percent passenger load, ventilation system, flight duration, airliner body type, and measurement location within the cabin, revealed that neither the measured values nor their variability exceeded established health-based exposure limits. Regression analyses suggest CO2, temperature, and RH were affected by percent passenger load. In-flight measurements of CO2 and RH were relatively independent of ventilation system type or flight duration. Cabin temperature was associated with percent passenger load, ventilation system type, and flight duration. (5) A synthesis of the implications of the airliner ECS and cabin O2 environment on occupant health. A model was developed to predict consequences of the airliner cabin pressure altitude 8,000 ft limit and resulting model-estimated PO2 on cardiopulmonary status.
Based on the PB, altitude, and environmental data derived from the 10 STUDY flights, the predicted PaO2 of adults with COPD, or elderly adults with or without COPD, breathing ambient cabin air could be < 55 mm Hg (SaO2 < 88%). Reduction in cabin PB found in the STUDY flights could aggravate various medical conditions and require the use of in-flight supplemental O2.
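The oxygen arithmetic behind such a model can be illustrated with the standard alveolar gas equation; the textbook defaults for PaCO2 and the respiratory quotient below are assumptions for illustration, not values from the dissertation:

```python
def alveolar_po2(pb_mmhg, fio2=0.2095, paco2=40.0, rq=0.8):
    """Alveolar oxygen partial pressure from the alveolar gas equation,
    PAO2 = FiO2 * (PB - 47) - PaCO2 / RQ, where 47 mmHg is the water
    vapor pressure at body temperature. Default PaCO2 and RQ are
    textbook values for a healthy adult, used here only to illustrate
    the model's inputs."""
    return fio2 * (pb_mmhg - 47.0) - paco2 / rq

# Sea level (760 mmHg) vs. the 8,000 ft cabin-altitude limit (~565 mmHg):
po2_sea = alveolar_po2(760.0)    # roughly 99 mmHg
po2_cabin = alveolar_po2(565.0)  # roughly 59 mmHg
```

The drop of roughly 40 mmHg in alveolar PO2 between sea level and the cabin-altitude limit is what pushes the predicted arterial PO2 of compromised passengers toward the < 55 mm Hg region noted above.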

Relevance: 30.00%

Abstract:

Background. Retail clinics, also called convenience care clinics, have become a rapidly growing trend since their initial development in 2000. These clinics are coupled within a larger retail operation and are generally located in "big-box" discount stores such as Wal-mart or Target, grocery stores such as Publix or H-E-B, or in retail pharmacies such as CVS or Walgreen's (Deloitte Center for Health Solutions, 2008). Care is typically provided by nurse practitioners. Research indicates that this new health care delivery system reduces cost, raises quality, and provides a means of access to the uninsured population (e.g., Deloitte Center for Health Solutions, 2008; Convenient Care Association, 2008a, 2008b, 2008c; Hansen-Turton, Miller, Nash, Ryan, Counts, 2007; Salinsky, 2009; Scott, 2006; Ahmed & Fincham, 2010). Some healthcare analysts even suggest that retail clinics offer a feasible solution to the shortage of primary care physicians facing the nation (AHRQ Health Care Innovations Exchange, 2010).

The development and performance of retail clinics is heavily dependent upon individual state policies regulating NPs. Texas currently has one of the most highly regulated practice environments for NPs (Stout & Elton, 2007; Hammonds, 2008). In September 2009, Texas passed Senate Bill 532 addressing the scope of practice of nurse practitioners in the convenience care model. In comparison to other states, this law still heavily regulates nurse practitioners. However, little research has been conducted to evaluate the impact of state laws regulating nurse practitioners on the development and performance of retail clinics.

Objectives. (1) To describe the potential impact that SB 532 has on retail clinic performance. (2) To discuss the effectiveness, efficiency, and equity of the convenience care model. (3) To describe possible alternatives to Texas' nurse practitioner scope of practice guidelines as delineated in Texas Senate Bill 532. (4) To describe the type of nurse practitioner state regulation (i.e., independent, light, moderate, or heavy) that best promotes the convenience care model.

Methods. State regulations governing nurse practitioners can be characterized as independent, light, moderate, and heavy. Four state NP regulatory types and retail clinic performance were compared and contrasted to those of Texas regulations using Dunn and Aday's theoretical models for conducting policy analysis and evaluating healthcare systems. Criteria for measurement included effectiveness, efficiency, and equity. Comparison states were Arizona (Independent), Minnesota (Light), Massachusetts (Moderate), and Florida (Heavy).

Results. A comparative states analysis of Texas SB 532 and alternative NP scope of practice guidelines among the four states (Arizona, Florida, Massachusetts, and Minnesota) indicated that SB 532 has minimal potential to affect the shortage of primary care providers in the state. Although SB 532 may increase the number of NPs a physician may supervise, NPs are still heavily restricted in their scope of practice and limited in their ability to act as primary care providers. Arizona's example of independent NP practice provided the best alternative to affect the shortage of PCPs in Texas, as evidenced by a lower uninsured rate and fewer ED visits per 1,000 population. A survey of comparison states suggests that retail clinics thrive in states that more heavily restrict NP scope of practice as opposed to those that are more permissive, with the exception of Arizona. An analysis of effectiveness, efficiency, and equity of the convenience care model indicates that retail clinics perform well in the areas of effectiveness and efficiency but fall short in the area of equity.

Conclusion. Texas Senate Bill 532 represents an incremental step towards addressing the shortage of PCPs in the state. A comparative policy analysis of the other four states with varying degrees of NP scope of practice indicates that a more aggressive policy allowing for independent NP practice will be needed to achieve positive changes in health outcomes. Retail clinics pose a temporary solution to the shortage of PCPs and will need to expand their locations to poorer regions and incorporate some chronic care to obtain measurable health outcomes.