939 results for measurement error models
Abstract:
Accurate three-dimensional (3D) models of lumbar vertebrae are required for image-based 3D kinematics analysis. MRI or CT datasets are frequently used to derive 3D models but have the disadvantages of being expensive, time-consuming, or involving ionizing radiation (e.g., CT acquisition). In this chapter, we present an alternative technique that can reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image and a statistical shape model. Cadaveric studies are conducted to verify the reconstruction accuracy by comparing the surface models reconstructed from a single lateral fluoroscopic image to the ground-truth data from 3D CT segmentation. A mean reconstruction error between 0.7 and 1.4 mm was found.
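The core idea can be illustrated compactly. The Python sketch below is not the chapter's actual pipeline, only a minimal stand-in: a PCA-based statistical shape model (mean shape plus variation modes) is fitted to 2D points from a single lateral view, with the projection idealised as orthographic and the fit solved by linear least squares; all arrays are synthetic placeholders.

```python
import numpy as np

# Minimal illustration of fitting a PCA-based statistical shape model (SSM)
# to 2D landmarks from a single lateral projection. All data here are
# synthetic placeholders; a real pipeline (silhouette extraction, iterative
# 2D/3D correspondence, similarity transform) is far more involved.

rng = np.random.default_rng(0)
n_pts, n_modes = 50, 5
mean_shape = rng.normal(size=(n_pts, 3))           # mean vertebra shape (placeholder)
modes = rng.normal(size=(n_modes, n_pts, 3)) * 0.1 # principal variation modes (placeholder)

def project_lateral(pts3d):
    """Orthographic lateral projection: keep the (x, z) coordinates."""
    return pts3d[:, [0, 2]]

def reconstruct(coeffs):
    """Instantiate a 3D shape from shape-mode coefficients."""
    return mean_shape + np.tensordot(coeffs, modes, axes=1)

# Synthetic "fluoroscopic" target: the projection of a shape with known coefficients.
true_coeffs = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
target_2d = project_lateral(reconstruct(true_coeffs))

# Projection is linear in the coefficients, so solve by linear least squares.
A = np.stack([project_lateral(m).ravel() for m in modes], axis=1)
b = (target_2d - project_lateral(mean_shape)).ravel()
est_coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

model_3d = reconstruct(est_coeffs)   # estimated scaled 3D model
print(np.allclose(est_coeffs, true_coeffs, atol=1e-8))
```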
Abstract:
A measurement of charged-particle distributions sensitive to the properties of the underlying event is presented for an inclusive sample of events containing a Z-boson decaying to an electron or muon pair. The measurement is based on data collected using the ATLAS detector at the LHC in proton–proton collisions at a centre-of-mass energy of 7 TeV with an integrated luminosity of 4.6 fb−1. Distributions of the charged-particle multiplicity and of the charged-particle transverse momentum are measured in regions of azimuthal angle defined with respect to the Z-boson direction. The measured distributions are compared to similar distributions measured in jet events, and to the predictions of various Monte Carlo generators implementing different underlying event models.
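For orientation, the sketch below shows the standard underlying-event convention of classifying particles into "toward", "transverse" and "away" regions by their azimuthal angle relative to a reference direction, here the Z-boson (the jet-based analysis further down this list uses the leading jet instead). The 60/120 degree boundaries are the usual convention; the toy event is invented.

```python
import numpy as np

# Classify charged particles by azimuthal angle relative to the Z-boson
# direction; the "transverse" region is the most sensitive to the
# underlying event.

def delta_phi(phi, phi_ref):
    """Signed azimuthal difference wrapped into [-pi, pi]."""
    return (phi - phi_ref + np.pi) % (2 * np.pi) - np.pi

def region(phi, phi_z):
    adphi = np.abs(delta_phi(phi, phi_z))
    if adphi < np.pi / 3:          # |dphi| < 60 deg
        return "toward"
    elif adphi < 2 * np.pi / 3:    # 60 deg <= |dphi| < 120 deg
        return "transverse"
    return "away"                  # recoil region

# Example: charged multiplicity per region for one toy event.
phi_z = 0.3
phis = np.random.default_rng(1).uniform(-np.pi, np.pi, size=40)
counts = {}
for p in phis:
    r = region(p, phi_z)
    counts[r] = counts.get(r, 0) + 1
print(counts)
```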
Abstract:
A measurement of the total pp cross section at the LHC at √s = 7 TeV is presented. In a special run with high-β* beam optics, an integrated luminosity of 80 μb−1 was accumulated in order to measure the differential elastic cross section as a function of the Mandelstam momentum-transfer variable t. The measurement is performed with the ALFA sub-detector of ATLAS. Using a fit to the differential elastic cross section in the |t| range from 0.01 GeV2 to 0.1 GeV2 to extrapolate to |t| → 0, the total cross section, σtot(pp→X), is measured via the optical theorem to be: σtot(pp→X) = 95.35 ± 0.38 (stat.) ± 1.25 (exp.) ± 0.37 (extr.) mb, where the first error is statistical, the second accounts for all experimental systematic uncertainties and the last is related to uncertainties in the extrapolation to |t| → 0. In addition, the slope of the elastic cross section at small |t| is determined to be B = 19.73 ± 0.14 (stat.) ± 0.26 (syst.) GeV−2.
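The optical-theorem step admits a short worked example. The forward elastic intercept fixes the total cross section through σtot² = 16π(ħc)²·(dσel/dt)|t=0 / (1 + ρ²), where ρ is the ratio of the real to imaginary part of the forward elastic amplitude. In the sketch below the intercept A and ρ are illustrative placeholders, not the ATLAS fit output.

```python
import numpy as np

# Fit dsigma_el/dt = A * exp(-B*|t|), extrapolate to t -> 0, then apply
# the optical theorem. Numbers are illustrative, not the measured values.

HBARC2 = 0.389379   # (hbar*c)^2 in mb * GeV^2 (unit conversion constant)
rho = 0.14          # assumed Re/Im ratio of the forward amplitude
A = 474.0           # illustrative intercept dsigma/dt|_{t=0} in mb/GeV^2

sigma_tot = np.sqrt(16.0 * np.pi * HBARC2 / (1.0 + rho**2) * A)
print(f"sigma_tot ~ {sigma_tot:.1f} mb")   # ~95 mb for these inputs
```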
Abstract:
ATLAS measurements of the azimuthal anisotropy in lead–lead collisions at √sNN = 2.76 TeV are shown using a dataset of approximately 7 μb−1 collected at the LHC in 2010. The measurements are performed for charged particles with transverse momenta 0.5 < pT < 20 GeV and in the pseudorapidity range |η| < 2.5. The anisotropy is characterized by the Fourier coefficients, vn, of the charged-particle azimuthal angle distribution for n = 2–4. The Fourier coefficients are evaluated using multi-particle cumulants calculated with the generating function method. Results on the transverse momentum, pseudorapidity and centrality dependence of the vn coefficients are presented. The elliptic flow, v2, is obtained from the two-, four-, six- and eight-particle cumulants, while the higher-order coefficients, v3 and v4, are determined with two- and four-particle cumulants. Flow harmonics vn measured with four-particle cumulants are significantly reduced compared to the measurement involving two-particle cumulants. A comparison to vn measurements obtained using different analysis methods and previously reported by the LHC experiments is also shown. Results of measurements of flow fluctuations evaluated with multi-particle cumulants are shown as a function of transverse momentum and the collision centrality. Models of the initial spatial geometry and its fluctuations fail to describe the measured flow fluctuations.
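What the two- and four-particle cumulants estimate can be shown with a toy Monte Carlo. The brute-force tuple averages below are a pedagogical stand-in for the generating-function method named in the abstract: events carry pure elliptic flow with an invented v2 and no nonflow, so v2{2} and v2{4} should both come out near the input value. With limited toy statistics c2{4} can fluctuate positive, hence the guard.

```python
import numpy as np
from itertools import permutations

# Toy two- and four-particle cumulant flow estimates:
#   c2{2} = <<2>>,  c2{4} = <<4>> - 2<<2>>^2,
#   v2{2} = sqrt(c2{2}),  v2{4} = (-c2{4})^{1/4}.
rng = np.random.default_rng(2)
v2_true, n_events, mult = 0.25, 250, 12

def toy_event():
    psi = rng.uniform(0, 2 * np.pi)           # random event plane
    phis = []
    while len(phis) < mult:                    # accept-reject sampling of
        phi = rng.uniform(0, 2 * np.pi)        # dN/dphi ~ 1 + 2*v2*cos(2(phi-psi))
        if rng.uniform(0, 1 + 2 * v2_true) < 1 + 2 * v2_true * np.cos(2 * (phi - psi)):
            phis.append(phi)
    return np.array(phis)

corr2, corr4 = [], []
for _ in range(n_events):
    phi = toy_event()
    corr2.append(np.mean([np.cos(2 * (phi[i] - phi[j]))
                          for i, j in permutations(range(mult), 2)]))
    corr4.append(np.mean([np.cos(2 * (phi[i] + phi[j] - phi[k] - phi[l]))
                          for i, j, k, l in permutations(range(mult), 4)]))

c2 = np.mean(corr2)
c4 = np.mean(corr4) - 2 * c2 ** 2
v2_2 = np.sqrt(c2)
v2_4 = (-c4) ** 0.25 if c4 < 0 else float("nan")  # c4 must be negative for a real v2{4}
print(f"v2{{2}} ~ {v2_2:.3f}, v2{{4}} ~ {v2_4:.3f} (true v2 = {v2_true})")
```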
Abstract:
The prompt and non-prompt production cross-sections for ψ(2S) mesons are measured using 2.1 fb−1 of pp collision data at a centre-of-mass energy of 7 TeV recorded by the ATLAS experiment at the LHC. The measurement exploits the ψ(2S) → J/ψ(→μ+μ−)π+π− decay mode, and probes ψ(2S) mesons with transverse momenta in the range 10 ≤ pT < 100 GeV and rapidity |y| < 2.0. The results are compared to other measurements of ψ(2S) production at the LHC and to various theoretical models for prompt and non-prompt quarkonium production.
Abstract:
Distributions sensitive to the underlying event in QCD jet events have been measured with the ATLAS detector at the LHC, based on 37 pb−1 of proton–proton collision data collected at a centre-of-mass energy of 7 TeV. Charged-particle mean pT and densities of all-particle ET and charged-particle multiplicity and pT have been measured in regions azimuthally transverse to the hardest jet in each event. These are presented both as one-dimensional distributions and with their mean values as functions of the leading-jet transverse momentum from 20 to 800 GeV. The correlation of charged-particle mean pT with charged-particle multiplicity is also studied, and the ET densities include the forward rapidity region; these features provide extra data constraints for Monte Carlo modelling of colour reconnection and beam-remnant effects respectively. For the first time, underlying event observables have been computed separately for inclusive jet and exclusive dijet event selections, allowing more detailed study of the interplay of multiple partonic scattering and QCD radiation contributions to the underlying event. Comparisons to the predictions of different Monte Carlo models show a need for further model tuning, but the standard approach is found to generally reproduce the features of the underlying event in both types of event selection.
Abstract:
A measurement of the parity-violating decay asymmetry parameter, αb, and the helicity amplitudes for the decay Λ0b → J/ψ(μ+μ−)Λ0(pπ−) is reported. The analysis is based on 1400 Λ0b and Λ̄0b baryons selected in 4.6 fb−1 of proton–proton collision data at a center-of-mass energy of 7 TeV recorded by the ATLAS experiment at the LHC. By combining the Λ0b and Λ̄0b samples under the assumption of CP conservation, the value of αb is measured to be 0.30 ± 0.16 (stat) ± 0.06 (syst). This measurement provides a test of theoretical models based on perturbative QCD or heavy-quark effective theory.
Abstract:
A measurement of the cross section for the production of isolated prompt photons in pp collisions at a center-of-mass energy √s = 7 TeV is presented. The results are based on an integrated luminosity of 4.6 fb−1 collected with the ATLAS detector at the LHC. The cross section is measured as a function of photon pseudorapidity ηγ and transverse energy EγT in the kinematic range 100 ≤ EγT < 1000 GeV and in the regions |ηγ| < 1.37 and 1.52 ≤ |ηγ| < 2.37. The results are compared to leading-order parton-shower Monte Carlo models and next-to-leading-order perturbative QCD calculations. Next-to-leading-order perturbative QCD calculations agree well with the measured cross sections as a function of EγT and ηγ.
Abstract:
A detailed characterization of air quality in the megacity of Paris (France) during two 1-month intensive campaigns and from additional 1-year observations revealed that about 70% of the urban background fine particulate matter (PM) is transported on average into the megacity from upwind regions. This dominant influence of regional sources was confirmed by in situ measurements during short intensive and longer-term campaigns, aerosol optical depth (AOD) measurements from ENVISAT, and modeling results from the PMCAMx and CHIMERE chemistry transport models. While advection of sulfate is well documented for other megacities, there was a surprisingly high contribution from long-range transport for both nitrate and organic aerosol. The origin of organic PM was investigated by comprehensive analysis of aerosol mass spectrometer (AMS), radiocarbon and tracer measurements during the two intensive campaigns. Primary fossil fuel combustion emissions constituted less than 20% of carbonaceous fine PM in winter and 40% in summer, unexpectedly small fractions for a megacity. Cooking activities and, during winter, residential wood burning are the major primary organic PM sources. This analysis suggests that the major part of secondary organic aerosol is of modern origin, i.e., from biogenic precursors and from wood burning. Black carbon concentrations are on the lower end of values encountered in megacities worldwide, but still represent an issue for air quality. These comparatively low air pollution levels are due to a combination of low emissions per inhabitant, flat terrain, and a meteorology that is in general not conducive to local pollution build-up. This revised picture of a megacity only being partially responsible for its own average and peak PM levels has important implications for air pollution regulation policies.
Abstract:
OBJECTIVES
To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data.
METHODS
Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses.
RESULTS
There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D<0.17 mm), as expected, followed by the AC and BZ superimpositions, which presented a similar level of accuracy (D<0.5 mm). 3P and 1Z were the least accurate superimpositions (0.79
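A distance-based accuracy metric of this kind is easy to sketch. The snippet below computes a mean nearest-neighbour distance between two registered surface models, a simplified stand-in for the study's distance D; the toy point clouds and the choice of metric are illustrative, and the study's exact anatomical regions and distance definition may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

# After two triangulated surfaces have been superimposed, measure the mean
# nearest-neighbour distance from the vertices of one model to the other.

def mean_surface_distance(pts_a, pts_b):
    """Mean distance from each point in pts_a to its nearest point in pts_b."""
    d, _ = cKDTree(pts_b).query(pts_a)
    return d.mean()

# Toy point clouds standing in for registered pre/post-treatment surfaces.
rng = np.random.default_rng(3)
model_pre = rng.normal(size=(1000, 3))
model_post = model_pre + rng.normal(scale=0.05, size=(1000, 3))  # small residual misfit
print(f"D = {mean_surface_distance(model_post, model_pre):.3f} (toy units)")
```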
Abstract:
Surveys on voting behavior typically overestimate turnout rates substantially. To disentangle the different sources of bias (coverage error, nonresponse bias, and overreporting), we conducted a validation study in which respondents' self-reported voting behavior was compared to administrative voting records (N = 2000). Our results show that all three sources of error inflate the survey estimate of the turnout rate and also bias estimates from political participation models, although coverage error is only moderate compared to the more pronounced biases due to nonresponse and overreporting. Furthermore, results from a wording experiment do not provide evidence that revised wording reduces measurement bias.
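The decomposition logic is simple arithmetic and can be made concrete. In the sketch below, every percentage is made up for illustration (they are not the study's estimates): the three error components are the successive differences between the validated population turnout, the validated turnout in the covered sampling frame, the validated turnout among respondents, and the respondents' self-reports.

```python
# Toy numeric decomposition of turnout overestimation into the three
# error sources named in the abstract. All figures are invented.

true_turnout_population = 0.70   # validated turnout, full target population
turnout_covered_frame   = 0.72   # validated turnout among those the frame covers
turnout_respondents     = 0.78   # validated turnout among actual respondents
reported_turnout        = 0.85   # respondents' self-reported turnout

coverage_error   = turnout_covered_frame - true_turnout_population
nonresponse_bias = turnout_respondents - turnout_covered_frame
overreporting    = reported_turnout - turnout_respondents

print(f"coverage error:   {coverage_error:+.2%}")
print(f"nonresponse bias: {nonresponse_bias:+.2%}")
print(f"overreporting:    {overreporting:+.2%}")
print(f"total bias:       {reported_turnout - true_turnout_population:+.2%}")
```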
Abstract:
Chironomid-temperature inference models based on North American, European and combined surface sediment training sets were compared to assess the overall reliability of their predictions. Between 67 and 76 of the major chironomid taxa in each data set showed a unimodal response to July temperature, whereas between 5 and 22 of the common taxa showed a sigmoidal response. July temperature optima were highly correlated among the training sets, but the correlations for other taxon parameters such as tolerances and weighted averaging partial least squares (WA-PLS) and partial least squares (PLS) regression coefficients were much weaker. PLS, weighted averaging, WA-PLS and the Modern Analogue Technique all provided useful and reliable temperature inferences. Although jack-knifed error statistics suggested that two-component WA-PLS models had the highest predictive power, intercontinental tests suggested that other inference models performed better. The various models were able to provide good July temperature inferences even where no good or close modern analogues for the fossil chironomid assemblages existed. When the models were applied to fossil Lateglacial assemblages from North America and Europe, the inferred rates and magnitude of July temperature changes varied among models. All models, however, revealed similar patterns of Lateglacial temperature change. Depending on the model used, the inferred Younger Dryas July temperature decrease ranged between 2.5 and 6°C.
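Of the calibration methods compared, simple weighted averaging (WA) is the easiest to sketch: a taxon's temperature optimum is the abundance-weighted mean of the temperatures at the training-set sites, and a fossil sample's inferred temperature is the abundance-weighted mean of the optima of the taxa it contains. The sketch below uses invented data and omits the deshrinking regression of classical WA; WA-PLS and the Modern Analogue Technique are more involved.

```python
import numpy as np

# Minimal weighted-averaging (WA) temperature inference on toy data.
rng = np.random.default_rng(4)
n_sites, n_taxa = 60, 15
july_temp = rng.uniform(5, 20, size=n_sites)   # training-set July temperatures
abundance = rng.random((n_sites, n_taxa))      # taxon abundances per site

# Each taxon's optimum: abundance-weighted mean of site temperatures.
optima = (abundance * july_temp[:, None]).sum(axis=0) / abundance.sum(axis=0)

def wa_infer(fossil_abundance):
    """Infer July temperature for one fossil assemblage."""
    return (fossil_abundance * optima).sum() / fossil_abundance.sum()

fossil = rng.random(n_taxa)                    # toy Lateglacial assemblage
print(f"inferred July temperature: {wa_infer(fossil):.1f} deg C")
```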
Abstract:
Clinical oncologists and cancer researchers benefit from information on the vascularization or non-vascularization of solid tumors because of blood flow's influence on three popular treatment types: hyperthermia therapy, radiotherapy, and chemotherapy. The objective of this research is the development of a clinically useful tumor blood flow measurement technique. The designed technique is sensitive, has good spatial resolution, is non-invasive, and presents no risk to the patient beyond his usual treatment (measurements are made only subsequent to normal patient treatment). Tumor blood flow was determined by measuring the washout of positron-emitting isotopes created through neutron therapy treatment. In order to do this, several technical and scientific questions were addressed first. These questions were: (1) What isotopes are created in tumor tissue when it is irradiated in a neutron therapy beam, and how much of each isotope is expected? (2) What are the chemical states of the isotopes that are potentially useful for blood flow measurements, and will those chemical states allow these or other isotopes to be washed out of the tumor? (3) How should isotope washout by blood flow be modeled in order to most effectively use the data? These questions have been answered through both theoretical calculation and measurement. The first question was answered through the measurement of macroscopic cross sections for the predominant nuclear reactions in the body. These results correlate well with an independent mathematical prediction of tissue activation and with measurements of mouse spleen neutron activation. The second question was addressed by performing cell suspension and protein precipitation techniques on neutron-activated mouse spleens. The third and final question was answered by using first physical principles to develop a model mimicking the blood flow system and measurement technique. In a final set of experiments, the above were applied to flow models and animals. The ultimate aim of this project is to apply its methodology to neutron therapy patients.
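The washout measurement admits a simple worked model. In a one-compartment picture, activity created in the tumor clears at an effective rate that is the sum of the physical decay constant and a blood-flow washout rate, so fitting the observed clearance and subtracting the known physical decay isolates the flow term. The sketch below is a simplified stand-in for the dissertation's model; the isotope choice and rate values are illustrative.

```python
import numpy as np

# One-compartment washout: measured effective decay constant is
# lambda_eff = lambda_phys + k_flow, so k_flow = lambda_eff - lambda_phys.

lam_phys = np.log(2) / 122.2     # e.g. O-15 physical decay constant (s^-1)
k_flow_true = 0.004              # illustrative blood-flow washout rate (s^-1)

t = np.linspace(0, 600, 61)      # 10 minutes of counting
activity = np.exp(-(lam_phys + k_flow_true) * t)

# Recover the effective decay constant from a log-linear fit.
lam_eff = -np.polyfit(t, np.log(activity), 1)[0]
k_flow = lam_eff - lam_phys
print(f"recovered washout rate: {k_flow:.4f} s^-1")
```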
Abstract:
Research has shown that physical activity serves a preventive function against the development of several major chronic diseases. However, studying physical activity and its health benefits is difficult due to the complexity of measuring physical activity. The overall aim of this research is to contribute to the knowledge of both correlates and measurement of physical activity. Data from the Women On The Move study were used (n = 260), and the results are presented in three papers. The first paper focuses on the measurement of physical activity and compares an alternate coding method with the standard coding method for calculating energy expenditure from a 7-day activity diary. Results indicate that the alternative coding scheme can produce results similar to the standard coding in terms of total activity expenditure. Even though agreement could not be achieved by dimension, the study lays the groundwork for a coding system that saves a considerable amount of time in coding activity and has the ability to estimate expenditure more accurately for activities that can be performed at varying intensity levels. The second paper investigates intra-day variability in physical activity by estimating the variation in energy expenditure for workers and non-workers and identifying the number of days of diary self-report necessary to reliably estimate activity. The results indicate that 8 days of activity are needed to reliably estimate total activity for individuals who do not work and 12 days are needed for those who work. Days of diary self-report required by dimension range from 6 to 16 for those who do not work and from 6 to 113 for those who work. The final paper presents findings on the relationship between daily living activity and the Type A behavior pattern. Significant findings are observed for total activity and leisure activity with the Temperament Scale summary score. Significant findings are also observed for total activity, household chores, work, leisure activity, exercise, and inactivity with one or more of the individual items on the Temperament Scale. However, even though some significant findings were observed, the overall models did not reveal meaningful associations.
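The "days needed" calculation is commonly done with the Spearman-Brown prophecy formula, sketched below; the abstract does not state the paper's exact method, so this is an assumed approach with invented single-day reliabilities. If r1 is the reliability (intraclass correlation) of a single diary day, the reliability of a k-day mean is r_k = k·r1 / (1 + (k−1)·r1), and solving for k at a target reliability gives the required number of days.

```python
# Spearman-Brown prophecy: days of diary self-report needed to reach a
# target reliability, given the reliability r1 of a single day.

def days_needed(r1, target=0.80):
    """Solve target = k*r1 / (1 + (k-1)*r1) for k."""
    return target * (1 - r1) / (r1 * (1 - target))

for r1 in (0.20, 0.30, 0.40):    # illustrative single-day reliabilities
    print(f"r1 = {r1:.2f} -> about {days_needed(r1):.0f} days")
```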
Abstract:
The purpose of this study is to investigate the effects of predictor variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data is multiply imputed. Missing data of predictor variables is multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data and the general location model for mixed dichotomous and continuous data. Subsequent to the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data and patterns of missing data. The distributional properties of average mean, variance and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which in part is due to the sparseness of the data. The correlation structure for the predictor variables is not well retained on multiply-imputed data from small samples with more than 50% missing data with this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully-observed variable included with variables subject to missingness in the multiple imputation process and subsequent statistical analysis provided liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
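A compact simulation in the same spirit is sketched below: a continuous predictor with MCAR missingness is multiply imputed under a normal model, a logistic regression is fitted per imputation, estimates are pooled with Rubin's rules, and the rejection rate of a null coefficient estimates the Type I error. This is a simplification (improper imputation from fixed regression estimates, one correlation structure), not the dissertation's design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, m, n_sims = 100, 5, 200       # sample size, imputations, simulated datasets
rejections = 0

for _ in range(n_sims):
    # Correlated continuous predictors; outcome depends only on x2 (x1 is null).
    x1, x2 = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], size=n).T
    y = (rng.random(n) < 1 / (1 + np.exp(-x2))).astype(float)
    x1_obs = x1.copy()
    x1_obs[rng.random(n) < 0.3] = np.nan               # 30% MCAR in x1

    # Impute x1 from its regression on x2 plus residual noise, m times.
    miss = np.isnan(x1_obs)
    beta = np.polyfit(x2[~miss], x1_obs[~miss], 1)
    resid_sd = np.std(x1_obs[~miss] - np.polyval(beta, x2[~miss]))
    est, var = [], []
    for _ in range(m):
        x1_imp = x1_obs.copy()
        x1_imp[miss] = np.polyval(beta, x2[miss]) + rng.normal(0, resid_sd, miss.sum())
        X = sm.add_constant(np.column_stack([x1_imp, x2]))
        fit = sm.Logit(y, X).fit(disp=0)
        est.append(fit.params[1])
        var.append(fit.bse[1] ** 2)

    # Rubin's rules: total variance = within + (1 + 1/m) * between.
    qbar, w, b = np.mean(est), np.mean(var), np.var(est, ddof=1)
    t_var = w + (1 + 1 / m) * b
    rejections += abs(qbar) / np.sqrt(t_var) > 1.96

print(f"estimated Type I error rate: {rejections / n_sims:.3f}")
```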