853 results for measurement model
A robust Bayesian approach to null intercept measurement error model with application to dental data
Abstract:
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distributions are a class of asymmetric thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of skew-normal/independent distributions as a robust alternative in the null intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (the latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal, skew-t, skew-slash, and skew-contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
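The paper's own MCMC scheme is not reproduced here; as a rough modern sketch of the model structure it describes, a null-intercept regression with a latent covariate given a skew-normal (a special case of the skew-normal/independent class) distribution could be written in PyMC. All priors, names, and the simulated data below are illustrative assumptions, not the authors' specification.

```python
# Hypothetical sketch of a null-intercept measurement error model with a
# skew-normal latent covariate. Priors and data are illustrative only.
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n = 150
x_true = 4.0 + rng.gamma(2.0, 1.0, n)        # asymmetric latent covariate
w = x_true + rng.normal(0.0, 0.5, n)         # surrogate: x observed with error
y = 1.8 * x_true + rng.normal(0.0, 1.0, n)   # null-intercept response

with pm.Model() as model:
    # Skew-normal latent covariate instead of the routine normal assumption
    mu_x = pm.Normal("mu_x", 0.0, 10.0)
    sigma_x = pm.HalfNormal("sigma_x", 5.0)
    alpha_x = pm.Normal("alpha_x", 0.0, 5.0)  # skewness parameter
    x = pm.SkewNormal("x", mu=mu_x, sigma=sigma_x, alpha=alpha_x, shape=n)

    sigma_u = pm.HalfNormal("sigma_u", 2.0)   # measurement error scale
    pm.Normal("w_obs", mu=x, sigma=sigma_u, observed=w)

    beta = pm.Normal("beta", 0.0, 10.0)
    sigma_e = pm.HalfNormal("sigma_e", 2.0)
    pm.Normal("y_obs", mu=beta * x, sigma=sigma_e, observed=y)  # no intercept

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)
```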
Abstract:
We present the first model-independent measurement of the helicity of W bosons produced in top quark decays, based on a 1 fb⁻¹ sample of candidate tt̄ events in the dilepton and lepton plus jets channels collected by the D0 detector at the Fermilab Tevatron pp̄ Collider. We reconstruct the angle θ* between the momenta of the down-type fermion and the top quark in the W boson rest frame for each top quark decay. A fit of the resulting cos θ* distribution finds that the fraction of longitudinal W bosons is f₀ = 0.425 ± 0.166(stat) ± 0.102(syst) and the fraction of right-handed W bosons is f₊ = 0.119 ± 0.090(stat) ± 0.053(syst), which is consistent with the standard model at the 30% C.L.
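For intuition, the standard angular distribution such a fit uses is w(c) = (3/4)f₀(1-c²) + (3/8)f₋(1-c)² + (3/8)f₊(1+c)², with f₀ + f₋ + f₊ = 1. A toy maximum-likelihood fit on simulated cos θ* values is sketched below; this is purely didactic and omits the acceptance, resolution, and background modeling of the real D0 analysis.

```python
# Toy maximum-likelihood fit of W helicity fractions from cos(theta*) values.
# Didactic only: detector effects and backgrounds are not modeled.
import numpy as np
from scipy.optimize import minimize

def density(c, f0, fp):
    fm = 1.0 - f0 - fp
    return 0.75 * f0 * (1 - c**2) + 0.375 * fm * (1 - c)**2 + 0.375 * fp * (1 + c)**2

# Pseudo-data at the SM-like point f0 = 0.7, f+ = 0, by rejection sampling
rng = np.random.default_rng(1)
c = rng.uniform(-1, 1, 200000)
keep = rng.uniform(0, 1.5, c.size) < density(c, 0.70, 0.0)
data = c[keep]

def nll(params):
    f0, fp = params
    if f0 < 0 or fp < 0 or (1.0 - f0 - fp) < 0:
        return np.inf                     # fractions must stay physical
    return -np.sum(np.log(density(data, f0, fp)))

fit = minimize(nll, x0=[0.5, 0.1], method="Nelder-Mead")
print(fit.x)  # should recover approximately (0.70, 0.0)
```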
Abstract:
Background: Early trauma care is dependent on subjective assessments and sporadic vital sign assessments. We hypothesized that near-infrared spectroscopy-measured cerebral oxygenation (regional oxygen saturation [rSO₂]) would provide a tool to detect cardiovascular compromise during active hemorrhage. We compared rSO₂ with invasively measured mixed venous oxygen saturation (SvO₂), mean arterial pressure (MAP), cardiac output, heart rate, and calculated pulse pressure. Methods: Six propofol-anesthetized instrumented swine were subjected to a fixed-rate hemorrhage until cardiovascular collapse. rSO₂ was monitored with noninvasive cerebral oximetry; SvO₂ was measured with a fiber-optic pulmonary arterial catheter. As an assessment of the time responsiveness of each variable, we recorded the minutes from the start of the hemorrhage at which each variable achieved a 5%, 10%, 15%, and 20% change from baseline. Results: Mean time to cardiovascular collapse was 35 ± 11 minutes (54 ± 17% of total blood volume). Cerebral rSO₂ began a steady decline at an average MAP of 78 ± 17 mm Hg, well above the expected autoregulatory threshold of cerebral blood flow. The 5%, 10%, and 15% decreases in rSO₂ during hemorrhage occurred at times similar to those for SvO₂, but rSO₂ lagged 6 minutes behind the equivalent percentage decreases in MAP. There was a higher correlation between rSO₂ and MAP (R = 0.72) than between SvO₂ and MAP (R = 0.55). Conclusions: Near-infrared spectroscopy-measured rSO₂ showed reproducible decreases during hemorrhage that were similar in time course to invasively measured cardiac output and SvO₂ but delayed 5 to 9 minutes relative to MAP and pulse pressure. rSO₂ may provide an earlier warning of worsening hemorrhagic shock, prompting intervention in trauma patients when continuous arterial BP measurements are unavailable. © 2012 Lippincott Williams & Wilkins.
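The time-responsiveness metric described, minutes until a variable first deviates by a given percentage from baseline, is straightforward to compute. A minimal sketch follows; the variable names and data are hypothetical.

```python
# Minimal sketch of the time-responsiveness metric: the first time, in
# minutes, at which a signal deviates from baseline by a given fraction.
import numpy as np

def time_to_change(t_min, signal, baseline, fraction):
    """Return the first time at which |signal - baseline| >= fraction * baseline,
    or None if the threshold is never reached."""
    deviated = np.abs(np.asarray(signal) - baseline) >= fraction * baseline
    return t_min[np.argmax(deviated)] if deviated.any() else None

# Example with made-up data: a signal drifting downward from a baseline of 60
t = np.arange(0, 40, 0.5)          # minutes from start of hemorrhage
rso2 = 60 - 0.4 * t                # hypothetical rSO2 trace
for frac in (0.05, 0.10, 0.15, 0.20):
    print(frac, time_to_change(t, rso2, baseline=60.0, fraction=frac))
```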
Abstract:
In most studies on beef cattle longevity, only cows reaching a given number of calvings by a specific age are considered in the analyses. With the aim of evaluating all cows with a productive life in the herds, taking into consideration the different forms of management on each farm, it was proposed to measure cow longevity from age at last calving (ALC), that is, the most recent calving registered in the files. The objective was to characterize this trait in order to study the longevity of Nellore cattle, using Kaplan-Meier estimators and the Cox model. The covariables and class effects considered in the models were age at first calving (AFC), year and season of birth of the cow, and farm. The variable studied (ALC) was classified as presenting complete information (uncensored = 1) or incomplete information (censored = 0), using the criterion of the difference between the date of each cow's last calving and the date of the latest calving at each farm. If this difference was >36 months, the cow was considered to have failed; if not, the cow was censored, indicating that future calving remained possible. The records of 11,791 animals from 22 farms within the Nellore Breed Genetic Improvement Program ('Nellore Brazil') were used. In the estimation process using the Kaplan-Meier model, AFC was classified into three age groups. In individual analyses, the log-rank and Wilcoxon tests in the Kaplan-Meier model showed that all covariables and class effects had significant effects (P < 0.05) on ALC. In the analysis considering all covariables and class effects, using the Wald test in the Cox model, only the season of birth of the cow was not significant for ALC (P > 0.05). This analysis indicated that each month added to AFC diminished the risk of the cow's failure in the herd by 2%. Nonetheless, this does not imply that animals with a younger AFC were less profitable. Cows with greater numbers of calvings were more precocious than those with fewer calvings. Copyright © The Animal Consortium 2012.
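The censoring rule and the two survival fits described can be sketched with the lifelines package. The DataFrame columns and input file below are invented for illustration; only the 36-month failure criterion is taken from the abstract.

```python
# Sketch of the censoring rule and survival analysis described above.
# Column names and the input file are invented placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("nellore_cows.csv")  # hypothetical input
df["last_calving_date"] = pd.to_datetime(df["last_calving_date"])

# Failure rule: a cow "fails" if the gap between her last calving and the
# farm's most recent calving exceeds 36 months; otherwise she is censored.
latest_by_farm = df.groupby("farm")["last_calving_date"].transform("max")
gap_months = (latest_by_farm - df["last_calving_date"]).dt.days / 30.44
df["event"] = (gap_months > 36).astype(int)   # 1 = uncensored, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations=df["alc_months"], event_observed=df["event"])

cph = CoxPHFitter()
cph.fit(df[["alc_months", "event", "afc_months", "birth_year"]],
        duration_col="alc_months", event_col="event")
cph.print_summary()
```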
Abstract:
Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) under the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
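To make the data-generation step concrete, simulating single-factor continuous responses and categorizing them at fixed thresholds, the sort of step these simulation conditions vary, might look like the sketch below. The loadings, thresholds, and sample size are arbitrary illustrations, not the study's conditions.

```python
# Sketch of the core data-generating step in such a simulation: continuous
# responses from a single-factor model, then categorized at fixed thresholds.
import numpy as np

rng = np.random.default_rng(7)
n, n_items, loading = 500, 6, 0.7

eta = rng.normal(size=(n, 1))                        # common factor
eps = rng.normal(size=(n, n_items)) * np.sqrt(1 - loading**2)
y_cont = loading * eta + eps                         # continuous responses

# Asymmetric thresholds induce the kind of non-normality studied above
thresholds = np.array([-0.5, 0.5, 1.5])
y_cat = np.digitize(y_cont, thresholds)              # 4-category items

print(np.bincount(y_cat.ravel()))                    # skewed category counts
```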
Abstract:
Temperature-dependent transient curves of excited levels of a model Eu³⁺ complex have been measured for the first time. A coincidence between the temperature-dependent rise time of the ⁵D₀ emitting level and the decay time of the ⁵D₁ excited level in the [Eu(tta)₃(H₂O)₂] complex has been found, which unambiguously proves the T₁ → ⁵D₁ → ⁵D₀ sensitization pathway. A theoretical approach for the temperature-dependent energy transfer rates has been successfully applied to the rationalization of the experimental data.
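The coincidence test described amounts to fitting a rise time to the ⁵D₀ transient and a decay time to the ⁵D₁ transient and comparing the two constants. A generic sketch of such fits is below; the data and time constants are synthetic placeholders, not the measured values.

```python
# Generic sketch of the transient analysis: fit a rise time to the emitting
# level and a decay time to the feeding level, then compare the constants.
import numpy as np
from scipy.optimize import curve_fit

def rise_decay(t, amp, tau_rise, tau_decay):
    return amp * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

def decay(t, amp, tau):
    return amp * np.exp(-t / tau)

t = np.linspace(0, 5e-3, 400)                        # seconds
rng = np.random.default_rng(3)
sig_5d0 = rise_decay(t, 1.0, 30e-6, 1.0e-3) + rng.normal(0, 0.01, t.size)
sig_5d1 = decay(t, 1.0, 30e-6) + rng.normal(0, 0.01, t.size)

popt_5d0, _ = curve_fit(rise_decay, t, sig_5d0, p0=[1.0, 50e-6, 0.8e-3])
popt_5d1, _ = curve_fit(decay, t, sig_5d1, p0=[1.0, 50e-6])

# If the T1 -> 5D1 -> 5D0 pathway holds, the fitted 5D0 rise time should
# coincide with the fitted 5D1 decay time:
print(popt_5d0[1], popt_5d1[1])
```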
Abstract:
Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating mean radon exposure in the Swiss population: model-based predictions at the individual level and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted by the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and measurement-based predictions provide similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing the exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and its results do not depend on how the measurement sites were selected.
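The measurement-based estimate is essentially a population-weighted average of municipality mean levels. A minimal sketch follows; the column names and input file are invented, and the floor-distribution correction is omitted for brevity.

```python
# Minimal sketch of the measurement-based aggregation described above:
# municipality mean radon levels weighted by municipality population.
import pandas as pd

df = pd.read_csv("radon_by_municipality.csv")  # hypothetical input
# assumed columns: municipality, mean_radon_bq_m3, population

weighted_mean = ((df["mean_radon_bq_m3"] * df["population"]).sum()
                 / df["population"].sum())
print(f"Population-weighted mean radon exposure: {weighted_mean:.1f} Bq/m3")
```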
Abstract:
Experience with anidulafungin against Candida krusei is limited. Immunosuppressed mice were injected with 1.3 × 10⁷ to 1.5 × 10⁷ CFU of C. krusei. Animals were treated for 5 days with saline, 40 mg/kg fluconazole, 1 mg/kg amphotericin B, or 10 or 20 mg/kg anidulafungin. Anidulafungin improved survival and significantly reduced the number of CFU/g in the kidneys and serum β-glucan levels.
Abstract:
A detailed characterization of air quality in the megacity of Paris (France) during two 1-month intensive campaigns, together with additional 1-year observations, revealed that about 70% of the urban background fine particulate matter (PM) is, on average, transported into the megacity from upwind regions. This dominant influence of regional sources was confirmed by in situ measurements during short intensive and longer-term campaigns, aerosol optical depth (AOD) measurements from ENVISAT, and modeling results from the PMCAMx and CHIMERE chemistry transport models. While advection of sulfate is well documented for other megacities, there was a surprisingly high contribution from long-range transport for both nitrate and organic aerosol. The origin of organic PM was investigated by comprehensive analysis of aerosol mass spectrometer (AMS), radiocarbon, and tracer measurements during the two intensive campaigns. Primary fossil fuel combustion emissions constituted less than 20% in winter and 40% in summer of carbonaceous fine PM, unexpectedly small shares for a megacity. Cooking activities and, during winter, residential wood burning are the major primary organic PM sources. This analysis suggests that the major part of secondary organic aerosol is of modern origin, i.e., from biogenic precursors and from wood burning. Black carbon concentrations are at the lower end of values encountered in megacities worldwide, but still represent an air quality issue. These comparatively low air pollution levels are due to a combination of low emissions per inhabitant, flat terrain, and a meteorology that is in general not conducive to local pollution build-up. This revised picture of a megacity being only partially responsible for its own average and peak PM levels has important implications for air pollution regulation policies.
Abstract:
In regression analysis, covariate measurement error occurs in many applications. The error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression model. We present an approach that applies the Monte Carlo method in the Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator applies the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study shows that the method produces an efficient estimator in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
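The problem this estimator targets is easy to demonstrate: regressing on an error-prone surrogate attenuates the latent slope. The sketch below shows that bias and a textbook regression-calibration correction standing in for the paper's Monte Carlo conditional-expectation estimator, which is not reproduced here; all values are simulated.

```python
# Demonstration of attenuation bias from covariate measurement error, with a
# plain regression-calibration correction (not the paper's estimator).
import numpy as np

rng = np.random.default_rng(11)
n, beta = 5000, 2.0
x = rng.normal(0, 1, n)                    # latent covariate
w = x + rng.normal(0, 1, n)                # surrogate with large error variance
y = beta * x + rng.normal(0, 1, n)

beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
# Reliability var(x)/var(w) is ~0.5 here; in practice it must be estimated,
# e.g., from replicate measurements of the surrogate.
reliability = np.var(x, ddof=1) / np.var(w, ddof=1)
print(beta_naive)                          # ~1.0: attenuated slope
print(beta_naive / reliability)            # ~2.0: corrected slope
```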
Abstract:
Computing the modal parameters of structural systems often requires processing data from multiple, non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed across all measurements, while the other sensors change position from one setup to the next. One possibility is to process the setups separately, resulting in different modal parameter estimates for each setup; the reference sensors are then used to merge, or glue, the different parts of the mode shapes into global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a new state space model that processes all setups at once. The result is that the global mode shapes are obtained automatically, and only one value for the natural frequency and damping ratio of each mode is estimated. We also investigate the estimation of this model using maximum likelihood and the Expectation Maximization algorithm, and apply the technique to simulated and measured data corresponding to different structures.
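For contrast with the one-shot state-space approach, the classical setup-by-setup merging it replaces can be sketched in a few lines: each setup's partial mode shape is rescaled by least squares so its reference-sensor entries agree with the first setup's, and the parts are then concatenated. The arrays below are illustrative only.

```python
# Sketch of the classical "gluing" of partial mode shapes via the shared
# reference sensors, which the proposed state space model makes unnecessary.
import numpy as np

def glue_mode_shapes(setups, ref_idx):
    """setups: list of 1-D partial mode shapes; ref_idx: indices of the
    reference sensors within each setup (same physical sensors in all)."""
    base_ref = setups[0][ref_idx]
    glued = [setups[0]]
    for phi in setups[1:]:
        # Least-squares scale factor aligning this setup's reference entries
        scale = (phi[ref_idx] @ base_ref) / (phi[ref_idx] @ phi[ref_idx])
        glued.append(np.delete(scale * phi, ref_idx))  # keep roving sensors
    return np.concatenate(glued)

# Two setups sharing sensors 0 and 1 as references:
setup1 = np.array([1.0, 0.8, 0.3, -0.2])
setup2 = np.array([2.0, 1.6, -0.9, 0.5])   # same shape, different scaling
print(glue_mode_shapes([setup1, setup2], ref_idx=np.array([0, 1])))
```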
Abstract:
The solvation energies of salt bridges formed between the terminal carboxyl of the host pentapeptide AcWL-X-LL and the side chains of Arg or Lys in the guest (X) position have been measured. The energies were derived from octanol-to-buffer transfer free energies determined between pH 1 and pH 9. ¹³C NMR measurements show that the salt bridges form in the octanol phase, but not in the buffer phase, when the side chains and the terminal carboxyl group are charged. The free energy of salt-bridge formation in octanol is approximately -4 kcal/mol (1 cal = 4.184 J), which is equal to or slightly larger than the sum of the solvation energies of noninteracting pairs of charged side chains. This is about one-half the free energy that would result from replacing a charge pair in octanol with a pair of hydrophobic residues of moderate size. Therefore, salt bridging in octanol can change the favorable aqueous solvation energy of a pair of oppositely charged residues to neutral or slightly unfavorable, but cannot provide the same free energy decrease as hydrophobic residues. This is consistent with recent computational and experimental studies of protein stability.
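Transfer free energies of this kind come from partition measurements via ΔG = -RT ln K. A one-line conversion sketch follows; the K value is illustrative, chosen only to show the magnitude reported above.

```python
# Conversion from a partition equilibrium constant K to a transfer free
# energy, dG = -RT ln K. The example K is an illustrative placeholder.
import math

R = 1.987e-3   # gas constant, kcal mol^-1 K^-1
T = 298.15     # temperature, K

def transfer_free_energy(K):
    """Free energy (kcal/mol) for a partition equilibrium constant K."""
    return -R * T * math.log(K)

# K = 860 gives roughly -4 kcal/mol, the magnitude reported for
# salt-bridge formation in octanol.
print(transfer_free_energy(860.0))
```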