938 results for inverse probability weights
Abstract:
OBJECTIVE: Computed tomography (CT) and magnetic resonance imaging (MRI) have been introduced as alternatives to traditional autopsy. The purpose of this study was to investigate their accuracy in estimating the mass of the liver and spleen. METHODS: In 44 cases, the weights of the spleen and liver were estimated from MRI and CT data using volume-analysis software and a postmortem tissue-specific density factor. In a blinded approach, the results were compared with the weights recorded at autopsy. RESULTS: Excellent correlation between estimated and actual weights was found (r = 0.997 for MRI, r = 0.997 for CT). Putrefaction gas and venous air embolism led to overestimation, whereas venous congestion and drowning were associated with higher estimated weights. CONCLUSION: Postmortem weights of the liver and spleen can be assessed accurately by nondestructive imaging. Multislice CT overcomes the limitations of putrefaction and venous air embolism because gas can be excluded from the volume measurement. Congestion appears to be assessed even better by imaging.
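The estimate above is simply segmented volume times a tissue-specific density factor. A minimal sketch of that computation, assuming a hypothetical boolean segmentation mask and placeholder density values (the study's actual postmortem density factors are not reproduced here):

```python
import numpy as np

# Placeholder density factors in g/mL; the study's actual postmortem
# tissue-specific values are not reproduced here.
DENSITY_G_PER_ML = {"liver": 1.05, "spleen": 1.05}

def estimate_organ_weight(mask: np.ndarray, voxel_volume_ml: float,
                          organ: str) -> float:
    """Estimated weight (g) = segmented volume (mL) x density (g/mL)."""
    volume_ml = mask.sum() * voxel_volume_ml
    return volume_ml * DENSITY_G_PER_ML[organ]

# Stand-in segmentation: a boolean mask of "liver" voxels from a CT volume
mask = np.zeros((512, 512, 200), dtype=bool)
mask[100:300, 100:300, 50:150] = True
voxel_ml = 0.05 * 0.05 * 0.1   # 0.5 x 0.5 x 1.0 mm voxels, in cm^3 (= mL)
print(f"estimated liver weight: {estimate_organ_weight(mask, voxel_ml, 'liver'):.0f} g")
```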
Abstract:
Mendelian models can predict who carries an inherited deleterious mutation of known disease genes based on family history. For example, the BRCAPRO model is commonly used to identify families who carry mutations of BRCA1 and BRCA2, based on familial breast and ovarian cancers. These models incorporate the ages of diagnosis of diseases in relatives and current age or age of death. We develop a rigorous foundation for handling multiple diseases with censoring. We prove that any disease unrelated to mutations can be excluded from the model, unless it is sufficiently common and depends on the timing of a mutation-related disease. Furthermore, if a family member has a disease with higher probability density among mutation carriers, but the model does not account for it, then the carrier probability is deflated. Conversely, even if a family has only diseases the model accounts for, excluding a mutation-related disease from the model inflates the carrier probability. In light of these results, we extend BRCAPRO to account for surviving all non-breast/ovary cancers as a single outcome. The extension also enables BRCAPRO to extract more useful information from male relatives. Using 1500 families from the Cancer Genetics Network, accounting for surviving other cancers improves BRCAPRO's concordance index from 0.758 to 0.762 (p = 0.046), improves its positive predictive value from 35% to 39% (p < 10⁻⁶) without impacting its negative predictive value, and improves its overall calibration, although calibration slightly worsens for those with carrier probability below 10%.
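Mendelian models of this kind compute the carrier probability by Bayes' rule, combining a prior carrier prevalence with the likelihood of the observed family phenotypes under each genotype. A minimal single-relative sketch with made-up prevalence and penetrance numbers (not BRCAPRO's actual parameters):

```python
# Illustrative numbers only, not BRCAPRO's actual allele frequencies or
# penetrance functions.
PRIOR_CARRIER = 0.003                  # assumed population carrier prevalence

def carrier_posterior(prior, lik_carrier, lik_noncarrier):
    """Bayes' rule: P(carrier | observed family phenotypes)."""
    num = prior * lik_carrier
    return num / (num + (1.0 - prior) * lik_noncarrier)

# Genotype-conditional likelihood of one relative's phenotype, e.g. breast
# cancer diagnosed at age 40, read off hypothetical penetrance densities:
lik_carrier = 0.20
lik_noncarrier = 0.01

print(carrier_posterior(PRIOR_CARRIER, lik_carrier, lik_noncarrier))  # ~0.057
```

With several relatives, the genotype-conditional likelihoods multiply across family members, which is why a disease whose density differs by genotype but is left out of the model biases the posterior in the directions the abstract describes.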
Abstract:
The concordance probability is used to evaluate the discriminatory power and predictive accuracy of nonlinear statistical models. We derive an analytic expression for the concordance probability in the Cox proportional hazards model. The proposed estimator is a function of the regression parameters and the covariate distribution only and does not use the observed event and censoring times. For this reason it is asymptotically unbiased, unlike Harrell's c-index, which is based on informative pairs. The asymptotic distribution of the concordance probability estimate is derived using U-statistic theory, and the methodology is applied to a predictive model in lung cancer.
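Under proportional hazards, the probability that one subject outlives another has a closed form in the fitted linear predictors, P(T_j > T_i) = 1 / (1 + exp(eta_j - eta_i)) with eta = X @ beta, so a concordance probability can be computed from the model alone, without event or censoring times. A rough sketch along those lines (an illustration of the idea, not necessarily the paper's exact estimator):

```python
import numpy as np

def concordance_probability(eta: np.ndarray) -> float:
    """Model-based concordance from fitted Cox linear predictors eta = X @ beta.

    Uses the proportional-hazards identity
        P(T_j > T_i) = 1 / (1 + exp(eta_j - eta_i)),
    averaging, over all pairs, the probability that the lower-risk subject
    outlives the higher-risk one. No event or censoring times are needed.
    """
    eta = np.asarray(eta, dtype=float)
    total, npairs = 0.0, 0
    n = eta.size
    for i in range(n):
        for j in range(i + 1, n):
            d = eta[i] - eta[j]
            if d == 0.0:
                continue                            # skip tied predictors
            total += 1.0 / (1.0 + np.exp(-abs(d)))  # prob. of a concordant pair
            npairs += 1
    return total / npairs

rng = np.random.default_rng(0)
eta_hat = rng.normal(size=200)                      # stand-in for X @ beta_hat
print(concordance_probability(eta_hat))
```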
Abstract:
Marshall's (1970) lemma is an analytical result which implies root-n-consistency of the distribution function corresponding to the Grenander (1956) estimator of a non-increasing probability density. The present paper derives analogous results for the setting of convex densities on [0, ∞).
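For context, the Grenander estimator is the left derivative of the least concave majorant (LCM) of the empirical distribution function. A small sketch computing it via an upper-hull stack, assuming distinct sample values (an illustration only; the paper's results concern the convex-density analogue):

```python
import numpy as np

def grenander(sample: np.ndarray):
    """Grenander estimator of a non-increasing density on [0, inf).

    The estimator is the left derivative of the least concave majorant (LCM)
    of the empirical CDF; the LCM of the points (X_(i), i/n) is their upper
    hull, found here with a stack. Assumes distinct sample values.
    Returns knots t and values d: the estimate equals d[k] on (t[k], t[k+1]].
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    pts = np.column_stack([np.concatenate([[0.0], x]),
                           np.arange(n + 1) / n])
    hull = [0]
    for k in range(1, n + 1):
        # pop while appending point k would make the hull slopes increase
        while len(hull) >= 2:
            i, j = hull[-2], hull[-1]
            s_ij = (pts[j, 1] - pts[i, 1]) / (pts[j, 0] - pts[i, 0])
            s_ik = (pts[k, 1] - pts[i, 1]) / (pts[k, 0] - pts[i, 0])
            if s_ik >= s_ij:
                hull.pop()
            else:
                break
        hull.append(k)
    t = pts[hull, 0]
    d = np.diff(pts[hull, 1]) / np.diff(t)   # LCM slopes = density values
    return t, d

rng = np.random.default_rng(1)
t, d = grenander(rng.exponential(size=500))
print(d[:5])                                  # decreasing density estimates
```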
Abstract:
To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
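Because the PDE residual is linear in the unknown coefficient once the solution is replaced by a measurement, the equation-error method reduces to regularized linear least squares. A 1-D finite-difference sketch for recovering a(x) in -(a u')' = f from a noisy measurement of u, with a Tikhonov smoothing penalty (grid, noise level, and regularization weight are all illustrative choices, not the paper's finite-element setup):

```python
import numpy as np

# -(a(x) u'(x))' = f(x) on (0,1): recover a from a noisy measurement of u.
# Nodes carry u, cell midpoints carry a; the discrete residual is linear in a.
n = 200
h = 1.0 / n
x_mid = (np.arange(n) + 0.5) * h

a_true = 1.0 + 0.5 * np.sin(2 * np.pi * x_mid)
u_true = np.sin(np.pi * np.arange(n + 1) * h)        # chosen "exact" solution

def residual_matrix(u):
    """B such that (B @ a)[i-1] = -(a_i du_i - a_{i-1} du_{i-1}) / h."""
    du = np.diff(u) / h                              # u' at cell midpoints
    B = np.zeros((n - 1, n))
    for i in range(1, n):
        B[i - 1, i] = -du[i] / h
        B[i - 1, i - 1] = du[i - 1] / h
    return B

f = residual_matrix(u_true) @ a_true                 # consistent right-hand side

rng = np.random.default_rng(2)
u_meas = u_true + 1e-4 * rng.normal(size=n + 1)      # noisy measurement of u
Bm = residual_matrix(u_meas)

# Tikhonov regularization with a first-difference (smoothing) penalty; it
# stabilizes a where u' is near zero and the data carry little information.
D = np.diff(np.eye(n), axis=0)
alpha = 1e2                                          # illustrative tuning choice
a_hat = np.linalg.solve(Bm.T @ Bm + alpha * D.T @ D, Bm.T @ f)
print("max error in recovered a:", np.abs(a_hat - a_true).max())
```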
Abstract:
Assessment of regional blood flow changes is difficult in the clinical setting. We tested whether conventional pulmonary artery catheters (PACs) can be used to measure regional venous blood flows by inverse thermodilution (ITD). ITD was tested in vitro and in vivo using perivascular ultrasound Doppler (USD) flow probes as a reference. In anesthetized pigs, PACs were inserted in the jugular, hepatic, renal, and femoral veins, and their measurements were compared with simultaneous USD flow measurements from the carotid, hepatic, renal, and femoral arteries and from the portal vein. Fluid boluses were injected through the PAC's distal port, and temperature changes were recorded at the proximally located thermistor. Injectates of 2 and 5 mL at 22°C and 4°C were used. Flows were altered using a roller pump in vitro, and by infusing dobutamine and inducing cardiac tamponade in vivo. In vitro, at blood flows between 400 and 700 mL·min⁻¹ (n = 50), ITD and USD correlated well (r = 0.86, P < 0.0001), with bias and limits of agreement of 3 ± 101 mL·min⁻¹. In vivo, 514 pairs of measurements had to be excluded from analysis for technical reasons, and 976 were analyzed. The best correlations were r = 0.87 (P < 0.0001) for renal flow and r = 0.46 (P < 0.0001) for hepatic flow. No significant correlation was found for cerebral and femoral flows. ITD using a conventional PAC compared moderately well with USD for renal flow but not for the other flows, despite good in vitro correlation under various conditions. In addition, the method has significant technical limitations.
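The "bias and limits of agreement" quoted above are the standard Bland-Altman quantities: the mean of the paired differences and that mean plus or minus 1.96 standard deviations. A minimal sketch with simulated flows (the study's data are not reproduced here):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    half = 1.96 * diff.std(ddof=1)
    return bias, bias - half, bias + half

rng = np.random.default_rng(3)
usd = rng.uniform(400.0, 700.0, size=50)     # reference flows in mL/min (made up)
itd = usd + rng.normal(3.0, 51.0, size=50)   # second method: offset plus scatter
print("bias, lower LoA, upper LoA:", bland_altman(itd, usd))
```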
Abstract:
Complex human diseases are a major challenge for biological research. The goal of my research is to develop effective biostatistical methods in order to create more opportunities for the prevention and cure of human diseases. This dissertation proposes statistical methods that can be adapted to sequencing data in family-based designs and that account for joint effects as well as gene-gene and gene-environment interactions in GWA studies. The framework includes statistical methods for rare and common variant association studies. Although next-generation DNA sequencing technologies have made rare variant association studies feasible, the development of powerful statistical methods for rare variant association studies is still underway. Chapter 2 presents two adaptive weighting methods for rare variant association studies based on family data for quantitative traits. The results show that both proposed methods are robust to population stratification, robust to the direction and magnitude of the effects of causal variants, and more powerful than methods using the weights suggested by Madsen and Browning [2009]. In Chapter 3, I extend the previously proposed test for Testing the effect of an Optimally Weighted combination of variants (TOW) [Sha et al., 2012] from unrelated individuals to TOW-F, a TOW for family-based designs. Simulation results show that TOW-F can control for population stratification in a wide range of population structures, including spatially structured populations, is robust to the directions of effect of causal variants, and is relatively robust to the percentage of neutral variants. For GWA studies, this dissertation presents a two-locus joint-effect analysis and a two-stage approach accounting for gene-gene and gene-environment interaction. Chapter 4 proposes a novel two-stage approach that is promising for identifying joint effects, especially under monotonic models. The proposed approach outperforms a single-marker method and a regular two-stage analysis based on the two-locus genotypic test. In Chapter 5, I propose a gene-based two-stage approach to identify gene-gene and gene-environment interactions in GWA studies, which can include rare variants. The two-stage approach is applied to the GAW 17 dataset to identify the interaction between the KDR gene and smoking status.
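The Madsen-Browning weights referred to above upweight rare variants by the inverse of the binomial standard deviation of their allele counts. A generic weighted burden sketch using those weights (an illustration only, not the dissertation's adaptive-weight or TOW-F statistics):

```python
import numpy as np
from scipy.stats import ranksums

def madsen_browning_weights(geno_unaffected: np.ndarray) -> np.ndarray:
    """w_j = 1 / sqrt(n q_j (1 - q_j)), with q_j estimated (with a pseudo-count)
    from unaffected individuals, so rarer variants receive larger weights."""
    n = geno_unaffected.shape[0]
    m = geno_unaffected.sum(axis=0)            # minor-allele counts per variant
    q = (m + 1) / (2 * n + 2)
    return 1.0 / np.sqrt(n * q * (1.0 - q))

rng = np.random.default_rng(4)
g_cases = rng.binomial(2, 0.010, size=(500, 30))   # toy genotype matrices of
g_ctrls = rng.binomial(2, 0.005, size=(500, 30))   # 0/1/2 minor-allele counts
w = madsen_browning_weights(g_ctrls)
print(ranksums(g_cases @ w, g_ctrls @ w))          # compare weighted burden scores
# (The original method assesses the rank-sum statistic by permutation.)
```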
Abstract:
A basic approach to studying an NVH problem is to break the system down into three basic elements: source, path, and receiver. While the receiver (response) and the transfer path can be measured, it is difficult to measure the sources (forces) acting on the system. It therefore becomes necessary to predict these forces to know how they influence the responses, which requires inverting the transfer path. The singular value decomposition (SVD) is used to decompose the transfer path matrix into its principal components, which is required for the inversion. The usual approach to force prediction rejects the small singular values obtained from the SVD by setting a threshold, as these small values dominate the inverse matrix. This thresholding, however, risks rejecting important singular values and thereby severely affecting the force prediction. The new approach discussed in this report examines the column space of the transfer path matrix, which is the basis for the predicted response. The response participation indicates how the small singular values influence the force participation. The ability to accurately reconstruct the response vector is important for establishing confidence in the predicted force vector. The goal of this report is to suggest, through examples, a solution that is mathematically feasible, physically meaningful, and numerically more efficient. This understanding adds new insight into the behavior of the current code and into how to apply these algorithms and insights to new codes.
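A minimal sketch of the thresholded (truncated) SVD pseudo-inverse that the usual approach uses for force prediction, including the response reconstruction the report emphasizes (matrix sizes, tolerance, and noise level are illustrative):

```python
import numpy as np

def predict_forces(H: np.ndarray, x: np.ndarray, rel_tol: float = 1e-3):
    """Force estimate via a truncated-SVD pseudo-inverse of transfer matrix H.

    Singular values below rel_tol * s_max are rejected (the thresholding the
    report discusses); H @ f_hat shows how much of the measured response the
    retained column space can reconstruct.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_tol * s[0]                    # s is sorted descending
    s_inv = np.where(keep, 1.0 / s, 0.0)
    f_hat = Vt.T @ (s_inv * (U.T @ x))           # truncated pseudo-inverse
    return f_hat, H @ f_hat

rng = np.random.default_rng(5)
H = rng.normal(size=(12, 4))                     # toy transfer path matrix
f_true = rng.normal(size=4)
x_meas = H @ f_true + 1e-3 * rng.normal(size=12) # noisy measured responses
f_hat, x_rec = predict_forces(H, x_meas)
print("force error:", np.linalg.norm(f_hat - f_true))
print("response residual:", np.linalg.norm(x_rec - x_meas))
```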