6 results for "Bootstrap"

in DigitalCommons@The Texas Medical Center


Relevance: 10.00%

Abstract:

Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed.

Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells.

Data transformations such as log, square root, and 1/x did not yield normality as measured by the Shapiro-Wilk test. A modulus transformation, used for distributions with abnormal kurtosis values, also did not produce normality.

Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and three variables (DNA concentration, shape, and elongation) showed the strongest evidence of bimodality and were studied further.

Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component Gaussian model with equal variances. The fitted mixture parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components, 1 or 2. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models. The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+), and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results of changes in the mean levels and proportions of the components at the lower severity levels.
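
As a rough sketch of the bootstrap likelihood-ratio test for the number of mixture components described above: the one- and two-component models are refit on data simulated from the fitted one-component (null) model, and the observed likelihood ratio is compared to its bootstrap distribution. This is an illustration, not the dissertation's code; scikit-learn's GaussianMixture with a tied covariance stands in for the equal-variance Gaussian mixture, and all names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bootstrap_lrt(values, n_boot=200, seed=0):
    """Parametric bootstrap LRT: 1 vs 2 Gaussian components, equal variances."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values).reshape(-1, 1)
    g1 = GaussianMixture(1, covariance_type="tied", random_state=0).fit(x)
    g2 = GaussianMixture(2, covariance_type="tied", random_state=0).fit(x)
    lr_obs = 2 * len(x) * (g2.score(x) - g1.score(x))  # observed LR statistic
    lr_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb, _ = g1.sample(len(x))          # simulate from the 1-component null
        b1 = GaussianMixture(1, covariance_type="tied", random_state=0).fit(xb)
        b2 = GaussianMixture(2, covariance_type="tied", random_state=0).fit(xb)
        lr_boot[b] = 2 * len(xb) * (b2.score(xb) - b1.score(xb))
    p_value = np.mean(lr_boot >= lr_obs)   # small p favors two components
    return lr_obs, p_value
```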

Relevance: 10.00%

Abstract:

This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation was conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another, and (2) these results were matched to the characteristics of an extant data set derived from the National AIDS Demonstration Research (NADR) project. The four methods were used to calculate CIs for this data set, and the results were compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio; a secondary criterion was the average width of the intervals. For the bootstrap methods, bias was also estimated.

Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs.

The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method at higher values of the CV of effectiveness, given the correlation between average costs and effects. The results for the NADR data set indicated that the bias-corrected CIs were wider than the percentile-method CIs, in accordance with the prediction derived from the simulation experiment.

Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study. However, the Taylor method is preferred for a low CV of effectiveness, and the percentile method is more favorable for a higher CV of effectiveness.
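
For concreteness, here is a minimal Python sketch of the two bootstrap intervals compared above, resampling paired patient-level costs and effects. It is a simplified stand-in for the study's procedures, and the function and variable names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_ce_ci(costs, effects, n_boot=5000, alpha=0.05, seed=0):
    """Percentile and bias-corrected bootstrap CIs for a CE ratio.

    costs, effects: paired per-patient numpy arrays (incremental costs
    and effects in the simplest case; the names are illustrative)."""
    rng = np.random.default_rng(seed)
    n = len(costs)
    ce_hat = costs.mean() / effects.mean()
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample patients with replacement
        ratios[b] = costs[idx].mean() / effects[idx].mean()
    # percentile interval: plain quantiles of the bootstrap distribution
    pct = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    # bias correction: shift the quantile levels by z0, the normal quantile
    # of the proportion of bootstrap ratios below the point estimate
    z0 = norm.ppf(np.mean(ratios < ce_hat))
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    bc = np.quantile(ratios, [lo, hi])
    return ce_hat, pct, bc
```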

Relevance: 10.00%

Abstract:

Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumors. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and by the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies, and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded coefficients of variation of 6.7% and 29%, respectively, implying that the calculated rCBF value is far more precise for gray matter than for white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and to provide a quantitative comparison of several existing perfusion models; this guided the selection of the optimal perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6, and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
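
A schematic of the voxel-level bootstrap precision analysis might look like the following Python sketch. It assumes, for simplicity, that a voxel's rCBF estimate reduces to a mean over repeated control-label difference measurements; the real analysis applies a perfusion quantification model, and all names here are illustrative.

```python
import numpy as np

def bootstrap_cv(voxel_signal, n_boot=2000, seed=0):
    """Bootstrap coefficient of variation for one voxel's rCBF estimate.

    voxel_signal: numpy array of repeated control-label difference values
    for a single voxel (a stand-in for full perfusion quantification)."""
    rng = np.random.default_rng(seed)
    n = len(voxel_signal)
    boot_means = np.array([
        voxel_signal[rng.integers(0, n, n)].mean()  # resampled rCBF estimate
        for _ in range(n_boot)
    ])
    # spread of the bootstrap estimates relative to their mean
    return boot_means.std(ddof=1) / boot_means.mean()
```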

Relevance: 10.00%

Abstract:

In recent years, disaster preparedness through assessment of medical and special needs persons (MSNP) has taken center stage in the public eye, owing to frequent natural disasters such as hurricanes, storm surges, and tsunamis driven by climate change and increased human activity on our planet. Statistical methods for complex survey design and analysis have gained significance as a consequence. However, many challenges remain in drawing inferences about the target population from such assessments for policy-level advocacy and implementation.

Objective. This study discusses the use of statistical methods for disaster preparedness and medical needs assessment to support local and state governments in policy-level decision making and logistics, so as to avoid loss of life and property in future calamities.

Methods. To obtain precise and unbiased estimates of MSNP and of disaster preparedness for evacuation in the Rio Grande Valley (RGV) of Texas, a stratified, cluster-randomized, multi-stage sampling design was implemented. The US School of Public Health, Brownsville surveyed 3,088 households in three counties: Cameron, Hidalgo, and Willacy. Multiple statistical methods were applied, and estimates were obtained taking into account the probability of selection and clustering effects. The statistical methods discussed were Multivariate Linear Regression (MLR), Survey Linear Regression (Svy-Reg), Generalized Estimating Equations (GEE), and Multilevel Mixed Models (MLM), all with and without sampling weights.

Results. The estimated population of the RGV was 1,146,796: 51.5% female, 90% Hispanic, 73% married, 56% unemployed, and 37% with personal transport. Forty percent of residents had attained at most an elementary education, another 42% had reached high school, and only 18% had attended college. Median household income was less than $15,000/year. The MSNP population was estimated at 44,196 (3.98%) [95% CI: 39,029; 51,123]. All statistical models were in concordance, with MSNP estimates ranging from roughly 44,000 to 48,000: MLR (47,707; 95% CI: 42,462; 52,999), MLR with weights (45,882; 95% CI: 39,792; 51,972), bootstrap regression (47,730; 95% CI: 41,629; 53,785), GEE (47,649; 95% CI: 41,629; 53,670), GEE with weights (45,076; 95% CI: 39,029; 51,123), Svy-Reg (44,196; 95% CI: 40,004; 48,390), and MLM (46,513; 95% CI: 39,869; 53,157).

Conclusion. The RGV is a flood zone, highly susceptible to hurricanes and other natural disasters. People in the region are mostly Hispanic and under-educated, with among the lowest income levels in the U.S. In a disaster, the population at large is poorly equipped to care for MSNP, with only 37% having personal transport. Local and state government intervention in planning, preparation, and evacuation support is necessary in any such disaster to avoid the loss of precious human life.

Keywords: complex surveys, statistical methods, multilevel models, cluster randomized, sampling weights, raking, survey regression, generalized estimating equations (GEE), random effects, intracluster correlation coefficient (ICC).
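
As an illustration of how a design-respecting bootstrap can be set up for data like these, the Python sketch below resamples whole clusters to estimate a weighted MSNP total with a percentile interval. It is a simplified stand-in, not the study's bootstrap regression model, and the column names and the 0/1 MSNP indicator are assumptions made for illustration.

```python
import numpy as np
import pandas as pd

def cluster_bootstrap_msnp(df, n_boot=1000, seed=0):
    """Cluster bootstrap of the weighted MSNP total.

    df columns (illustrative names): 'cluster' (PSU id), 'weight'
    (sampling weight), 'msnp' (0/1 indicator for the household)."""
    rng = np.random.default_rng(seed)
    groups = {k: g for k, g in df.groupby("cluster")}
    ids = list(groups)
    totals = np.empty(n_boot)
    for b in range(n_boot):
        # resample whole clusters to respect the multi-stage design
        picked = rng.choice(ids, size=len(ids), replace=True)
        rs = pd.concat([groups[c] for c in picked])
        totals[b] = (rs["weight"] * rs["msnp"]).sum()
    est = (df["weight"] * df["msnp"]).sum()
    return est, np.quantile(totals, [0.025, 0.975])
```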

Relevance: 10.00%

Abstract:

Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population in standardization introduces selection bias due to the healthy worker effect. Small samples in specific groups also pose problems for estimating relative risk and assessing statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and to find adjusted rates. In this dissertation, a multiplicative model is considered to address issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to the conventional standardization technique. Maximum likelihood estimates of the model parameters are used to construct an index, similar to the SMR, for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability of the relative risk on generated samples. The model provides an alternative to both the direct and the indirect standardization methods.
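
A minimal sketch of the parametric bootstrap idea for an SMR-type index follows, assuming Poisson stratum counts (a standard assumption for mortality data, though the dissertation's multiplicative model is more general); the names are illustrative.

```python
import numpy as np

def parametric_bootstrap_smr(observed, expected, n_boot=5000, seed=0):
    """Parametric bootstrap for an SMR-type index.

    observed: observed deaths per stratum (numpy array); expected:
    expected deaths from reference rates. Counts are treated as Poisson."""
    rng = np.random.default_rng(seed)
    smr_hat = observed.sum() / expected.sum()
    # simulate stratum counts from the fitted means (expected * SMR)
    sims = rng.poisson(expected * smr_hat, size=(n_boot, len(expected)))
    smr_boot = sims.sum(axis=1) / expected.sum()
    return smr_hat, np.quantile(smr_boot, [0.025, 0.975])
```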

Relevance: 10.00%

Abstract:

The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who conduct HLGM analyses are mostly interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at specific time points. Thus, reporting and interpreting effect sizes has received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism directed at, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for group trajectory comparisons and their corresponding confidence intervals in HLGM analyses, because appropriate, standard functions for estimating effect sizes associated with the model-implied difference between group trajectories are lacking, as are computing packages in popular statistical software to calculate them automatically.

The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We proposed two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggested robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (d_u) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and d_u, and recommended the best one for application. Finally, we constructed 95% confidence intervals with the most suitable method for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between them can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analysis provide additional, meaningful information for assessing group effects on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameters. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated (BCa) method when they are not met.
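
As a sketch of the BCa interval recommended above, the following Python code applies it to a simple two-group standardized mean difference rather than the model-implied HLGM trajectory difference, which the dissertation's functions compute; all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def cohens_d(x, y):
    """Standardized mean difference with pooled SD (toy effect size)."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))
    return (x.mean() - y.mean()) / sp

def bca_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected and accelerated (BCa) bootstrap CI for Cohen's d."""
    rng = np.random.default_rng(seed)
    d_hat = cohens_d(x, y)
    boot = np.array([
        cohens_d(x[rng.integers(0, len(x), len(x))],
                 y[rng.integers(0, len(y), len(y))])
        for _ in range(n_boot)
    ])
    z0 = norm.ppf(np.mean(boot < d_hat))          # bias correction
    # acceleration from delete-one jackknife values over both groups
    jack = np.array(
        [cohens_d(np.delete(x, i), y) for i in range(len(x))] +
        [cohens_d(x, np.delete(y, i)) for i in range(len(y))])
    u = jack.mean() - jack
    a = (u**3).sum() / (6 * (u**2).sum() ** 1.5)  # acceleration constant
    def adj(q):                                    # BCa-adjusted quantile level
        z = norm.ppf(q)
        return norm.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))
    return tuple(np.quantile(boot, [adj(alpha / 2), adj(1 - alpha / 2)]))
```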