12 results for Random utility

in DigitalCommons@The Texas Medical Center


Relevance: 20.00%

Abstract:

Perceptual learning is a training-induced improvement in performance. Mechanisms underlying the perceptual learning of depth discrimination in dynamic random-dot stereograms were examined by assessing stereothresholds as a function of decorrelation. The inflection point of the decorrelation function was defined as the level of decorrelation corresponding to 1.4 times the threshold measured at 0% decorrelation. In general, stereothresholds increased with increasing decorrelation. Following training, stereothresholds and standard errors of measurement decreased systematically for all tested decorrelation values. Post-training decorrelation functions were reduced by a multiplicative constant (approximately 5), exhibiting changes in stereothresholds without changes in the inflection points. Disparity energy model simulations indicate that a post-training reduction in neuronal noise is sufficient to account for the perceptual learning effects. In two subjects, the learning effects were retained over a period of six months, which may have application for training stereo-deficient subjects.
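The inflection-point definition above (the decorrelation level at which the stereothreshold reaches 1.4 times the threshold at 0% decorrelation) can be sketched numerically by linear interpolation; the data values below are illustrative, not measurements from the study.

```python
# Estimate the inflection point of a decorrelation function: the
# decorrelation level at which the stereothreshold first reaches
# 1.4x the threshold measured at 0% decorrelation.
# Illustrative data only; not values from the study.

def inflection_point(decorrelation, thresholds, factor=1.4):
    """Linearly interpolate to the first crossing of factor * thresholds[0]."""
    target = factor * thresholds[0]
    for i in range(len(decorrelation) - 1):
        x0, x1 = decorrelation[i], decorrelation[i + 1]
        y0, y1 = thresholds[i], thresholds[i + 1]
        if y0 <= target <= y1:
            # interpolate between the bracketing measurements
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    return None  # threshold never reached the criterion level

decorr = [0, 20, 40, 60, 80]    # percent decorrelation
thresh = [10, 11, 14, 22, 40]   # stereothreshold (arbitrary units), increasing
print(inflection_point(decorr, thresh))  # 40.0
```

A multiplicative (vertical) shift of the whole threshold function, as reported post-training, leaves this crossing point unchanged, which is why the inflection point is a useful invariant here.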

Relevance: 20.00%

Abstract:

The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model, and the underlying assumption is usually that the random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated.

In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with characteristics (e.g., number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to results obtained from the analysis of data with similar design characteristics but normally distributed random effects.

The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power. Negative biases were detected when estimating the sample ICC, and these increased dramatically in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e., 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
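The simulation idea described above can be sketched in a few lines: draw a skewed (here, centered exponential) group effect instead of a normal one, and estimate the ICC with the standard one-way ANOVA estimator for a balanced design. All parameter values are illustrative, not those used in the dissertation, and the ANOVA estimator stands in for the REML fit done in SAS PROC MIXED.

```python
import random
import statistics

# Sketch: group-randomized data with a SKEWED group-level random effect,
# then the one-way ANOVA (moment) estimator of the ICC.
# Parameters are illustrative, not the dissertation's design values.

def simulate_icc(n_groups=20, n_per_group=30, sigma_g=0.5, sigma_e=1.0, seed=1):
    rng = random.Random(seed)
    groups = []
    for _ in range(n_groups):
        # centered exponential group effect: mean 0, sd sigma_g, right-skewed
        g = sigma_g * (rng.expovariate(1.0) - 1.0)
        groups.append([g + rng.gauss(0, sigma_e) for _ in range(n_per_group)])

    grand = statistics.mean(y for grp in groups for y in grp)
    means = [statistics.mean(grp) for grp in groups]
    # balanced one-way ANOVA mean squares
    msb = n_per_group * sum((m - grand) ** 2 for m in means) / (n_groups - 1)
    msw = sum((y - m) ** 2 for grp, m in zip(groups, means) for y in grp) \
          / (n_groups * (n_per_group - 1))
    return (msb - msw) / (msb + (n_per_group - 1) * msw)

true_icc = 0.5 ** 2 / (0.5 ** 2 + 1.0 ** 2)  # 0.2 by construction
print(round(true_icc, 3), round(simulate_icc(), 3))
```

Repeating the estimate over many seeds and comparing the mean estimate to the true ICC is how the bias reported in the abstract would be measured.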

Relevance: 20.00%

Abstract:

The selection of a model to guide the understanding and resolution of community problems is an important issue relating to the foundation of public health practice: assessment, policy development, and assurance. Many assessment models produce a diagnosis of community weaknesses but fail to promote planning and interventions. Rapid Participatory Appraisal (RPA) is a participatory action research model which regards assessment as the first step in the problem-solving process, and claims to achieve assessment and policy development within limited resources of time and money. Literature documenting the fulfillment of these claims, and thereby supporting the utility of the model, is relatively sparse and difficult to obtain. Very few articles discuss the changes resulting from RPA assessments in urban areas, and those that do describe studies conducted outside the U.S.A.

This study examines the utility of the RPA model and its underlying theories (systems theory, grounded theory, and principles of participatory change), as illustrated by the case study of a community assessment conducted for the Texas Diabetes Institute (TDI), San Antonio, Texas, and its subsequent outcomes. Diabetes has a high prevalence and is a major issue in San Antonio. Faculty and students conducted the assessment through informal collaboration between two nursing and public health assessment courses, providing practical student experiences. The study area was large, and the flexibility of the model was tested by applying it in contiguous sub-regions and reanalyzing the aggregated results for the whole study area. Official TDI reports and a mail survey of agency employees described the policy development resulting from the community diagnoses revealed by the assessment.

The RPA model met the criteria for utility from the perspectives of merit, worth, efficiency, and effectiveness: it best met the agencies' criteria (merit), met the data needs of TDI in this particular situation (worth), provided valid results within budget, time, and personnel constraints (efficiency), and stimulated policy development by TDI (effectiveness). The RPA model therefore appears to have utility for community assessment, diagnosis, and policy development in circumstances similar to the TDI diabetes study.

Relevance: 20.00%

Abstract:

Cancer is a result of defects in the coordination of cell proliferation and programmed cell death. The extent of cell death is physiologically controlled by the activation of a programmed suicide pathway that results in a morphologically recognizable form of death termed apoptosis. Inducing apoptosis in tumor cells by gene therapy provides a potentially effective means to treat human cancers. p84N5 is a novel nuclear death-domain-containing protein that has been shown to bind an amino-terminal domain of the retinoblastoma tumor suppressor gene product (pRb). Expression of N5 can induce apoptosis that depends on its intact death domain and is inhibited by pRb. In many human cancer cells the functions of pRb are either lost through gene mutation or inactivated by other mechanisms, so N5-based gene therapy may induce cell death preferentially in tumor cells relative to normal cells. We have demonstrated that N5 gene therapy is less toxic to normal cells than to tumor cells. To test the possibility that N5 could be used in gene therapy of cancer, we generated a recombinant adenovirus engineered to express N5 and tested the effects of viral infection on the growth and tumorigenicity of human cancer cells. Adenoviral N5 infection significantly reduced the proliferation and tumorigenicity of breast, ovarian, and osteosarcoma tumor cell lines. Reduced proliferation and tumorigenicity were mediated by an induction of apoptosis, as indicated by DNA fragmentation in infected cells. We also tested the potential utility of N5 for gene therapy of pancreatic carcinoma, which typically responds poorly to conventional treatment. Adenovirus-mediated N5 gene transfer inhibited the growth of pancreatic cancer cell lines in vitro, and also reduced the growth and metastasis of human pancreatic adenocarcinoma in subcutaneous and orthotopic mouse models.

Interestingly, the pancreatic adenocarcinoma cells were more sensitive to N5 than to p53, suggesting that N5 gene therapy may be effective in tumors resistant to p53. We also tested the combined use of N5 and p53 for inhibiting pancreatic cancer cell growth in vitro and in vivo. Simultaneous use of N5 and RbΔCDK was found to produce a greater inhibition of pancreatic cancer cell growth in vitro and in vivo.

Relevance: 20.00%

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using stepwise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected on the basis of low pairwise LD. The data also indicate that, for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to keep increasing the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers face the challenge of using a statistical method that controls the type I error rate while maintaining adequate power.

We show that an empirical genotype-based multi-locus global test, which uses permutation testing to investigate the null distribution of the maximum test statistic, maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than haplotype analysis; for more complex models, however, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
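The two pairwise LD measures compared above have simple closed forms for two biallelic loci; a minimal sketch, with illustrative haplotype and allele frequencies rather than any values from the dissertation:

```python
# D' and r^2 for two biallelic SNPs (alleles A/a and B/b), computed
# from the AB haplotype frequency and the two allele frequencies.
# Frequencies below are illustrative.

def ld_measures(p_AB, p_A, p_B):
    """Return (D', r^2) given haplotype frequency p_AB and allele
    frequencies p_A and p_B."""
    d = p_AB - p_A * p_B  # raw disequilibrium coefficient
    # D' normalizes D by its maximum attainable magnitude
    if d >= 0:
        d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        d_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    d_prime = abs(d) / d_max
    # r^2 is the squared correlation between the two loci
    r2 = d ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
    return d_prime, r2

d_prime, r2 = ld_measures(p_AB=0.40, p_A=0.50, p_B=0.60)
print(round(d_prime, 3), round(r2, 3))  # 0.5 0.167
```

The example illustrates why the two measures can disagree: D′ can be high while r² is modest when allele frequencies differ, which is one reason r² is generally preferred for association power calculations.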

Relevance: 20.00%

Abstract:

Research has shown that disease-specific health-related quality of life (HRQoL) instruments are more responsive than generic instruments to particular disease conditions. However, only a few studies have used disease-specific instruments to measure HRQoL in hemophilia. The goal of this project was to develop a disease-specific utility instrument that measures patient preferences for various hemophilia health states. The visual analog scale (VAS), a ranking method, and the standard gamble (SG), a choice-based method incorporating risk, were used to measure patient preferences. Study participants (n = 128) were recruited from the UT/Gulf States Hemophilia and Thrombophilia Center and stratified by age: 0-18 years and 19+.

Test-retest reliability was demonstrated for both the VAS and SG instruments: overall within-subject correlation coefficients were 0.91 and 0.79, respectively. Results showed statistically significant differences in responses between pediatric and adult participants when using the SG (p = .045). However, no significant differences were shown between these groups when using the VAS (p = .636). When responses to the VAS and SG instruments were compared, statistically significant differences were observed in both the pediatric (p < .0001) and adult (p < .0001) groups. Data from this study also demonstrated that persons with hemophilia of varying disease severity, as well as those who were HIV infected, were able to evaluate a range of health states for hemophilia. This has important implications for the study of quality of life in hemophilia and the development of disease-specific HRQoL instruments.

The utility measures obtained from this study can be applied in economic evaluations that analyze the cost/utility of alternative hemophilia treatments. Results derived from the SG indicate that age can influence patients' preferences regarding their state of health. This may have implications for considering treatment options based on the mean age of the population under consideration. Although both instruments independently demonstrated reliability and validity, the results indicate that the two measures may not be interchangeable.
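A brief sketch of how the two elicitation methods above yield utilities: in a standard gamble for a chronic state, the utility is the indifference probability of the good outcome; VAS ratings are typically rescaled to the same 0-1 range for comparison. The numeric values are illustrative, not study responses.

```python
# Mapping elicitation responses to 0-1 utilities.
# Values are illustrative, not data from the study.

def sg_utility(p_indifference):
    """Standard gamble: at indifference between the health state for sure
    and a gamble with probability p of full health (else death), the
    state's utility equals p."""
    return p_indifference

def vas_utility(rating, worst=0.0, best=100.0):
    """Rescale a 0-100 visual analog rating to a 0-1 utility."""
    return (rating - worst) / (best - worst)

print(sg_utility(0.85), vas_utility(70))  # 0.85 0.7
```

Because SG utilities incorporate risk attitude while VAS ratings do not, the same respondent can produce systematically different values from the two methods, consistent with the non-interchangeability reported above.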

Relevance: 20.00%

Abstract:

Objective. To determine the accuracy of the urine protein:creatinine ratio (pr:cr) in predicting 300 mg of protein in a 24-hour urine collection in pregnant patients with suspected preeclampsia.

Methods. A systematic review was performed. Articles were identified through electronic databases, and relevant citations were found by hand-searching textbooks and review articles. Included studies evaluated patients with suspected preeclampsia using both a 24-hour urine sample and a pr:cr. Only English-language articles were included. Studies of patients with chronic illness such as chronic hypertension, diabetes mellitus, or renal impairment were excluded from the review. Two researchers extracted accuracy data for the pr:cr relative to a gold standard of 300 mg of protein in a 24-hour sample, as well as population and study characteristics. The data were analyzed and summarized in tabular and graphical form.

Results. Sixteen studies were identified, and only three studies, with 510 total patients, met our inclusion criteria. The studies evaluated different cut-points for a positive pr:cr, from 130 mg/g to 700 mg/g. Sensitivities and specificities were 90-93% and 33-65%, respectively, for a pr:cr of 130-150 mg/g; 81-95% and 52-80%, respectively, for a pr:cr of 300 mg/g; and 85-87% and 96-97%, respectively, for a pr:cr of 600-700 mg/g.

Conclusion. The value of a random pr:cr to exclude preeclampsia is limited because even low levels of pr:cr (130-150 mg/g) may miss up to 10% of patients with significant proteinuria. A pr:cr of more than 600 mg/g may obviate a 24-hour collection.
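The sensitivity and specificity figures above come from 2x2 tables of pr:cr results against the 24-hour gold standard. A minimal sketch of the calculation; the counts are hypothetical, not data from the included studies:

```python
# Sensitivity and specificity of a pr:cr cut-point against the 24-hour
# gold standard (>= 300 mg protein). Counts below are hypothetical.

def sens_spec(tp, fn, tn, fp):
    """tp/fn: gold-standard positives above/below the cut-point;
    tn/fp: gold-standard negatives below/above the cut-point."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# hypothetical 2x2 table at a pr:cr cut-point of 300 mg/g
sens, spec = sens_spec(tp=90, fn=10, tn=70, fp=30)
print(sens, spec)  # 0.9 0.7
```

Raising the cut-point trades sensitivity for specificity, which is exactly the pattern in the reported ranges: low cut-points (130-150 mg/g) are sensitive but non-specific, while 600-700 mg/g is specific enough to rule in significant proteinuria.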

Relevance: 20.00%

Abstract:

Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the Random Forests method. Throughout this study we showed that Random Forests can handle a large number of weak input variables without overfitting and can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities.

Random Forests can handle large-scale data sets without rigorous data preprocessing, and it has a robust variable importance ranking measure. We propose a novel variable selection method in the context of Random Forests that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhances the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments.

When the data set had a high variable-to-observation ratio, Random Forests complemented established logistic regression. This study suggests that Random Forests is recommended for such high-dimensional data: one can use Random Forests to select the important variables, and then use logistic regression or Random Forests itself to estimate the effect sizes of the predictors and to classify new observations. We also found that the mean decrease in accuracy is a more reliable variable ranking measure than the mean decrease in Gini.
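One way to realize the "noise level as cut-off" idea above is to append a purely random synthetic column, fit a forest, and keep only the real predictors whose importance exceeds that of the synthetic column. This is a sketch of that general idea (using scikit-learn's impurity-based importances and a single control column), not the thesis's exact procedure; the data are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch: select variables whose Random Forest importance exceeds that of
# a synthetic pure-noise control column. Data, sizes, and the one-control
# rule are illustrative assumptions, not the thesis's exact method.

rng = np.random.default_rng(0)
n = 500
signal = rng.normal(size=(n, 2))   # two informative predictors
noise = rng.normal(size=(n, 5))    # five uninformative predictors
y = (signal[:, 0] + signal[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# last column is the synthetic noise control
X = np.hstack([signal, noise, rng.normal(size=(n, 1))])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

cutoff = rf.feature_importances_[-1]  # importance of the control column
selected = [i for i in range(X.shape[1] - 1)
            if rf.feature_importances_[i] > cutoff]
print(selected)  # the informative columns 0 and 1 should appear
```

In practice one would use several control columns (or permutation importance, the "mean decrease in accuracy" measure the thesis found more reliable) and take, say, the maximum control importance as the cut-off.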

Relevance: 20.00%

Abstract:

Gastroesophageal reflux disease (GERD) is a common condition affecting 25 to 40% of the population and causes significant morbidity in the U.S., accounting for at least 9 million office visits to physicians, with estimated annual costs of $10 billion. Previous research has not clearly established whether infection with Helicobacter pylori, a known cause of peptic ulcer, atrophic gastritis, and non-cardia adenocarcinoma of the stomach, is associated with GERD. This study is a secondary analysis of data collected in a cross-sectional study of a random sample of adult residents of Ciudad Juárez, Mexico, conducted in 2004 (Prevalence and Determinants of Chronic Atrophic Gastritis Study, or CAG study; Dr. Victor M. Cardenas, Principal Investigator). In this study, the presence of GERD was based on responses to the previously validated Spanish Language Dyspepsia Questionnaire. Responses indicating gastroesophageal reflux symptoms and disease were compared with the presence of H. pylori infection, as measured by culture, histology, and rapid urease test, and with findings of upper endoscopy (i.e., hiatus hernia and erosive and atrophic esophagitis). Prevalence ratios were calculated using bivariate, stratified, and multivariate negative binomial regression analyses to assess the relation between active H. pylori infection and the prevalence of typical gastroesophageal reflux syndrome and disease, while controlling for known risk factors of GERD such as obesity. In a random sample of 174 adults, 48 (27.6%) of the study participants had typical reflux syndrome, and only 5% (9/174) had GERD per se according to the Montreal consensus, which defines reflux syndromes and disease based on whether the symptoms are perceived as troublesome by the subject. There was no association between H. pylori infection and typical reflux syndrome or GERD. However, we found that in this Northern Mexican population there was a moderate association (prevalence ratio = 2.5; 95% CI = 1.3, 4.7) between obesity (BMI ≥ 30 kg/m²) and typical reflux syndrome. Management and prevention of obesity could significantly curb the growing number of persons affected by gastroesophageal reflux symptoms and disease in Northern Mexico.
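The prevalence ratio reported above compares the prevalence of typical reflux syndrome in obese versus non-obese participants, with a confidence interval usually built on the log scale. A minimal sketch of that calculation; the 2x2 counts below are hypothetical, chosen only to produce a ratio of similar magnitude, and are not the study's data:

```python
import math

# Prevalence ratio with a log-scale 95% CI (delta-method standard error).
# Counts are hypothetical, not the CAG study's data.

def prevalence_ratio(a, n1, c, n0):
    """PR for exposed (a cases out of n1) vs unexposed (c cases out of n0)."""
    pr = (a / n1) / (c / n0)
    # standard error of log(PR) for a cross-sectional prevalence ratio
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, lo, hi

pr, lo, hi = prevalence_ratio(a=20, n1=40, c=28, n0=134)
print(round(pr, 2), round(lo, 2), round(hi, 2))  # 2.39 1.52 3.76
```

The regression approach in the study adjusts this crude ratio for covariates, but the crude 2x2 calculation is the quantity being modeled.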

Relevance: 20.00%

Abstract:

Background: Lynch syndrome (LS) is a familial cancer syndrome with a high prevalence of colorectal and endometrial carcinomas among affected family members. Clinical criteria, developed from information obtained from familial colorectal cancer registries, have been generated to identify individuals at elevated risk of having LS. In 2007, the Society of Gynecologic Oncology (SGO) codified criteria to assist in identifying women presenting with gynecologic cancers at elevated risk of having LS. These criteria have not been validated in a population-based setting.

Materials and Methods: We retrospectively identified 412 unselected endometrial cancer cases. Clinical and pathologic information was obtained from the electronic medical record, and all tumors were tested for expression of the DNA mismatch repair proteins by immunohistochemistry. Tumors exhibiting loss of MSH2, MSH6, or PMS2 were designated probable Lynch syndrome (PLS). For tumors exhibiting immunohistochemical loss of MLH1, we used a PCR-based MLH1 methylation assay to distinguish PLS tumors from sporadic tumors; samples lacking methylation of the MLH1 promoter were also designated PLS. The sensitivity and specificity of the SGO criteria for detecting PLS tumors were calculated, and the clinical and pathologic features of sporadic and PLS tumors were compared. A simplified cost-effectiveness analysis was also performed comparing the direct costs of applying the SGO criteria vs. universal tumor testing.

Results: In our cohort, 43/408 (10.5%) of endometrial carcinomas were designated PLS. The sensitivity and specificity of the SGO criteria for identifying PLS cases were 32.7% and 77%, respectively. Multivariate analysis of clinical and pathologic parameters failed to identify statistically significant differences between sporadic and PLS tumors, with the exception that tumors arising from the lower uterine segment were more likely to be PLS. The cost-effectiveness analysis showed that the clinical-criteria and universal-testing strategies cost $6,235.27 and $5,970.38 per PLS case identified, respectively.

Conclusions: The SGO 5-10% criteria successfully identify PLS cases among women who are young or have a significant family history of LS-related tumors. However, a larger proportion of PLS cases, occurring at older ages with less significant family history, is not detected by this screening strategy. Compared to the SGO clinical criteria, universal tumor testing is a cost-effective strategy to identify women presenting with endometrial cancer who are at elevated risk of having LS.
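The cost-effectiveness comparison above reduces to a simple ratio: total direct cost of a strategy divided by the number of PLS cases it identifies. A minimal sketch; the cost and case totals below are hypothetical stand-ins, not the study's actual inputs (only the published per-case figures are known):

```python
# Cost per PLS case identified = total strategy cost / cases identified.
# Totals below are hypothetical, not the study's inputs.

def cost_per_case(total_cost, cases_identified):
    return total_cost / cases_identified

criteria_cost = cost_per_case(87_500.0, 14)    # clinical-criteria triage
universal_cost = cost_per_case(256_700.0, 43)  # test every tumor
print(round(criteria_cost, 2), round(universal_cost, 2))  # 6250.0 5969.77
```

The study's conclusion follows this shape: universal testing costs more in total but finds enough additional PLS cases that its cost per case identified is lower.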