10 results for Random numbers

in DigitalCommons@The Texas Medical Center


Relevance: 30.00%

Abstract:

Gastroesophageal reflux disease is a common condition affecting 25 to 40% of the population and causes significant morbidity in the U.S., accounting for at least 9 million office visits to physicians, with estimated annual costs of $10 billion. Previous research has not clearly established whether infection with Helicobacter pylori, a known cause of peptic ulcer, atrophic gastritis, and non-cardia adenocarcinoma of the stomach, is associated with gastroesophageal reflux disease. This study is a secondary analysis of data collected in a cross-sectional study of a random sample of adult residents of Ciudad Juarez, Mexico, conducted in 2004 (Prevalence and Determinants of Chronic Atrophic Gastritis Study, or CAG study; Dr. Victor M. Cardenas, Principal Investigator).

In this study, the presence of gastroesophageal reflux disease was based on responses to the previously validated Spanish Language Dyspepsia Questionnaire. Responses indicating the presence of gastroesophageal reflux symptoms and disease were compared with the presence of H. pylori infection, as measured by culture, histology, and rapid urease test, and with findings on upper endoscopy (i.e., hiatus hernia and erosive and atrophic esophagitis). The prevalence ratio was calculated using bivariate, stratified, and multivariate negative binomial logistic regression analyses to assess the relation between active H. pylori infection and the prevalence of typical gastroesophageal reflux syndrome and disease, while controlling for known risk factors of gastroesophageal reflux disease such as obesity.

In a random sample of 174 adults, 48 (27.6%) of the study participants had typical reflux syndrome and only 5% (9/174) had gastroesophageal reflux disease per se according to the Montreal consensus, which defines reflux syndromes and disease based on whether the symptoms are perceived as troublesome by the subject. There was no association between H. pylori infection and typical reflux syndrome or gastroesophageal reflux disease. However, we found that in this Northern Mexican population there was a moderate association (prevalence ratio = 2.5; 95% CI = 1.3, 4.7) between obesity (≥30 kg/m²) and typical reflux syndrome. Management and prevention of obesity will significantly curb the growing number of persons affected by gastroesophageal reflux symptoms and disease in Northern Mexico.
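The obesity association above reduces to a prevalence ratio computed from a 2×2 table. A minimal sketch in Python, using made-up counts (not the study's data), with a Wald-type 95% CI computed on the log scale:

```python
import math

def prevalence_ratio(exp_cases, exp_total, unexp_cases, unexp_total):
    """Prevalence ratio with a Wald-type 95% CI computed on the log scale."""
    p1 = exp_cases / exp_total        # prevalence among the exposed (e.g. obese)
    p0 = unexp_cases / unexp_total    # prevalence among the unexposed
    pr = p1 / p0
    # standard error of ln(PR) for a prevalence (risk) ratio
    se = math.sqrt(1 / exp_cases - 1 / exp_total
                   + 1 / unexp_cases - 1 / unexp_total)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, (lo, hi)

# Hypothetical counts, NOT the study's data: 20/50 exposed vs 28/124 unexposed
pr, ci = prevalence_ratio(20, 50, 28, 124)
print(f"PR = {pr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The study's stratified and multivariate analyses would adjust this crude ratio for covariates, which the sketch does not attempt.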

Relevance: 20.00%

Abstract:

Perceptual learning is a training-induced improvement in performance. Mechanisms underlying the perceptual learning of depth discrimination in dynamic random-dot stereograms were examined by assessing stereothresholds as a function of decorrelation. The inflection point of the decorrelation function was defined as the level of decorrelation corresponding to 1.4 times the threshold when decorrelation is 0%. In general, stereothresholds increased with increasing decorrelation. Following training, stereothresholds and standard errors of measurement decreased systematically for all tested decorrelation values. Post-training decorrelation functions were reduced by a multiplicative constant (approximately 5), exhibiting changes in stereothresholds without changes in the inflection points. Disparity energy model simulations indicate that a post-training reduction in neuronal noise is sufficient to account for the perceptual learning effects. In two subjects, the learning effects were retained over a period of six months, which may have application for training stereo-deficient subjects.
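The inflection-point definition above (the decorrelation level at which the threshold reaches 1.4 times the 0%-decorrelation threshold) can be sketched numerically. The threshold values below are hypothetical, not the study's measurements; the sketch also illustrates why a purely multiplicative improvement leaves the inflection point unchanged:

```python
def inflection_point(decorr, thresholds):
    """Decorrelation level where the threshold reaches 1.4x the threshold
    at 0% decorrelation, by linear interpolation (assumes the threshold
    function is increasing)."""
    target = 1.4 * thresholds[0]
    for i in range(len(decorr) - 1):
        y0, y1 = thresholds[i], thresholds[i + 1]
        if y0 <= target <= y1:
            x0, x1 = decorr[i], decorr[i + 1]
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    return None

# Hypothetical stereothresholds (arcsec) at each % decorrelation
decorr = [0, 20, 40, 60, 80]
pre = [20.0, 24.0, 32.0, 48.0, 80.0]
post = [t / 5 for t in pre]   # ~5x multiplicative post-training improvement

# A purely multiplicative shift leaves the inflection point unchanged.
print(inflection_point(decorr, pre), inflection_point(decorr, post))
```

Because the 1.4x criterion is relative to each function's own 0% threshold, dividing every threshold by the same constant moves the target down by that constant as well, so the crossing point stays put.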

Relevance: 20.00%

Abstract:

Numerous co-factors, genetic, environmental, and physical, play an important role in the development and prognosis of cancer. Each year in the USA, more than 31,000 cases of oral and 13,000 cases of cervical cancer are diagnosed. Substantial epidemiological data support a high correlation between the development of these cancers and the presence of specific types of human papillomavirus (HPV). Molecular biological studies show that not only are several of the viral genes necessary and sufficient to cause transformation, but they also function synergistically with other co-factors. Evidence suggests that prevention of infection or inhibition of viral gene expression may alter the course of malignant transition. The main objective of this project was to test the hypothesis that some human carcinoma cells containing HPV behave in a malignant manner because the viral genes function in the maintenance of some aspect of the transformed phenotype.

The specific aims were (1) to select oral and cervical cancer cell lines that were HPV-negative or that harbored transcriptionally active HPV-18, (2) to construct and determine the effects of recombinant sense- or antisense-expressing vectors, and (3) to test the effects of synthetic antisense oligodeoxynucleotides on the transformed behavior of these cells.

To screen cells, we performed Southern and Northern analyses and polymerase chain reactions. When antisense-expressing vectors were used, cells harboring low numbers of HPV-18 were unable to survive transfection, but they were readily transfected with all other constructs. Rare antisense transfectants obtained from HPV-positive cells showed significantly altered characteristics, including malignant potential in nude mice. The HPV-negative cells showed no differences in transfection efficiencies or growth characteristics with any construct. In addition, treatment of the HPV-positive cells with antisense, but not random, oligodeoxynucleotides resulted in decreased cell proliferation and even cell death. These effects were dose-dependent, synergistic, and HPV-specific.

These results suggest that expression of the viral genes plays an important role in the maintenance of the transformed phenotype, which implies that inhibition of their expression by antisense molecules may be therapeutic in HPV-induced tumors.

Relevance: 20.00%

Abstract:

Ethnic violence appears to be the major source of violence in the world. Ethnic hostilities are potentially all-pervasive because most countries in the world are multi-ethnic. Public health's growing focus on violence reflects the field's increasing role in addressing this issue.

The present study is a secondary analysis of a dataset of responses by 272 individuals from four ethnic groups (Anglo, African, Mexican, and Vietnamese Americans) who answered questions on variables related to ethnic violence from a general questionnaire distributed to ethnically diverse, purposive, nonprobability, self-selected groups of individuals in Houston, Texas, in 1993.

One goal was psychometric: learning about issues in the analysis of datasets with modest numbers of observations, comparing two approaches to dealing with missing observations that were not missing at random (conducting the analysis on two datasets), transformation analysis of continuous variables for logistic regression, and logistic regression diagnostics. Regarding the psychometric goal, it was concluded that measurement model analysis was not possible with a relatively small dataset with nonnormal variables, such as Likert-scaled variables; therefore, exploratory factor analysis was used. The two approaches to dealing with missing values produced comparable findings. Transformation analysis suggested that the continuous variables were in the correct scale, and the diagnostics suggested that the model fit was adequate.

The substantive portion of the analysis tested four hypotheses. Hypothesis One proposed that attitudes/efficacy regarding alternative approaches to resolving grievances from the general questionnaire represented underlying factors: nonpunitive social norms and strategies for addressing grievances (using the political system, organizing protests, using the system to punish offenders, and personal mediation). Evidence was found to support all but one factor, nonpunitive social norms.

Hypothesis Two proposed that the factor variables and the other independent variables (jail, grievance, male, young, and membership in a particular ethnic group) were associated with (non)violence. Jail, grievance, and not using the political system to address grievances were associated with a greater likelihood of intergroup violence. No evidence was found to support Hypotheses Three and Four, which proposed that grievance and ethnic group membership would interact with other variables (i.e., age, gender, etc.) to produce variant levels of subgroup (non)violence.

The generalizability of the results is constrained by the purposive, self-selected nature of the sample and the small sample size (n = 272). Suggestions for future research include incorporating other possible variables or factors predictive of intergroup violence in models of the kind tested here, and developing and evaluating interventions that promote electoral and nonelectoral political participation as means of reducing interethnic conflict.
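The two missing-data approaches mentioned above are not specified in detail here. As a hedged illustration only, a complete-case analysis and a simple mean-imputation stand-in (both simplifications invented for this sketch, not the study's actual procedures) produce the two analysis datasets like so:

```python
import statistics

def complete_case(rows):
    """Approach 1: drop any row containing a missing (None) value."""
    return [row for row in rows if None not in row]

def impute_mean(rows):
    """Approach 2: replace each missing value with its column's observed mean."""
    cols = list(zip(*rows))
    means = [statistics.fmean([v for v in col if v is not None]) for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

# Hypothetical Likert-style responses (1-5) with missing entries
rows = [[1, 4], [2, None], [3, 5], [None, 3], [5, 4]]
cc = complete_case(rows)   # analysis dataset 1: only the 3 complete rows
im = impute_mean(rows)     # analysis dataset 2: all 5 rows, gaps filled
```

Running the same model on both datasets and comparing results, as the study did, is one practical check that conclusions are not driven by how the missing values were handled.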

Relevance: 20.00%

Abstract:

The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs, often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of the fixed and random components of the model, and the underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated.

In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g., number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to results obtained from the analysis of data with similar design characteristics but normally distributed random effects.

The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power. Negative biases were detected when estimating the sample ICC, and these increased dramatically in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e., 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
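A minimal sketch of the kind of simulation described above, in Python rather than SAS, with a simple one-way ANOVA ICC estimator standing in for PROC MIXED/REML; all design values and the choice of a centred exponential for the skewed group effect are illustrative assumptions:

```python
import random
import statistics

def simulate_icc(n_groups=50, n_per=20, true_icc=0.05, skewed=False, seed=1):
    """Simulate nested data and return the one-way ANOVA estimate of the ICC.
    The group effect is normal, or optionally skewed (centred exponential)."""
    rng = random.Random(seed)
    sigma_b = true_icc ** 0.5          # between-group SD (total variance = 1)
    sigma_w = (1 - true_icc) ** 0.5    # within-group SD
    groups = []
    for _ in range(n_groups):
        if skewed:
            b = (rng.expovariate(1.0) - 1.0) * sigma_b   # mean 0, SD sigma_b
        else:
            b = rng.gauss(0.0, sigma_b)
        groups.append([b + rng.gauss(0.0, sigma_w) for _ in range(n_per)])
    means = [statistics.fmean(g) for g in groups]
    grand = statistics.fmean(means)
    msb = n_per * sum((m - grand) ** 2 for m in means) / (n_groups - 1)
    msw = (sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
           / (n_groups * (n_per - 1)))
    return (msb - msw) / (msb + (n_per - 1) * msw)

print(simulate_icc(skewed=False), simulate_icc(skewed=True))
```

Repeating such simulations over many seeds and tabulating the estimates against the true ICC is how bias of the kind reported above would be quantified.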

Relevance: 20.00%

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies.

I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation.

Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected on the basis of low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continually increase the number of SNPs in order to improve statistical power.

When typing large numbers of SNPs, researchers face the challenge of using a statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test, which uses permutation testing to investigate the null distribution of the maximum test statistic, maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than haplotype analysis; for more complex models, however, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
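The contrast between D′ and r² can be made concrete with a toy calculation. The haplotype and allele frequencies below are hypothetical; they illustrate the well-known case where D′ is maximal (1.0) while r² is modest, one reason r² is the more informative measure for association studies:

```python
def ld_measures(p_ab, p_a, p_b):
    """D' and r^2 for two biallelic loci, from the frequency of the
    A-B haplotype (p_ab) and the allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b                         # raw disequilibrium D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# Hypothetical frequencies: common SNP (0.5) and rarer SNP (0.1),
# with the rare allele found only on haplotypes carrying allele A.
dp, r2 = ld_measures(0.10, 0.5, 0.1)
print(dp, r2)   # D' is maximal while r^2 is modest
```

A SNP pair like this would look perfectly linked by D′ yet tag only a small fraction of the trait variance attributable to the other locus, which is what r² captures.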

Relevance: 20.00%

Abstract:

Objective. To determine the accuracy of the urine protein:creatinine ratio (pr:cr) in predicting 300 mg of protein in a 24-hour urine collection in pregnant patients with suspected preeclampsia.

Methods. A systematic review was performed. Articles were identified through electronic databases, and relevant citations were identified by hand-searching textbooks and review articles. Included studies evaluated patients with suspected preeclampsia using both a 24-hour urine sample and a pr:cr. Only English-language articles were included. Studies of patients with chronic illness such as chronic hypertension, diabetes mellitus, or renal impairment were excluded from the review. Two researchers extracted accuracy data for the pr:cr relative to a gold standard of 300 mg of protein in a 24-hour sample, as well as population and study characteristics. The data were analyzed and summarized in tabular and graphical form.

Results. Sixteen studies were identified; only three, with 510 total patients, met our inclusion criteria. The studies evaluated different cut-points for a positive pr:cr, from 130 mg/g to 700 mg/g. Sensitivities and specificities were 90-93% and 33-65%, respectively, for a pr:cr of 130-150 mg/g; 81-95% and 52-80%, respectively, for a pr:cr of 300 mg/g; and 85-87% and 96-97%, respectively, for a pr:cr of 600-700 mg/g.

Conclusion. The value of a random pr:cr for excluding preeclampsia is limited, because even low pr:cr levels (130-150 mg/g) may miss up to 10% of patients with significant proteinuria. A pr:cr of more than 600 mg/g may obviate a 24-hour collection.
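Evaluating a pr:cr cut-point against the 24-hour gold standard is a straightforward sensitivity/specificity calculation. A minimal sketch with invented (pr:cr, 24-hour protein) pairs, not data from the reviewed studies:

```python
def sens_spec(pairs, cutoff):
    """Sensitivity and specificity of a pr:cr cutoff (mg/g) against the
    24-hour gold standard (>= 300 mg of protein per 24 h)."""
    tp = sum(1 for prcr, protein in pairs if prcr >= cutoff and protein >= 300)
    fn = sum(1 for prcr, protein in pairs if prcr < cutoff and protein >= 300)
    tn = sum(1 for prcr, protein in pairs if prcr < cutoff and protein < 300)
    fp = sum(1 for prcr, protein in pairs if prcr >= cutoff and protein < 300)
    return tp / (tp + fn), tn / (tn + fp)

# Invented (pr:cr mg/g, 24-hour protein mg) pairs, NOT data from the review
data = [(120, 150), (140, 350), (310, 280), (450, 500), (800, 900), (90, 100)]
sens, spec = sens_spec(data, 300)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Raising the cutoff trades sensitivity for specificity, which is exactly the pattern in the pooled results above: low cutoffs rule in more true proteinuria but misclassify more normals.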

Relevance: 20.00%

Abstract:

Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the Random Forests method. Throughout this study we showed that Random Forests can deal with a large number of weak input variables without overfitting, can account for non-additive interactions between these input variables, and can be used for variable selection without being adversely affected by collinearities.

Random Forests can handle large-scale data sets without rigorous data preprocessing and has a robust variable importance ranking measure. We propose a novel variable selection method, in the context of Random Forests, that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhanced the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments.

When the data set had a high variables-to-observations ratio, Random Forests complemented the established logistic regression approach. This study suggests that Random Forests is recommended for such high-dimensionality data: one can use Random Forests to select the important variables and then use logistic regression, or Random Forests itself, to estimate the effect sizes of the predictors and to classify new observations. We also found that the mean decrease in accuracy is a more reliable variable ranking measure than the mean decrease in Gini.
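The noise-level cut-off idea can be sketched in simplified form. The sketch below substitutes absolute Pearson correlation for a Random Forests importance measure and uses the best score among several synthetic pure-noise probes as the cut-off; all names, data, and the correlation stand-in are illustrative assumptions, not the thesis's actual procedure:

```python
import random

def abs_corr(xs, ys):
    """Absolute Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy))

def select_above_noise(variables, outcome, n_probes=20, seed=0):
    """Keep variables whose 'importance' (|correlation| here, standing in
    for a forest importance) beats the best of n_probes pure-noise probes."""
    rng = random.Random(seed)
    cutoff = max(
        abs_corr([rng.gauss(0, 1) for _ in outcome], outcome)
        for _ in range(n_probes)
    )  # empirical noise level of the importance measure
    return [name for name, xs in variables.items()
            if abs_corr(xs, outcome) > cutoff]

rng = random.Random(42)
y = [rng.gauss(0, 1) for _ in range(200)]
variables = {
    "signal": [v + rng.gauss(0, 0.5) for v in y],  # genuinely related to y
    "junk":   [rng.gauss(0, 1) for _ in y],        # unrelated noise
}
print(select_above_noise(variables, y))
```

The point of the probe is that any variable whose importance is indistinguishable from that of known noise should not be selected, which is the intuition behind using the data noise level as the cut-off.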