16 results for Selection Analysis
in DigitalCommons@The Texas Medical Center
Abstract:
In the Practice Change Model, physicians act as key stakeholders: people who have both an investment in the practice and the capacity to influence how it performs. This leadership role is critical to the development and change of the practice, and leadership roles and effectiveness are important factors in quality improvement in primary care practices.

The study involved a comparative case study analysis to identify leadership roles and to relate those roles to the number and type of quality improvement strategies adopted during a Practice Change Model-based intervention study. The research used secondary data from four primary care practices with differing leadership styles. The practices are located in the San Antonio region and serve a large Hispanic population. The data were collected by two ABC Project Facilitators from each practice over a 12-month period and include Key Informant Interviews (all staff members), the Multi-method Assessment Process (MAP), and Practice Facilitation field notes. These data were used to evaluate leadership styles, management within the practice, and the intervention tools that were implemented. The chief steps were (1) to analyze whether leader-member relations contribute to the type of quality improvement strategy or strategies selected, (2) to investigate whether leader position power contributes to the number and type of strategies selected, and (3) to explore whether task structure varies across the four primary care practices.

The research found that involving more members of the clinic staff in decision-making, building bridges between organizational and clinical staff, and task structure all directly influence the number and type of quality improvement strategies implemented in primary care practice.

Although this research investigated the leadership styles of only four practices, it offers guidance on how to set priorities for, and implement, the quality improvement strategies that will have the greatest impact on patient care.
Abstract:
The genomic era brought by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and increasing completeness of gene network and protein-protein interaction databases open an avenue for enhancing the power to discover genes under natural selection. The aim of this thesis is to explore and develop normal mixture model-based methods that leverage gene network information to enhance the power of discovering the targets of natural selection. The results show that the developed statistical method, which combines the posterior log odds of a standard normal mixture model with the Guilt-By-Association score from the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, and it helps our understanding of the biology underlying complex diseases and related naturally selected phenotypes. The sketch below illustrates the general idea of such a combination.
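The following is a minimal sketch (not the thesis's actual model) of combining the posterior log odds from a two-component normal mixture with a network Guilt-By-Association score under a naïve Bayes independence assumption. The mixture parameters, the Bernoulli rates for the network evidence, and the neighbor flags are all illustrative assumptions.

```python
# Illustrative sketch only: two evidence sources (a per-gene selection statistic
# and the gene's network neighborhood) are treated as conditionally independent,
# so their log-odds simply add (naive Bayes).  All parameters are made up.
import numpy as np
from scipy.stats import norm

def mixture_log_odds(z, pi1=0.05, mu0=0.0, sd0=1.0, mu1=2.5, sd1=1.0):
    """Posterior log-odds that a gene's selection statistic z was drawn from
    the 'under selection' component of a two-component normal mixture."""
    num = pi1 * norm.pdf(z, mu1, sd1)
    den = (1.0 - pi1) * norm.pdf(z, mu0, sd0)
    return np.log(num / den)

def gba_log_odds(neighbor_flags, p_sel=0.6, p_null=0.1):
    """Network evidence: neighbors already flagged as selected are modeled as
    Bernoulli observations with different rates under the two hypotheses."""
    k, n = int(np.sum(neighbor_flags)), len(neighbor_flags)
    return k * np.log(p_sel / p_null) + (n - k) * np.log((1 - p_sel) / (1 - p_null))

z_gene = 1.8                              # selection statistic for one gene
neighbors = np.array([1, 0, 1, 1, 0])     # flags for its network neighbors
combined = mixture_log_odds(z_gene) + gba_log_odds(neighbors)
print(f"combined posterior log-odds: {combined:.2f}")
```

A gene with only a moderate selection statistic can thus be pushed above a discovery threshold when several of its network neighbors already show strong evidence of selection.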
Abstract:
The discovery of grid cells in the medial entorhinal cortex (MEC) permits the characterization of hippocampal computation in much greater detail than previously possible. The present study addresses how an integrate-and-fire unit driven by grid-cell spike trains may transform the multipeaked, spatial firing pattern of grid cells into the single-peaked activity that is typical of hippocampal place cells. Previous studies have shown that in the absence of network interactions, this transformation can succeed only if the place cell receives inputs from grids with overlapping vertices at the location of the place cell's firing field. In our simulations, the selection of these inputs was accomplished by fast Hebbian plasticity alone. The resulting nonlinear process was acutely sensitive to small input variations. Simulations differing only in the exact spike timing of grid cells produced different field locations for the same place cells. Place fields became concentrated in areas that correlated with the initial trajectory of the animal; the introduction of feedback inhibitory cells reduced this bias. These results suggest distinct roles for plasticity of the perforant path synapses and for competition via feedback inhibition in the formation of place fields in a novel environment. Furthermore, they imply that variability in MEC spiking patterns or in the rat's trajectory is sufficient for generating a distinct population code in a novel environment and suggest that recalling this code in a familiar environment involves additional inputs and/or a different mode of operation of the network.
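As a rough illustration of the class of model described (an integrate-and-fire unit driven by grid-cell spike trains with fast Hebbian plasticity at the input synapses), the sketch below uses homogeneous Poisson inputs in place of spatially modulated grid-cell firing and an ad hoc plasticity rule; every parameter value is an assumption for demonstration, not a setting from the study.

```python
# Minimal sketch, not the study's model: a leaky integrate-and-fire unit driven
# by Poisson "grid cell" spike trains, with a simple Hebbian rule that
# potentiates synapses from inputs that fired shortly before a postsynaptic
# spike.  All parameters and inputs are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, dt, T = 100, 0.001, 5.0            # 100 inputs, 1 ms steps, 5 s run
rates = rng.uniform(5.0, 20.0, n_inputs)     # stand-in for grid-cell rates (Hz)
w = np.full(n_inputs, 0.08)                  # synaptic weights
v, v_thresh, v_reset, tau_m = 0.0, 1.0, 0.0, 0.02
trace = np.zeros(n_inputs)                   # presynaptic eligibility trace
eta, tau_trace, w_max = 0.005, 0.02, 0.2     # learning rate, trace decay, weight cap

for _ in range(int(T / dt)):
    spikes = (rng.random(n_inputs) < rates * dt).astype(float)  # input spikes this step
    trace = trace * np.exp(-dt / tau_trace) + spikes            # decay and accumulate
    v += (-v / tau_m) * dt + w @ spikes                         # leaky integration
    if v >= v_thresh:                                           # postsynaptic spike
        v = v_reset
        w = np.clip(w + eta * trace, 0.0, w_max)                # Hebbian potentiation

print("mean synaptic weight after run:", round(float(w.mean()), 4))
```

Because potentiation is gated by postsynaptic spikes, whichever inputs happen to drive the earliest spikes gain an advantage; this is the kind of sensitivity to exact spike timing and initial trajectory that the abstract describes, and that feedback inhibition is said to temper.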
Abstract:
We have developed a novel way to assess the mutagenicity of environmentally important metal carcinogens, such as nickel, by creating a positive selection system based upon the conditional expression of a retroviral transforming gene. The target gene is the v-mos gene in MuSVts110, a murine retrovirus possessing a growth-temperature-dependent defect in expression of the transforming gene due to viral RNA splicing. In normal rat kidney cells infected with MuSVts110 (6m2 cells), splicing of the MuSVts110 RNA to form the mRNA encoding the transforming protein, p85^gag-mos, is growth-temperature dependent, occurring at 33°C and below but not at 39°C and above. This splicing "defect" is mediated by cis-acting viral sequences. Nickel chloride treatment of 6m2 cells followed by growth at 39°C allowed the selection of "revertant" cells that constitutively express p85^gag-mos due to stable changes in the viral RNA splicing phenotype, suggesting that nickel, a carcinogen whose mutagenicity has not been well established, can induce mutations in mammalian genes. We also show, by direct sequencing of PCR-amplified integrated MuSVts110 DNA from a 6m2 nickel-revertant cell line, that the nickel-induced mutation affecting the splicing phenotype is a cis-acting 70-base duplication of a region of the viral DNA surrounding the 3′ splice site. These findings provide the first example of the molecular basis for a nickel-induced DNA lesion and establish the mutagenicity of this potent carcinogen.
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests reached a maximum below one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, rendering the marker uninformative for disequilibrium.

The five tests using isolated unrelated individuals showed excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The tests based on the TDT (Transmission Disequilibrium Test) were not subject to any increase in error rates.

For all sample ascertainment costs, linkage disequilibrium tests were less expensive to carry out than the variance component test for recent mutations (<100 generations). Candidate gene scans saved even more money, and the use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
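For readers unfamiliar with the parent-child trio design mentioned above, the classic TDT reduces to a McNemar-style comparison of how often a given allele is transmitted versus not transmitted from heterozygous parents to affected offspring. The counts in this sketch are illustrative, not from the thesis simulations.

```python
# Minimal sketch of the classic Transmission Disequilibrium Test for one
# biallelic marker: among heterozygous parents, compare transmissions (b) and
# non-transmissions (c) of allele A1 to affected children.  Counts are made up.
from scipy.stats import chi2

b, c = 60, 38                        # transmissions vs non-transmissions of A1
tdt = (b - c) ** 2 / (b + c)         # chi-square statistic with 1 degree of freedom
p_value = chi2.sf(tdt, df=1)
print(f"TDT statistic = {tdt:.2f}, p = {p_value:.4f}")
```

Because the comparison is made within families, the TDT is robust to population admixture, which is why the trio-based tests in the abstract show no inflation of error rates under admixture.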
Abstract:
The human choriocarcinoma cell line JEG-3 is heterozygous at the adenosine deaminase (ADA) gene locus. Both allelic genes are under strong but incomplete repression, causing very low-level expression of the gene locus. Because cytotoxic adenosine analogues such as 9-β-D-arabinofuranosyladenine (ara-A) and 9-β-D-xylofuranosyladenine (xyl-A) can be specifically detoxified by the action of ADA, these analogues were used to select for JEG-3-derived cells with increased ADA expression. When JEG-3 cells were subjected to multi-step, successively increasing doses of either ara-A or xyl-A, resistant cells with increased ADA expression were generated. This increased ADA expression in the resistant cells was unstable, so that when the selective pressure was removed, cellular ADA expression decreased. Subclone analysis of xyl-A-resistant cells revealed that, compared to parental JEG-3 cells, individual resistant cells had elevated ADA levels, decreased adenosine kinase (ADK) levels, or both. The altered ADA and ADK expression in the resistant cells arose from independent events. Because of high endogenous tissue conversion factor (TCF) expression in the JEG-3 cells, the allelic nature of the increased ADA expression in most of the resistant cells could not be determined. However, several resistant subcloned cells were found to have lost TCF expression. These TCF-negative cells expressed only the ADA*2 allelic gene product. Cell fusion experiments demonstrated that the ADA*1 allelic gene was intact and functional in the A3-1A7 cell line. Chromosomal analysis of the A3-1A7 cells showed that they had no double minutes or homogeneously staining chromosomal regions, although a pair of new chromosomes was found in these cells. Segregation analysis of the hybrid cells indicated that an ADA*2 allelic gene was probably located on this new chromosome. The analysis of the A3-1A7 cell line suggested that the expression of only ADA*2 in these cells was the result of either a cis-deregulation of the ADA gene locus or, more probably, an amplification of the ADA*2 allelic gene. Two effective positive selection systems for ADA-positive cells were also developed and tested. These selection systems should eventually lead to the isolation of the ADA gene.
Abstract:
With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using stepwise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation but rather a SNP in high linkage disequilibrium with it. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected for low pairwise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to keep increasing the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers face the challenge of using a statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical, genotype-based multi-locus global test, which uses permutation testing to investigate the null distribution of the maximum test statistic, maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis; for more complex models, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
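The two pairwise LD measures compared in the abstract, D′ and r², are both derived from the same haplotype frequencies. The following sketch computes them for a single pair of biallelic SNPs; the frequencies are illustrative, not from the dissertation's simulations.

```python
# Minimal sketch of the pairwise LD measures D' and r^2 for two biallelic SNPs,
# computed from the allele frequencies and the AB haplotype frequency.
def ld_measures(p_ab, p_a, p_b):
    """Return (D', r^2) for alleles A (freq p_a) and B (freq p_b) given the
    frequency p_ab of the AB haplotype."""
    d = p_ab - p_a * p_b                                   # raw disequilibrium
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

d_prime, r2 = ld_measures(p_ab=0.30, p_a=0.40, p_b=0.50)   # illustrative values
print(f"D' = {d_prime:.3f}, r^2 = {r2:.3f}")
```

Note that r² is bounded by allele-frequency differences in a way D′ is not, which is part of why r² tracks the power of an association test on a tag SNP more directly.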
Abstract:
Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and synthetic positive controls were used to validate the Random Forests method. Throughout this study we showed that Random Forests can handle a large number of weak input variables without overfitting and can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities.

Random Forests can deal with large-scale data sets without rigorous data preprocessing and has a robust variable importance ranking measure. We propose a novel variable selection method in the context of Random Forests that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhanced the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments.

When the data set had a high variables-to-observations ratio, Random Forests complemented the established logistic regression approach. This study suggests that Random Forests is recommended for such high-dimensional data: one can use Random Forests to select the important variables and then use logistic regression, or Random Forests itself, to estimate the effect sizes of the predictors and to classify new observations.

We also found that the mean decrease in accuracy is a more reliable variable ranking measure than the mean decrease in Gini.
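One common way to realize a "noise level" cut-off of the kind described is to append shuffled copies of the predictors (synthetic controls with no association to the outcome) and keep only the real predictors whose importance exceeds the largest importance among the shuffled copies. The sketch below shows that idea on simulated data; it is an assumption-laden illustration, not the thesis's implementation, and it uses scikit-learn's default impurity-based importance rather than the mean decrease in accuracy preferred in the abstract.

```python
# Illustrative sketch: use shuffled "noise" copies of the predictors to set an
# importance cut-off for random-forest variable selection.  Data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)   # 2 true signals

X_noise = np.array([rng.permutation(col) for col in X.T]).T   # shuffle each column
X_aug = np.hstack([X, X_noise])                               # real + noise copies

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_aug, y)
imp = rf.feature_importances_
cutoff = imp[p:].max()                       # largest importance among noise copies
selected = np.where(imp[:p] > cutoff)[0]
print("selected predictor indices:", selected)
```

The shuffled columns preserve each predictor's marginal distribution while destroying any relationship to the outcome, so their importances estimate how high a truly uninformative variable can score by chance.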
Abstract:
In the United States, "binge" drinking among college students is an emerging public health concern due to its significant physical and psychological effects on young adults. The focus is on identifying interventions that can help decrease high-risk drinking behavior in this group of drinkers. One such intervention is motivational interviewing (MI), a client-centered therapy that aims to resolve client ambivalence by developing discrepancy and engaging the client in change talk. There is growing interest in determining the active ingredients that influence the alliance between the therapist and the client. This study is a secondary analysis of data from the Southern Methodist Alcohol Research Trial (SMART) project, a dismantling trial of MI and feedback among heavy-drinking college students. The present project examines the relationship between therapist and client language in MI sessions with a sample of "binge"-drinking college students. Of the 126 SMART tapes, 30 (15 from the 'MI with feedback' group and 15 from the 'MI only' group) were randomly selected for this study. MISC 2.1, a mutually exclusive and exhaustive coding system, was used to code the audio/videotaped MI sessions, and therapist and client language were analyzed for communication characteristics. Overall, therapists adopted an MI-consistent style and clients engaged in change talk. Counselor acceptance, empathy, spirit, and complex reflections were all significantly related to client change talk (p-values ranged from 0.001 to 0.047). Additionally, therapist advice without permission and MI-inconsistent therapist behaviors were strongly correlated with client sustain talk (p-values ranged from 0.006 to 0.048). Simple linear regression models showed a significant association between MI-consistent (MICO) therapist language (independent variable) and change talk (dependent variable), and between MI-inconsistent (MIIN) therapist language (independent variable) and sustain talk (dependent variable). The study has several limitations, including the small sample size, self-selection bias, poor inter-rater reliability for the global scales, and the lack of a temporal measure of therapist and client language. Future studies might consider a larger sample size to obtain more statistical power; in addition, the correlation between therapist language, client language, and drinking outcomes needs to be explored.
Abstract:
Context. Despite the rapid growth of disease management programs, there are still questions about their efficacy and effectiveness for improving patient outcomes and their ability to reduce costs associated with chronic disease.

Objective. To determine the effectiveness of disease management programs in improving the results of HbA1c tests, lipid profiles, and systolic blood pressure (SBP) readings among diabetics. These three quantitative measures are widely accepted for assessing the quality of a patient's diabetes management and the potential for future complications.

Data Sources. MEDLINE and CINAHL were searched from 1950 to June 2008 using MeSH terms designed to capture all relevant studies. Scopus pearling and hand searching were also done. Only English-language articles were selected.

Study Selection. Titles and abstracts for the 2347 articles were screened against predetermined inclusion and exclusion criteria, yielding 217 articles for full screening. After full-article screening, 29 studies were selected for inclusion in the review.

Data Extraction. From the selected studies, data extraction included sample size, mean change over baseline, and standard deviation for each control and experimental arm.

Results. The pooled results show a mean HbA1c reduction of 0.64% (95% CI, -0.83 to -0.44), mean SBP reduction of 7.39 mmHg (95% CI, -11.58 to -3.2), mean total cholesterol reduction of 5.74 mg/dL (95% CI, -10.01 to -1.43), and mean LDL cholesterol reduction of 3.74 mg/dL (95% CI, -8.34 to 0.87). Results for HbA1c, SBP, and total cholesterol were statistically significant, while the result for LDL cholesterol was not.

Conclusions. The findings suggest that disease management programs utilizing five hallmarks of care can be effective at improving intermediate outcomes among diabetics. However, given the significant heterogeneity present, there may be fundamental differences with respect to study-specific interventions and populations that render them inappropriate for meta-analysis.
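Pooled estimates such as the HbA1c result above are typically obtained by inverse-variance weighting of study-level mean differences. The sketch below shows the fixed-effect version of that calculation; the three study results are made up for illustration and the review itself, given the heterogeneity it reports, would more plausibly rely on a random-effects model.

```python
# Minimal sketch of fixed-effect inverse-variance pooling of mean differences,
# the kind of calculation behind a pooled estimate with a 95% CI.  Data made up.
import numpy as np

mean_diffs = np.array([-0.50, -0.80, -0.60])   # per-study mean change vs control
std_errs   = np.array([0.15, 0.20, 0.10])      # per-study standard errors

w = 1.0 / std_errs ** 2                        # inverse-variance weights
pooled = np.sum(w * mean_diffs) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled mean difference = {pooled:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```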
Abstract:
Background. A community-wide outbreak of cryptosporidiosis occurred in Dallas County during the summer of 2008. A subset of cases occurring with onset of illness within a 2 week interval was epidemiologically linked to 2 neighborhood interactive water fountain parks. ^ Methods. A case control study was conducted to evaluate risk factors associated with developing illness with cryptosporidiosis from the fountain parks. Cases were selected from a line list from the epidemiological study. The selection for the controls was either healthy family members or a daycare center nearby. Cases and controls were not matched. ^ Results. Interviews were completed for 44 fountain park attendees who met case definition and 54 community controls. Twenty-seven percent (27.3%) of the cases and 13.0% of the controls were between the ages of 0–4 years. Thirty-nine percent (38.6%) of the cases and 24.1% of the controls were between the ages of 5–13 years. Fourteen percent (13.6%) of the cases and 33.3% of the controls were between the ages of 14–31 years. Twenty percent (20.5%) of the cases and 29.6% of the controls were between the ages of 32–63 years. 47.7% of the cases and 42.6% of the controls were males. Fountain park attendees who reported having been splashed in the face with water were 10 times more likely to become ill than controls (OR = 10.0, 95% CI = 2.8–35.1). Persons who reported having swallowed water from the interactive fountains were 34 times more likely to become ill than controls (OR = 34.3, 95%CI = 9.3–125.7). ^ Conclusion. Prompt reporting of cases, identification of outbreak sources, and immediate implementation of remediation measures were critical in curtailing further transmission from these particular sites through the remainder of the season. This investigation underscores the potential for cryptosporidiosis outbreaks to occur in interactive fountain parks, and the need for enhanced preventive measures in these settings. Education of the public regarding avoidance of behaviors such as drinking water from interactive fountains is also an important component of public health prevention efforts. ^
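Odds ratios with Wald confidence intervals, like those reported above, come directly from the 2×2 exposure-by-illness table. The counts in this sketch are illustrative only, not the outbreak data.

```python
# Minimal sketch of an odds ratio with a Wald 95% CI from a 2x2 table
# (exposed/unexposed by case/control).  The counts are made up.
import math

a, b = 30, 14     # exposed cases, unexposed cases
c, d = 10, 44     # exposed controls, unexposed controls

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI = {lo:.1f}-{hi:.1f}")
```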
Abstract:
The Advisory Committee on Immunization Practices (ACIP) develops written recommendations for the routine administration of vaccines to children and adults in the U.S. civilian population, and it is the only entity in the federal government that makes such recommendations. ACIP elaborates on the selection of its members and addresses concerns regarding its integrity, but it fails to provide information about the importance of economic analysis in vaccine selection. ACIP recommendations can have large health and economic consequences, and the emphasis on economic evaluation in health is a likely response to severe pressures on federal and state health budgets. This study describes the economic aspects considered by the ACIP when sanctioning a vaccine and reviews the economic evaluations (our economic data) provided for vaccine deliberations. A five-year study period from 2004 to 2009 was adopted, using publicly available data from the ACIP web database. The checklist of Drummond et al. (2005) served as a guide to assess the quality of the economic evaluations presented. Because that checklist is comprehensive, it is unrealistic to expect every ACIP deliberation to meet all of its criteria; for practical purposes we selected seven criteria from Drummond et al. that we judged to be significant. Twenty-four data points were obtained over the five-year period. Our results show that of the twenty-four data points (economic evaluations), only five received a score of six, that is, six of the seven items were met. None of the data points received a perfect score of seven. Seven of the twenty-four data points received a score of five, and only one economic analysis received the minimum score of two. The type-of-economic-evaluation, model, and ICER/QALY criteria were each met at a rate of 0.875 (87.5%), the highest rates among the seven criteria studied. The perspective criterion was met at 0.583 (58.3%), followed by the source and sensitivity analysis criteria, both at 0.541 (54.1%). The discounting criterion was met at 0.250 (25.0%).

Economic analysis is not a novel concept to the ACIP; it has been practiced and presented at these meetings on a regular basis for more than five years. ACIP's stated goal is to use good-quality epidemiologic, clinical, and economic analyses to help policy makers choose among the alternatives presented and thus reach better-informed decisions. As seen in our study, the economic analyses over the years are inconsistent. The large variability, coupled with the lack of a standardized format, may compromise the utility of the economic information for decision-making. When making recommendations, the ACIP takes into account all available information about a vaccine, so it is vital that standardized, high-quality economic information be provided at ACIP meetings. Our study may provide a call for the ACIP to further investigate deficiencies within the system and thereby improve the economic evaluation data presented.
Abstract:
The association between Social Support, Health Status, and Health Services Utilization of the elderly was explored through analysis of data from the Supplement on Aging to the National Health Interview Survey, 1984 (N = 11,497), using a modified framework of Aday and Andersen's Expanded Behavioral Model. The results suggested that Social Support, as operationalized in this study, was an independent determinant of the use of health services. The quantity of social activities and the use of community services were the two most consistent determinants across different types of health services use.

The effects of social support on the use of health services were broken down into three components to facilitate explanation of the mechanisms through which social support operated. The Predisposing and Enabling components of Social Support had independent, although not uniform, effects on the use of health services. Only slight substitution effects of social support were detected; these included the substitution of the use of senior centers for longer hospital stays and the substitution of help with IADL problems for the use of formal home care services.

The effect of financial support on the use of health services was found to differ between middle- and low-income populations. This differential effect was also found for the presence of intimate networks, the frequency of interaction with children, and the perceived availability of support among urban/rural, male/female, and white/non-white subgroups.

The study also suggested that the selection of appropriate Health Status measures should be based on the type of Health Services Utilization in which a researcher is interested. The levels of physical function limitation and role activity limitation were the two most consistent predictors of the volume of physician visits, the number of hospital days, and the average length of stay in the hospital during the past year.

Some alternative hypotheses were also raised and evaluated where possible. The impacts of the complex sample design, the reliability and validity of the measures, and other limitations of this analysis were also discussed. Finally, a revised framework was proposed and discussed based on the analysis, and some policy implications and suggestions for future study were presented.
Abstract:
Body fat distribution is a cardiovascular health risk factor in adults. Body fat distribution can be measured by various methods, including anthropometry, but it is not clear which anthropometric index is suitable for epidemiologic studies of fat distribution and cardiovascular disease. The purpose of the present study was to select, from among a series of indices (those traditionally used in the literature and others constructed from the analysis), a measure of body fat distribution that is most highly correlated with lipid-related variables and is independent of overall fatness. Subjects were Mexican-American men and women (N = 1004) from a study of gallbladder disease in Starr County, Texas. Multivariate associations were sought between lipid profile measures (lipids, lipoproteins, and apolipoproteins) and two sets of anthropometric variables (4 circumferences and 6 skinfolds), both to assess the association between lipid-related measures and the two sets of anthropometric variables and to guide the construction of indices.

Two indices emerged from the analysis that were highly correlated with lipid profile measures independent of obesity: 2 × arm circumference − thigh skinfold in pre- and post-menopausal women, and the arm/thigh circumference ratio in men. Next, using the sum of all skinfolds to represent obesity and the selected body fat distribution indices, the following hypotheses were tested: (1) the state of obesity and centrally/upper-body distributed fat are equally predictive of lipids, lipoproteins, and apolipoproteins, and (2) the correlation among the lipid-related measures is not altered by obesity and body fat distribution.

With respect to the first hypothesis, the present study found that most lipids, lipoproteins, and apolipoproteins were significantly associated with both overall fatness and the anatomical location of body fat in both sex and menopausal groups. However, within men and post-menopausal women, certain lipid profile measures (triglyceride and HDLT among post-menopausal women, and apos C-II, C-III, and E among men) had substantially higher correlations with body fat distribution than with overall fatness.

With respect to the second hypothesis, both obesity and body fat distribution were found to alter the associations among plasma lipid variables in men and women. The data suggested that the patterns of correlations among men and post-menopausal women are more comparable. Among men, correlations involving apo A-I, HDLT, and HDL2 seemed greatly influenced by obesity, and those involving apo A-II by fat distribution; among post-menopausal women, correlations involving apos A-I and A-II were highly affected by the location of body fat.

Thus, these data point out that obesity and fat distribution can not only affect the levels of single measures but also markedly influence the pattern of relationships among measures. The fact that such changes are seen for both obesity and fat distribution is significant, since the indices employed were chosen because they were independent of one another.
Abstract:
When choosing among models to describe categorical data, the need to consider interactions makes selection more difficult. With just four variables, considering all interactions, there are 166 different hierarchical models and many more non-hierarchical models. Two procedures have been developed for categorical data that produce the "best" subset or subsets of each model size, where size refers to the number of effects in the model. Both procedures are patterned after the Leaps and Bounds approach used by Furnival and Wilson for continuous data and do not generally require fitting all models. For hierarchical models, likelihood ratio statistics (G²) are computed using iterative proportional fitting, and "best" is determined by comparing, among models with the same number of effects, Pr(χ²_k ≥ G²_ij), where k is the degrees of freedom for the ith model of size j. To fit non-hierarchical as well as hierarchical models, a weighted least squares procedure has been developed.

The procedures are applied to published occupational data on the occurrence of byssinosis, and the results are compared to previously published analyses of the same data. The procedures are also applied to published data on symptoms in psychiatric patients and again compared to previously published analyses.

These procedures will make categorical data analysis more accessible to researchers who are not statisticians. They should also encourage more complex exploratory analyses of epidemiologic data and contribute to the development of new hypotheses for study.
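The screening quantity used above is the likelihood-ratio statistic G² = 2 Σ obs · ln(obs/exp) for a fitted model, compared against a chi-square distribution with the model's residual degrees of freedom. The sketch below computes it for a toy table; the observed counts, expected counts, and degrees of freedom are illustrative, not from the byssinosis or psychiatric data.

```python
# Minimal sketch of the likelihood-ratio statistic G^2 and its tail probability
# Pr(chi^2_k >= G^2) for a fitted log-linear model.  All numbers are made up;
# in practice the expected counts come from iterative proportional fitting.
import numpy as np
from scipy.stats import chi2

observed = np.array([25, 15, 10, 50])
expected = np.array([22.0, 18.0, 12.0, 48.0])    # from some fitted model
df = 1                                            # residual degrees of freedom

g2 = 2.0 * np.sum(observed * np.log(observed / expected))
p = chi2.sf(g2, df)
print(f"G^2 = {g2:.3f}, Pr(chi^2_{df} >= G^2) = {p:.3f}")
```

Ranking candidate models of the same size by this tail probability is what lets the Leaps-and-Bounds-style search keep only the "best" subsets without fitting every possible model.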