19 results for Asymptotic Variance, Bayesian Models, Burn-in, Ergodic Average, Ising Model
in DigitalCommons@The Texas Medical Center
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of disease. More advanced statistical models are needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capability and novel statistical developments, Bayesian methods have been widely applied in genetics and genomics research and have demonstrated advantages over standard approaches in several areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can be used to full advantage. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving the Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing the Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending two Bayesian statistical methods developed for gene-environment interaction studies to other related types of studies, such as adaptive borrowing of historical data. We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis), and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in a linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which require that both or at least one, respectively, of the main effects of interacting factors be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint and observe that the hierarchical models perform better in most of the situations considered. The proposed models are applied to real data on gene-environment interactions from lung cancer and cutaneous melanoma case-control studies. A further advantage of the Bayesian statistical models is that useful prior information can be incorporated into the modeling process.
Moreover, the Bayesian mixture model outperforms the multivariate logistic model in parameter estimation and variable selection in most cases. Our proposed models impose the hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while still recovering the reported associations. This is practically appealing for studies that investigate causal factors among a moderate number of candidate genetic and environmental factors together with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting true main effects and interactions compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with both the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power for detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (the Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in that they balance statistical efficiency and bias within a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
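To make the hierarchy constraint concrete, the sketch below (Python; an illustrative greedy stochastic search scored by BIC, not the dissertation's Bayesian mixture implementation, and names such as respects_hierarchy and search are assumptions) allows an interaction indicator to be switched on only when both ('strong') or at least one ('weak') of its parent main-effect indicators is on.

```python
# Minimal sketch of hierarchical variable selection; illustrative, not the
# dissertation's method. Interaction indicators must respect a 'strong' or
# 'weak' hierarchy with respect to their parent main-effect indicators.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def respects_hierarchy(g_main, g_int, pairs, hierarchy="strong"):
    """Check whether interaction indicators obey the hierarchy constraint."""
    for idx, (j, k) in enumerate(pairs):
        if g_int[idx]:
            if hierarchy == "strong" and not (g_main[j] and g_main[k]):
                return False
            if hierarchy == "weak" and not (g_main[j] or g_main[k]):
                return False
    return True

def bic(y, X_sub):
    """BIC of an ordinary least-squares fit; stands in for a marginal likelihood."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X_sub]) if X_sub.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    return n * np.log(rss / n) + X1.shape[1] * np.log(n)

def search(y, X_main, X_int, pairs, hierarchy="strong", iters=2000):
    p, q = X_main.shape[1], X_int.shape[1]
    g_main, g_int = np.zeros(p, bool), np.zeros(q, bool)
    best = (np.inf, g_main.copy(), g_int.copy())
    for _ in range(iters):
        gm, gi = g_main.copy(), g_int.copy()
        # propose flipping one random indicator
        if rng.random() < 0.5:
            gm[rng.integers(p)] ^= True
        else:
            gi[rng.integers(q)] ^= True
        if not respects_hierarchy(gm, gi, pairs, hierarchy):
            continue  # reject proposals that violate the hierarchy
        X_sub = np.column_stack([X_main[:, gm], X_int[:, gi]])
        score = bic(y, X_sub)
        if score < best[0]:
            best, g_main, g_int = (score, gm, gi), gm, gi
    return best

# Toy data: the x0*x1 interaction is real, so its parent main effects are kept.
n = 300
X_main = rng.normal(size=(n, 4))
pairs = list(itertools.combinations(range(4), 2))
X_int = np.column_stack([X_main[:, j] * X_main[:, k] for j, k in pairs])
y = X_main[:, 0] + X_main[:, 1] + 1.5 * X_int[:, 0] + rng.normal(size=n)
print(search(y, X_main, X_int, pairs, hierarchy="strong"))
```

On toy data of this kind, the constrained search can only select the x0*x1 interaction together with its parent main effects, which is the kind of false-positive control the hierarchical constraint is intended to provide.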
Abstract:
There are two practical challenges in phase I clinical trial conduct: lack of transparency to physicians, and late-onset toxicity. In my dissertation, Bayesian approaches are used to address these two problems in clinical trial designs. The proposed simple optimal designs cast the dose-finding problem as a decision-making process for dose escalation and de-escalation, and minimize the incorrect-decision error rate in finding the maximum tolerated dose (MTD). For the late-onset toxicity problem, a Bayesian adaptive dose-finding design for drug combinations is proposed. The dose-toxicity relationship is modeled using the Finney model. The unobserved delayed toxicity outcomes are treated as missing data, and Bayesian data augmentation is employed to handle the resulting missingness. Extensive simulation studies have been conducted to examine the operating characteristics of the proposed designs and demonstrate their good performance in various practical scenarios.
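The data-augmentation step can be sketched as follows (Python; a deliberately simplified single-agent beta-binomial version under a uniform time-to-toxicity assumption, not the dissertation's Finney-model combination design; names such as augmented_posterior and followup_frac are illustrative): pending patients' final toxicity outcomes are imputed from the current toxicity-rate estimate, and the rate is then redrawn from its conjugate posterior given the completed data.

```python
# Minimal sketch of Bayesian data augmentation for delayed binary toxicity;
# assumptions only, not the dissertation's design.
import numpy as np

rng = np.random.default_rng(1)

def augmented_posterior(tox, followup_frac, a0=0.5, b0=0.5, iters=3000):
    """Draw from p(toxicity rate | data) with pending outcomes imputed.

    tox[i] is 1 if toxicity observed, 0 otherwise; followup_frac[i] is the
    fraction of the assessment window completed (1.0 = fully observed).
    """
    tox = np.asarray(tox, float)
    f = np.asarray(followup_frac, float)
    pending = (tox == 0) & (f < 1.0)       # outcome not yet determined
    p = 0.3                                 # initial guess for toxicity rate
    draws = []
    for _ in range(iters):
        # Imputation step: P(toxicity eventually | none seen by fraction f)
        # under a uniform time-to-toxicity assumption.
        cond = p * (1 - f[pending]) / (1 - p * f[pending])
        z = tox.copy()
        z[pending] = rng.random(pending.sum()) < cond
        # Posterior step: conjugate Beta update given the completed data.
        p = rng.beta(a0 + z.sum(), b0 + len(z) - z.sum())
        draws.append(p)
    return np.array(draws[iters // 2:])     # discard burn-in

# Example: 12 patients, 2 observed toxicities, several with partial follow-up.
tox = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
followup = [1, 1, 1, 1, 1, 0.8, 0.6, 0.5, 0.5, 0.3, 0.2, 0.1]
post = augmented_posterior(tox, followup)
print(round(post.mean(), 3), np.round(np.percentile(post, [2.5, 97.5]), 3))
```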
Abstract:
Corynebacterium diphtheriae is the causative agent of cutaneous and pharyngeal diphtheria in humans. While lethality is certainly caused by diphtheria toxin, corynebacterial colonization may primarily require proteinaceous fibers called pili, which mediate adherence to specific tissues. The type strain of C. diphtheriae possesses three distinct pilus structures, the SpaA-, SpaD-, and SpaH-type pili, which are encoded by three distinct pilus gene clusters. The pilus is assembled onto the bacterial peptidoglycan by a specific transpeptidase enzyme called sortase. Although SpaA pili have been shown to be specific for pharyngeal cells in vitro, little is known about the functions of the three pili in bacterial pathogenesis, mainly because of the lack of in vivo models of corynebacterial infection. Because mice do not have functional receptors for diphtheria toxin, in this study I use Caenorhabditis elegans as an alternative model host for C. diphtheriae. A simple C. elegans model would be useful for determining the specific role of each pilus type, and the literature suggests that C. elegans infection models can be used to study a variety of bacterial species, giving insight into bacterial virulence and host-pathogen interactions. My study examines the hypothesis that pili and toxin are major virulence determinants of C. diphtheriae in the C. elegans model host.
Abstract:
In the field of chemical carcinogenesis, the use of animal models has proved to be a useful tool for dissecting the multistage process of tumor formation. In this regard, the outbred SENCAR mouse has been the strain of choice for the analysis of skin carcinogenesis, given its high sensitivity to the chemically induced acquisition of premalignant lesions, papillomas, and the later progression of these lesions into squamous cell carcinomas (SCC). The derivation of an inbred strain from the SENCAR stock, called SSIN, which despite a high sensitivity to the development of papillomas lacks the ability to transform these premalignant lesions into SCC, suggested that tumor promotion and progression are under the genetic control of different sets of genes. In the present study, the nature of susceptibility to tumor progression was investigated. Analysis of F1 hybrids between outbred SENCAR and SSIN mice suggested that there is at least one dominant gene responsible for susceptibility to tumor progression. Later development of another inbred strain from the outbred SENCAR stock that is sensitive to both tumor promotion and progression allowed the formulation of a more accurate genetic model. Using this newly derived line, SENCAR B/Pt, and SSIN, it was determined that there is one dominant tumor-progression susceptibility gene. Linkage analysis showed that this gene maps to mouse chromosome 14, and it was possible to narrow the region to a 16 cM interval. To better characterize the nature of the progression-susceptibility differences between these two strains, their proliferative patterns were investigated. SENCAR B/Pt mice were found to have an enlarged proliferative compartment with overexpression of cyclin D1, p16 and p21. Further studies showed aberrant overexpression of TGF-β in the susceptible strain, an increase in apoptosis, p53 protein accumulation, and early loss of connexin 26. Taken together, these results suggest that papillomas in SENCAR B/Pt mice have higher proliferation and may have increased genomic instability; these two factors would contribute to a higher sensitivity to tumor progression.
Abstract:
Lung cancer is a devastating disease with very poor prognosis. The design of better treatments for patients would be greatly aided by mouse models that closely resemble the human disease. The most common type of human lung cancer is adenocarcinoma, which frequently metastasizes. Unfortunately, current models for this tumor are inadequate because they lack metastasis. Based on the molecular findings in human lung cancer and the metastatic potential of osteosarcomas in mutant p53 mouse models, I hypothesized that mice with both K-ras and p53 missense mutations might develop metastatic lung adenocarcinomas. Therefore, I incorporated both K-rasLA1 and p53R172HΔg alleles into mouse lung cells to establish a more faithful model of human lung adenocarcinoma for translational and mechanistic studies. Mice with both mutations (K-rasLA1/+ p53R172HΔg/+) developed advanced lung adenocarcinomas with histopathology similar to human tumors. These lung adenocarcinomas were highly aggressive and metastasized to multiple intrathoracic and extrathoracic sites in a pattern similar to that seen in lung cancer patients. This mouse model also showed gender differences in cancer-related death, and 23.2% of the mice developed pleural mesotheliomas. In a preclinical study, the new drug erlotinib (Tarceva) decreased the number and size of lung lesions in this model. These data demonstrate that this mouse model most closely mimics human metastatic lung adenocarcinoma and provides an invaluable system for translational studies. To screen for genes important for metastasis, gene expression profiles of primary lung adenocarcinomas and metastases were analyzed. Microarray data showed that the two groups segregated by gene expression and that 79 genes were highly differentially expressed (more than 2.5-fold change and p<0.001). Microarray results for Bub1b, Vimentin and CCAM1 were validated in tumors by quantitative real-time PCR (QPCR). Bub1b, a mitotic checkpoint gene, was overexpressed in metastases, and this correlated with more chromosomal abnormalities in metastatic cells. Vimentin, a marker of epithelial-mesenchymal transition (EMT), was also highly expressed in metastases. Interestingly, Twist, a key EMT inducer, was also highly upregulated in metastases by QPCR, and this correlated significantly with the overexpression of Vimentin in the same tumors. These data suggest that EMT occurs in lung adenocarcinomas and is a key mechanism for the development of metastasis in K-rasLA1/+ p53R172HΔg/+ mice. Thus, this mouse model provides a unique system to further probe the molecular basis of metastatic lung cancer.
Abstract:
Background. Retail clinics, also called convenience care clinics, have become a rapidly growing trend since their initial development in 2000. These clinics operate within a larger retail operation and are generally located in "big-box" discount stores such as Wal-Mart or Target, grocery stores such as Publix or H-E-B, or retail pharmacies such as CVS or Walgreens (Deloitte Center for Health Solutions, 2008). Care is typically provided by nurse practitioners (NPs). Research indicates that this new health care delivery system reduces cost, raises quality, and provides a means of access for the uninsured population (e.g., Deloitte Center for Health Solutions, 2008; Convenient Care Association, 2008a, 2008b, 2008c; Hansen-Turton, Miller, Nash, Ryan, & Counts, 2007; Salinsky, 2009; Scott, 2006; Ahmed & Fincham, 2010). Some healthcare analysts even suggest that retail clinics offer a feasible solution to the shortage of primary care physicians facing the nation (AHRQ Health Care Innovations Exchange, 2010). The development and performance of retail clinics are heavily dependent upon individual state policies regulating NPs. Texas currently has one of the most highly regulated practice environments for NPs (Stout & Elton, 2007; Hammonds, 2008). In September 2009, Texas passed Senate Bill 532, addressing the scope of practice of nurse practitioners in the convenience care model. In comparison to other states, this law still heavily regulates nurse practitioners. However, little research has been conducted to evaluate the impact of state laws regulating nurse practitioners on the development and performance of retail clinics. Objectives. (1) To describe the potential impact of SB 532 on retail clinic performance. (2) To discuss the effectiveness, efficiency, and equity of the convenience care model. (3) To describe possible alternatives to Texas' nurse practitioner scope-of-practice guidelines as delineated in Texas Senate Bill 532. (4) To describe the type of nurse practitioner state regulation (i.e., independent, light, moderate, or heavy) that best promotes the convenience care model. Methods. State regulations governing nurse practitioners can be characterized as independent, light, moderate, or heavy. Four state NP regulatory types and retail clinic performance were compared and contrasted with Texas regulations using Dunn and Aday's theoretical models for conducting policy analysis and evaluating healthcare systems. Criteria for measurement included effectiveness, efficiency, and equity. Comparison states were Arizona (independent), Minnesota (light), Massachusetts (moderate), and Florida (heavy). Results. A comparative state analysis of Texas SB 532 and alternative NP scope-of-practice guidelines among the four states (Arizona, Florida, Massachusetts, and Minnesota) indicated that SB 532 has minimal potential to affect the shortage of primary care providers in the state. Although SB 532 may increase the number of NPs a physician may supervise, NPs are still heavily restricted in their scope of practice and limited in their ability to act as primary care providers. Arizona's example of independent NP practice provided the best alternative for addressing the shortage of PCPs in Texas, as evidenced by a lower uninsured rate and fewer ED visits per 1,000 population. A survey of comparison states suggests that, with the exception of Arizona, retail clinics thrive in states that more heavily restrict NP scope of practice rather than in states that are more permissive.
An analysis of the effectiveness, efficiency, and equity of the convenience care model indicates that retail clinics perform well on effectiveness and efficiency but fall short on equity. Conclusion. Texas Senate Bill 532 represents an incremental step toward addressing the shortage of PCPs in the state. A comparative policy analysis of the four other states, with their varying degrees of NP scope of practice, indicates that a more aggressive policy allowing independent NP practice will be needed to achieve positive changes in health outcomes. Retail clinics pose a temporary solution to the shortage of PCPs and will need to expand their locations into poorer regions and incorporate some chronic care in order to achieve measurable health outcomes.
Abstract:
Background. The United Nations' Millennium Development Goal (MDG) 4 aims for a two-thirds reduction in death rates for children under the age of five by 2015. The greatest risk of death is in the first week of life, yet most of these deaths can be prevented by such simple interventions as improved hygiene, exclusive breastfeeding, and thermal care. Deaths in the first month of life make up 28% of all deaths of Nigerian children under five, a statistic that has remained unchanged despite various child health policies. This paper addresses the challenges of reducing the neonatal mortality rate in Nigeria by examining the literature on the efficacy of home-based newborn care interventions and on policies that have been implemented successfully in India. Methods. I compared similarities and differences between India and Nigeria using qualitative descriptions and available quantitative data on various health indicators. The analysis included identifying policy-related factors and community approaches contributing to India's newborn survival rates. Databases and reference lists of articles were searched for randomized controlled trials of community health worker interventions shown to reduce neonatal mortality rates. Results. Although Nigeria appears to spend more on health than India both per capita ($136 vs. $132) and as a percentage of GDP (5.8% vs. 4.2%), it still lags behind India in its neonatal, infant, and under-five mortality rates (40 vs. 32, 88 vs. 48, and 143 vs. 63 deaths per 1,000 live births, respectively). Both countries have comparably low numbers of healthcare providers. Unlike their counterparts in Nigeria, Indian community health workers receive training on how to deliver postnatal care in the home setting and are monetarily compensated. Gender-related power differences still play a role in the societal structure of both countries. A search for randomized controlled trials of home-based newborn care strategies yielded three relevant articles. Community health workers trained to educate mothers and provide a preventive package of interventions involving clean cord care, thermal care, breastfeeding promotion, and danger-sign recognition during multiple postnatal visits reduced neonatal mortality rates by 54%, 34%, and 15-20% in rural India, Bangladesh, and Pakistan, respectively. Conclusion. Access to advanced technology is not necessary to reduce neonatal mortality rates in resource-limited countries. To address the urgency of neonatal mortality, countries with weak health systems need to start at the community level and invest in cost-effective, evidence-based newborn care interventions that utilize available human resources. While more randomized controlled studies are urgently needed, the current evidence on models of postnatal care provision demonstrates that home-based care and health education provided by community health workers can reduce neonatal mortality rates in the immediate future.
Abstract:
Developing a Model: Interruption is a known human factor that contributes to errors and catastrophic events in healthcare as well as in other high-risk industries. The landmark Institute of Medicine (IOM) report, To Err is Human, brought attention to the significance of preventable errors in medicine and suggested that interruptions could be a contributing factor. Previous studies of interruptions in healthcare did not offer a conceptual model by which to study them. Given the serious consequences of interruptions documented in other high-risk industries, there is a need for a model to describe, understand, explain, and predict interruptions and their consequences in healthcare. Therefore, the purpose of this study was to develop a model grounded in the literature and to use the model to describe and explain interruptions in healthcare, specifically interruptions occurring in a Level One Trauma Center. A trauma center was chosen because this environment is characterized as intense, unpredictable, and interrupt-driven. The first step in developing the model was a review of the literature, which revealed that the concept of interruption did not have a consistent definition in either the healthcare or non-healthcare literature. Walker and Avant's method of concept analysis was used to clarify and define the concept. The analysis led to the identification of five defining attributes: (1) a human experience, (2) an intrusion of a secondary, unplanned, and unexpected task, (3) discontinuity, (4) externally or internally initiated, and (5) situated within a context. Before an interruption can commence, however, five conditions known as antecedents must occur: (1) an intent to interrupt is formed by the initiator, (2) a physical signal passes a threshold test of detection by the recipient, (3) the sensory system of the recipient is stimulated to respond to the initiator, (4) an interruption task is presented to the recipient, and (5) the interruption task is either accepted or rejected by the recipient. An interruption was determined to be quantifiable by (1) the frequency of occurrence of interruptions, (2) the number of times the primary task is suspended to perform an interrupting task, (3) the length of time the primary task is suspended, and (4) the frequency of returning or not returning to the primary task. As a result of the concept analysis, a definition of an interruption was derived from the literature: an interruption is a break in the performance of a human activity, initiated internally or externally to the recipient and occurring within the context of a setting or location; this break results in the suspension of the initial task through the initiation of an unplanned task, with the assumption that the initial task will be resumed. The definition encompasses all the defining attributes of an interruption and is a standard definition that can be used by the healthcare industry. From the definition, a visual model of an interruption was developed. The model was used to describe and explain the interruptions recorded in an instrumental case study of physicians and registered nurses (RNs) working in a Level One Trauma Center. Five physicians were observed for a total of 29 hours, 31 minutes. Eight registered nurses were observed for a total of 40 hours, 9 minutes.
Observations were made on either the 0700-1500 or the 1500-2300 shift using the shadowing technique and were recorded as field notes. The field notes were analyzed by a hybrid method of categorizing activities and interruptions, developed by combining a deductive a priori classification framework with the inductive process of line-by-line coding and constant comparison as described in Grounded Theory. The following categories were identified as relevant to this study:
Intended Recipient - the person to be interrupted
Unintended Recipient - not the intended recipient of an interruption; e.g., receiving a phone call that was incorrectly dialed
Indirect Recipient - the incidental recipient of an interruption; e.g., talking with another person, thereby suspending the original activity
Recipient Blocked - the intended recipient does not accept the interruption
Recipient Delayed - the intended recipient postpones an interruption
Self-interruption - a person, independent of another person, suspends one activity to perform another; e.g., while walking, stops abruptly and talks to another person
Distraction - briefly disengaging from a task
Organizational Design - the physical layout of the workspace that causes a disruption in workflow
Artifacts Not Available - supplies and equipment that are not available in the workspace, causing a disruption in workflow
Initiator - a person who initiates an interruption
Interruption by Organizational Design and Artifacts Not Available were identified as two new categories of interruption that had not previously been cited in the literature. Analysis of the observations indicated that physicians performed slightly fewer activities per hour than RNs, a variance that may be attributed to differing roles and responsibilities. Physicians had more of their activities interrupted than RNs; however, RNs experienced more interruptions per hour. Other people were the most common medium through which an interruption was delivered; additional media included the telephone, pager, and one's self. Both physicians and RNs were observed to resume an original interrupted activity more often than not, and in most cases they performed only one or two interrupting activities before returning to the original interrupted activity. In conclusion, the model was found to explain all interruptions observed during the study. However, the model will require a more comprehensive study to establish its predictive value.
Abstract:
The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who conduct HLGM analyses are mostly interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in HLGM only tells us whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey the magnitude of the difference between the group trajectories at a specific time point. Reporting and interpreting effect sizes has therefore received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for comparing group trajectories, along with their corresponding confidence intervals, in HLGM analyses, because there are no appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories in HLGM, and no computing packages in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions for assessing the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes. We then apply the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compare three methods of constructing confidence intervals around d and du, and recommend the best one for application. Finally, we construct 95% confidence intervals with the suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicate that, even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter. We suggest the noncentral t-distribution-based method when the assumptions hold, and the bootstrap bias-corrected and accelerated method when they do not.
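As a rough illustration of the quantity involved (Python; a simplified ordinary-least-squares version with a percentile bootstrap over subjects, rather than the mixed-model estimators and noncentral-t or BCa intervals discussed above; names such as trajectory_effect_size are illustrative), the sketch below computes a standardized difference between two fitted group trajectories at a chosen time point, with an interval around it.

```python
# Minimal sketch of a trajectory-difference effect size at time t0; illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def trajectory_effect_size(time, y, group, t0):
    """Standardized difference between the two groups' fitted trajectories at time t0."""
    preds, resid_var, ns = [], [], []
    for g in (0, 1):
        m = group == g
        X = np.column_stack([np.ones(m.sum()), time[m]])
        beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
        preds.append(beta[0] + beta[1] * t0)
        resid = y[m] - X @ beta
        resid_var.append(resid.var(ddof=2))
        ns.append(m.sum())
    pooled_sd = np.sqrt(((ns[0] - 2) * resid_var[0] + (ns[1] - 2) * resid_var[1])
                        / (ns[0] + ns[1] - 4))
    return (preds[1] - preds[0]) / pooled_sd

def bootstrap_ci(time, y, group, ids, t0, B=1000, alpha=0.05):
    """Percentile bootstrap resampling whole subjects (keeps within-subject correlation)."""
    uids = np.unique(ids)
    stats = []
    for _ in range(B):
        chosen = rng.choice(uids, size=len(uids), replace=True)
        m = np.concatenate([np.flatnonzero(ids == u) for u in chosen])
        stats.append(trajectory_effect_size(time[m], y[m], group[m], t0))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy longitudinal data: 40 subjects, 5 visits each, treated group grows faster.
n_sub, visits = 40, np.arange(5.0)
ids = np.repeat(np.arange(n_sub), len(visits))
group = np.repeat((np.arange(n_sub) >= n_sub // 2).astype(int), len(visits))
time = np.tile(visits, n_sub)
y = rng.normal(10, 1, n_sub)[ids] + (0.5 + 0.4 * group) * time + rng.normal(0, 1, len(ids))

d = trajectory_effect_size(time, y, group, t0=4.0)
print(round(d, 2), np.round(bootstrap_ci(time, y, group, ids, t0=4.0), 2))
```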
Neocortical hyperexcitability defect in a mutant mouse model of spike-wave epilepsy, stargazer
Abstract:
Single-locus mutations in mice can express epileptic phenotypes and provide critical insights into the naturally occurring defects that alter excitability and mediate synchronization in the central nervous system (CNS). One such recessive mutation (on chromosome (Chr) 15), stargazer (stg/stg), expresses frequent bilateral 6-7 cycles per second (c/sec) spike-wave seizures associated with behavioral arrest, and provides a valuable opportunity to examine the inherited lesion associated with spike-wave synchronization. The existence of distinct and heterogeneous defects mediating spike-wave discharge (SWD) generation has been demonstrated by the presence of multiple genetic loci expressing generalized spike-wave activity and by the differential effects of pharmacological agents on SWDs in different spike-wave epilepsy models. Attempts to understand the basic mechanisms underlying spike-wave synchronization have focused on γ-aminobutyric acid (GABA) receptor-, low-threshold T-type Ca2+ channel-, and N-methyl-D-aspartate receptor (NMDA-R)-mediated transmission. It is believed that defects in these modes of transmission can mediate the conversion of normal oscillations in a trisynaptic circuit, which includes the neocortex, reticular nucleus, and thalamus, into spike-wave activity. However, the underlying lesions involved in spike-wave synchronization have not been clearly identified. The purpose of this research project was to locate and characterize a distinct neuronal hyperexcitability defect favoring spike-wave synchronization in the stargazer brain. One experimental approach for anatomically locating areas of synchronization and hyperexcitability was an attempt to map patterns of hypersynchronous activity with antibodies to activity-induced proteins. A second approach to characterizing the neuronal defect was to examine neuronal responses in the mutant following application of pharmacological agents with well-known sites of action. To test the hypothesis that an NMDA receptor-mediated hyperexcitability defect exists in stargazer neocortex, extracellular field recordings were used to examine the effects of CPP and MK-801 on coronal neocortical brain slices of stargazer and wild type perfused with 0 Mg2+ artificial cerebrospinal fluid (aCSF). To study how NMDA receptor antagonists might promote increased excitability in stargazer neocortex, two basic hypotheses were tested: (1) NMDA receptor antagonists directly activate deep-layer principal pyramidal cells in the neocortex of stargazer, presumably by opening NMDA receptor channels altered by the stg mutation; and (2) NMDA receptor antagonists disinhibit the neocortical network by blocking recurrent excitatory synaptic inputs onto inhibitory interneurons in the deep layers of stargazer neocortex. To test whether CPP might disinhibit the 0 Mg2+ bursting network in the mutant by acting on inhibitory interneurons, inhibitory inputs were pharmacologically removed by applying GABA receptor antagonists to the cortical network, and the effects of CPP under 0 Mg2+ aCSF perfusion in layer V of stg/stg were then compared with those found in +/+ neocortex using in vitro extracellular field recordings. (Abstract shortened by UMI.)
Abstract:
Background: Despite effective solutions for reducing teen birth rates, Texas teen birth rates are among the highest in the nation. School districts can influence youth sexual behavior through the implementation of evidence-based programs (EBPs); however, teen pregnancy prevention is a complex and controversial issue for school districts, and very few districts in Texas implement EBPs for pregnancy prevention. Additionally, school districts receive little guidance on the process of finding, adopting, and implementing EBPs. Purpose: The purpose of this report is to present the CHoosing And Maintaining Programs for Sex education in Schools (CHAMPSS) Model, a practical and realistic framework to help districts find, adopt, and implement EBPs. Methods: Model development occurred in four phases using the core processes of Intervention Mapping: 1) knowledge acquisition, 2) knowledge engineering, 3) model representation, and 4) knowledge development. Results: The CHAMPSS Model provides seven steps, tailored for school-based settings, which encompass phases of assessment, preparation, implementation, and maintenance: Prioritize, Assess, Select, Approve, Prepare, Implement, and Maintain. Advocacy and eliciting support for adolescent sexual health are also core elements of the model. Conclusion: This systematic framework may help schools increase the adoption, implementation, and maintenance of EBPs.
Abstract:
Using a human teratocarcinoma cell line, PA-1, the functional roles of the oncogenes and tumor suppressor genes involved in the multistep process of carcinogenesis were analyzed. The expression of AP-2 was strongly correlated with susceptibility to ras transformation. Differential responsiveness to growth factors between stage 1 ras-resistant cells and stage 2 ras-susceptible cells was observed, indicating that the ability of stage 2 cells to respond to mutated ras oncogenes in transformation correlated with the ability to be stimulated by certain growth factors. Using differential screening of cDNA libraries, a number of differentially expressed cDNA clones were isolated. One of these, clone 12, is overexpressed in ras-transformed stage 3 cells. The amino acid sequence of clone 12 is almost identical to that of the growth-regulated mouse LLrep3 gene and 78% similar to a yeast ribosomal protein S4, suggesting that the S4 gene may be involved in the regulation of growth. Clone 9 is expressed in stage 1 ras-resistant cells (3.5-kb and 3.0-kb transcripts), but its expression in stage 2 ras-susceptible cells and stage 3 ras-transformed cells is greatly diminished. Expression of this cDNA clone increased at least five-fold in ras-resistant cells and nontumorigenic hybrids treated with retinoic acid, but not in retinoic acid-treated ras-susceptible cells, ras-transformed cells, or tumorigenic segregants. A partial sequence of this clone showed no homology to sequences in GenBank. These findings suggest that clone 9 could be a suppressor gene, or a gene involved in the biochemical pathway of tumor suppression or neurogenic differentiation. The apparent pleiotropic effect of the loss of this suppressor gene function supports Harris's proposal that tumor suppressor genes regulate differentiation. The tumor suppressor gene may act as a negative regulator of tumor growth by controlling gene expression in differentiation.
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods that provide accurate estimates of the test characteristics of diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal model and a bivariate binomial model, analyze pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation; we also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance, with vague normal priors assigned to the covariate coefficients. The computations were carried out using the BUGS ('Bayesian inference Using Gibbs Sampling') implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies and applied them to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent between the Bayesian and frequentist bivariate normal and binomial models. In the simulation studies, however, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be applied directly to sparse data without ad hoc correction.
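The structure of the bivariate binomial model can be illustrated with a short simulation (Python; this generates data from the model rather than fitting it with BUGS, and all parameter values are made up): each study's logit sensitivity and logit specificity are drawn from a bivariate normal, and the observed true-positive and true-negative counts are binomial within study.

```python
# Minimal simulation of the bivariate binomial structure for diagnostic
# meta-analysis; illustrative parameter values, not the authors' BUGS code.
import numpy as np

rng = np.random.default_rng(3)
expit = lambda x: 1 / (1 + np.exp(-x))

n_studies = 20
mu = np.array([1.5, 2.0])             # mean logit-sensitivity, logit-specificity
sd = np.array([0.5, 0.6])
rho = -0.4                             # negative correlation (threshold effect)
cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

# Study-level true accuracies and observed counts.
logits = rng.multivariate_normal(mu, cov, size=n_studies)
n_dis = rng.integers(30, 200, n_studies)     # diseased subjects per study
n_non = rng.integers(30, 200, n_studies)     # non-diseased subjects per study
tp = rng.binomial(n_dis, expit(logits[:, 0]))
tn = rng.binomial(n_non, expit(logits[:, 1]))

# Crude pooled estimates for comparison with the model-based summary point.
print("pooled sensitivity:", round(tp.sum() / n_dis.sum(), 3))
print("pooled specificity:", round(tn.sum() / n_non.sum(), 3))
print("summary point (from mu):", np.round(expit(mu), 3))
```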
Abstract:
Many public health agencies and researchers are interested in comparing hospital outcomes, for example morbidity, mortality, and hospitalization, across areas and hospitals. However, because rates vary among hospitals for reasons that include several sources of bias, we are interested in controlling for that bias and assessing real differences in clinical practice. In this study, we compared the between-hospital variation in rates of severe intraventricular haemorrhage (IVH) in infants using a frequentist statistical approach versus Bayesian hierarchical models in a simulation study. The template data set for the simulation comprised the counts of severe IVH among preterm infants in 24 intensive care units of the Australian and New Zealand Neonatal Network from 1995 to 1997. We evaluated the severe IVH rates of the 24 hospitals with two Bayesian hierarchical models and compared their performance with shrunken rates obtained from a frequentist method: Gamma-Poisson (BGP) and Beta-Binomial (BBB) models were used in the Bayesian approach, and the shrunken estimator from a Gamma-Poisson (FGP) hierarchical model fitted by maximum likelihood served as the frequentist approach. To simulate data, the total number of infants in each hospital was kept fixed, and the simulated data were analyzed under both the Bayesian and frequentist models with two choices of the true severe IVH rate: the observed rate, and the expected rate obtained by adjusting for five predictor variables in the template data. The bias of the estimated severe IVH rates showed that the Bayesian models gave less variable estimates than the frequentist model. We also compared the results from the three models in terms of 20th-centile rates and the number of avoidable severe IVH cases.
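The shrinkage at the heart of the Gamma-Poisson hierarchy can be sketched directly (Python; an empirical-Bayes illustration with made-up counts, not the study's exact FGP or BGP code): with lambda_i ~ Gamma(a, b) and y_i ~ Poisson(n_i * lambda_i), the posterior-mean rate for hospital i is (a + y_i) / (b + n_i), which pulls small-hospital rates toward the overall mean.

```python
# Minimal empirical-Bayes Gamma-Poisson shrinkage sketch; data are made up.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

y = np.array([ 2,  5,  1,  9,  4,  0,  7,  3,  12,  6])   # severe IVH counts
n = np.array([60, 80, 40, 95, 70, 35, 88, 55, 120, 75])   # infants per hospital

def neg_marginal_loglik(log_ab):
    """Negative log-likelihood of the marginal (negative binomial) distribution."""
    a, b = np.exp(log_ab)            # work on the log scale to keep a, b > 0
    ll = (gammaln(a + y) - gammaln(a) - gammaln(y + 1)
          + a * np.log(b / (b + n)) + y * np.log(n / (b + n)))
    return -ll.sum()

res = minimize(neg_marginal_loglik, x0=np.log([1.0, 10.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)

raw = y / n
shrunk = (a_hat + y) / (b_hat + n)   # posterior-mean (shrunken) rates
for h, (r, s) in enumerate(zip(raw, shrunk)):
    print(f"hospital {h:2d}: raw {r:.3f}  shrunken {s:.3f}")
```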
Abstract:
In regression analysis, covariate measurement error occurs in many applications; the error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We present an approach that applies the Monte Carlo method, in a Bayesian framework, to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study shows that the method produces an estimator that is efficient in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
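The attenuation problem and one simple correction can be illustrated as follows (Python; a regression-calibration-style fix under normal assumptions with a known error variance, standing in for the Bayesian Monte Carlo conditional-expectation estimator described above; all values are made up): regressing the outcome on the surrogate W biases the latent slope toward zero, while substituting an estimate of E[X | W] recovers it.

```python
# Minimal sketch of attenuation from covariate measurement error and a
# regression-calibration-style correction; illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
beta = np.array([1.0, 2.0, -1.0])          # intercept, latent slope, slope for Z

X = rng.normal(0, 1, n)                    # latent covariate
Z = rng.normal(0, 1, n)                    # error-free covariate
W = X + rng.normal(0, 0.8, n)              # surrogate with measurement error
Y = beta[0] + beta[1] * X + beta[2] * Z + rng.normal(0, 1, n)

def ols(y, cols):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    D = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(D, y, rcond=None)[0]

naive = ols(Y, [W, Z])                      # attenuated estimate of the latent slope

# Regression calibration: replace W with an estimate of E[X | W] using the
# reliability ratio var(X) / var(W), here computed from the known error variance.
reliability = (np.var(W) - 0.8 ** 2) / np.var(W)
X_hat = np.mean(W) + reliability * (W - np.mean(W))
corrected = ols(Y, [X_hat, Z])

print("naive slope:    ", round(naive[1], 3))
print("corrected slope:", round(corrected[1], 3))
print("true slope:     ", beta[1])
```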