867 results for Multiple-trait model
Abstract:
DNA methylation at promoter CpG islands (CGI) is an epigenetic modification associated with inappropriate gene silencing in multiple tumor types. In the absence of a human pituitary tumor cell line, small interfering RNA-mediated knockdown of the maintenance methyltransferase DNA methyltransferase (cytosine 5)-1 (Dnmt1) was used in the murine pituitary adenoma cell line AtT-20. Sustained knockdown induced reexpression of the fully methylated and normally imprinted gene neuronatin (Nnat) in a time-dependent manner. Combined bisulfite restriction analysis (COBRA) revealed that reexpression of Nnat was associated with partial CGI demethylation, which was also observed at the H19 differentially methylated region. Subsequent genome-wide microarray analysis identified 91 genes that were significantly differentially expressed in Dnmt1 knockdown cells (10% false discovery rate). The analysis showed that genes associated with the induction of apoptosis, signal transduction, and developmental processes were significantly overrepresented in this list (P < 0.05). Following validation by reverse transcription-PCR and detection of inappropriate CGI methylation by COBRA, four genes (ICAM1, NNAT, RUNX1, and S100A10) were analyzed in primary human pituitary tumors, each displaying significantly reduced mRNA levels relative to normal pituitary (P < 0.05). For two of these genes, NNAT and S100A10, decreased expression was associated with increased promoter CGI methylation. Induced expression of Nnat in stable transfected AtT-20 cells inhibited cell proliferation. To our knowledge, this is the first report of array-based "epigenetic unmasking" in combination with Dnmt1 knockdown and reveals the potential of this strategy toward identifying genes silenced by epigenetic mechanisms across species boundaries.
Abstract:
The world of classical ballet exerts considerable physical and psychological stress upon those who participate, and yet the process of coping with such stressors is not well understood. The purpose of the present investigation was to examine relationships between coping strategies and competitive trait anxiety among ballet dancers. Participants were 104 classical dancers (81 females and 23 males) ranging in age from 15 to 35 years (M = 19.4 yr., SD = 3.8 yr.) from three professional ballet companies, two private dance schools, and two full-time, university dance courses in Australia. Participants had a mean of 11.5 years of classical dance training (SD = 5.2 yr.), having started dance training at 6.6 years of age (SD = 3.4 yr.). Coping strategies were assessed using the Modified COPE scale (MCOPE: Crocker & Graham, 1995), a 48-item measure comprising 12 coping subscales (Seeking Social Support for Instrumental Reasons, Seeking Social Support for Emotional Reasons, Behavioral Disengagement, Planning, Suppression of Competing Activities, Venting of Emotions, Humor, Active Coping, Denial, Self-Blame, Effort, and Wishful Thinking). Competitive trait anxiety was assessed using the Sport Anxiety Scale (SAS: Smith, Smoll, & Schutz, 1990), a 21-item measure comprising three anxiety subscales (Somatic Anxiety, Worry, Concentration Disruption). Standard multiple regression analyses showed that trait anxiety scores, in particular for Somatic Anxiety and Worry, were significant predictors of seven of the 12 coping strategies (Suppression of Competing Activities: R2 = 27.1%; Venting of Emotions: R2 = 23.2%; Active Coping: R2 = 14.3%; Denial: R2 = 17.7%; Self-Blame: R2 = 35.7%; Effort: R2 = 16.6%; Wishful Thinking: R2 = 42.3%). High trait anxious dancers reported more frequent use of all categories of coping strategies. A separate two-way MANOVA showed no significant main effect for either gender or status (professionals versus students) and no significant interaction effect.
The present findings are generally consistent with previous research in the sport psychology domain (Crocker & Graham, 1995; Giacobbi & Weinberg, 2000) which has shown that high trait anxious athletes tend, in particular, to use more maladaptive, emotion-focused coping strategies when compared to low trait anxious athletes; a tendency which has been proposed to lead to negative performance effects. The present results emphasize the need for the effectiveness of specific coping strategies to be considered during the process of preparing young classical dancers for a career in professional ballet. In particular, the results suggest that dancers who are, by nature, anxious about performance may need special attention to help them to learn to cope with performance-related stress. Given the absence of differences in coping strategies between student and professional dancers and between males and females, it appears that such educational efforts should begin at an early career stage for all dancers.
Abstract:
Analytically or computationally intractable likelihood functions can arise in complex statistical inferential problems making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such inferential problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been predominantly applied to parameter estimation problems and less to model choice problems due to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1–28) where the posterior mean of the model parameters estimated through regression formed the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity for thorough exploration of the model space. This algorithm was applied to a validating example to demonstrate the robustness of the algorithm across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates the utility of the algorithm in inferring preference of particular transmission models for the pathogens.
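The core model-choice idea can be illustrated with a minimal ABC rejection sampler. This is a deliberate simplification: the algorithm in the abstract adds regression-estimated summary statistics and a reversible jump MCMC step, both omitted here, and the two candidate models, priors, data, and tolerance below are assumptions for illustration only.

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    # Knuth's algorithm for Poisson sampling.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def geometric(p):
    # Number of failures before the first success.
    return int(math.log(random.random()) / math.log(1.0 - p))

# Observed counts (assumed for illustration) and their summary statistic.
observed = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
s_obs = statistics.mean(observed)

def simulate(model, n):
    if model == 0:                        # Model 0: Poisson, lambda ~ U(0.1, 10)
        lam = random.uniform(0.1, 10)
        return [poisson(lam) for _ in range(n)]
    p = random.uniform(0.05, 0.95)        # Model 1: geometric, p ~ U(0.05, 0.95)
    return [geometric(p) for _ in range(n)]

# ABC rejection: keep the model indicator whenever the simulated summary
# statistic lands within the tolerance of the observed one.
accepted = []
for _ in range(20000):
    m = random.randint(0, 1)              # uniform prior over the two models
    if abs(statistics.mean(simulate(m, len(observed))) - s_obs) < 0.3:
        accepted.append(m)

post_poisson = accepted.count(0) / len(accepted)  # approx. P(model 0 | data)
```

Under equal model priors, the acceptance frequency of each model indicator approximates its posterior probability; the refinement described in the abstract feeds estimated model probabilities from a multinomial logistic regression back in as summary statistics.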
Abstract:
Conceptual modelling continues to be an important means for graphically capturing the requirements of an information system. Observations of modelling practice suggest that modellers often use multiple conceptual models in combination, because they articulate different aspects of real-world domains. Yet, the available empirical as well as theoretical research in this area has largely studied the use of single models, or single modelling grammars. We develop a Theory of Combined Ontological Coverage by extending an existing theory of ontological expressiveness of conceptual modelling grammars. Our new theory posits that multiple conceptual models are used to increase the maximum coverage of the real-world domain being modelled, whilst trying to minimize the ontological overlap between the models. We illustrate how the theory can be applied to analyse sets of conceptual models. We develop three propositions of the theory about evaluations of model combinations in terms of users’ selection, understandability and usefulness of conceptual models.
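The notions of combined coverage and ontological overlap can be made concrete with set operations: the union of the constructs each grammar covers should be large, and the pairwise intersections small. The grammar names and construct labels below are invented for illustration (loosely in the spirit of BWW-style ontological constructs), not taken from the theory itself.

```python
# Hypothetical mapping from modelling grammars to the ontological
# constructs they can represent (illustrative labels only).
coverage = {
    "ER diagram":    {"thing", "property", "class"},
    "Process model": {"event", "transformation", "state"},
    "State chart":   {"state", "event", "lawful state space"},
}

def combined_coverage(models):
    """Return (union of covered constructs, pairwise overlap) for a model set."""
    sets = [coverage[m] for m in models]
    union = set().union(*sets)
    overlap = set()
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            overlap |= sets[i] & sets[j]
    return union, overlap

union, overlap = combined_coverage(["ER diagram", "Process model", "State chart"])
print(len(union), sorted(overlap))   # → 7 ['event', 'state']
```

On this toy mapping, the three-model combination covers seven constructs but duplicates two ('event' and 'state'), so the theory's propositions would predict a combination with the same union and less overlap to be preferred.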
Abstract:
Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises for example when the repository covers multiple variants of the same processes or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This article studies the broader problem of approximate clone detection in process models. The article proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The article also defines a measure of standardizability of an approximate clone cluster, meaning the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable.
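A minimal sketch of the DBSCAN half of this approach, run over a precomputed matrix of fragment-to-fragment distances. The distances below are invented for illustration; the article derives them from the process model fragments themselves.

```python
def dbscan(dist, eps, min_pts):
    """Minimal DBSCAN over a precomputed distance matrix (list of lists).
    min_pts counts the point itself; labels: None unvisited, -1 noise, >=0 cluster."""
    n = len(dist)
    labels = [None] * n
    cluster = 0
    def neighbours(i):
        return [j for j in range(n) if dist[i][j] <= eps]
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:          # not a core point: mark as noise for now
            labels[i] = -1
            continue
        labels[i] = cluster
        seeds = [j for j in nb if j != i]
        while seeds:                   # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:        # noise becomes a border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                seeds.extend(nb_j)
        cluster += 1
    return labels

# Five fragments: 0-2 are near-clones of each other, as are 3-4.
dist = [
    [0.0, 0.1, 0.1, 0.9, 0.9],
    [0.1, 0.0, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.0, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.0, 0.1],
    [0.9, 0.9, 0.9, 0.1, 0.0],
]
labels = dbscan(dist, eps=0.2, min_pts=2)   # → [0, 0, 0, 1, 1]
```

Each recovered cluster would then be scored with the standardizability measure to decide whether its members are worth replacing with a single shared subprocess.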
Abstract:
Computational neuroscience aims to elucidate the mechanisms of neural information processing and population dynamics, through a methodology of incorporating biological data into complex mathematical models. Existing simulation environments model at a particular level of detail; none allow a multi-level approach to neural modelling. Moreover, most are not engineered to produce compute-efficient solutions, an important issue because sufficient processing power is a major impediment in the field. This project aims to apply modern software engineering techniques to create a flexible high performance neural modelling environment, which will allow rigorous exploration of model parameter effects, and modelling at multiple levels of abstraction.
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
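The weighted-mixture objective can be sketched as a one-step decision between two actions under two competing system models. The actions, predicted growth rates, and the disagreement-based proxy for learning value are all assumptions for illustration, not the paper's formulation.

```python
# Hypothetical predicted population growth rates under two competing models.
pred = {"burn":  {"modelA": 1.20, "modelB": 0.90},
        "graze": {"modelA": 1.00, "modelB": 1.05}}
prior = {"modelA": 0.5, "modelB": 0.5}   # current belief over the models

def expected_growth(action):
    # Management objective: model-averaged predicted growth.
    return sum(prior[m] * pred[action][m] for m in prior)

def learning_value(action):
    # Proxy for information gain: how strongly the models disagree about this
    # action's outcome (observing the outcome then discriminates between them).
    vals = [pred[action][m] for m in prior]
    return max(vals) - min(vals)

def score(action, w):
    # w = 1: pure management objective; w = 0: pure learning objective.
    return w * expected_growth(action) + (1 - w) * learning_value(action)

best = max(pred, key=lambda a: score(a, w=0.5))   # → "burn"
```

With equal weights, "burn" wins here because it both has the higher expected growth and is the more informative experiment; lowering w shifts the choice further toward whichever action best discriminates the models.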
Abstract:
QUT has enacted a university-wide Peer Program’s Strategy which aims to improve student success and graduate outcomes. A component of this strategy is a training model providing relevant, quality-assured and timely training for all students who take on leadership roles. The training model is designed to meet the needs of the growing scale and variety of peer programs, and to recognise the multiple roles and programs in which students may be involved during their peer leader journey. The model builds peer leader capacity by offering centralised, beginning and ongoing training modules, delivered by in-house providers, covering topics which prepare students to perform their role safely, inclusively, accountably and skilfully. The model also provides efficiencies by differentiating between ‘core competency’ and ‘program-specific’ modules, thus avoiding training duplication across multiple programs, and enabling training to be individually and flexibly formatted to suit the specific and unique needs of each program.
Abstract:
‘Complexity’ is a term that is increasingly prevalent in conversations about building capacity for 21st Century professional engineers. Society is grappling with the urgent and challenging reality of accommodating seven billion people, meeting needs and innovating lifestyle improvements in ways that do not destroy atmospheric, biological and oceanic systems critical to life. Over the last two decades in particular, engineering educators have been active in attempting to build capacity amongst professionals to deliver ‘sustainable development’ in this rapidly changing global context. However curriculum literature clearly points to a lack of significant progress, with efforts best described as ad hoc and highly varied. Given the limited timeframes for action to curb environmental degradation proposed by scientists and intergovernmental agencies, the authors of this paper propose it is imperative that curriculum renewal towards education for sustainable development proceeds rapidly, systemically, and in a transformational manner. Within this context, the paper discusses the need to consider a multiple track approach to building capacity for 21st Century engineering, including priorities and timeframes for undergraduate and postgraduate curriculum renewal. The paper begins with a contextual discussion of the term complexity and how it relates to life in the 21st Century. The authors then present a whole of system approach for planning and implementing rapid curriculum renewal that addresses the critical roles of several generations of engineering professionals over the next three decades. The paper concludes with observations regarding engaging with this approach in the context of emerging accreditation requirements and existing curriculum renewal frameworks.
Abstract:
Large multisite efforts (e.g., the ENIGMA Consortium) have shown that neuroimaging traits including tract integrity (from DTI fractional anisotropy, FA) and subcortical volumes (from T1-weighted scans) are highly heritable and promising phenotypes for discovering genetic variants associated with brain structure. However, genetic correlations (rg) among measures from these different modalities for mapping the human genome to the brain remain unknown. Discovering these correlations can help map genetic and neuroanatomical pathways implicated in development and inherited risk for disease. We use structural equation models and a twin design to find rg between pairs of phenotypes extracted from DTI and MRI scans. When controlling for intracranial volume, the caudate as well as related measures from the limbic system - hippocampal volume - showed high rg with the cingulum FA. Using an unrelated sample and a Seemingly Unrelated Regression model for bivariate analysis of this connection, we show that a multivariate GWAS approach may be more promising for genetic discovery than a univariate approach applied to each trait separately.
Abstract:
Several common genetic variants have recently been discovered that appear to influence white matter microstructure, as measured by diffusion tensor imaging (DTI). Each genetic variant explains only a small proportion of the variance in brain microstructure, so we set out to explore their combined effect on the white matter integrity of the corpus callosum. We measured six common candidate single-nucleotide polymorphisms (SNPs) in the COMT, NTRK1, BDNF, ErbB4, CLU, and HFE genes, and investigated their individual and aggregate effects on white matter structure in 395 healthy adult twins and siblings (age: 20-30 years). All subjects were scanned with 4-tesla 94-direction high angular resolution diffusion imaging. When combined using mixed-effects linear regression, a joint model based on five of the candidate SNPs (COMT, NTRK1, ErbB4, CLU, and HFE) explained ∼ 6% of the variance in the average fractional anisotropy (FA) of the corpus callosum. This predictive model had detectable effects on FA at 82% of the corpus callosum voxels, including the genu, body, and splenium. Predicting the brain's fiber microstructure from genotypes may ultimately help in early risk assessment, and eventually, in personalized treatment for neuropsychiatric disorders in which brain integrity and connectivity are affected.
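The aggregate-effect idea (summing many small per-SNP contributions and asking how much trait variance the combined score explains) can be sketched with simulated data. The effect sizes, allele coding, and noise level below are assumptions chosen to yield a variance explained of a few percent; the study itself fits a mixed-effects model to real genotypes and voxelwise FA.

```python
import random

random.seed(0)

n, snps = 395, 5                        # sample size and SNP count echo the study
beta = [0.12, 0.10, 0.08, 0.08, 0.06]   # hypothetical per-SNP effect sizes

# Additive genotype coding: 0, 1, or 2 copies of the minor allele.
geno = [[random.choice([0, 1, 2]) for _ in range(snps)] for _ in range(n)]
score = [sum(b * g for b, g in zip(beta, row)) for row in geno]
trait = [s + random.gauss(0, 1.0) for s in score]   # FA-like trait = signal + noise

def r_squared(x, y):
    # Squared Pearson correlation: variance in y explained by x.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

r2 = r_squared(score, trait)   # a few percent, as in the joint SNP model
```

Even though each individual effect is tiny, the combined score captures a measurable slice of trait variance, which is the rationale for joint rather than single-SNP modelling.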
Abstract:
Classifying each stage of a progressive disease such as Alzheimer’s disease is a key issue for disease prevention and treatment. In this study, we derived structural brain networks from diffusion-weighted MRI using whole-brain tractography, since there is growing interest in relating connectivity measures to clinical, cognitive, and genetic data. Relatively little work has used machine learning to make inferences about variations in brain networks in the progression of Alzheimer’s disease. Here we developed a framework that uses generalized low rank approximations of matrices (GLRAM) and modified linear discriminant analysis for unsupervised feature learning and classification of connectivity matrices. We apply the methods to brain networks derived from DWI scans of 41 people with Alzheimer’s disease, 73 people with early mild cognitive impairment (EMCI), 38 people with late mild cognitive impairment (LMCI), 47 elderly healthy controls and 221 young healthy controls. Our results show that this new framework can significantly improve classification accuracy when combining multiple datasets; this suggests the value of using data beyond the classification task at hand to model variations in brain connectivity.
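The low-rank idea at the heart of GLRAM can be shown in its simplest form: a rank-1 approximation of a single toy "connectivity" matrix via power iteration. GLRAM proper finds shared row and column subspaces across many matrices at once; this sketch, and the matrix in it, are illustrative only.

```python
import math

def rank1_approx(A, iters=100):
    """Dominant rank-1 approximation of A via power iteration on A^T A."""
    n, m = len(A), len(A[0])
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(m)) for i in range(n)]   # w = A v
        v = [sum(A[i][j] * w[i] for i in range(n)) for j in range(m)]   # v = A^T w
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]                                       # normalise
    w = [sum(A[i][j] * v[j] for j in range(m)) for i in range(n)]       # sigma * u
    return [[w[i] * v[j] for j in range(m)] for i in range(n)]          # sigma*u*v^T

def frob(M, N):
    # Frobenius distance between two matrices of the same shape.
    return math.sqrt(sum((M[i][j] - N[i][j]) ** 2
                         for i in range(len(M)) for j in range(len(M[0]))))

A = [[4.0, 0.1], [0.1, 0.2]]   # toy matrix with one dominant mode
B = rank1_approx(A)
err = frob(A, B)               # small: the dominant mode captures most of A
```

The reconstruction error equals the discarded second singular value (about 0.2 here versus a matrix norm of about 4), which is why a compact low-rank representation can serve as the feature input to the discriminant-analysis step.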
Abstract:
Fusing data from multiple sensing modalities, e.g. laser and radar, is a promising approach to achieve resilient perception in challenging environmental conditions. However, this may lead to ‘catastrophic fusion’ in the presence of inconsistent data, i.e. when the sensors do not detect the same target due to distinct attenuation properties. It is often difficult to discriminate consistent from inconsistent data across sensing modalities using local spatial information alone. In this paper we present a novel consistency test based on the log marginal likelihood of a Gaussian process model that evaluates data from range sensors in a relative manner. A new data point is deemed to be consistent if the model statistically improves as a result of its fusion. This approach avoids the need for absolute spatial distance threshold parameters as required by previous work. We report results from object reconstruction with both synthetic and experimental data that demonstrate an improvement in reconstruction quality, particularly in cases where data points are inconsistent yet spatially proximal.
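A sketch of the consistency test reduced to its essentials: compare the GP log marginal likelihood per point with and without the candidate measurement, and accept the point only if the model does not degrade. The RBF kernel, its fixed hyperparameters, the 1-D toy data, and the per-point comparison rule are all assumptions for illustration; the paper's test on real range data is more involved.

```python
import math

def rbf_gram(xs, ell=1.0, sf=1.0, noise=0.1):
    # Squared-exponential kernel matrix with observation noise on the diagonal.
    return [[sf * sf * math.exp(-0.5 * ((xi - xj) / ell) ** 2)
             + (noise * noise if i == j else 0.0)
             for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]

def cholesky(K):
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(K[i][i] - s) if i == j else (K[i][j] - s) / L[j][j]
    return L

def log_marglik(xs, ys):
    # GP log marginal likelihood: -1/2 y^T K^-1 y - 1/2 log|K| - n/2 log(2 pi).
    L = cholesky(rbf_gram(xs))
    n = len(xs)
    a = [0.0] * n                  # forward-solve L a = y, so |a|^2 = y^T K^-1 y
    for i in range(n):
        a[i] = (ys[i] - sum(L[i][k] * a[k] for k in range(i))) / L[i][i]
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    return -0.5 * sum(v * v for v in a) - 0.5 * logdet - 0.5 * n * math.log(2 * math.pi)

def consistent(xs, ys, x_new, y_new):
    # Accept the new point if per-point model evidence does not degrade.
    base = log_marglik(xs, ys) / len(xs)
    aug = log_marglik(xs + [x_new], ys + [y_new]) / (len(xs) + 1)
    return aug >= base

# Smooth base surface from one sensor, plus two candidates from another.
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 0.1, 0.2, 0.3]
good = consistent(xs, ys, 1.5, 0.15)   # lies on the trend
bad = consistent(xs, ys, 1.5, 2.0)     # spatially proximal but inconsistent
```

Because the decision is relative (did the evidence improve?) rather than a distance threshold, a point like (1.5, 2.0) is rejected even though it is spatially close to existing data, which is exactly the failure mode the abstract highlights.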
Abstract:
Mutations of UDP-N-acetyl-alpha-D-galactosamine polypeptide N-acetyl galactosaminyl transferase 3 (GALNT3) result in familial tumoural calcinosis (FTC) and the hyperostosis-hyperphosphataemia syndrome (HHS), which are autosomal recessive disorders characterised by soft-tissue calcification and hyperphosphataemia. To facilitate in vivo studies of these heritable disorders of phosphate homeostasis, we embarked on establishing a mouse model by assessing progeny of mice treated with the chemical mutagen N-ethyl-N-nitrosourea (ENU), and identified a mutant mouse, TCAL, with autosomal recessive inheritance of ectopic calcification, which involved multiple tissues, and hyperphosphataemia; the phenotype was designated TCAL and the locus, Tcal. TCAL males were infertile with loss of Sertoli cells and spermatozoa, and increased testicular apoptosis. Genetic mapping localized Tcal to chromosome 2 (62.64-71.11 Mb), which contained the Galnt3 gene. DNA sequence analysis identified a Galnt3 missense mutation (Trp589Arg) in TCAL mice. Transient transfection of wild-type and mutant Galnt3-enhanced green fluorescent protein (EGFP) constructs in COS-7 cells revealed endoplasmic reticulum retention of the Trp589Arg mutant and Western blot analysis of kidney homogenates demonstrated defective glycosylation of Galnt3 in Tcal/Tcal mice. Tcal/Tcal mice had normal plasma calcium and parathyroid hormone concentrations; decreased alkaline phosphatase activity and intact Fgf23 concentrations; and elevation of circulating 1,25-dihydroxyvitamin D. Quantitative reverse transcriptase-PCR (qRT-PCR) revealed that Tcal/Tcal mice had increased expression of Galnt3 and Fgf23 in bone, but that renal expression of Klotho, 25-hydroxyvitamin D-1α-hydroxylase (Cyp27b1), and the sodium-phosphate co-transporters type-IIa and -IIc was similar to that in wild-type mice. Thus, TCAL mice have the phenotypic features of FTC and HHS, and provide a model for these disorders of phosphate metabolism. © 2012 Esapa et al.
Abstract:
In this study, 1,833 systemic sclerosis (SSc) cases and 3,466 controls were genotyped with the Immunochip array. Classical alleles, amino acid residues, and SNPs across the human leukocyte antigen (HLA) region were imputed and tested. These analyses resulted in a model composed of six polymorphic amino acid positions and seven SNPs that explained the observed significant associations in the region. In addition, a replication step comprising 4,017 SSc cases and 5,935 controls was carried out for several selected non-HLA variants, reaching a total of 5,850 cases and 9,401 controls of European ancestry. Following this strategy, we identified and validated three SSc risk loci, including DNASE1L3 at 3p14, the SCHIP1-IL12A locus at 3q25, and ATG5 at 6q21, as well as a suggested association of the TREH-DDX6 locus at 11q23. The associations of several previously reported SSc risk loci were validated and further refined, and the observed peak of association in PXK was related to DNASE1L3. Our study has increased the number of known genetic associations with SSc, provided further insight into the pleiotropic effects of shared autoimmune risk factors, and highlighted the power of dense mapping for detecting previously overlooked susceptibility loci.