911 results for Query complexity


Relevance: 20.00%

Abstract:

The aim of this study was to assess a population of patients with diabetes mellitus by means of the INTERMED, a classification system for case complexity integrating biological, psychosocial and health care related aspects of disease. The main hypothesis was that the INTERMED would identify distinct clusters of patients with different degrees of case complexity and different clinical outcomes. Patients (n=61) referred to a tertiary reference care centre were evaluated with the INTERMED and followed for 9 months for HbA1c values and for 6 months for health care utilisation. Cluster analysis revealed two clusters: cluster 1 (62%) consisting of complex patients with high INTERMED scores and cluster 2 (38%) consisting of less complex patients with lower INTERMED scores. Cluster 1 patients showed significantly higher HbA1c values and a tendency towards increased health care utilisation. Total INTERMED scores were significantly related to HbA1c and explained 21% of its variance. In conclusion, distinct clusters of patients with different degrees of case complexity were identified by the INTERMED, allowing the detection of highly complex patients at risk of poor diabetes control. The INTERMED therefore provides an objective basis for clinical and scientific progress in diabetes mellitus. Ongoing intervention studies will have to confirm these preliminary data and to evaluate whether management strategies based on INTERMED profiles improve outcomes.
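The clustering step described above can be sketched with a minimal two-centroid (k = 2) k-means over one-dimensional scores. The score values below are invented for illustration only and are not the study's INTERMED data.

```python
# Minimal 1-D k-means with k=2, splitting hypothetical INTERMED-style
# total scores into a "less complex" and a "more complex" cluster.
# Assumes at least two distinct scores so the centroids stay apart.

def kmeans_1d_2(scores, iters=20):
    """Partition 1-D scores into two clusters around two centroids."""
    lo, hi = min(scores), max(scores)  # initialise centroids at the extremes
    for _ in range(iters):
        a = [s for s in scores if abs(s - lo) <= abs(s - hi)]  # near low centroid
        b = [s for s in scores if abs(s - lo) > abs(s - hi)]   # near high centroid
        new_lo, new_hi = sum(a) / len(a), sum(b) / len(b)      # recompute means
        if (new_lo, new_hi) == (lo, hi):
            break  # converged
        lo, hi = new_lo, new_hi
    return lo, hi, a, b

scores = [12, 14, 15, 13, 30, 32, 29, 35, 31, 11]  # hypothetical scores
lo, hi, less_complex, more_complex = kmeans_1d_2(scores)
```

With these made-up scores the algorithm converges in two iterations, separating the low-scoring from the high-scoring patients.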

Relevance: 20.00%

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

Understanding the extent of genomic transcription and its functional relevance is a central goal in genomics research. However, detailed genome-wide investigations of transcriptome complexity in major mammalian organs have been scarce. Here, using extensive RNA-seq data, we show that transcription of the genome is substantially more widespread in the testis than in other organs across representative mammals. Furthermore, we reveal that meiotic spermatocytes and especially postmeiotic round spermatids have remarkably diverse transcriptomes, which explains the high transcriptome complexity of the testis as a whole. The widespread transcriptional activity in spermatocytes and spermatids encompasses protein-coding and long noncoding RNA genes but also poorly conserved intergenic sequences, suggesting that it may not be of immediate functional relevance. Rather, our analyses of genome-wide epigenetic data suggest that this prevalent transcription, which most likely promoted the birth of new genes during evolution, is facilitated by an overall permissive chromatin state in these germ cells that results from extensive chromatin remodeling.

Relevance: 20.00%

Abstract:

Low-complexity regions (LCRs) in proteins are tracts that are highly enriched in one or a few amino acids. Given their high abundance, and their capacity to expand in relatively short periods of time through replication slippage, they can greatly contribute to increasing protein sequence space and generating novel protein functions. However, little is known about the global impact of LCRs on protein evolution. We have traced back the evolutionary history of 2,802 LCRs from a large set of homologous protein families from H. sapiens, M. musculus, G. gallus, D. rerio and C. intestinalis. Transcription factors and other regulatory functions are overrepresented in proteins containing LCRs. We have found that the gain of novel LCRs is frequently associated with repeat expansion, whereas the loss of LCRs is more often due to accumulation of amino acid substitutions as opposed to deletions. This dichotomy results in net protein sequence gain over time. We have detected a significant increase in the rate of accumulation of novel LCRs in the ancestral Amniota and mammalian branches, and a reduction in the chicken branch. Alanine- and/or glycine-rich LCRs are overrepresented in recently emerged LCR sets from all branches, suggesting that their expansion is better tolerated than that of other LCR types. LCRs enriched in positively charged amino acids show the opposite pattern, indicating an important effect of purifying selection in their maintenance. We have performed the first large-scale study on the evolutionary dynamics of LCRs in protein families. The study has shown that the composition of an LCR is an important determinant of its evolutionary pattern.
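The kind of tract the study analyses can be flagged with a simple sliding-window scan: mark windows in which a single residue exceeds a frequency threshold and merge overlapping hits. The window length, cutoff, and example sequence below are arbitrary illustrative choices, not the detection criteria used in the paper.

```python
# Naive low-complexity region (LCR) detector: flag windows dominated by
# one amino acid. Window size and min_frac are illustrative, not the
# study's parameters.
from collections import Counter

def find_lcrs(seq, window=10, min_frac=0.6):
    """Return (start, end, residue) spans where one amino acid makes up
    at least min_frac of a length-`window` window; overlapping hits on
    the same residue are merged into one span."""
    hits = []
    for i in range(len(seq) - window + 1):
        residue, count = Counter(seq[i:i + window]).most_common(1)[0]
        if count / window >= min_frac:
            if hits and hits[-1][2] == residue and i <= hits[-1][1]:
                hits[-1] = (hits[-1][0], i + window, residue)  # extend span
            else:
                hits.append((i, i + window, residue))
    return hits

protein = "MKVLQQQQQQQQQQAGTRSWAAAAAAAAPLK"  # made-up sequence with Q and A runs
hits = find_lcrs(protein)
```

On this toy sequence the scan reports one glutamine-rich and one alanine-rich span, mirroring the polyglutamine- and alanine/glycine-rich tract types discussed in the abstract.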

Relevance: 20.00%

Abstract:

The "one-gene, one-protein" rule, coined by Beadle and Tatum, has been fundamental to molecular biology. The rule implies that the genetic complexity of an organism depends essentially on its gene number. The discovery, however, that alternative gene splicing and transcription are widespread phenomena dramatically altered our understanding of the genetic complexity of higher eukaryotic organisms; in these, a limited number of genes may potentially encode a much larger number of proteins. Here we investigate yet another phenomenon that may contribute to generating additional protein diversity. Indeed, by relying on both computational and experimental analysis, we estimate that at least 4%-5% of the tandem gene pairs in the human genome can eventually be transcribed into a single RNA sequence encoding a putative chimeric protein. While the functional significance of most of these chimeric transcripts remains to be determined, we provide strong evidence that this phenomenon does not correspond to mere technical artifacts and that it is a common mechanism with the potential to generate hundreds of additional proteins in the human genome.

Relevance: 20.00%

Abstract:

Various institutions and countries often reach different conclusions about the utility of introducing a newborn screening test in the general population. This paper highlights the complexity of population screening involving genetic tests. Using the example of cystic fibrosis genetic screening, for which a Swiss Working Group for Cystic Fibrosis is currently evaluating the pertinence, we outline that screening recommendations are often based more on expert opinion and emerging new technologies than on evidence. We also present some ethical and economic issues related to cystic fibrosis genetic screening.

Relevance: 20.00%

Abstract:

This paper presents a programming environment for supporting learning in STEM, particularly mobile robotic learning. It was designed to support progressive learning for people with and without previous knowledge of programming and/or robotics. The environment is multi-platform and built with open-source tools. Perception, mobility, communication, navigation and collaborative behaviour functionalities can be programmed for different mobile robots. A learner is able to programme robots using different programming languages and editor interfaces: a graphic programming interface (basic level), an XML-based meta-language (intermediate level) or ANSI C (advanced level). The environment translates programmes into the different languages either transparently or explicitly at the learner's request. Learners can access proposed challenges and example-based learning interfaces. The environment was designed for extensibility, adaptive interfaces, persistence and low software/hardware coupling. Functionality tests were performed to verify the programming environment's specifications; UV BOT mobile robots were used in these tests.

Relevance: 20.00%

Abstract:

In this paper we describe a taxonomy of task demands which distinguishes between Task Complexity, Task Condition and Task Difficulty. We then describe three theoretical claims and predictions of the Cognition Hypothesis (Robinson 2001, 2003b, 2005a) concerning the effects of task complexity on: (a) language production; (b) interaction and uptake of information available in the input to tasks; and (c) individual differences-task interactions. Finally we summarize the findings of the empirical studies in this special issue which all address one or more of these predictions and point to some directions for continuing, future research into the effects of task complexity on learning and performance.

Relevance: 20.00%

Abstract:

The increase of publicly available sequencing data has allowed for rapid progress in our understanding of genome composition. As new information becomes available, we should constantly be updating and reanalyzing existing and newly acquired data. In this report we focus on transposable elements (TEs), which make up a significant portion of nearly all sequenced genomes. Our ability to accurately identify and classify these sequences is critical to understanding their impact on host genomes. At the same time, as we demonstrate in this report, problems with existing classification schemes have led to significant misunderstandings of the evolution of both TE sequences and their host genomes. In a pioneering publication, Finnegan (1989) proposed classifying all TE sequences into two classes based on transposition mechanisms and structural features: the retrotransposons (class I) and the DNA transposons (class II). We have retraced how ideas regarding TE classification and annotation in both the prokaryotic and eukaryotic scientific communities have changed over time. This has led us to observe that: (1) a number of TEs have convergent structural features and/or transposition mechanisms that have led to misleading conclusions regarding their classification, (2) the evolution of TEs is similar to that of viruses in having several unrelated origins, and (3) there might be at least 8 classes and 12 orders of TEs, including 10 novel orders. In an effort to address these classification issues we propose: (1) the outline of a universal TE classification, (2) a set of methods and classification rules that could be used by all scientific communities involved in the study of TEs, and (3) a 5-year schedule for the establishment of an International Committee for Taxonomy of Transposable Elements (ICTTE).

Relevance: 20.00%

Abstract:

Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (number of model parameters) remains a major concern in relation to overfitting and, hence, transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a study case. We fit 110 models with different levels of complexity under present-time conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in the generation of overly complex models. While model performance increased with model complexity when predicting current distributions, it was higher with intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity resulted in the best trade-off to predict species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change.
Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data is not available, models of intermediate complexity should be selected.
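The complexity penalty underlying the AICc comparison above can be made concrete. The sketch below assumes only the standard AICc formula (AIC plus a small-sample correction); the log-likelihoods and parameter counts are hypothetical, not values from the study.

```python
# Corrected Akaike Information Criterion (AICc) for comparing models of
# different complexity fit to the same data. Inputs are hypothetical.

def aicc(log_likelihood, k, n):
    """AICc = 2k - 2*lnL + 2k(k+1)/(n-k-1), where k is the number of
    estimated parameters and n the sample size. Requires n > k + 1."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# A more complex model fits slightly better (higher lnL) but pays a
# larger parameter penalty, so the simpler model wins on AICc.
simple_model = aicc(log_likelihood=-120.0, k=4, n=50)
complex_model = aicc(log_likelihood=-118.0, k=15, n=50)
```

This mirrors the abstract's conclusion: past some point, extra parameters buy too little fit improvement to justify their penalty, and intermediate complexity is preferred.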

Relevance: 20.00%

Abstract:

This paper analyses the effects of manipulating the cognitive complexity of L2 oral tasks on language production. It specifically focuses on self-repairs, which are taken as a measure of accuracy since they denote both attention to form and an attempt at being accurate. By means of a repeated measures design, 42 lower-intermediate students were asked to perform three different task types (a narrative task, an instruction-giving task, and a decision-making task) for which two degrees of cognitive complexity were established. The narrative task was manipulated along +/− Here-and-Now, the instruction-giving task along +/− elements, and the decision-making task along +/− reasoning demands. Repeated measures ANOVAs are used to calculate differences between degrees of complexity and among task types. One-way ANOVAs are used to detect potential differences between low-proficiency and high-proficiency participants. Results show an overall effect of Task Complexity on self-repair behavior across task types, with different behaviors among the three task types. No differences in self-repair behavior are found between the low- and high-proficiency groups. Results are discussed in the light of theories of cognition and L2 performance (Robinson 2001a, 2001b, 2003, 2005, 2007), L1 and L2 language production models (Levelt 1989, 1993; Kormos 2000, 2006), and attention during L2 performance (Skehan 1998; Robinson 2002).
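The group comparisons underlying the ANOVAs can be sketched with a plain one-way F statistic (between-group mean square over within-group mean square). The self-repair counts below are invented for illustration; the study itself used repeated-measures designs, which additionally account for within-subject variance.

```python
# One-way ANOVA F statistic from scratch. The data are hypothetical
# self-repair counts under a simple vs. a complex task version.

def one_way_anova_f(groups):
    """F = MS_between / MS_within for a list of groups of observations."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / n          # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)                # between-group mean square
    ms_within = ss_within / (n - k)                  # within-group mean square
    return ms_between / ms_within

simple_task = [2, 3, 2, 4, 3]    # hypothetical self-repairs, simple version
complex_task = [5, 6, 4, 7, 6]   # hypothetical self-repairs, complex version
f_stat = one_way_anova_f([simple_task, complex_task])
```

A large F relative to its critical value indicates that task version explains more variance than chance, the logic behind the complexity effects reported above.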