885 results for Incremental Information-content


Relevance: 80.00%

Abstract:

Closely related species may be very difficult to distinguish morphologically, yet sometimes morphology is the only reasonable possibility for taxonomic classification. Here we present learning-vector-quantization artificial neural networks as a powerful tool to classify specimens on the basis of geometric morphometric shape measurements. As an example, we trained a neural network to distinguish between field and root voles from Procrustes transformed landmark coordinates on the dorsal side of the skull, which is so similar in these two species that the human eye cannot make this distinction. Properly trained neural networks misclassified only 3% of specimens. Therefore, we conclude that the capacity of learning vector quantization neural networks to analyse spatial coordinates is a powerful tool among the range of pattern recognition procedures that is available to employ the information content of geometric morphometrics.
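The classification step described above can be illustrated with a minimal learning-vector-quantization (LVQ1) sketch. This is not the study's actual configuration: the number of prototypes, learning-rate schedule, and the placeholder class labels (0 = field vole, 1 = root vole) are assumptions, and X stands for flattened Procrustes-aligned landmark coordinates.

import numpy as np

def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=100, seed=0):
    """Minimal LVQ1: prototypes move toward same-class samples and away from others."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, proto_labels = [], []
    for c in classes:
        # initialise prototypes from random samples of each class
        idx = rng.choice(np.where(y == c)[0], prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * prototypes_per_class)
    protos = np.vstack(protos).astype(float)
    proto_labels = np.array(proto_labels)

    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)           # decaying learning rate
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            j = np.argmin(d)                         # best-matching prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            protos[j] += sign * alpha * (X[i] - protos[j])
    return protos, proto_labels

def predict_lvq1(X, protos, proto_labels):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# X: Procrustes-aligned landmark coordinates, one row per specimen (n x 2*landmarks)
# y: 0 = field vole, 1 = root vole (placeholder labels)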

Relevance: 80.00%

Abstract:

During the last few years, next-generation sequencing (NGS) technologies have accelerated the detection of genetic variants resulting in the rapid discovery of new disease-associated genes. However, the wealth of variation data made available by NGS alone is not sufficient to understand the mechanisms underlying disease pathogenesis and manifestation. Multidisciplinary approaches combining sequence and clinical data with prior biological knowledge are needed to unravel the role of genetic variants in human health and disease. In this context, it is crucial that these data are linked, organized, and made readily available through reliable online resources. The Swiss-Prot section of the Universal Protein Knowledgebase (UniProtKB/Swiss-Prot) provides the scientific community with a collection of information on protein functions, interactions, biological pathways, as well as human genetic diseases and variants, all manually reviewed by experts. In this article, we present an overview of the information content of UniProtKB/Swiss-Prot to show how this knowledgebase can support researchers in the elucidation of the mechanisms leading from a molecular defect to a disease phenotype.
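As a hedged illustration of how this linked information can be retrieved programmatically, the sketch below fetches one reviewed entry and lists its disease annotations. The REST endpoint and JSON field names reflect the publicly documented UniProt API but should be treated as assumptions and checked against uniprot.org; the accession is a placeholder.

import requests

ACCESSION = "P04637"  # placeholder accession (human TP53)
url = f"https://rest.uniprot.org/uniprotkb/{ACCESSION}.json"  # assumed endpoint

entry = requests.get(url, timeout=30).json()
# Iterate over annotation comments and keep those describing genetic diseases;
# "commentType" and "disease" keys are assumptions about the JSON layout.
for comment in entry.get("comments", []):
    if comment.get("commentType") == "DISEASE":
        disease = comment.get("disease", {})
        print(disease.get("diseaseId"), "-", disease.get("description"))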

Relevance: 80.00%

Abstract:

OBJECTIVES: The purpose of this study was to evaluate the association between inflammation and heart failure (HF) risk in older adults. BACKGROUND: Inflammation is associated with HF risk factors and also directly affects myocardial function. METHODS: The association of baseline serum concentrations of interleukin (IL)-6, tumor necrosis factor-alpha, and C-reactive protein (CRP) with incident HF was assessed with Cox models among 2,610 older persons without prevalent HF enrolled in the Health ABC (Health, Aging, and Body Composition) study (age 73.6 +/- 2.9 years; 48.3% men; 59.6% white). RESULTS: During follow-up (median 9.4 years), HF developed in 311 (11.9%) participants. In models controlling for clinical characteristics, ankle-arm index, and incident coronary heart disease, doubling of IL-6, tumor necrosis factor-alpha, and CRP concentrations was associated with 29% (95% confidence interval: 13% to 47%; p < 0.001), 46% (95% confidence interval: 17% to 84%; p = 0.001), and 9% (95% confidence interval: -1% to 24%; p = 0.087) increases in HF risk, respectively. In models including all 3 markers, IL-6 and tumor necrosis factor-alpha, but not CRP, remained significant. These associations were similar across sex and race and persisted in models accounting for death as a competing event. Post-HF ejection fraction was available in 239 (76.8%) cases; inflammatory markers had a stronger association with HF with preserved ejection fraction. Repeat IL-6 and CRP determinations at 1-year follow-up did not provide incremental information. Addition of IL-6 to the clinical Health ABC HF model improved model discrimination (C index from 0.717 to 0.734; p = 0.001) and fit (decreased Bayes information criterion by 17.8; p < 0.001). CONCLUSIONS: Inflammatory markers are associated with HF risk among older adults and may improve HF risk stratification.
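The "per doubling" associations reported above correspond to Cox model coefficients for log2-transformed biomarker concentrations, so that a one-unit increase equals a doubling. A minimal sketch with the lifelines package and synthetic data follows; it is not the Health ABC cohort, and all variable names and effect sizes are placeholders.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for a cohort: baseline IL-6 (pg/mL) and two clinical covariates.
df = pd.DataFrame({
    "il6": rng.lognormal(mean=0.5, sigma=0.6, size=n),
    "age": rng.normal(73.6, 2.9, size=n),
    "male": rng.integers(0, 2, size=n),
})
risk = 0.3 * np.log2(df["il6"]) + 0.05 * (df["age"] - 73.6)
df["time"] = rng.exponential(scale=10 / np.exp(risk))       # follow-up time (years)
df["hf_event"] = (df["time"] < 9.4).astype(int)             # censor near median follow-up
df["time"] = df["time"].clip(upper=9.4)

df["log2_il6"] = np.log2(df["il6"])                         # +1 unit = doubling of IL-6

cph = CoxPHFitter()
cph.fit(df[["time", "hf_event", "log2_il6", "age", "male"]],
        duration_col="time", event_col="hf_event")
print("HR per doubling of IL-6:", float(np.exp(cph.params_["log2_il6"])))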

Relevance: 80.00%

Abstract:

It is essential for organizations to compress detailed sets of information into more comprehensible sets, thereby establishing sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Unlike the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations. I also provide a strategy that partly mitigates this bias. In chapter 3, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited for cost allocations than traditional ABC.
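The aggregation bias described in chapter 2 can be reproduced numerically: for right-skewed cost distributions, adding up per-task modes or medians understates the expected total, whereas means add up exactly. The sketch below uses arbitrary lognormal task-cost parameters chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(42)
n_tasks = 20
mu, sigma = np.log(10.0), 0.8            # illustrative lognormal parameters per task

# Analytical central-tendency measures of a lognormal(mu, sigma) task cost
mode = np.exp(mu - sigma**2)
median = np.exp(mu)
mean = np.exp(mu + sigma**2 / 2)

# Simulated total project cost over many projects
totals = rng.lognormal(mu, sigma, size=(100_000, n_tasks)).sum(axis=1)

print("sum of task modes:  ", n_tasks * mode)     # strong underestimation
print("sum of task medians:", n_tasks * median)   # underestimation
print("sum of task means:  ", n_tasks * mean)     # matches the expected total
print("simulated E[total]: ", totals.mean())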

Relevance: 80.00%

Abstract:

The GO annotation dataset provided by the UniProt Consortium (GOA: http://www.ebi.ac.uk/GOA) is a comprehensive set of evidence-based associations between terms from the Gene Ontology resource and UniProtKB proteins. Currently supplying over 100 million annotations to 11 million proteins in more than 360,000 taxa, this resource has increased 2-fold over the last 2 years and has benefited from a wealth of checks to improve annotation correctness and consistency as well as now supplying a greater information content enabled by GO Consortium annotation format developments. Detailed, manual GO annotations obtained from the curation of peer-reviewed papers are directly contributed by all UniProt curators and supplemented with manual and electronic annotations from 36 model organism and domain-focused scientific resources. The inclusion of high-quality, automatic annotation predictions ensures the UniProt GO annotation dataset supplies functional information to a wide range of proteins, including those from poorly characterized, non-model organism species. UniProt GO annotations are freely available in a range of formats accessible by both file downloads and web-based views. In addition, the introduction of a new, normalized file format in 2010 has made for easier handling of the complete UniProt-GOA data set.
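As a hedged sketch of working with such annotation files, the snippet below tallies evidence codes from a downloaded GO annotation file. The column positions assume the tab-delimited GAF 2.x layout (comment lines start with "!") and the local file name is hypothetical; verify both against the header of the file actually downloaded from the GOA site.

import csv
from collections import Counter

GO_ID, EVIDENCE, ASPECT = 4, 6, 8          # assumed GAF 2.x column indices

evidence_counts = Counter()
with open("goa_uniprot.gaf") as handle:    # hypothetical local file name
    for row in csv.reader(handle, delimiter="\t"):
        if not row or row[0].startswith("!"):
            continue                        # skip header/comment lines
        evidence_counts[row[EVIDENCE]] += 1

for code, n in evidence_counts.most_common():
    print(code, n)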

Relevance: 80.00%

Abstract:

In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms which propel biological evolution. Our previous article presented a histogram model [1] consisting of populations of individuals whose number changed under the influence of variation and/or fitness, the total population remaining constant. Individuals are classified into bins, and the content of each bin is calculated generation after generation by an Excel spreadsheet. Here, we apply the histogram model to a stable population with fitness F(1)=1.00 in which one or two fitter mutants emerge. In a first scenario, a single mutant emerged in the population whose fitness was greater than 1.00. The simulations ended when the original population was reduced to a single individual. The histogram model was validated by excellent agreement between its predictions and those of a classical continuous function (Eqn. 1) which predicts the number of generations needed for a favorable mutation to spread throughout a population. But in contrast to Eqn. 1, our histogram model is adaptable to more complex scenarios, as demonstrated here. In the second and third scenarios, the original population was present at time zero together with two mutants which differed from the original population by two higher and distinct fitness values. In the fourth scenario, the large original population was present at time zero together with one fitter mutant. After a number of generations, when the mutant offspring had multiplied, a second mutant was introduced whose fitness was even greater. The histogram model also allows Shannon entropy (SE) to be monitored continuously as the information content of the total population decreases or increases. The results of these simulations illustrate, in a graphically didactic manner, the influence of natural selection, operating through relative fitness, in the emergence and dominance of a fitter mutant.
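A compact re-implementation of the first scenario (one fitter mutant invading a stable population at constant total size) conveys the mechanics and the Shannon entropy monitoring. The bin layout, fitness values, and population size below are illustrative and not those of the cited spreadsheet.

import numpy as np

def shannon_entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

N = 10_000                                   # constant total population
counts = np.array([N - 1, 1], dtype=float)   # bin 0: residents, bin 1: one mutant
fitness = np.array([1.00, 1.10])             # mutant is 10% fitter (illustrative)

generation = 0
while counts[0] >= 1.0:                      # stop when residents drop below one individual
    weights = counts * fitness               # selection: reproduction proportional to fitness
    counts = N * weights / weights.sum()     # renormalise to the constant total
    generation += 1
    if generation % 20 == 0:
        print(generation, counts.round(1), round(shannon_entropy(counts), 4))

print("generations until the original population is displaced:", generation)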

Relevance: 80.00%

Abstract:

In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms which propel biological evolution. Our previous reports presented a histogram model to simulate the evolution of populations of individuals classified into bins according to an unspecified, quantifiable phenotypic character, and whose number in each bin changed generation after generation under the influence of fitness, while the total population was maintained constant. The histogram model also allowed Shannon entropy (SE) to be monitored continuously as the information content of the total population decreased or increased. Here, a simple Perl (Practical Extraction and Reporting Language) application was developed to carry out these computations, with the critical feature of an added random factor in the percent of individuals whose offspring moved to a vicinal bin. The results of the simulations demonstrate that the random factor mimicking variation considerably increased the range of values covered by Shannon entropy, especially when the percentage of changed offspring was high. This increase in information content is interpreted as facilitated adaptability of the population.
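The random-variation step added in this version can be mimicked by moving a randomly drawn fraction of each bin's offspring into the neighbouring bins each generation. The sketch below (Python rather than Perl, parameters illustrative) shows only that step; plugged into the selection loop of the previous sketch, it widens the range of Shannon entropy values as the maximum displaced percentage grows.

import numpy as np

def apply_variation(counts, max_pct, rng):
    """Move a random fraction (0 .. max_pct) of each bin's offspring to vicinal bins."""
    counts = np.asarray(counts, dtype=float)
    out = np.zeros_like(counts)
    for i, c in enumerate(counts):
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(counts)]
        displaced = rng.uniform(0, max_pct) * c
        out[i] += c - displaced                     # offspring staying in the bin
        for j in neighbours:
            out[j] += displaced / len(neighbours)   # offspring moved to vicinal bins
    return out

rng = np.random.default_rng(1)
bins = np.full(10, 1_000.0)                          # illustrative flat starting population
print(apply_variation(bins, max_pct=0.30, rng=rng).round(1))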

Relevance: 80.00%

Abstract:

Forensic science is generally defined as the application of science to address questions related to the law. Too often, this view restricts the contribution of science to one single process which eventually aims at bringing individuals to court while minimising risk of miscarriage of justice. In order to go beyond this paradigm, we propose to refocus the attention towards traces themselves, as remnants of a criminal activity, and their information content. We postulate that traces contribute effectively to a wide variety of other informational processes that support decision making in many situations. In particular, they inform actors of new policing strategies who place the treatment of information and intelligence at the centre of their systems. This contribution of forensic science to these security-oriented models is still not well identified and captured. In order to create the best conditions for the development of forensic intelligence, we suggest a framework that connects forensic science to intelligence-led policing (part I). Crime scene attendance and processing can be envisaged within this view. This approach gives indications about how to structure knowledge used by crime scene examiners in their effective practice (part II).

Relevance: 80.00%

Abstract:

In the last few years, a need to account for molecular flexibility in drug-design methodologies has emerged, even if the dynamic behavior of molecular properties is seldom made explicit. For a flexible molecule, it is indeed possible to compute different values for a given conformation-dependent property and the ensemble of such values defines a property space that can be used to describe its molecular variability; a most representative case is the lipophilicity space. In this review, a number of applications of lipophilicity space and other property spaces are presented, showing that this concept can be fruitfully exploited: to investigate the constraints exerted by media of different levels of structural organization, to examine processes of molecular recognition and binding at an atomic level, to derive informative descriptors to be included in quantitative structure-activity relationships and to analyze protein simulations extracting the relevant information. Much molecular information is neglected in the descriptors used by medicinal chemists, while the concept of property space can fill this gap by accounting for the often-disregarded dynamic behavior of both small ligands and biomacromolecules. Property space also introduces some innovative concepts such as molecular sensitivity and plasticity, which appear best suited to explore the ability of a molecule to adapt itself to the environment variously modulating its property and conformational profiles. Globally, such concepts can enhance our understanding of biological phenomena providing fruitful descriptors in drug-design and pharmaceutical sciences.
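The notion of a property space can be made concrete with a toy calculation: given per-conformer values of a conformation-dependent property (e.g., a virtual log P computed for each member of a conformational ensemble), simple summary descriptors capture its spread. The property values below are placeholders, and equating "sensitivity" with the range of the property space is only one possible reading of the concept.

import numpy as np

# Placeholder per-conformer values of a conformation-dependent property
logp_per_conformer = np.array([1.8, 2.1, 2.4, 1.6, 2.9, 2.2, 1.9])

descriptors = {
    "mean": logp_per_conformer.mean(),
    "range": np.ptp(logp_per_conformer),    # one proxy for molecular "sensitivity"
    "std": logp_per_conformer.std(ddof=1),
}
print(descriptors)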

Relevance: 80.00%

Abstract:

Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtails the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we overview the dual pathway model for the treatment of auditory spatial and object information ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing, which are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information are available for multisensory interactions. We then separately consider evidence regarding mechanisms and dynamics of spatial and object processing with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.

Relevance: 80.00%

Abstract:

Twenty microsatellite loci were identified and characterized in common bean. Microsatellites were tested in 14 genotypes. The allele number ranged from 1 to 3, and the polymorphism information content (PIC) was between 0.14 and 0.65. These polymorphic markers are available for use in breeding programs.
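PIC values of this kind are conventionally computed from allele frequencies with the Botstein et al. (1980) formula; a small sketch follows, with illustrative allele frequencies rather than those of the bean genotypes.

from itertools import combinations

def pic(freqs):
    """Polymorphism information content (Botstein et al. 1980) from allele frequencies."""
    assert abs(sum(freqs) - 1.0) < 1e-9
    return (1.0
            - sum(p**2 for p in freqs)
            - sum(2 * p**2 * q**2 for p, q in combinations(freqs, 2)))

print(pic([1.0]))                        # monomorphic locus: 0.0
print(round(pic([0.5, 0.5]), 3))         # two balanced alleles: 0.375
print(round(pic([1/3, 1/3, 1/3]), 3))    # three balanced alleles: ~0.593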

Relevance: 80.00%

Abstract:

The objective of this work was to determine the genetic variability available for triticale (X Triticosecale Wittmack) crop improvement in Brazil. Forty-two wheat genomic microsatellites were used to estimate the molecular diversity of 54 genotypes, which constitute the base of one of the major triticale breeding programs in the country. Average heterozygosity was 0.06, and the average and effective numbers of alleles per locus were 2.13 and 1.61, respectively, with an average allelic frequency of 0.34. The set of genomic wheat microsatellites used clustered the genotypes into seven groups, even though the germplasm originated primarily from only two triticale breeding programs, a fact reflected in the average polymorphism information content value estimated for the germplasm (0.36). The 71.42% transferability achieved for the tested microsatellites indicates the possibility of exploiting these transferable markers, even those mapped on the D genome of wheat, in further triticale genetic and breeding studies when analyzing hexaploid triticales.
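The effective number of alleles quoted above is typically defined as the reciprocal of the expected homozygosity at a locus, alongside Nei's expected heterozygosity. A minimal sketch with illustrative allele frequencies:

def effective_allele_number(freqs):
    """Ne = 1 / sum(p_i^2): equally frequent alleles giving the same homozygosity."""
    return 1.0 / sum(p * p for p in freqs)

def expected_heterozygosity(freqs):
    """Nei's gene diversity: He = 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

freqs = [0.34, 0.33, 0.33]                        # illustrative frequencies at one locus
print(round(effective_allele_number(freqs), 2))   # ~3.0
print(round(expected_heterozygosity(freqs), 2))   # ~0.67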

Relevance: 80.00%

Abstract:

The objectives of this work were to investigate the genetic variation in 79 soybean (Glycine max) accessions from different regions of the world, to cluster the accessions based on their similarity, and to test the correlation between the two types of markers used. Simple sequence repeat markers present in genomic (SSR) and in expressed regions (EST-SSR) were used. Thirty SSR primer pairs were selected (20 genomic and 10 EST-SSR) based on their distribution on the 20 genetic linkage groups of soybean, on their trinucleotide repetition unit, and on their polymorphism information content. All analyzed loci were polymorphic, and 259 alleles were found. The number of alleles per locus varied from 2 to 21, with an average of 8.63. The accessions exhibit a significant number of rare alleles, with genotypes 19, 35, 63 and 65 carrying the greatest number of exclusive alleles. Accessions 75 and 79 were the most similar, and accessions 31 and 35, and 40 and 78, were the most divergent ones. A low correlation between SSR and EST-SSR data was observed; thus, both genomic and expressed microsatellite markers are required for an appropriate analysis of genetic diversity in soybean. The genetic diversity observed was high and allowed the formation of five groups and several subgroups. A moderate relationship between genetic divergence and geographic origin of accessions was observed.
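The reported correlation between the two marker types can be thought of as the correlation between the SSR-based and EST-SSR-based pairwise genetic-distance matrices. The sketch below uses plain Pearson correlation on the off-diagonal entries of two random placeholder matrices; the study does not specify its method, and a Mantel test with permutations would be needed for a significance level.

import numpy as np

rng = np.random.default_rng(7)
n = 79                                           # number of accessions

def random_distance_matrix(n, rng):
    a = rng.random((n, n))
    d = (a + a.T) / 2                            # symmetrise
    np.fill_diagonal(d, 0.0)
    return d

d_ssr = random_distance_matrix(n, rng)           # placeholder genomic-SSR distances
d_est = random_distance_matrix(n, rng)           # placeholder EST-SSR distances

iu = np.triu_indices(n, k=1)                     # upper-triangle (pairwise) entries
r = np.corrcoef(d_ssr[iu], d_est[iu])[0, 1]
print(f"correlation between marker-based distance matrices: {r:.2f}")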

Relevance: 80.00%

Abstract:

This paper addresses primary care physicians, cardiologists, internists, angiologists and doctors seeking to improve vascular risk prediction in primary care. Many cardiovascular risk factors act aggressively on the arterial wall and result in atherosclerosis and atherothrombosis. Cardiovascular prognosis derived from ultrasound imaging is, however, excellent in subjects without formation of intimal thickening or atheromas. Since ultrasound visualises the arterial wall directly, the information derived from the arterial wall may add independent incremental information to the knowledge of risk derived from global risk assessment. This paper provides an overview of plaque imaging for vascular risk prediction in two parts.

Part 1: Carotid IMT is frequently used as a surrogate marker for outcome in intervention studies addressing rather large cohorts of subjects. Carotid IMT as a risk prediction tool for the prevention of acute myocardial infarction and stroke has been extensively studied in many patients since 1987, and has yielded incremental hazard ratios for these cardiovascular events independently of established cardiovascular risk factors. However, carotid IMT measurements are not used uniformly and therefore still lack widely accepted standardisation. Hence, at an individual, practice-based level, carotid IMT is not recommended as a risk assessment tool. The total plaque area of the carotid arteries (TPA) is a measure of the global plaque burden within both carotid arteries. It was recently shown in a large Norwegian cohort involving over 6000 subjects that TPA is a very good predictor of future myocardial infarction in women, with an area under the curve (AUC) of the receiver operating characteristic (ROC) of 0.73 (in men: 0.63). Further, the AUC for risk prediction is high both for vascular death in a vascular prevention clinic group (AUC 0.77) and for fatal or non-fatal myocardial infarction in a true primary care group (AUC 0.79). Since TPA has acceptable reproducibility, allows calculation of post-test risk and is easily obtained at low cost, this risk assessment tool may come into more widespread use in the future and also serve as a tool for atherosclerosis tracking and guidance of the intensity of preventive therapy. However, more studies with TPA are needed.

Part 2: Carotid and femoral plaque formation as detected by ultrasound offers a global view of the extent of atherosclerosis. Several prospective cohort studies have shown that cardiovascular risk prediction is greater for plaques than for carotid IMT. The number of arterial beds affected by significant atheromas may simply be added numerically to derive additional information on the risk of vascular events. A new atherosclerosis burden score (ABS) simply calculates the sum of carotid and femoral plaques encountered during ultrasound scanning. ABS correlates well and independently with the presence of coronary atherosclerosis and stenosis as measured by invasive coronary angiography. However, the prognostic power of ABS as an independent marker of risk still needs to be elucidated in prospective studies.

In summary, the large number of ways to measure atherosclerosis and related changes in human arteries by ultrasound indicates that this technology is not yet sufficiently perfected; it needs more standardisation and further work on clearly defined outcome studies before it can be recommended as a practice-based additional risk modifier.
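The incremental value of an imaging marker such as TPA on top of a clinical risk model is commonly summarised, as above, by the change in the area under the ROC curve. A minimal sketch with synthetic data follows; the cohort, covariates, and effect sizes are invented for illustration and the AUCs are in-sample only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2_000
age = rng.normal(65, 8, n)
smoker = rng.integers(0, 2, n)
tpa = rng.lognormal(3.0, 0.7, n)                       # synthetic total plaque area (mm^2)
logit = -8 + 0.08 * age + 0.6 * smoker + 0.01 * tpa
event = rng.random(n) < 1 / (1 + np.exp(-logit))       # synthetic MI outcome

X_clin = np.column_stack([age, smoker])                # clinical risk factors only
X_full = np.column_stack([age, smoker, tpa])           # clinical factors + TPA

clin_model = LogisticRegression(max_iter=1000).fit(X_clin, event)
full_model = LogisticRegression(max_iter=1000).fit(X_full, event)

auc_clin = roc_auc_score(event, clin_model.predict_proba(X_clin)[:, 1])
auc_full = roc_auc_score(event, full_model.predict_proba(X_full)[:, 1])
print(f"AUC clinical only: {auc_clin:.3f}   AUC + TPA: {auc_full:.3f}")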

Relevance: 80.00%

Abstract:

The objective of this work was to evaluate the efficiency of EST-SSR markers in the assessment of the genetic diversity of rubber tree genotypes (Hevea brasiliensis) and to verify the transferability of these markers to wild species of Hevea. Forty-five rubber tree accessions from the Instituto Agronômico (Campinas, SP, Brazil) and six wild species were used. Information provided by the modified Roger's genetic distance was used to analyze the EST-SSR data. UPGMA clustering divided the samples into two major groups with high genetic differentiation, while the software Structure distributed the 51 clones into eight groups. A parallel could be established between both clustering analyses. The 30 polymorphic EST-SSRs showed from two to ten alleles and were efficient in amplifying the six wild species. Functional EST-SSR microsatellites are efficient in evaluating the genetic diversity among rubber tree clones and can be used to resolve the genetic differences among cultivars and to fingerprint closely related materials. The accessions from the Instituto Agronômico show high genetic diversity. The EST-SSR markers, developed from Hevea brasiliensis, show transferability and are able to amplify other species of Hevea.
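UPGMA clustering of genotypes from a genetic-distance matrix corresponds to average-linkage hierarchical clustering. The sketch below uses SciPy on a random placeholder matrix; substitute the modified Roger's distances computed from the EST-SSR genotyping data.

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
n = 51                                             # number of clones analysed

# Placeholder symmetric genetic-distance matrix
a = rng.random((n, n))
dist = (a + a.T) / 2
np.fill_diagonal(dist, 0.0)

tree = linkage(squareform(dist, checks=False), method="average")   # UPGMA
groups = fcluster(tree, t=2, criterion="maxclust")                 # cut into two major groups
print(np.bincount(groups)[1:])                                      # group sizes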