871 results for "Information in biology"
Abstract:
In previous papers, we have presented a logic-based framework based on fusion rules for merging structured news reports. Structured news reports are XML documents in which the text entries are restricted to individual words or simple phrases, such as names and domain-specific terminology, and to numbers and units. We assume structured news reports do not require natural language processing. Fusion rules are a form of scripting language that defines how structured news reports should be merged. The antecedent of a fusion rule is a call to investigate the information in the structured news reports and the background knowledge, and the consequent is a formula specifying an action to be undertaken to form a merged report. It is expected that a set of fusion rules is defined for any given application. In this paper we extend the approach to handle probability values, degrees of belief, or necessity measures associated with the text entries in the news reports. We present a formal definition for each of these types of uncertainty and explain how they can be handled using fusion rules. We also discuss methods for detecting inconsistencies among sources.
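The kind of fusion rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual rule language: the element names, the report contents, and the choice of the maximum to combine probability values are all invented here.

```python
# Sketch of a fusion rule over two structured news reports (XML) whose
# text entries carry probability values. Antecedent: the reports agree on
# city and weather entry. Consequent: keep the entry with the higher of
# the two probability values.
import xml.etree.ElementTree as ET

report1 = ET.fromstring(
    '<report><city>Madrid</city><weather prob="0.8">sunny</weather></report>')
report2 = ET.fromstring(
    '<report><city>Madrid</city><weather prob="0.6">sunny</weather></report>')

def merge_weather(r1, r2):
    """Apply the fusion rule; return the merged report, or None if the
    antecedent fails (some other rule must then handle the conflict)."""
    w1, w2 = r1.find('weather'), r2.find('weather')
    if r1.findtext('city') == r2.findtext('city') and w1.text == w2.text:
        merged = ET.Element('report')
        ET.SubElement(merged, 'city').text = r1.findtext('city')
        weather = ET.SubElement(
            merged, 'weather',
            prob=str(max(float(w1.get('prob')), float(w2.get('prob')))))
        weather.text = w1.text
        return merged
    return None

merged = merge_weather(report1, report2)
print(ET.tostring(merged, encoding='unicode'))
```

A real rule set would include further rules for the cases where the antecedent fails, e.g. to record both conflicting entries with their respective uncertainty values.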
Abstract:
The sediment sequence from Hasseldala port in southeastern Sweden provides a unique Lateglacial/early Holocene record that contains five different tephra layers. Three of these have been geochemically identified as the Borrobol Tephra, the Hasseldalen Tephra and the 10-ka Askja Tephra. Twenty-eight high-resolution C-14 measurements have been obtained, and three different age models based on Bayesian statistics are employed to provide age estimates for the five different tephra layers. The chrono- and pollen-stratigraphic framework supports the stratigraphic position of the Borrobol Tephra as found in Sweden at the very end of the Older Dryas pollen zone and provides the first age estimates for the Askja and Hasseldalen tephras. Our results, however, highlight the limitations that arise in attempting to establish a robust, chronologically independent lacustrine sequence that can be correlated in great detail to ice core or marine records. Radiocarbon samples are prone to error, and sedimentation rates in lake basins may vary considerably due to a number of factors. Any type of valid and 'realistic' age model therefore has to take these limitations into account and needs to include this information in its prior assumptions. As a result, the age ranges for the specific horizons at Hasseldala port are large, and calendar year estimates differ according to the assumptions of the age model. Not only do these results provide a cautionary note against overdependence on one age model for the derivation of age estimates for specific horizons, but they also demonstrate that precise correlations to other palaeoarchives to detect leads or lags are problematic.
Given the uncertainties associated with establishing age-depth models for sedimentary sequences spanning the Lateglacial period, however, this exercise employing Bayesian probability methods represents the best possible approach and provides the most statistically significant age estimates for the pollen zone boundaries and tephra horizons. Copyright (C) 2006 John Wiley & Sons, Ltd.
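The role of prior assumptions in such age models can be illustrated with a toy Monte Carlo age-depth sketch. The depths, ages and errors below are invented, and real analyses use dedicated Bayesian software (e.g. OxCal or Bacon); the point is only how a monotonicity prior and dating errors together shape the age range of an undated tephra horizon.

```python
# Toy age-depth model: propagate the uncertainty of dated depths to an
# undated tephra horizon by Monte Carlo sampling, with the prior that
# ages must increase with depth.
import random

random.seed(1)
# (depth in cm, calibrated age and 1-sigma error in cal a BP) -- illustrative
dated = [(100, 11300, 80), (150, 11900, 90), (200, 12600, 110)]
tephra_depth = 170  # depth of the undated tephra layer, between the last two

def sample_age(depth, n=10000):
    ages = []
    for _ in range(n):
        draws = [random.gauss(mu, sd) for _, mu, sd in dated]
        if all(a < b for a, b in zip(draws, draws[1:])):  # monotonicity prior
            # linear interpolation between the two bracketing dated depths
            (d0, _, _), (d1, _, _) = dated[1], dated[2]
            a0, a1 = draws[1], draws[2]
            ages.append(a0 + (a1 - a0) * (depth - d0) / (d1 - d0))
    return ages

ages = sorted(sample_age(tephra_depth))
lo, hi = ages[int(0.025 * len(ages))], ages[int(0.975 * len(ages))]
print(f"tephra age 95% interval: {lo:.0f}-{hi:.0f} cal a BP")
```

Changing the prior (e.g. allowing variable sedimentation rates between dated depths) widens or shifts this interval, which is exactly the model dependence the abstract warns about.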
Abstract:
Background
Inferring gene regulatory networks from large-scale expression data is an important problem that has received much attention in recent years. These networks have the potential to provide insights into the causal molecular interactions underlying biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically.
Results
In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm, called C3NET. We compare C3NET with four well-known methods, ARACNE, CLR, MRNET and RN, conducting in-depth numerical ensemble simulations, and demonstrate, also for biological expression data from E. coli, that C3NET performs consistently better than the best-known GRNI methods in the literature. In addition, it also has low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently.
Conclusions
For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract, from high-throughput data, large-scale gene networks that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design not only permits a more intuitive and possibly biological interpretation of its working mechanism but can also yield superior results.
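The "neat design" referred to above can be made concrete. As described in the Results, C3NET combines mutual-information estimates with a maximization step in which each gene keeps at most one edge, the one to its maximally informative neighbour. A minimal sketch of that core step follows; MI estimation and the significance test are assumed to be done upstream, and the toy values are invented.

```python
# Core C3NET step: given a pre-estimated symmetric matrix of pairwise
# mutual-information (MI) values and a boolean significance matrix, each
# gene contributes at most one edge -- to its neighbour with maximal
# significant MI. The inferred network is the union of these edges.
def c3net_core(mi, significant):
    n = len(mi)
    edges = set()
    for i in range(n):
        best, best_mi = None, 0.0
        for j in range(n):
            if i != j and significant[i][j] and mi[i][j] > best_mi:
                best, best_mi = j, mi[i][j]
        if best is not None:
            edges.add(frozenset((i, best)))  # undirected edge
    return edges

# Toy example with 4 genes (values are illustrative, not from the paper)
mi = [[0.0, 0.9, 0.1, 0.2],
      [0.9, 0.0, 0.3, 0.1],
      [0.1, 0.3, 0.0, 0.7],
      [0.2, 0.1, 0.7, 0.0]]
sig = [[v > 0.25 for v in row] for row in mi]
print(sorted(tuple(sorted(e)) for e in c3net_core(mi, sig)))
```

Because each of the n genes contributes at most one edge, the resulting network is sparse by construction, which is one source of the low computational complexity mentioned above.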
Abstract:
Segregation measures have been applied in the study of many societies, and traditionally such measures have been used to assess the degree of division between social and cultural groups across urban areas, wider regions, or perhaps national areas. The degree of segregation can vary substantially from place to place even within very small areas. In this paper the substantive concern is with religious/political segregation in Northern Ireland—particularly the proportion of Protestants (often taken as an indicator of those who wish to retain the union with Britain) to Catholics (often taken as an indicator of those who favour union with the Republic of Ireland). Traditionally, segregation is measured globally—that is, across all units in a given area. A recent trend in spatial data analysis generally, and in segregation analysis specifically, is to assess local features of spatial datasets. The rationale behind such approaches is that global methods may obscure important spatial variations in the property of interest, and thus prevent full use of the data. In this paper the utility of local measures of residential segregation is assessed with reference to the religious/political composition of Northern Ireland. The paper demonstrates marked spatial variations in the degree and nature of residential segregation across Northern Ireland. It is argued that local measures provide highly useful information in addition to that provided in maps of the raw variables and in standard global segregation measures.
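The global/local contrast discussed above can be illustrated with the classic index of dissimilarity, which decomposes naturally into per-unit local components. The counts below are hypothetical, and the local measures used in the paper are more sophisticated than this simple decomposition.

```python
# Global index of dissimilarity D = 0.5 * sum_i |p_i/P - c_i/C| over areal
# units, together with its local (per-unit) components. A global D hides
# which units drive segregation; the local components reveal it.
def dissimilarity(prot, cath):
    P, C = sum(prot), sum(cath)
    local = [0.5 * abs(p / P - c / C) for p, c in zip(prot, cath)]
    return sum(local), local

prot = [800, 100, 450, 50]   # Protestant counts per areal unit (invented)
cath = [100, 700, 400, 300]  # Catholic counts per areal unit (invented)
D, local = dissimilarity(prot, cath)
print(f"global D = {D:.3f}")
for i, d in enumerate(local):
    print(f"unit {i}: local contribution {d:.3f}")
```

Here units 0 and 1 contribute most of the global value, so a map of the local components would show segregation concentrated there, information the single global figure obscures.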
Abstract:
Essential genes are absolutely required for the survival of an organism. The identification of essential genes, besides being one of the most fundamental questions in biology, is also of interest for the emerging science of synthetic biology and for the development of novel antimicrobials. New antimicrobial therapies are desperately needed to treat multidrug-resistant pathogens, such as members of the Burkholderia cepacia complex.
Abstract:
In this paper, we investigate what constitutes the least amount of a priori information on the nonlinearity needed so that the FIR linear part is identifiable in the non-Gaussian input case. Three types of a priori information are considered: quadrant information, point information and local monotonicity information. In all three cases, identifiability is established and corresponding identification algorithms are developed along with their convergence proofs.
Abstract:
Autism is a neuro-developmental disorder defined by atypical social behaviour, of which atypical social attention behaviours are among the earliest clinical markers (Volkmar et al., 1997). Eye tracking studies using still images and movie clips have provided a method for the precise quantification of atypical social attention in ASD. This is generally characterised by diminished viewing of the most socially pertinent regions (eyes), and increased viewing of less socially informative regions (body, background, objects) (Klin et al., 2002; Riby & Hancock, 2008, 2009). Ecological validity within eye tracking studies has become an increasingly important issue. As yet, however, little is known about the precise nature of the atypicalities of social attention in ASD in real life. Objectives: To capture and quantify gaze patterns for children with an ASD within a real-life setting, compared to two Typically Developing (TD) comparison groups. Methods: Nine children with an ASD were compared to two age-matched TD groups – a verbal (N=9) and a non-verbal (N=9) comparison group. A real-life scenario was created involving an experimenter posing as a magician, and consisted of three segments: a conversation segment, a magic trick segment, and a puppet segment. The first segment explored children’s attentional preferences during a real-life conversation; the magic trick segment explored children’s use of the eyes as a communicative cue; and the puppet segment explored attention capture. Finally, part of the puppet segment explored children’s use of facial information in response to an unexpected event. Results: The most striking difference between the groups was the diminished viewing of the eyes by the ASD group in comparison to both control groups. This was found particularly during the conversation segment, but also during the magic trick and puppet segments.
When in conversation, participants with ASD were found to spend a greater proportion of time looking off-screen, in comparison to TD participants. There was also a tendency for the ASD group to spend a greater proportion of time looking at the mouth of the experimenter. During the magic trick segment, despite the fact that the eyes were not predictive of the correct location, both TD comparison groups continued to use the eyes as a communicative cue, whereas the ASD group did not. In the puppet segment, all three groups spent a similar amount of time looking between the puppet and regions of the experimenter’s face. However, in response to an unexpected event, the ASD group were significantly slower to fixate back on the experimenter’s face. Conclusions: The results demonstrate the reduced salience of socially pertinent information for children with ASD in real life, and they provide support for the findings from previous eye tracking studies involving scene viewing. However, the results also highlight a pattern of looking off-screen in both the TD and ASD groups. This eye movement behaviour is likely to be associated specifically with real-life interaction, as it has functional relevance (Doherty-Sneddon et al., 2002). However, the fact that it is significantly increased in the ASD group has implications for their understanding of real-life social interactions.
Abstract:
Current guidelines for the management of cough highlight the value of taking a careful history to establish specific features of the cough, in particular its duration, typical triggers or aggravants, and associated symptoms. Unfortunately, the diagnostic yield from a history alone is poor, and there is a need to understand the pattern of clinical cough in a more precise way. As the technology to record cough in ambulatory settings becomes more sophisticated, the possibility that precise measurement of cough frequency, intensity and acoustic characteristics may offer diagnostically valuable information in individual patients becomes a reality. In this article the current knowledge of the clinical patterns of cough is discussed and the potential for new technology to record cough patterns in a meaningful way is considered.
Abstract:
Decision making is an important element throughout the life-cycle of large-scale projects. Decisions are critical as they have a direct impact upon the success and outcome of a project and are affected by many factors, including the certainty and precision of information. In this paper we present an evidential reasoning framework which applies Dempster-Shafer Theory and its variant, Dezert-Smarandache Theory, to aid decision makers in making decisions where the available knowledge may be imprecise, conflicting and uncertain. This conceptual framework is novel in that natural-language-based information extraction techniques are used to extract and estimate beliefs from diverse textual information sources, rather than assuming these estimates are already given. Furthermore, we describe an algorithm to define a set of maximal consistent subsets before fusion occurs in the reasoning framework. This is important, as inconsistencies between subsets may produce incorrect or adverse results in the decision making process. The proposed framework can be applied to problems involving material selection, and a use case from the engineering domain is presented to illustrate the approach. © 2013 Elsevier B.V. All rights reserved.
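The fusion step at the heart of such a framework is Dempster's rule of combination, which can be sketched as follows. The frame of discernment (material names) and the mass values are invented for illustration; the framework above additionally screens sources into maximal consistent subsets before this combination is applied.

```python
# Dempster's rule of combination for two mass functions. A mass function
# maps subsets of the frame of discernment (here, frozensets of material
# names) to belief masses summing to 1.
def combine(m1, m2):
    """Combine two mass functions, renormalising by the conflict mass
    (the total mass whose focal-set intersection is empty)."""
    out, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {s: w / (1.0 - conflict) for s, w in out.items()}

# Two sources (e.g. beliefs extracted from text) give partial evidence
steel, alu = frozenset({'steel'}), frozenset({'aluminium'})
either = steel | alu
m1 = {steel: 0.7, either: 0.3}
m2 = {alu: 0.4, either: 0.6}
fused = combine(m1, m2)
print({tuple(sorted(s)): round(w, 3) for s, w in fused.items()})
```

The renormalisation step is exactly why highly conflicting sources are problematic, motivating the maximal-consistent-subset screening the abstract describes; Dezert-Smarandache Theory instead redistributes the conflict mass rather than discarding it.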
Abstract:
A free association test was used in the present study to examine the availability and accessibility of positive vs. negative smoking-related information in the long-term memories of smokers. Participants were asked to generate smoking-related associations across a 4-minute interval. Although smokers generated more positive smoking associations than non-smokers, both groups produced a greater number of negative than positive associations overall. Of particular interest was the finding that whilst the ratio of positive to negative associations generated was constant across time in non-smokers, this ratio varied in smokers. Specifically, smokers generated proportionately more of their available positive associations, and proportionately fewer of their negative associations, in the early time interval. It is suggested that these results indicate not only a greater availability of positive smoking associations in smokers compared to non-smokers, but also a greater accessibility. It is proposed that positive smoking associations are more automatically activated than negative associations in smokers, even though smokers generally have more negative associations available.
Abstract:
In spite of the controversy that they have generated, neutral models provide ecologists with powerful tools for creating dynamic predictions about beta-diversity in ecological communities. Ecologists can achieve an understanding of the assembly rules operating in nature by noting when and how these predictions are or are not met. This is particularly valuable for groups of organisms that are challenging to study under natural conditions (e.g., bacteria and fungi). Here, we focused on arbuscular mycorrhizal fungal (AMF) communities and performed an extensive literature search that allowed us to synthesize the information in 19 data sets meeting the minimum requisites for constructing a null hypothesis in terms of the community dissimilarity expected under neutral dynamics. To achieve this task, we calculated the first estimates of neutral parameters for several AMF communities from different ecosystems. Communities were shown either to be consistent with neutrality or to diverge or converge with respect to the levels of compositional dissimilarity expected under neutrality. These data support the hypothesis that divergence occurs in systems where the effect of limited dispersal is overwhelmed by anthropogenic disturbance or extreme biological and environmental heterogeneity, whereas communities converge when systems have the potential for niche divergence within a relatively homogeneous set of environmental conditions. Regarding the cases that were consistent with neutrality, the sampling designs employed may have covered relatively homogeneous environments in which the effects of dispersal limitation overwhelmed minor differences among AMF taxa that would otherwise lead to environmental filtering. Using neutral models, we showed for the first time for a soil microbial group the conditions under which different assembly processes may determine different patterns of beta-diversity.
Our synthesis is an important step showing how the application of general ecological theories to a model microbial taxon has the potential to shed light on the assembly and ecological dynamics of communities.
Abstract:
I study the institution of avoiding hiring one’s own Ph.D. graduates for assistant professorships. I argue that this institution is necessary to create better incentives for researchers to incorporate new information into their studies, facilitating convergence to asymptotic learning of the studied fundamentals.
Abstract:
Many plain text information hiding techniques demand deep semantic processing, and so suffer in reliability. In contrast, syntactic processing is a more mature and reliable technology. Assuming a perfect parser, this paper evaluates a set of automated and reversible syntactic transforms that can hide information in plain text without changing the meaning or style of a document. A large representative collection of newspaper text is fed through a prototype system. In contrast to previous work, the output is subjected to human testing to verify that the text has not been significantly compromised by the information hiding procedure, yielding a success rate of 96% and bandwidth of 0.3 bits per sentence. © 2007 SPIE-IS&T.
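The bit-hiding principle can be illustrated with a toy, string-level stand-in for a reversible syntactic transform. A real system would operate on parse trees from a syntactic parser; this clause-swap heuristic is invented purely for illustration of how applying or withholding a meaning-preserving transform encodes one bit per eligible sentence.

```python
# Toy reversible "syntactic" transform: swap the two clauses of a
# sentence of the form 'A, and B.' into 'B, and A.'. Applying the
# transform encodes bit 1; leaving the sentence alone encodes bit 0.
def transform(sentence):
    body = sentence.rstrip('.')
    left, sep, right = body.partition(', and ')
    if not sep:
        return None  # transform not applicable: sentence carries no bit
    return (right[0].upper() + right[1:] + ', and '
            + left[0].lower() + left[1:] + '.')

def embed(sentences, bits):
    """Hide the bit string in the cover text, one bit per eligible sentence."""
    out, i = [], 0
    for s in sentences:
        t = transform(s)
        if t is not None and i < len(bits):
            out.append(t if bits[i] else s)
            i += 1
        else:
            out.append(s)  # ineligible sentences pass through unchanged
    return out

cover = ["The markets fell, and the dollar rose.",
         "Rain is forecast.",
         "Talks resumed, and tensions eased."]
stego = embed(cover, [1, 0])
print(stego)
```

Because the transform is its own inverse, a receiver who knows the canonical clause order can recover each bit by checking whether the transform was applied, mirroring the reversibility requirement stated in the abstract.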