931 results for bayesian networks
Abstract:
Identifying protein-protein interactions is crucial for understanding cellular functions. Genomic data provides opportunities and challenges in identifying these interactions. We uncover rules for predicting protein-protein interactions using a frequent pattern tree (FPT) approach modified to generate a minimum set of rules (mFPT), with rule attributes constructed from the interaction features of the yeast genomic data. The mFPT prediction accuracy is benchmarked against other commonly used methods such as Bayesian networks and logistic regression under various statistical measures. Our study indicates that mFPT outperforms the other methods in predicting protein-protein interactions for the database used. Based on the rules generated, we predict a new protein-protein interaction complex whose biological function is related to pre-mRNA splicing, as well as new protein-protein interactions within existing complexes.
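The abstract above does not spell out the mFPT algorithm itself; as a rough, hedged illustration of the general idea (mining frequent patterns of binary interaction features and reading them off as interaction rules), here is a minimal Python sketch using mlxtend's fpgrowth. The feature names and the toy table are invented for illustration, not taken from the yeast data.

```python
# A minimal sketch (not the paper's mFPT): mine frequent feature patterns from
# binary protein-pair feature vectors and turn them into "interaction" rules.
# Feature names and the toy data below are illustrative assumptions.
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth

# Each row: one protein pair; columns: binary interaction features + the label.
pairs = pd.DataFrame(
    [
        # co_expressed, shared_function, same_complex, interacts
        [True,  True,  True,  True],
        [True,  True,  False, True],
        [False, True,  True,  True],
        [True,  False, False, False],
        [False, False, False, False],
        [False, True,  False, False],
    ],
    columns=["co_expressed", "shared_function", "same_complex", "interacts"],
)

itemsets = fpgrowth(pairs, min_support=0.3, use_colnames=True)

# Keep frequent itemsets containing the label and report rule confidence
# P(interacts | antecedent) = support(antecedent + {interacts}) / support(antecedent).
support = {frozenset(s): v for s, v in zip(itemsets["itemsets"], itemsets["support"])}
for itemset, sup in support.items():
    if "interacts" in itemset and len(itemset) > 1:
        antecedent = itemset - {"interacts"}
        conf = sup / support[antecedent]
        print(f"{set(antecedent)} -> interacts  (support={sup:.2f}, confidence={conf:.2f})")
```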
Abstract:
C.M. Onyango, J.A. Marchant and R. Zwiggelaar, 'Modelling uncertainty in agricultural image analysis', Computers and Electronics in Agriculture 17 (3), 295-305 (1997)
Abstract:
The application of the formal framework of causal Bayesian Networks to children's causal learning provides the motivation to examine the link between judgments about the causal structure of a system and the ability to make inferences about interventions on components of the system. Three experiments examined whether children are able to make correct inferences about interventions on different causal structures. The first two experiments examined whether children's causal structure and intervention judgments were consistent with one another. In Experiment 1, children aged between 4 and 8 years made causal structure judgments on a three-component causal system followed by counterfactual intervention judgments. In Experiment 2, children's causal structure judgments were followed by intervention judgments phrased as future hypotheticals. In Experiment 3, we explicitly told children what the correct causal structure was and asked them to make intervention judgments. The results of the three experiments suggest that the representations that support causal structure judgments do not easily support simple judgments about interventions in children. We discuss our findings in light of strong interventionist claims that the two types of judgments should be closely linked. © 2011 Cognitive Science Society, Inc.
Abstract:
Discrete Conditional Phase-type (DC-Ph) models are a family of models which represent skewed survival data conditioned on specific inter-related discrete variables. The survival data is modeled using a Coxian phase-type distribution which is associated with the inter-related variables using a range of possible data mining approaches such as Bayesian networks (BNs), the Naïve Bayes Classification method and classification regression trees. This paper utilizes the Discrete Conditional Phase-type model (DC-Ph) to explore the modeling of patient waiting times in an Accident and Emergency Department of a UK hospital. The resulting DC-Ph model takes on the form of the Coxian phase-type distribution conditioned on the outcome of a logistic regression model.
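As a minimal illustration of the survival component of a DC-Ph model, the sketch below samples from a Coxian phase-type distribution: the process moves through a sequence of exponential phases and can be absorbed (exit) from each one. The phase rates and their interpretation are invented, not the fitted A&E values.

```python
# A minimal sketch of sampling from a Coxian phase-type distribution, the building
# block of DC-Ph models. The rates below are illustrative, not fitted hospital values.
import numpy as np

rng = np.random.default_rng(0)

def sample_coxian(lam, mu, size=1, rng=rng):
    """lam[i]: rate of moving from phase i to i+1 (last entry unused);
    mu[i]: absorption (exit) rate from phase i."""
    lam = np.asarray(lam, dtype=float)
    mu = np.asarray(mu, dtype=float)
    times = np.empty(size)
    for k in range(size):
        t, phase = 0.0, 0
        while True:
            out_rate = lam[phase] + mu[phase]
            t += rng.exponential(1.0 / out_rate)           # exponential sojourn in this phase
            if phase == len(mu) - 1 or rng.random() < mu[phase] / out_rate:
                break                                       # absorbed: total time recorded
            phase += 1                                      # otherwise move to the next phase
        times[k] = t
    return times

# Three phases, e.g. triage -> treatment -> discharge paperwork (illustrative only).
waits = sample_coxian(lam=[1.5, 0.8, 0.0], mu=[0.2, 0.3, 1.0], size=10_000)
print(f"mean ≈ {waits.mean():.2f}, 95th percentile ≈ {np.percentile(waits, 95):.2f}")
```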
Abstract:
Local computation in join trees or acyclic hypertrees has been shown to be linked to a particular algebraic structure, called valuation algebra. There are many models of this algebraic structure, ranging from probability theory to numerical analysis, relational databases and various classical and non-classical logics. It turns out that many interesting models of valuation algebras may be derived from semiring-valued mappings. In this paper we study how valuation algebras are induced by semirings and how the structure of the valuation algebra is related to the algebraic structure of the semiring. In particular, c-semirings with idempotent multiplication induce idempotent valuation algebras and therefore permit particularly efficient architectures for local computation. Also important are semirings whose multiplicative semigroup is embedded in a union of groups. They induce valuation algebras with a partially defined division. For these valuation algebras, the well-known architectures for Bayesian networks apply. We also extend the general computational framework to allow derivation of bounds and approximations for cases where exact computation is not feasible.
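A hedged sketch of the two valuation-algebra operations that a commutative semiring (⊕, ⊗) induces, restricted to binary variables for brevity: combination is pointwise ⊗ on the union of the domains, and marginalization ⊕-sums out eliminated variables. The same code runs with the probability semiring (+, ×), as in Bayesian-network inference, and with the max-product semiring; the numbers are illustrative.

```python
# A minimal sketch (assumptions: binary variables, dict-based valuations) of how a
# commutative semiring (oplus, otimes) induces the two valuation-algebra operations
# used by local computation: combination and marginalization.
from itertools import product

def combine(phi, psi, otimes):
    """Pointwise otimes of two valuations on the union of their domains."""
    dom_phi, dom_psi = phi["dom"], psi["dom"]
    dom = tuple(sorted(set(dom_phi) | set(dom_psi)))
    table = {}
    for cfg in product([0, 1], repeat=len(dom)):
        c = dict(zip(dom, cfg))
        x = phi["table"][tuple(c[v] for v in dom_phi)]
        y = psi["table"][tuple(c[v] for v in dom_psi)]
        table[cfg] = otimes(x, y)
    return {"dom": dom, "table": table}

def marginalize(phi, keep, oplus):
    """oplus-sum out all variables of phi not in `keep`."""
    dom = tuple(v for v in phi["dom"] if v in keep)
    table = {}
    for cfg, val in phi["table"].items():
        key = tuple(c for v, c in zip(phi["dom"], cfg) if v in keep)
        table[key] = oplus(table[key], val) if key in table else val
    return {"dom": dom, "table": table}

# Two valuations over binary variables A, B and B, C (numbers are illustrative).
phi = {"dom": ("A", "B"), "table": {(0, 0): .9, (0, 1): .1, (1, 0): .4, (1, 1): .6}}
psi = {"dom": ("B", "C"), "table": {(0, 0): .7, (0, 1): .3, (1, 0): .2, (1, 1): .8}}

# Probability semiring (+, x): eliminate A and B, as in Bayesian-network inference.
print(marginalize(combine(phi, psi, lambda a, b: a * b), {"C"}, lambda a, b: a + b))
# Max-product semiring (max, x): value of the most favourable configuration for each C.
print(marginalize(combine(phi, psi, lambda a, b: a * b), {"C"}, max))
```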
Abstract:
This paper investigates probabilistic logics endowed with independence relations. We review propositional probabilistic languages without and with independence. We then consider graph-theoretic representations for propositional probabilistic logic with independence; complexity is analyzed, algorithms are derived, and examples are discussed. Finally, we examine a restricted first-order probabilistic logic that generalizes relational Bayesian networks.
Abstract:
In this paper, we present a hybrid BDI-PGM framework, in which PGMs (Probabilistic Graphical Models) are incorporated into a BDI (belief-desire-intention) architecture. This work is motivated by the need to address the scalability and noisy sensing issues in SCADA (Supervisory Control And Data Acquisition) systems. Our approach uses the incorporated PGMs to model the uncertainty reasoning and decision making processes of agents situated in a stochastic environment. In particular, we use Bayesian networks to reason about an agent’s beliefs about the environment based on its sensory observations, and select optimal plans according to the utilities of actions defined in influence diagrams. This approach takes advantage of the scalability of the BDI architecture and the uncertainty reasoning capability of PGMs. We present a prototype of the proposed approach using a transit scenario to validate its effectiveness.
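As a rough, hedged sketch of the belief-update/plan-selection loop described above (not the paper's framework): a two-node Bayesian network in pgmpy updates the agent's belief about a fault from a noisy sensor reading, and the influence-diagram step is reduced to a hand-written expected-utility table for brevity. Node names, CPDs and utilities are invented.

```python
# A minimal sketch of belief update + utility-based plan selection (illustrative only).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# A fault node and a noisy sensor observing it (CPDs are invented).
model = BayesianNetwork([("Fault", "SensorAlarm")])
model.add_cpds(
    TabularCPD("Fault", 2, [[0.95], [0.05]]),                       # P(Fault)
    TabularCPD("SensorAlarm", 2, [[0.9, 0.2], [0.1, 0.8]],          # P(Alarm | Fault)
               evidence=["Fault"], evidence_card=[2]),
)

# Belief update from a sensory observation: the alarm fired.
belief = VariableElimination(model).query(["Fault"], evidence={"SensorAlarm": 1},
                                          show_progress=False)
posterior = belief.values                                           # [P(no fault), P(fault)]

# Stand-in for the influence-diagram step: pick the plan with maximal expected utility.
utility = {"shut_down": [-10.0, 50.0], "keep_running": [5.0, -100.0]}   # [no fault, fault]
best_plan = max(utility, key=lambda plan: float(posterior @ utility[plan]))
print(posterior, "->", best_plan)
```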
Abstract:
This work proposes an extended version of the well-known tree-augmented naive Bayes (TAN) classifier where the structure learning step is performed without requiring features to be connected to the class. Based on a modification of Edmonds' algorithm, our structure learning procedure explores a superset of the structures that are considered by TAN, yet achieves global optimality of the learning score function in a very efficient way (quadratic in the number of features, the same complexity as learning TANs). We enhance our procedure with a new score function that only takes into account arcs that are relevant to predict the class, as well as an optimization over the equivalent sample size during learning. These ideas may be useful for structure learning of Bayesian networks in general. A range of experiments shows that we obtain models with better prediction accuracy than naive Bayes and TAN, and comparable to the accuracy of the state-of-the-art classifier averaged one-dependence estimator (AODE). We release our implementation of ETAN so that it can be easily installed and run within Weka.
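The Edmonds-based ETAN search is not reproduced here; as a hedged sketch of the classic TAN building block it generalizes, the code below estimates class-conditional mutual information between binary features and connects the features by a maximum-weight spanning tree (Prim's algorithm). The synthetic data is invented for illustration.

```python
# A minimal sketch of the classic TAN structure step (not the paper's ETAN procedure):
# maximum-weight spanning tree over class-conditional mutual information I(Xi; Xj | C).
import numpy as np

rng = np.random.default_rng(1)
n = 500
C = rng.integers(0, 2, n)                                    # class labels
x0 = np.where(rng.random(n) < 0.8, C, 1 - C)                 # noisy copy of the class
x1 = np.where(rng.random(n) < 0.9, x0, 1 - x0)               # depends on x0 even given C
x2 = np.where(rng.random(n) < 0.7, C, rng.integers(0, 2, n))
x3 = rng.integers(0, 2, n)                                   # pure noise
X = np.column_stack([x0, x1, x2, x3])
k = X.shape[1]

def cond_mi(xi, xj, c):
    """Empirical conditional mutual information I(Xi; Xj | C) for binary variables."""
    mi = 0.0
    for cv in (0, 1):
        m = c == cv
        pc = m.mean()
        if pc == 0:
            continue
        for a in (0, 1):
            for b in (0, 1):
                pab = np.mean((xi[m] == a) & (xj[m] == b))
                pa, pb = np.mean(xi[m] == a), np.mean(xj[m] == b)
                if pab > 0:
                    mi += pc * pab * np.log(pab / (pa * pb))
    return mi

W = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        W[i, j] = W[j, i] = cond_mi(X[:, i], X[:, j], C)

# Prim's algorithm: maximum-weight spanning tree over the features.
in_tree, edges = {0}, []
while len(in_tree) < k:
    i, j = max(((i, j) for i in in_tree for j in range(k) if j not in in_tree),
               key=lambda e: W[e])
    in_tree.add(j)
    edges.append((i, j))   # in a TAN these arcs are oriented away from a root,
                           # and every feature additionally gets the class as a parent
print("spanning-tree arcs over features:", edges)
```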
Abstract:
Recently there has been increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and architectural complexity). Once a model has been learned with such a method, the problem is how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Unfortunately, the standard tests used for this purpose are not able to jointly consider multiple performance measures. The aim of this paper is to resolve this issue by developing statistical procedures that are able to account for multiple competing measures at the same time. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, since the number of studied cases is usually small in such comparisons. Real data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
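A hedged sketch of the multinomial-Dirichlet idea only (the paper's procedure additionally exploits conditional independences among measures): each dataset yields one joint outcome over two measures, a Dirichlet posterior is placed on the outcome probabilities, and a posterior quantity of interest is estimated by sampling. The counts and the decision criterion below are invented.

```python
# A minimal sketch of a multinomial-Dirichlet joint comparison of two methods on two
# measures at once (accuracy and complexity). The outcome counts are invented.
import numpy as np

rng = np.random.default_rng(0)

# Joint outcome per dataset: does method A win on accuracy / on complexity?
outcomes = ["A wins both", "A wins accuracy only", "A wins complexity only", "A wins neither"]
counts = np.array([11, 4, 3, 2])                      # one joint outcome per dataset
prior = np.ones(4)                                    # uniform Dirichlet prior

theta = rng.dirichlet(prior + counts, size=100_000)   # samples from the posterior
p_dominant = (theta[:, 0] > 0.5).mean()               # e.g. P(theta_"A wins both" > 0.5)
print(f"posterior P(theta_'{outcomes[0]}' > 0.5) = {p_dominant:.3f}")
```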
Abstract:
The evaluation of forensic evidence can occur at any level within the hierarchy of propositions, depending on the question being asked and the amount and type of information that is taken into account within the evaluation. Commonly, DNA evidence is reported given propositions at the sub-source level of the hierarchy, which deals only with the possibility that a nominated individual is a source of the DNA in a trace (or a contributor to the DNA in the case of a mixed DNA trace). We explore the use of information obtained from examinations, presumptive and discriminating tests for body fluids, DNA concentrations and some case circumstances within a Bayesian network in order to provide assistance to the courts that have to consider propositions at source level. We use a scenario in which the presence of blood is of interest as an exemplar and consider how DNA profiling results and the potential for laboratory error can be taken into account. We finish with examples of how the results of these reports could be presented in court using either numerical values or verbal descriptions of the results.
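As a heavily simplified, hedged numerical illustration (not the paper's Bayesian network), the sketch below combines a presumptive blood test and a DNA comparison into a likelihood ratio for source-level propositions; all probabilities, including the laboratory-error allowance and the random-match probability, are invented.

```python
# A minimal numerical sketch, far simpler than the paper's Bayesian network, with
# invented probabilities. Hp: the blood in the trace came from the suspect;
# Hd: it came from an unknown person.
p_pos_given_blood = 0.95        # presumptive test sensitivity
p_pos_given_not_blood = 0.10    # false-positive rate on non-blood material
p_blood_given_Hp = 0.99         # trace contains blood if Hp is true (scenario-specific)
p_blood_given_Hd = 0.50
p_match_given_Hp = 0.999        # allows for a small laboratory error probability
p_match_given_Hd = 1e-9         # random match probability for the profile

def p_evidence(p_blood, p_match):
    # Evidence: positive presumptive test AND matching DNA profile
    # (treated as independent given the hypothesis, a simplifying assumption).
    p_pos = p_blood * p_pos_given_blood + (1 - p_blood) * p_pos_given_not_blood
    return p_pos * p_match

LR = p_evidence(p_blood_given_Hp, p_match_given_Hp) / p_evidence(p_blood_given_Hd, p_match_given_Hd)
print(f"likelihood ratio ≈ {LR:.3g}")
```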
Abstract:
We propose a probabilistic approach for determining the impact of changes in object-oriented programs. The approach predicts, for a given change in a class of the system, the set of other classes potentially affected by that change. This prediction is given as a probability that depends, on the one hand, on the interactions between classes expressed in terms of the number of invocations and, on the other hand, on the relations extracted from the source code. These relations are extracted automatically by reverse engineering. To implement our approach, we rely on Bayesian networks. After a learning phase, these networks predict the set of classes affected by a change. The proposed probabilistic approach is evaluated with two distinct scenarios involving several types of changes applied to different systems. For systems with historical data, learning was performed from earlier versions. For systems without enough data about changes in previous versions, learning was performed using data extracted from other systems.
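A hedged baseline sketch, not the thesis's Bayesian-network learner: the conditional probability that one class must change given a change in another can be estimated directly from historical co-change frequencies. The commit history below is invented.

```python
# A minimal frequency-based baseline for change-impact prediction (illustrative data).
from collections import Counter
from itertools import permutations

commits = [                      # each commit: the set of classes changed together
    {"Order", "Invoice"},
    {"Order", "Invoice", "Customer"},
    {"Order", "Cart"},
    {"Customer"},
    {"Invoice", "Customer"},
]

changed = Counter()
co_changed = Counter()
for commit in commits:
    changed.update(commit)
    co_changed.update(permutations(commit, 2))   # ordered pairs changed in the same commit

def p_impact(a, b):
    """Empirical P(b changes | a changes)."""
    return co_changed[(a, b)] / changed[a]

print(f"P(Invoice impacted | Order changed) = {p_impact('Order', 'Invoice'):.2f}")
```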
Abstract:
Modeling the user's experience in human-computer interaction is an important challenge for the design and development of intelligent adaptive systems. In this context, particular attention is paid to the user's emotional reactions, because they have a decisive influence on cognitive abilities such as perception and decision making. Modeling emotions is particularly relevant for Emotionally Intelligent Tutoring Systems (EITS). These systems seek to identify the learner's emotions during learning sessions and to optimize the interaction experience through various intervention strategies. This thesis aims to improve the emotion modeling methods and the emotional strategies currently used by EITS to act on the learner's emotions. More precisely, our first objective was to propose a new method for detecting the learner's emotional state, using different sources of information that allow emotions to be measured precisely, while taking into account the individual variables that can affect how emotions are expressed. To this end, we developed a multimodal approach combining several physiological measures (brain activity, galvanic skin response and heart rate) with individual variables, in order to detect an emotion very frequently observed during learning sessions, namely uncertainty. We first identified the key physiological indicators associated with this state, as well as the individual characteristics that contribute to its expression. We then developed predictive models that automatically detect this state from the different variables analysed, by training machine learning algorithms. Our second objective was to propose a unified approach for simultaneously recognizing a combination of several emotions, and for explicitly assessing the impact of these emotions on the learner's interaction experience. For this, we developed a hierarchical, probabilistic and dynamic framework for tracking the learner's emotional changes over time and automatically inferring the general trend that characterizes the interaction experience, namely flow, stuck or disengagement. Flow corresponds to an optimal experience: a state in which the learner is fully focused on and involved in the learning activity. The stuck state corresponds to a non-optimal interaction trend in which the learner has difficulty concentrating. Finally, disengagement corresponds to an extremely unfavourable state in which the learner is no longer involved in the learning activity at all. The proposed framework integrates three modalities of diagnostic variables for assessing the learner's experience, namely physiological variables, behavioural variables and performance measures, combined with predictive variables representing the current context of the interaction and the learner's personal characteristics. A study was carried out to validate our approach through an experimental protocol designed to deliberately elicit the three targeted trends while learners interacted with different learning environments.
Finally, our third objective was to propose new strategies for positively influencing the learner's emotional state without interrupting the dynamics of the learning session. To this end, we introduced the concept of implicit emotional strategies: a new approach for acting subtly on the learner's emotions in order to improve the learning experience. These strategies rely on subliminal perception, and more precisely on a technique known as affective priming. This technique unconsciously engages the learner's emotions through the projection of primes carrying particular affective connotations. We implemented an implicit emotional strategy using a particular form of affective priming, namely evaluative conditioning, which is intended to unconsciously improve self-esteem. An experimental study was carried out to evaluate the impact of this strategy on learners' emotional reactions and performance.
Abstract:
We investigate solution sets of a special kind of linear inequality systems. In particular, we derive characterizations of these sets in terms of minimal solution sets. The studied inequalities emerge as information inequalities in the context of Bayesian networks. This makes it possible to deduce important properties of Bayesian networks, which is of interest for causal inference.
Abstract:
The paper discusses the maintenance challenges of organisations with a large number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes connectivity of the instruments so that they can report relevant features for monitoring. It also requires enough historical records with diagnosed breakdowns to make the probabilistic models reliable and useful for predictive maintenance strategies. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and Dynamic Bayesian Networks are proposed to model the cause-effect relationships that trigger predictive maintenance services, based on the influence between observed features and previously documented diagnostics.
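As a minimal illustration of the availability part of the proposal, a two-state (up/down) Markov model with estimated failure rate λ and repair rate μ has steady-state availability A = μ/(λ+μ); the rates below are invented.

```python
# A minimal sketch: steady-state availability of a two-state (up/down) Markov model.
lam = 1 / 2000      # estimated failure rate: one failure every ~2000 h (illustrative)
mu = 1 / 8          # estimated repair rate: mean repair time of 8 h (illustrative)
availability = mu / (lam + mu)
print(f"steady-state availability ≈ {availability:.5f}")   # ≈ 0.99602
```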
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project.

1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required.

2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed to identify weed species by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition techniques, and in particular Bayesian networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish the weed species of interest.

3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced.

If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally-beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines.

Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
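As a hedged stand-in for the proposed classification step (the project's own feature set and Bayesian-network models would differ), the sketch below trains a Gaussian naive Bayes classifier, i.e. a Bayesian network with a fixed class-to-feature structure, on invented leaf measurements for three of the target species.

```python
# A minimal stand-in for the weed classification step over invented leaf measurements.
# Real features (shape, vein structure, colour, texture) would come from the captured,
# geo-referenced images rather than this synthetic generator.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)

def fake_leaves(mean_len, mean_ratio, n=50):
    # columns: leaf length (mm), length/width ratio (both invented)
    return np.column_stack([rng.normal(mean_len, 5, n), rng.normal(mean_ratio, 0.3, n)])

X = np.vstack([fake_leaves(120, 8.0), fake_leaves(200, 12.0), fake_leaves(40, 1.2)])
y = np.repeat(["black-grass", "wild oats", "cleavers"], 50)

clf = GaussianNB().fit(X, y)
print(clf.predict([[130, 9.0], [45, 1.0]]))   # -> likely black-grass, cleavers
```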