897 results for Probabilistic Reasoning
Abstract:
In a series of studies, I investigated the developmental changes in children’s inductive reasoning strategy, the methodological manipulations affecting the trajectory, and the driving mechanisms behind the development of category induction. I systematically controlled the nature of the stimuli used, and employed a triad paradigm in which perceptual cues were directly pitted against category membership, to explore under which circumstances children used perceptual or category induction. My induction tasks were designed for children aged 3-9 years using biologically plausible novel items. In Study 1, I tested 264 children. Using a wide age range allowed me to systematically investigate the developmental trajectory of induction. I also created two degrees of perceptual distractor – high and low – and explored whether the degree of perceptual similarity between target and test items altered children’s strategy preference. A further 52 children were tested in Study 2, to examine whether children showing a perceptual bias were in fact basing their choices on maturation categories. A gradual transition was observed from perceptual to category induction. However, this transition could not be due to an inability to inhibit high perceptual distractors, as children of all ages were equally distracted. Nor were children basing their strategy choices on maturation categories. In Study 3, I investigated the effects of category structure (featural vs. relational category rules) and domain (natural vs. artefact) on inductive preference. I tested 403 children. Each child was assigned to either the featural or the relational condition, and completed both a natural-kind and an artefact task. A further 98 children were tested in Study 4, to examine the effect of using stimulus labels during the tasks. I observed the same gradual transition from perceptual to category induction preference in Studies 3 and 4.
This pattern was stable across domains, but children developed a category bias one year later for relational categories, arguably due to the greater demands on executive function (EF) posed by these stimuli. Children who received labels during the task made significantly more category choices than those who did not, possibly due to priming effects. Having investigated influences affecting the developmental trajectory, I went on to explore the driving mechanism behind the development of category induction. In Study 5, I tested 60 children on a battery of EF tasks as well as my induction task. None of the EF tasks predicted inductive variance; therefore EF development is unlikely to be the driving factor behind the transition. Finally, in Study 6, I divided 252 children into either a comparison group or an intervention group. The intervention group took part in an interactive educational session at Twycross Zoo about animal adaptations. Both groups took part in four induction tasks, two before and two a week after the zoo visit. There was a significant increase in the number of category choices made in the intervention condition after the zoo visit, a result not observed in the comparison condition. This highlights the role of knowledge in supporting the transition from perceptual to category induction. I suggest that EF development may support induction development, but the driving mechanism behind the transition is an accumulation of knowledge and an appreciation of the importance of category membership.
Abstract:
Guest editorial: This special issue has been drawn from papers that were published as part of the Second European Conference on Management of Technology (EuroMOT), which was held at Aston Business School (Birmingham, UK), 10-12 September 2006. This was the official European conference for the International Association for Management of Technology (IAMOT); the overall theme of the conference was “Technology and global integration.” There were many high-calibre papers submitted to the conference and published in the associated proceedings (Bennett et al., 2006). The streams of interest that emerged from these submissions were the importance of: technology strategy, innovation, process technologies, managing change, national policies and systems, research and development, supply chain technology, service and operational technology, education and training, small company incubation, technology transfer, virtual operations, technology in developing countries, partnership and alliance, and financing and investment. This special issue focuses upon the streams of interest that accentuate the importance of collaboration between different organisations. Such organisations vary greatly in character; for instance, they may be large or small, publicly or privately owned, and operate in manufacturing or service sectors. Despite these varying characteristics they all have something in common: they all stress the importance of inter-organisational collaboration as a critical success factor for their organisation. In today's global economy it is essential that organisations decide what their core competencies are and what those of complementing organisations are. Core competences should be developed to become a basis of differentiation, leverage and competitive advantage, whilst those that are less mature should be outsourced to other organisations that can claim more recognition and success in that particular core competence (Porter, 2001).
This strategic trend can be observed throughout advanced economies and is growing strongly. If a posteriori reasoning is applied here, it follows that organisations could continue to become more specialised in fewer areas whilst simultaneously becoming more dependent upon other organisations for critical parts of their operations. Such actions seem to fly in the face of rational business strategy, and so the question must be asked: why are organisations developing this way? The answer could lie in the recent changes in endogenous and exogenous factors of the organisation; the former emphasising resource-based issues in the short term and strategic positioning in the long term, whilst the latter emphasises transaction costs in the short term and the acquisition of new skills and knowledge in the long term. For a harmonious balance of these forces to prevail, organisations must first declare a shared meta-strategy, then put cross-organisational processes into place with their routine operations automated as far as possible. A rolling business plan would review, assess and reposition each organisation within this meta-strategy according to how well it has contributed (Binder and Clegg, 2006). The important common issue here is that an increasing number of businesses today are gaining direct benefit from increasing their levels of inter-organisational collaboration. Such collaboration has largely been possible due to recent technological advances which can make organisational structures more agile (e.g. the extended or the virtual enterprise), organisational infrastructure more connected, and the sharing of real-time information an operational reality. This special issue consists of research papers that have explored the above phenomenon in some way.
For instance, the role of government intervention, the use of internet-based technologies, the role of research and development organisations, the changing relationships between start-ups and established firms, the importance of cross-company communities of practice, the practice of networking, the front-loading of large-scale projects, innovation and the probabilistic uncertainties that organisations experience are explored in these papers. The cases cited in these papers are limited as they have a Eurocentric focus. However, it is hoped that readers of this special issue will gain a valuable insight into the increasing importance of collaborative practices via these studies.
Abstract:
This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
Abstract:
Terms such as moral and ethical leadership are used widely in theory, yet little systematic research has related a sociomoral dimension to leadership in organizations. This study investigated whether managers' moral reasoning (n=132) was associated with the transformational and transactional leadership behaviors they exhibited as perceived by their subordinates (n=407). Managers completed the Defining Issues Test (J. R. Rest, 1990), whereas their subordinates completed the Multifactor Leadership Questionnaire (B. M. Bass & B. J. Avolio, 1995). Analysis of covariance indicated that managers scoring in the highest group of the moral-reasoning distribution exhibited more transformational leadership behaviors than leaders scoring in the lowest group. As expected, there was no relationship between moral-reasoning group and transactional leadership behaviors. Implications for leadership development are discussed.
Abstract:
The generation of very short range forecasts of precipitation in the 0-6 h time window is traditionally referred to as nowcasting. Most existing nowcasting systems essentially extrapolate radar observations in some manner; however, very few systems account for the uncertainties involved. Thus deterministic forecasts are produced, which are of limited use when decisions must be made, since they carry no measure of confidence or spread. This paper develops a Bayesian state space modelling framework for quantitative precipitation nowcasting which is probabilistic from conception. The model treats the observations (radar) as noisy realisations of the underlying true precipitation process, recognising that this process can never be completely known, and thus must be represented probabilistically. In the model presented here the dynamics of the precipitation are dominated by advection, so this is a probabilistic extrapolation forecast. The model is designed in such a way as to minimise the computational burden, while maintaining a full, joint representation of the probability density function of the precipitation process. The update and evolution equations avoid the need to sample, so only one model need be run, as opposed to the more traditional ensemble route. It is shown that the model works well on both simulated and real data, but that further work is required before the model can be used operationally. © 2004 Elsevier B.V. All rights reserved.
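The Bayesian state-space approach described in this abstract can be summarised by the standard filtering recursion; the notation below is generic (a sketch of the general framework, not the paper's own equations), with the true precipitation field as the hidden state and the radar image as the noisy observation:

```latex
% x_t: true precipitation field at time t; y_t: radar observation at time t.
% Evolution (prediction) step: propagate the previous posterior through the
% dynamics (here dominated by advection):
p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1}
% Update step: condition on the new, noisy radar observation:
p(x_t \mid y_{1:t}) \propto p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})
```

When both steps are available in closed form, only a single model run is needed, which is what allows the framework to avoid the sampling and ensemble machinery mentioned above.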
Abstract:
This thesis introduces a flexible visual data exploration framework which combines advanced projection algorithms from the machine learning domain with visual representation techniques developed in the information visualisation domain, to help a user explore and understand large multi-dimensional datasets effectively. The advantage of such a framework over other techniques currently available to domain experts is that the user is directly involved in the data mining process and advanced machine learning algorithms are employed for better projection. A hierarchical visualisation model guided by a domain expert allows them to obtain an informed segmentation of the input space. Two other components of this thesis exploit properties of these principled probabilistic projection algorithms to develop a guided mixture of local experts algorithm, which provides robust prediction, and a model to estimate feature saliency simultaneously with the training of a projection algorithm. Local models are useful since a single global model cannot capture the full variability of a heterogeneous data space such as the chemical space. Probabilistic hierarchical visualisation techniques provide an effective soft segmentation of an input space by a visualisation hierarchy whose leaf nodes represent different regions of the input space. We use this soft segmentation to develop a guided mixture of local experts (GME) algorithm which is appropriate for the heterogeneous datasets found in chemoinformatics problems. Moreover, in this approach the domain experts are more involved in the model development process, which is suitable for an intuition- and domain-knowledge-driven task such as drug discovery. We also derive a generative topographic mapping (GTM) based data visualisation approach which estimates feature saliency simultaneously with the training of a visualisation model.
Abstract:
Qualitative reasoning has traditionally been applied in the domain of physical systems, where there are well established and understood laws governing the behaviour of each `component' in the system. Such application has shown that it is possible to produce models which can be used for explaining and predicting the behaviour of physical phenomena and also trouble-shooting. The principles underlying the theory ensure that the models are robust and exhibit consistent behaviour under all conditions. This research examines the validity of applying the theory in the financial domain where such laws may not exist or if they do, may not be universally applicable. In particular, it investigates how far these principles and techniques may be applied in the construction of financial analysis models. Because of the inherent differences in the nature of these two domains, it is argued that a different qualitative value system ought to be employed. The dissertation enlarges on the constraints this places on model descriptions and the effect it may have on the power and usefulness of the resulting models. It also describes the implementation of a system that investigates the implications of applying this theory by way of testing it on situations drawn from both text-books and published financial information.
Abstract:
Knitwear design is a creative activity that is hard to automate using the computer. The production of the associated knitting pattern, however, is repetitive, time-consuming and error-prone, calling for automation. Our objectives are two-fold: to facilitate the design and to ease the burden of calculations and checks in pattern production. We conduct a feasibility study for applying case-based reasoning in knitwear design: we describe appropriate methods and show their application.
Abstract:
Diagnosing faults in wastewater treatment, like diagnosis of most problems, requires bi-directional plausible reasoning. This means that both predictive (from causes to symptoms) and diagnostic (from symptoms to causes) inferences have to be made, depending on the evidence available, in reasoning towards the final diagnosis. The use of computer technology for diagnosing faults in the wastewater process has been explored, and a rule-based expert system was initiated. It was found that such an approach has serious limitations in its ability to reason bi-directionally, which makes it unsuitable for diagnostic tasks under conditions of uncertainty. The probabilistic approach known as Bayesian belief networks (BBNs) was then critically reviewed, and was found to be well suited for diagnosis under uncertainty. The theory and application of BBNs are outlined. A full-scale BBN for the diagnosis of faults in a wastewater treatment plant based on the activated sludge system has been developed in this research. Results from the BBN show good agreement with the predictions of wastewater experts. It can be concluded that BBNs are far superior to rule-based systems based on certainty factors in their ability to diagnose faults and predict system behaviour in complex operating systems with inherently uncertain behaviour.
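The bi-directional inference that the abstract contrasts with rule-based systems can be illustrated on the smallest possible Bayesian network, a single fault node with a single symptom node. All numbers below are hypothetical, chosen only to show the two directions of inference; the thesis's full-scale BBN models an activated-sludge plant with many interacting variables.

```python
# Minimal two-node Bayesian network: fault -> symptom.
# Probabilities are illustrative, not taken from the thesis.

p_fault = 0.1                   # prior probability of the fault (cause)
p_symptom_given_fault = 0.9     # P(symptom | fault)
p_symptom_given_no_fault = 0.2  # P(symptom | no fault)

# Predictive inference (causes -> symptoms): marginal probability of the
# symptom, before any evidence is observed.
p_symptom = (p_symptom_given_fault * p_fault
             + p_symptom_given_no_fault * (1 - p_fault))

# Diagnostic inference (symptoms -> causes): Bayes' rule inverts the
# direction of reasoning once the symptom has been observed.
p_fault_given_symptom = p_symptom_given_fault * p_fault / p_symptom

print(round(p_symptom, 3))              # 0.27
print(round(p_fault_given_symptom, 3))  # 0.333
```

A certainty-factor rule base would need separate, hand-crafted rules for each direction; in a BBN both inferences fall out of the same joint distribution, which is the property the abstract highlights.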
Abstract:
Hierarchical knowledge structures are frequently used within clinical decision support systems as part of the model for generating intelligent advice. The nodes in the hierarchy inevitably have varying influence on the decision-making processes, which needs to be reflected by parameters. If the model has been elicited from human experts, it is not feasible to ask them to estimate the parameters because there will be so many in even moderately sized structures. This paper describes how the parameters could be obtained from data instead, using only a small number of cases. The original method [1] is applied to a particular web-based clinical decision support system called GRiST, which uses its hierarchical knowledge to quantify the risks associated with mental-health problems. The knowledge was elicited from multidisciplinary mental-health practitioners, but the tree has several thousand nodes, all requiring an estimation of their relative influence on the assessment process. The method described in the paper shows how they can be obtained from about 200 cases instead. It greatly reduces the experts’ elicitation burden and has the potential to be generalised to similar knowledge-engineering domains where relative weightings of sibling nodes are part of the parameter space.
Abstract:
Speed's theory makes two predictions for the development of analogical reasoning. Firstly, young children should not be able to reason analogically due to an undeveloped PFC neural network. Secondly, category knowledge enables the reinforcement of structural features over surface features, and thus the development of sophisticated analogical reasoning. We outline existing studies that support these predictions and highlight some critical remaining issues. Specifically, we argue that the development of inhibition must be directly compared alongside the development of reasoning strategies in order to support Speed's account. © 2010 Psychology Press.
Abstract:
A wide range of essential reasoning tasks rely on contradiction identification, a cornerstone of human rationality, communication and debate founded on the inversion of the logical operators "Every" and "Some." A high-density electroencephalographic (EEG) study was performed in 11 normal young adults. The cerebral network involved in the identification of contradiction included the orbito-frontal and anterior-cingulate cortices and the temporo-polar cortices. The event-related dynamic of this network showed an early negative deflection lasting 500 ms after sentence presentation. This was followed by a positive deflection lasting 1.5 s, which was different for the two logical operators. A lesser degree of network activation (either in neuron number or their level of phase locking or both) occurred while processing statements with "Some," suggesting that this was a relatively simpler scenario with one example to be figured out, instead of the many examples or the absence of a counterexample searched for while processing statements with "Every." A self-generated reward system seemed to resonate within the recruited circuitry when the contradiction task was successfully completed.
Abstract:
This paper concerns the problem of agent trust in an electronic market place. We maintain that agent trust involves making decisions under uncertainty and therefore the phenomenon should be modelled probabilistically. We therefore propose a probabilistic framework that models agent interactions as a Hidden Markov Model (HMM). The observations of the HMM are the interaction outcomes and the hidden state is the underlying probability of a good outcome. The task of deciding whether to interact with another agent reduces to probabilistic inference of the current state of that agent given all previous interaction outcomes. The model is extended to include a probabilistic reputation system which involves agents gathering opinions about other agents and fusing them with their own beliefs. Our system is fully probabilistic and hence delivers the following improvements with respect to previous work: (a) the model assumptions are faithfully translated into algorithms, and our system is optimal under those assumptions; (b) it can account for agents whose behaviour is not static over time; and (c) it can estimate the rate at which an agent's behaviour changes. The system is shown to significantly outperform previous state-of-the-art methods in several numerical experiments. Copyright © 2010, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
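The inference task the abstract describes, updating a belief over an agent's hidden state from the stream of interaction outcomes, is the standard HMM forward (filtering) recursion. The sketch below discretises the hidden state to just two levels with hypothetical transition probabilities, purely to show the mechanics; the paper's hidden state is the underlying probability of a good outcome itself.

```python
# Illustrative HMM filter for agent trust. Two hidden states with
# hypothetical emission and transition probabilities (not the paper's model):
# state 0 = "trustworthy" (good outcome with prob 0.8),
# state 1 = "untrustworthy" (good outcome with prob 0.2).

states = [0.8, 0.2]            # P(good outcome | hidden state)
transition = [[0.9, 0.1],      # P(next state | current state): behaviour
              [0.1, 0.9]]      # is allowed to change over time


def filter_trust(outcomes, belief):
    """Update the belief over hidden states after each observed
    interaction outcome (1 = good, 0 = bad)."""
    for y in outcomes:
        # Predict: propagate the belief through the transition model.
        predicted = [sum(belief[i] * transition[i][j] for i in range(2))
                     for j in range(2)]
        # Update: weight each state by the likelihood of the outcome.
        likelihood = [s if y == 1 else 1 - s for s in states]
        unnorm = [likelihood[j] * predicted[j] for j in range(2)]
        z = sum(unnorm)
        belief = [u / z for u in unnorm]
    return belief


belief = filter_trust([1, 1, 1, 0, 1], [0.5, 0.5])
# After mostly good outcomes, most mass sits on the "trustworthy" state.
print(belief[0] > belief[1])  # True
```

Because the transition matrix lets the state drift, the filter naturally discounts old evidence, which is how the model accounts for agents whose behaviour is not static over time.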
Abstract:
This thesis explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. Probabilistic graphical structures combine graph theory and probability theory, and offer numerous advantages for representing domains involving uncertainty, such as the mental-health domain. This thesis builds on those advantages. The Galatean Risk Screening Tool (GRiST) is a psychological model for mental-health risk assessment based on fuzzy sets. The knowledge encapsulated in this psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. The thesis describes how a chain graph can be developed from the psychological model to provide a probabilistic evaluation of risk that complements the one generated by GRiST’s clinical expertise: the GRiST knowledge structure is decomposed into component parts, which are in turn mapped onto equivalent probabilistic graphical structures such as Bayesian belief networks and Markov random fields, producing a composite chain graph that provides a probabilistic classification of risk to complement the expert clinical judgements.
Abstract:
Classification is the most basic method for organizing resources in the physical space, cyber space, socio space and mental space. Creating a unified model that can effectively manage resources in these different spaces is a challenge. The Resource Space Model (RSM) manages versatile resources with a multi-dimensional classification space, and supports generalization and specialization on multi-dimensional classifications. This paper introduces the basic concepts of RSM, and proposes the Probabilistic Resource Space Model (P-RSM) to deal with uncertainty in managing various resources in the different spaces of the cyber-physical society. P-RSM’s normal forms, operations and integrity constraints are developed to support effective management of the resource space. Characteristics of P-RSM are analyzed through experiments. The model also enables various services to be described, discovered and composed across multiple dimensions and abstraction levels with normal-form and integrity guarantees. Some extensions and applications of P-RSM are introduced.
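The core idea of a multi-dimensional classification space can be pictured with a toy data structure: resources indexed by a coordinate on each named dimension, retrievable by fixing any subset of coordinates. This is an illustrative sketch only; the dimension names and resources are invented, and it does not reproduce the RSM/P-RSM formalism, normal forms, or probabilistic extension.

```python
# Toy multi-dimensional classification space. Each resource sits at a point
# whose coordinates are values on the named dimensions; selection fixes any
# subset of dimensions. Dimensions and data are hypothetical examples.

dimensions = ("type", "space", "year")
resources = {
    ("paper", "physical", "2020"): "printed-report-A",
    ("paper", "cyber", "2020"): "pdf-report-A",
    ("dataset", "cyber", "2021"): "sensor-log-B",
}


def select(fixed):
    """Return resources whose coordinates match every fixed dimension."""
    index = {d: i for i, d in enumerate(dimensions)}
    return [r for coords, r in resources.items()
            if all(coords[index[d]] == v for d, v in fixed.items())]


print(select({"space": "cyber"}))      # both cyber-space resources
print(select({"type": "dataset"}))     # ['sensor-log-B']
```

A probabilistic variant in the spirit of P-RSM would replace the exact coordinate match with a degree of membership per classification, so that a resource could belong to several points of the space with different probabilities.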