991 results for belief theory


Relevance: 30.00%

Abstract:

We report on ongoing research to develop a design theory for classes of information systems that allow for work practices that exhibit a minimal harmful impact on the natural environment. We call such information systems Green IS. In this paper we describe the building blocks of our Green IS design theory, which develops prescriptions for information systems that allow for: (1) belief formation, action formation and outcome measurement relating to (2) environmentally sustainable work practices and environmentally sustainable decisions on (3) a macro or micro level. For each element, we specify structural features, symbolic expressions, user abilities and goals required for the affordances to emerge. We also provide a set of testable propositions derived from our design theory and declare two principles of implementation.

Relevance: 30.00%

Abstract:

In this paper, we present a belief propagation (BP) based algorithm for decoding non-orthogonal space-time block codes (STBC) from cyclic division algebras (CDA) having large dimensions. The proposed approach involves message passing on a Markov random field (MRF) representation of the STBC MIMO system. Adoption of the BP approach to decode non-orthogonal STBCs of large dimensions has not been reported so far. Our simulation results show that the proposed BP-based decoding gets increasingly close to SISO AWGN performance as the number of dimensions grows. It also achieves near-capacity turbo-coded BER performance; e.g., with BP decoding of 24 x 24 STBC from CDA using BPSK (i.e., 576 real dimensions) and a rate-1/2 turbo code (i.e., 12 bps/Hz spectral efficiency), coded BER within about 2.5 dB of the theoretical MIMO capacity is achieved.
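A sketch of the underlying idea in generic notation (mine, not necessarily the paper's): for a real-valued MIMO system y = Hx + n with BPSK symbols x_i in {+1, -1}, expanding the Gaussian likelihood yields a pairwise MRF on which BP messages can be passed:

```latex
p(\mathbf{x} \mid \mathbf{y})
  \propto \exp\!\left(-\frac{\lVert \mathbf{y}-\mathbf{H}\mathbf{x}\rVert^{2}}{2\sigma^{2}}\right)
  \propto \prod_{i} \underbrace{\exp\!\left(\frac{x_i\,(\mathbf{H}^{T}\mathbf{y})_i}{\sigma^{2}}\right)}_{\text{node potential }\phi_i(x_i)}
    \;\prod_{i<j} \underbrace{\exp\!\left(-\frac{x_i x_j\,(\mathbf{H}^{T}\mathbf{H})_{ij}}{\sigma^{2}}\right)}_{\text{pairwise potential }\psi_{ij}(x_i,x_j)}
```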

Relevance: 30.00%

Abstract:

In this paper, we consider the application of belief propagation (BP) to achieve near-optimal signal detection in large multiple-input multiple-output (MIMO) systems at low complexity. Large-MIMO architectures based on spatial multiplexing (V-BLAST) as well as non-orthogonal space-time block codes (STBC) from cyclic division algebras (CDA) are considered. We adopt graphical models based on Markov random fields (MRF) and factor graphs (FG). In the MRF-based approach, we use pairwise compatibility functions even though the graphical models of MIMO systems are fully/densely connected. In the FG approach, we employ a Gaussian approximation (GA) of the multi-antenna interference, which significantly reduces complexity while achieving very good performance for large dimensions. We show that i) both the MRF and FG based BP approaches exhibit large-system behavior, getting increasingly close to optimal performance as the number of dimensions grows, and ii) damping of messages/beliefs significantly improves the bit error performance.
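A minimal sketch of damped loopy BP on such a pairwise MRF for BPSK detection; the names, message schedule and damping factor are illustrative assumptions, not the paper's exact decoder:

```python
import numpy as np

def damped_bp_detect(H, y, sigma2, iters=30, alpha=0.5):
    """Damped loopy BP for BPSK detection on a fully connected
    pairwise MRF (illustrative sketch, not the paper's decoder)."""
    n = H.shape[1]
    J = (H.T @ H) / sigma2            # pairwise couplings (log edge potentials)
    h = (H.T @ y) / sigma2            # local fields (log node potentials)
    x = np.array([+1.0, -1.0])        # BPSK alphabet
    msg = np.zeros((n, n, 2))         # msg[i, j]: log-message from i to j
    for _ in range(iters):
        new = np.zeros_like(msg)
        tot = msg.sum(axis=0)         # tot[i]: all incoming log-messages to i
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                inc = tot[i] - msg[j, i]      # incoming to i, except from j
                for s in range(2):            # each candidate value of x_j
                    t = h[i] * x - J[i, j] * x * x[s] + inc
                    new[i, j, s] = np.logaddexp(t[0], t[1])  # sum over x_i
                new[i, j] -= new[i, j].max()  # normalize for stability
        msg = (1 - alpha) * new + alpha * msg  # damping of messages/beliefs
    belief = h[:, None] * x[None, :] + msg.sum(axis=0)
    return np.where(belief[:, 0] >= belief[:, 1], 1.0, -1.0)
```

With alpha = 0 this is plain BP; the abstract's point ii) is that alpha > 0 markedly improves bit error performance on such densely connected graphs.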

Relevance: 30.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
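A small simulation of the setting analyzed above (all parameters are arbitrary choices for illustration): nodes deployed uniformly in the unit square, a random geometric graph with connection radius r, and BFS hop counts from an anchor compared against true Euclidean distances:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
n, r = 1000, 0.07                        # node count and radius (arbitrary)
pts = rng.random((n, 2))                 # uniform deployment in the unit square

# random geometric graph: edge iff Euclidean distance <= r
d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
adj = [np.flatnonzero((d[i] <= r) & (np.arange(n) != i)) for i in range(n)]

# BFS hop counts from a fixed anchor (node 0)
hops = np.full(n, -1)
hops[0] = 0
q = deque([0])
while q:
    u = q.popleft()
    for v in adj[u]:
        if hops[v] < 0:
            hops[v] = hops[u] + 1
            q.append(v)

# the approximation under study: Euclidean distance roughly
# proportional to hop distance in a dense, homogeneous deployment
for h in range(1, 8):
    mask = hops == h
    if mask.any():
        print(h, round(d[0, mask].mean() / r, 2))  # mean distance in hop units
```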

Relevance: 30.00%

Abstract:

This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often display aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.

In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.

The second chapter characterizes a decision maker with sticky beliefs, that is, one who does not update enough in response to information, where 'enough' means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
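A minimal numeric sketch of the representation described, with lam standing in for the stickiness parameter (names and numbers are mine):

```python
import numpy as np

def sticky_update(prior, likelihood, lam):
    """Sticky-belief update: convex combination of the prior and the
    Bayesian posterior; lam = 0 recovers Bayes, lam = 1 never updates."""
    bayes = prior * likelihood
    bayes /= bayes.sum()
    return (1 - lam) * bayes + lam * prior

prior = np.array([0.5, 0.5])             # two states
likelihood = np.array([0.8, 0.2])        # P(signal | state)
print(sticky_update(prior, likelihood, 0.0))   # [0.8, 0.2]  (Bayesian)
print(sticky_update(prior, likelihood, 0.5))   # [0.65, 0.35] (sticky)
```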

The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors: current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as choosing a threshold that determines whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
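A sketch of the threshold mechanism described, under the assumption that 'plausible' means assigning the observation probability within a factor theta of the best prior; theta = 1 then yields maximum likelihood updating and theta near 0 yields generalized Bayesian updating:

```python
import numpy as np

def threshold_update(priors, event, theta):
    """Keep priors judged plausible after observing `event` (a 0/1 mask
    over states), then apply Bayes' rule to each retained prior."""
    probs = np.array([(p * event).sum() for p in priors])
    keep = probs >= theta * probs.max()
    posteriors = []
    for p, pe, k in zip(priors, probs, keep):
        if k and pe > 0:
            posteriors.append(p * event / pe)   # Bayes' rule on this prior
    return posteriors

priors = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.2, 0.6])]
event = np.array([1.0, 1.0, 0.0])        # the observation rules out state 3
print(threshold_update(priors, event, theta=1.0))  # keeps only the ML prior
```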

Relevance: 30.00%

Abstract:

The aim of this paper is to show that Dempster-Shafer evidence theory may be successfully applied to unsupervised classification in multisource remote sensing. The Dempster-Shafer formulation allows for the consideration of unions of classes and for the representation of both imprecision and uncertainty, through the definition of belief and plausibility functions. These two functions, derived from the mass function, are generally chosen in a supervised way. In this paper, the authors describe an unsupervised method, based on the comparison of monosource classification results, to select the classes necessary for Dempster-Shafer evidence combination and to define their mass functions. Data fusion is then performed, discarding invalid clusters (e.g. those corresponding to conflicting information) thanks to an iterative process. The unsupervised multisource classification algorithm is applied to MAC-Europe '91 multisensor airborne campaign data collected over the Orgeval French site. Classification results using different combinations of sensors (TMS and AirSAR) or wavelengths (L- and C-bands) are compared. The performance of data fusion is evaluated in terms of identification of land cover types. The best results are obtained when all three data sets are used. Furthermore, some other combinations of data are tried, and their ability to discriminate between the different land cover types is quantified.
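The belief and plausibility functions mentioned here derive from the mass function in the standard way; a small sketch with an invented two-class frame:

```python
def bel(m, A):
    """Belief: total mass committed to subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Plausibility: total mass not contradicting A."""
    return sum(v for B, v in m.items() if B & A)

# invented example: frame {water, forest}; mass on the union encodes imprecision
m = {frozenset({"water"}): 0.5,
     frozenset({"forest"}): 0.2,
     frozenset({"water", "forest"}): 0.3}
A = frozenset({"water"})
print(bel(m, A), pl(m, A))   # 0.5 0.8  (belief <= plausibility)
```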

Relevance: 30.00%

Abstract:

The study of alternative combination rules in Dempster-Shafer (DS) theory when evidence is in conflict has recently re-emerged as an interesting topic, especially in data/information fusion applications. These studies have mainly focused on investigating which alternative would be appropriate for which conflicting situation, under the assumption that a conflict has already been identified; the issue of detecting (or identifying) conflict among evidence has been ignored. In this paper, we formally define when two basic belief assignments are in conflict. This definition deploys quantitative measures of both the mass of the combined belief assigned to the empty set before normalization and the distance between betting commitments of beliefs. We argue that only when both measures are high is it safe to say the evidence is in conflict. This definition can serve as a prerequisite for selecting appropriate combination rules.
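A sketch of the two quantitative measures the definition combines, on invented mass functions; what counts as 'high' is an application-level choice:

```python
from itertools import chain, combinations

def conj_empty_mass(m1, m2):
    """Mass the unnormalized conjunctive combination assigns to the empty set."""
    return sum(v1 * v2 for A, v1 in m1.items() for B, v2 in m2.items()
               if not (A & B))

def betp(m, frame):
    """Pignistic (betting) probability of each singleton."""
    return {w: sum(v / len(A) for A, v in m.items() if w in A) for w in frame}

def dif_betp(m1, m2, frame):
    """Largest difference in betting commitment over all subsets of the frame."""
    p1, p2 = betp(m1, frame), betp(m2, frame)
    subsets = chain.from_iterable(combinations(frame, k)
                                  for k in range(1, len(frame) + 1))
    return max(abs(sum(p1[w] for w in S) - sum(p2[w] for w in S)) for S in subsets)

frame = ("a", "b", "c")
m1 = {frozenset("a"): 0.8, frozenset(("a", "b", "c")): 0.2}
m2 = {frozenset("b"): 0.8, frozenset(("a", "b", "c")): 0.2}
print(conj_empty_mass(m1, m2))   # 0.64: high mass on the empty set
print(dif_betp(m1, m2, frame))   # 0.8: betting commitments far apart
```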

Relevance: 30.00%

Abstract:

Belief revision characterizes the process of revising an agent's beliefs when receiving new evidence. In the field of artificial intelligence, revision strategies have been extensively studied in the context of logic-based formalisms and probability kinematics. So far, however, there is not much literature on this topic in evidence theory. The combination rules proposed in the theory of evidence, especially Dempster's rule, are symmetric: they rely on the basic assumption that the pieces of evidence being combined are on a par, i.e. play the same role. When one source of evidence is less reliable than another, it is possible to discount it, after which a symmetric combination operation is still used. In the case of revision, by contrast, the idea is to let the prior knowledge of an agent be altered by some input information; the change problem is thus intrinsically asymmetric. Assuming the input information is reliable, it should be retained, whilst the prior information should be changed minimally to that effect. To deal with this issue, this paper defines the notion of revision for the theory of evidence in such a way as to bring together probabilistic and logical views. Several previously proposed revision rules are reviewed, and we advocate one of them as better corresponding to the idea of revision. It is extended to cope with inconsistency between prior and input information. It reduces to Dempster's rule of combination, just as revision in the sense of Alchourrón, Gärdenfors, and Makinson (AGM) reduces to expansion, when the input is strongly consistent with the prior belief function. Properties of this revision rule are also investigated, and it is shown to generalize Jeffrey's rule of updating, Dempster's rule of conditioning and a form of AGM revision.
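For reference, the Dempster rule of conditioning mentioned here takes its standard form (for Pl(B) > 0):

```latex
Pl(A \mid B) = \frac{Pl(A \cap B)}{Pl(B)}, \qquad
Bel(A \mid B) = \frac{Bel(A \cup \overline{B}) - Bel(\overline{B})}{1 - Bel(\overline{B})}
```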

Relevance: 30.00%

Abstract:

This paper aims to explore the relationship between religious identity, acculturation strategies and perceptions of acculturation orientation in the school context amongst young people from minority belief backgrounds. Based on a qualitative study including interviews with 26 young people from religious minority belief backgrounds in Northern Ireland, it is argued that acculturation theory provides a useful lens for understanding how young people from religious minority belief backgrounds navigate majority religious school contexts. Using a qualitative approach to explore acculturation theory enables an in-depth understanding of the inter-relationship between minority belief youth's acculturation strategies and their respective school contexts. As in previous research, integrationist attitudes generally prevailed amongst the minority belief young people in this study. The findings highlight how young people negotiate their religious identities in a complex web of inter-relationships between their minority religious belief community and the mainstream school culture, as represented through peer and staff attitudes, school ethos and practices, and religious education. Young people demonstrated differentiated understandings of acculturation orientations within the school context, which they evaluated on the basis of complex perceptions of educational policy, interpersonal relationships and individuals' motivations. The findings are discussed in view of the acculturation tensions that arose, particularly in relation to the religious education curriculum, and their implications for opt-out provision as stipulated by human rights law.

Relevance: 30.00%

Abstract:

This paper presents an event recognition framework, based on Dempster-Shafer theory, that combines evidence of events from low-level computer vision analytics. The proposed method, employing evidential network modelling of composite events, is able to represent the uncertainty of event output from low-level video analysis and to infer high-level events with semantic meaning along with degrees of belief. The method has been evaluated on videos of subjects entering and leaving a seated area. This has relevance to a number of transport scenarios, such as onboard buses and trains, and also in train stations and airports. Recognition results of 78% and 100% for four composite events are encouraging.

Relevance: 30.00%

Abstract:

Belief revision performs belief change on an agent's beliefs when new evidence (either in the form of a propositional formula or of a total pre-order on a set of interpretations) is received. Jeffrey's rule is commonly used for revising probabilistic epistemic states when new information is probabilistically uncertain. In this paper, we propose a general epistemic revision framework where new evidence takes the form of a partial epistemic state. Our framework extends Jeffrey's rule with uncertain inputs and covers well-known existing frameworks such as ordinal conditional functions (OCF) and possibility theory. We then define a set of postulates that such revision operators shall satisfy and establish representation theorems to characterize those postulates. We show that these postulates reveal common characteristics of various existing revision strategies and are satisfied by OCF conditionalization, Jeffrey's rule of conditioning and possibility conditionalization. Furthermore, when reduced to the belief revision situation, our postulates induce Darwiche and Pearl's postulates C1 and C2.
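Jeffrey's rule, which the framework generalizes, reweights conditional beliefs by the uncertain input; a minimal numeric sketch (partition and numbers invented):

```python
import numpy as np

def jeffrey_update(prior, partition, q):
    """Jeffrey's rule: P'(w) = sum_i q_i * P(w | E_i), where the events
    E_i partition the states and q_i is the new uncertain weight of E_i."""
    post = np.zeros_like(prior)
    for E, qi in zip(partition, q):
        mask = np.asarray(E, dtype=bool)
        post[mask] += qi * prior[mask] / prior[mask].sum()
    return post

prior = np.array([0.3, 0.3, 0.4])
partition = [[1, 1, 0], [0, 0, 1]]       # E1 = {w1, w2}, E2 = {w3}
print(jeffrey_update(prior, partition, q=[0.9, 0.1]))   # [0.45, 0.45, 0.1]
```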

Relevance: 30.00%

Abstract:

Combination rules proposed so far in the Dempster-Shafer theory of evidence, especially Dempster's rule, rely on a basic assumption: the pieces of evidence being combined are considered to be on a par, i.e. to play the same role. When a source of evidence is less reliable than another, it is possible to discount it, after which a symmetric combination operation is still used. In the case of revision, the idea is to let the prior knowledge of an agent be altered by some input information. The change problem is thus intrinsically asymmetric: assuming the input information is reliable, it should be retained, whilst the prior information should be changed minimally to that effect. Although belief revision is already an important subfield of artificial intelligence, so far it has been little addressed in evidence theory. In this paper, we define the notion of revision for the theory of evidence and propose several different revision rules, called the inner and outer revisions, and a modified adaptive outer revision, which better corresponds to the idea of revision. Properties of these revision rules are also investigated.

Relevance: 30.00%

Abstract:

Background: Northern Ireland has the worst oral health in the UK and its children have among the highest levels of tooth decay in Europe (DHSSPS, 2007).
Aim: The aim of this study is to investigate the factors influencing tooth brushing behaviour among Year 6 primary schoolchildren using the Theory of Planned Behaviour (TPB).
Method: Seven semi-structured focus groups involving 56 children were conducted, during which the children were asked about the factors that influence whether or not they brush their teeth. Thematic analysis was used to elicit the belief-based measures for all the TPB constructs.
Results: The findings suggest that children are knowledgeable about their teeth and are aware of the importance of maintaining good oral health, although a number of barriers to consistent tooth brushing exist.
Discussion: The findings will be used to inform stage 2 of the research project: questionnaire development to identify the factors influencing young people's motivations to improve their tooth brushing behaviour and to assess their relative importance.

Relevance: 30.00%

Abstract:

Necessary and sufficient conditions for choice functions to be rational have been intensively studied in the past. However, in these attempts a choice function is completely specified: given any subset of options, called an issue, the best option over that issue is always known, whilst in real-world scenarios very often only a few choices are known. In this paper, we study partial choice functions and investigate necessary and sufficient rationality conditions for situations where only a few choices are known. We prove that our necessary and sufficient condition for partial choice functions boils down to the necessary and sufficient conditions for complete choice functions proposed in the literature. Choice functions have been instrumental in belief revision theory: in most approaches to belief revision, the problem studied can be described simply as the choice of possible worlds compatible with the input information, given an agent's prior belief state. The main effort has been to devise strategies in order to infer the agent's revised belief state. Our study considers the converse problem: given a collection of input information items and their corresponding revision results (as provided by an agent), does there exist a rational revision operation used by the agent and a consistent belief state that may explain the observed results?
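One classical necessary condition for rationality, stated here for single-valued choices, is Sen's contraction condition: if the option chosen from B lies in a subset A of B, it must also be chosen from A. A toy check over partially observed choices (the paper's conditions are more general than this):

```python
def violates_contraction(choices):
    """choices: dict mapping frozenset issues to the observed best option.
    Returns a witness (A, B) if A <= B, c(B) in A, but c(A) != c(B)."""
    for A, cA in choices.items():
        for B, cB in choices.items():
            if A < B and cB in A and cA != cB:
                return (A, B)
    return None

obs = {frozenset({"x", "y"}): "x",
       frozenset({"x", "y", "z"}): "y"}   # y beats x in the larger issue
print(violates_contraction(obs))          # witness: ({'x','y'}, {'x','y','z'})
```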

Relevance: 30.00%

Abstract:

There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
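A greatly simplified sketch of the context-dependent idea: measure pairwise conflict as the empty-set mass of the conjunctive combination, cluster sources whose mutual conflict stays below a threshold, combine within a cluster by Dempster's rule, and fall back to a more cautious rule (plain averaging here, as a stand-in for the alternative) across clusters. The LPMCS construction and quality-of-information heuristics in the paper are considerably richer than this:

```python
def dempster(m1, m2):
    """Dempster's rule: normalized conjunctive combination of two mass functions."""
    out, k = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            C = A & B
            if C:
                out[C] = out.get(C, 0.0) + v1 * v2
            else:
                k += v1 * v2                      # mass sent to the empty set
    return {A: v / (1.0 - k) for A, v in out.items()}

def conflict(m1, m2):
    """Empty-set mass of the unnormalized conjunctive combination."""
    return sum(v1 * v2 for A, v1 in m1.items() for B, v2 in m2.items()
               if not (A & B))

def combine_context(sources, threshold=0.5):
    """Greedy clustering by pairwise conflict; Dempster within clusters,
    averaging across clusters (a stand-in for an alternative rule)."""
    clusters = []
    for m in sources:
        for c in clusters:
            if all(conflict(m, other) < threshold for other in c):
                c.append(m)
                break
        else:
            clusters.append([m])
    fused = []
    for c in clusters:
        acc = c[0]
        for m in c[1:]:
            acc = dempster(acc, m)
        fused.append(acc)
    focal = {A for m in fused for A in m}
    return {A: sum(m.get(A, 0.0) for m in fused) / len(fused) for A in focal}

m1 = {frozenset({"fault"}): 0.7, frozenset({"fault", "ok"}): 0.3}
m2 = {frozenset({"ok"}): 0.8, frozenset({"fault", "ok"}): 0.2}
print(combine_context([m1, m2]))   # high conflict -> separate clusters -> averaged
```

Here the two invented sensor reports conflict strongly (conjunctive empty-set mass 0.56), so they land in separate clusters and are averaged rather than forced through Dempster's rule.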