900 results for Epistemic modality
Abstract:
An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret; we therefore propose to compute a unique indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker’s attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
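The propagation step this abstract describes — Monte Carlo sampling for the aleatory input combined with interval analysis for the epistemic one — can be sketched as follows. This is a minimal illustration, not the paper's own methodology: the model `f`, the grid scan of the epistemic interval, and the exceedance threshold are all assumptions made for the example.

```python
import random

def hybrid_propagate(f, sample_aleatory, epistemic_lo, epistemic_hi,
                     threshold, n_samples=20_000, n_grid=50):
    """Estimate lower/upper bounds on P(f(x, e) > threshold), where
    x is aleatory (sampled by Monte Carlo) and e is epistemic, known
    only to lie in [epistemic_lo, epistemic_hi] (scanned on a grid)."""
    grid = [epistemic_lo + i * (epistemic_hi - epistemic_lo) / (n_grid - 1)
            for i in range(n_grid)]
    exceed_always = 0  # exceeds the threshold for EVERY e -> lower bound
    exceed_some = 0    # exceeds the threshold for SOME e  -> upper bound
    for _ in range(n_samples):
        x = sample_aleatory()
        outputs = [f(x, e) for e in grid]
        if min(outputs) > threshold:
            exceed_always += 1
        if max(outputs) > threshold:
            exceed_some += 1
    return exceed_always / n_samples, exceed_some / n_samples

random.seed(0)
# x ~ Uniform(0, 1) is aleatory; e is only known to lie in [0, 0.5].
lo, hi = hybrid_propagate(lambda x, e: x + e, random.random, 0.0, 0.5, 0.75)
# lo approximates P(x > 0.75) = 0.25, hi approximates P(x > 0.25) = 0.75
```

The resulting probability interval [lo, hi] is exactly the kind of hybrid output the abstract says may then need to be summarised into a single confidence index reflecting the decision-maker's attitude toward ambiguity.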
Abstract:
Credal networks are graph-based statistical models whose parameters take values on a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The result of inferences with such models depends on the irrelevance/independence concept adopted. In this paper, we study the computational complexity of inferences under the concepts of epistemic irrelevance and strong independence. We strengthen complexity results by showing that inferences with strong independence are NP-hard even in credal trees with ternary variables, which indicates that tractable algorithms, including the existing one for epistemic trees, cannot be used for strong independence. We prove that the polynomial time of inferences in credal trees under epistemic irrelevance is not likely to extend to more general models, because the problem becomes NP-hard even in simple polytrees. These results draw a definite line between networks with efficient inferences and those where inferences are hard, and close several open questions regarding the computational complexity of such models.
Abstract:
This paper investigates the computation of lower/upper expectations that must cohere with a collection of probabilistic assessments and a collection of judgements of epistemic independence. New algorithms, based on multilinear programming, are presented, both for independence among events and among random variables. Separation properties of graphical models are also investigated.
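As a toy illustration of what "lower/upper expectations that must cohere with a collection of probabilistic assessments" means: the assessments carve out a set of admissible probability vectors, and the lower/upper expectation is the extreme value of the expectation over that set. The paper's algorithms use multilinear programming; the brute-force grid search below is only a hypothetical stand-in, for a three-outcome variable with interval assessments on each outcome probability.

```python
def bounds_on_expectation(values, intervals, steps=100, eps=1e-9):
    """Brute-force lower/upper expectation of a 3-outcome random variable
    over all probability vectors (p1, p2, p3) summing to 1 that respect
    the interval assessments l_i <= p_i <= u_i."""
    lower, upper = float("inf"), float("-inf")
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            p = (i / steps, j / steps, (steps - i - j) / steps)
            # keep only probability vectors coherent with the assessments
            if all(l - eps <= pi <= u + eps
                   for pi, (l, u) in zip(p, intervals)):
                e = sum(v * pi for v, pi in zip(values, p))
                lower, upper = min(lower, e), max(upper, e)
    return lower, upper

# X takes values 0, 1, 2; each outcome probability is only known to an interval.
lo, hi = bounds_on_expectation([0, 1, 2],
                               [(0.1, 0.3), (0.3, 0.5), (0.2, 0.6)])
# lo = 0.9 (at p = (0.3, 0.5, 0.2)), hi = 1.5 (at p = (0.1, 0.3, 0.6))
```

Adding judgements of epistemic independence on top of such assessments is what turns this linear picture into the multilinear programs the paper studies.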
Abstract:
Revising its beliefs when receiving new information is an important ability of any intelligent system. However, in realistic settings the new input is not always certain. A compelling way of dealing with uncertain input in an agent-based setting is to treat it as unreliable input, which may strengthen or weaken the beliefs of the agent. Recent work focused on the postulates associated with this form of belief change and on finding semantical operators that satisfy these postulates. In this paper we propose a new syntactic approach for this form of belief change and show that it agrees with the semantical definition. This makes it feasible to develop complex agent systems capable of efficiently dealing with unreliable input in a semantically meaningful way. Additionally, we show that imposing restrictions on the input and the beliefs that are entailed allows us to devise a tractable approach suitable for resource-bounded agents or agents where reactiveness is of paramount importance.
Abstract:
Belief revision performs belief change on an agent’s beliefs when new evidence (either in the form of a propositional formula or in the form of a total pre-order on a set of interpretations) is received. Jeffrey’s rule is commonly used for revising probabilistic epistemic states when new information is probabilistically uncertain. In this paper, we propose a general epistemic revision framework where new evidence takes the form of a partial epistemic state. Our framework extends Jeffrey’s rule with uncertain inputs and covers well-known existing frameworks such as ordinal conditional functions (OCF) or possibility theory. We then define a set of postulates that such revision operators should satisfy and establish representation theorems to characterize those postulates. We show that these postulates reveal common characteristics of various existing revision strategies and are satisfied by OCF conditionalization, Jeffrey’s rule of conditioning and possibility conditionalization. Furthermore, when reduced to the belief revision situation, our postulates imply Darwiche and Pearl’s postulates C1 and C2.
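Jeffrey's rule, the probabilistic special case this framework generalizes, redistributes prior mass within each cell of a partition so that the cells carry the new uncertain marginals: P'(w) = q_i · P(w)/P(E_i) for w ∈ E_i. A minimal sketch (the dict-of-worlds encoding is an assumption of this example, not the paper's formalism):

```python
def jeffrey_update(prior, partition, new_marginals):
    """Revise a prior (dict world -> probability) by Jeffrey's rule:
    each partition cell E_i receives new total mass q_i, while the
    prior's relative weights inside each cell are preserved."""
    posterior = {}
    for cell, q in zip(partition, new_marginals):
        cell_mass = sum(prior[w] for w in cell)  # P(E_i)
        for w in cell:
            posterior[w] = q * prior[w] / cell_mass
    return posterior

# Uniform prior over four worlds; new evidence says P(E1) = 0.7, P(E2) = 0.3.
prior = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
posterior = jeffrey_update(prior, [{"a", "b"}, {"c", "d"}], [0.7, 0.3])
# a and b each carry 0.35; c and d each carry 0.15
```

When one q_i equals 1, the rule reduces to ordinary Bayesian conditioning on E_i, which is why Jeffrey revision is the natural baseline that the OCF and possibilistic conditionalizations in the abstract parallel.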
Abstract:
Synesthesia based on visual modalities has been associated with reports of vivid visual imagery. We extend this finding to consider whether other forms of synesthesia are also associated with enhanced imagery, and whether this enhancement reflects the modality of synesthesia. We used self‐report imagery measures across multiple sensory modalities, comparing synesthetes’ responses (with a variety of forms of synesthesia) to those of nonsynesthete matched controls. Synesthetes reported higher levels of visual, auditory, gustatory, olfactory and tactile imagery and a greater level of imagery use. Furthermore, their reported enhanced imagery is restricted to the modalities involved in the individual’s synesthesia. There was also a relationship between the number of forms of synesthesia an individual has and the reported vividness of their imagery, highlighting the need for future research to consider the impact of multiple forms of synesthesia. We also recommend the use of behavioral measures to validate these self‐report findings.
Abstract:
The oscillation of neuronal circuits reflected in the EEG gamma frequency may be fundamental to the perceptual process referred to as binding (the integration of various thoughts and perceptions into a coherent picture). The aim of our study was to expand our knowledge of the developmental course of EEG gamma in the auditory modality. We investigated EEG 40 Hz gamma band responses (35.2 to 43.0 Hz) using an auditory novelty oddball paradigm alone and with a visual-number-series distracter task in 208 participants as a function of age (7 years to adult) at 9 sites across the sagittal and lateral axes (F3, Fz, F4, C3, Cz, C4, P3, Pz, P4). Gamma responses were operationally defined as a change in power or a change in phase synchrony level from baseline within two time windows. The evoked gamma response was defined as a significant change from baseline occurring between 0 and 150 ms after stimulus onset; the induced gamma response was measured from 250 to 750 ms after stimulus onset. A significant evoked gamma band response was found when measuring changes in both power and phase synchrony. The increase in both measures was maximal at frontal regions. Decreases in both measures were found when participants were distracted by a secondary task. For neither measure were developmental effects noted. However, evoked gamma power was significantly enhanced with the presentation of a novel stimulus, especially at the right frontal site (F4); frontal evoked gamma phase synchrony also showed enhancement for novel stimuli, but only for our two oldest age groups (16-18 year olds and adults). Induced gamma band responses also varied with task-dependent cognitive stimulus properties. In the induced gamma power response, target stimuli generated the highest power values at the parietal region in all age groups, while the novel stimuli were always below baseline.
Target stimuli increased induced synchrony in all regions for all participants, but the novel stimulus affected participants selectively, depending on their age and gender. Adult participants, for example, exhibited a reduction in gamma power but an increase in synchrony to the novel stimulus within the same region. Induced gamma synchrony was more sensitive to the gender of the participant than was induced gamma power. While induced gamma power showed little effect of age, gamma synchrony did show age effects. These results confirm that the perceptual process which regulates gamma power is distinct from that which governs the synchronization of neuronal firing, and both gamma power and synchrony are important factors to be considered for the "binding" hypothesis. However, there is surprisingly little effect of age on the absolute levels or distribution of EEG gamma in the age range investigated.
Abstract:
Ever since Sen’s (1993; 1997) criticism of the notion of internal consistency or menu independence of choice, there has been a widespread perception that the standard revealed preference approach to the theory of rational choice has difficulties in coping with the existence of external norms, or with the information a menu of choice might convey to a decision-maker, viz., the epistemic value of a menu. This paper provides a brief survey of possible responses to these criticisms of traditional rational choice theory. It is shown that a novel concept of norm-conditional rationalizability can neatly accommodate external norms within the standard framework of rationalizability theory. Furthermore, we illustrate that there are several ways of incorporating considerations regarding the epistemic value of opportunity sets into a generalized model of rational choice theory.