825 results for Subjective Uncertainty


Relevance:

80.00%

Publisher:

Abstract:

An experiment was conducted to investigate the idea that an important motive for identifying with social groups is to reduce subjective uncertainty, particularly uncertainty on subjectively important dimensions that have implications for the self-concept (e.g., Hogg, 1996; Hogg & Mullin, 1999). When people are uncertain on a dimension that is subjectively important, they self-categorize in terms of an available social categorization and, thus, exhibit group behaviors. To test this general hypothesis, group membership, task uncertainty, and task importance were manipulated in a 2 x 2 x 2 between-participants design (N = 128), under relatively minimal group conditions. Ingroup identification and desire for consensual validation of specific attitudes were the key dependent measures, but we also measured social awareness. All three predictions were supported. Participants identified with their group (H1) and desired consensual validation from ingroup members (H2) when they were uncertain about their judgments on important dimensions, indicating that uncertainty reduction motivated participants to embrace group membership. In addition, identification mediated the interactive effect of the independent variables on consensual validation (H3), and the experimental results were not associated with an increased sense of social awareness and, therefore, were unlikely to represent only behavioral compliance with generic social norms. Implications of this research for the study of cults and totalist groups, and for the explanation of genocide and group violence, are discussed.

Relevance:

70.00%

Publisher:

Abstract:

Representation and quantification of uncertainty in climate change impact studies are a difficult task. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty, and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in some hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules with different conflict-modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents uncertainty associated with each of the SSFI-4 classifications.
These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
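The Dempster-Shafer machinery the abstract relies on (basic probability assignments over focal sets, Dempster's rule of combination, and the belief/plausibility bounds) can be illustrated with a minimal sketch. The drought-class masses below are hypothetical, not the paper's data:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (bpa's), keyed by frozenset
    focal elements, using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict  # normalisation constant
    return {s: w / k for s, w in combined.items()}

def belief(m, hypothesis):
    """Bel(A): total mass of focal elements contained in A."""
    return sum(w for s, w in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(w for s, w in m.items() if s & hypothesis)

# Two hypothetical evidence sources over drought classes {dry, normal, wet}
frame = frozenset({"dry", "normal", "wet"})
m_gcm1 = {frozenset({"dry"}): 0.6, frame: 0.4}
m_gcm2 = {frozenset({"dry", "normal"}): 0.7, frame: 0.3}
m = dempster_combine(m_gcm1, m_gcm2)
print(belief(m, frozenset({"dry"})), plausibility(m, frozenset({"dry"})))
```

The [Bel, Pl] interval is exactly the "quantitative measurement of belief and plausibility" the abstract refers to; the gap between the two numbers measures remaining ignorance.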

Relevance:

70.00%

Publisher:

Abstract:

Two experiments tested the prediction that uncertainty reduction and self-enhancement motivations have an interactive effect on ingroup identification. In Experiment 1 (N = 64), uncertainty and group status were manipulated, and the effect on ingroup identification was measured. As predicted, low-uncertainty participants identified more strongly with a high- than low-status group, whereas high-uncertainty participants showed no preference; and low-status group members identified more strongly under high than low uncertainty, whereas high-status group members showed no preference. Experiment 2 (N = 210) replicated Experiment 1, but with a third independent variable that manipulated how prototypical participants were of their group. As predicted, the effects obtained in Experiment 1 only emerged where participants were highly prototypical. Low prototypicality depressed identification with a low-status group under high uncertainty. The implications of these results for intergroup relations and the role of prototypicality in social identity processes are discussed.

Relevance:

70.00%

Publisher:

Abstract:

Two studies were conducted to examine the impact of subjective uncertainty on conformity to group norms in the attitude-behaviour context. In both studies, subjective uncertainty was manipulated using a deliberative mindset manipulation (McGregor, Zanna, Holmes, & Spencer, 2001). In Study 1 (N = 106), participants were exposed to either an attitude-congruent or an attitude-incongruent in-group norm. In Study 2 (N = 83), participants were exposed to either a congruent, incongruent, or ambiguous in-group norm. A range of attitude-behaviour outcomes, including attitude-intention consistency and change in attitude-certainty, was assessed. In both studies, levels of group-normative behaviour varied as a function of uncertainty condition. In Study 1, conformity to group norms, as evidenced by variations in the level of attitude-intention consistency, was observed only in the high uncertainty condition. In Study 2, exposure to an ambiguous norm had different effects for those in the low and high uncertainty conditions. In the low uncertainty condition, greatest conformity was observed in the attitude-congruent norm condition compared with the attitude-incongruent or ambiguous norm conditions. In contrast, individuals in the high uncertainty condition displayed greatest conformity when exposed to either an attitude-congruent or an ambiguous in-group norm. The implications of these results for the role of subjective uncertainty in social influence processes are discussed. © 2007 The British Psychological Society.

Relevance:

60.00%

Publisher:

Abstract:

The author analyses situations of subjective uncertainty and imprecision that arise when expert information is used, and related questions of representing the uncertainty of expert information by fuzzy sets with rough membership functions. The article discusses the problem of integrating individual expert marks, and the connection between the "degree of imprecision" of aggregate marks and the sensitivity of the measurement scale. Several situations concerning the distribution of membership-function values, and the orientation of decision making toward a concrete task, are also analysed.
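One way to make interval-valued aggregation of expert marks concrete (a sketch in the spirit of rough membership functions, not the author's formalism) is to push undecided experts to the lower and upper bounds of the aggregate:

```python
def aggregate_expert_marks(marks):
    """marks: one membership value in [0, 1] per expert, or None where an
    expert could not decide. Undecided experts count as 0 toward the lower
    bound and 1 toward the upper bound, yielding an interval-valued
    (rough-style) membership estimate."""
    n = len(marks)
    known = sum(m for m in marks if m is not None)
    undecided = sum(1 for m in marks if m is None)
    return known / n, (known + undecided) / n

# Hypothetical panel: four experts, one of whom abstains
lower, upper = aggregate_expert_marks([0.8, 0.6, None, 1.0])
print(lower, upper)  # roughly (0.6, 0.85)
```

The width of the interval grows with the number of abstentions, which is one simple way to expose the "degree of imprecision" of the aggregate mark.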

Relevance:

30.00%

Publisher:

Abstract:

Regional safety program managers face a daunting challenge in the attempt to reduce deaths, injuries, and economic losses that result from motor vehicle crashes. This difficult mission is complicated by the combination of a large perceived need, small budget, and uncertainty about how effective each proposed countermeasure would be if implemented. A manager can turn to the research record for insight, but the measured effect of a single countermeasure often varies widely from study to study and across jurisdictions. The challenge of converting widespread and conflicting research results into a regionally meaningful conclusion can be addressed by incorporating "subjective" information into a Bayesian analysis framework. Engineering evaluations of crashes provide the subjective input on countermeasure effectiveness in the proposed Bayesian analysis framework. Empirical Bayes approaches are widely used in before-and-after studies and "hot-spot" identification; however, in these cases, the prior information was typically obtained from the data (empirically), not subjective sources. The power and advantages of Bayesian methods for assessing countermeasure effectiveness are presented. Also, an engineering evaluation approach developed at the Georgia Institute of Technology is described. Results are presented from an experiment conducted to assess the repeatability and objectivity of subjective engineering evaluations. In particular, the focus is on the importance, methodology, and feasibility of the subjective engineering evaluation for assessing countermeasures.
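The conjugate Beta-Binomial update below is a minimal sketch of how a subjective engineering prior can be combined with observed crash counts in a Bayesian framework; the prior and the counts are hypothetical, and the Georgia Tech evaluation procedure itself is not reproduced here:

```python
def posterior_beta(alpha_prior, beta_prior, affected, total):
    """Conjugate Beta-Binomial update for the proportion of crashes a
    countermeasure could plausibly have prevented. The subjective prior
    Beta(alpha_prior, beta_prior) encodes the engineering evaluation."""
    return alpha_prior + affected, beta_prior + (total - affected)

def beta_mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Hypothetical inputs: engineers' prior Beta(3, 7) (about 30% of crashes
# treatable); an engineering review of 50 crashes judges 20 treatable.
a, b = posterior_beta(3, 7, affected=20, total=50)
print(beta_mean(a, b))
```

The posterior mean sits between the subjective prior mean (0.30) and the observed proportion (0.40), which is the shrinkage behaviour that distinguishes this approach from the purely empirical-Bayes priors the abstract contrasts it with.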

Relevance:

30.00%

Publisher:

Abstract:

There has been a recent spate of high-profile infrastructure cost overruns in Australia and internationally. This is just the tip of a longer-term and more deep-seated problem with initial budget estimating practice, well recognised in both academic research and industry reviews: the problem of uncertainty. A case study of the Sydney Opera House is used to identify and illustrate the key causal factors and system dynamics of cost overruns. It is conventionally the role of risk management to deal with such uncertainty, but the type and extent of the uncertainty involved in complex projects is shown to render established risk management techniques ineffective. This paper considers a radical advance on current budget estimating practice which involves a particular approach to statistical modelling complemented by explicit training in estimating practice. The statistical modelling approach combines the probability management techniques of Savage, which operate on actual distributions of values rather than flawed representations of distributions, and the data pooling technique of Skitmore, where the size of the reference set is optimised. Estimating training employs particular calibration development methods pioneered by Hubbard, which reduce the bias of experts caused by over-confidence and improve the consistency of subjective decision-making. A new framework for initial budget estimating practice is developed based on the combined statistical and training methods, with each technique being explained and discussed.
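Savage's probability management can be sketched as arithmetic on stored vectors of coherent Monte Carlo trials ("SIPs") rather than on summary statistics; the cost components and lognormal parameters below are hypothetical, not the paper's framework:

```python
import random

def make_sip(sampler, trials=1000, seed=3):
    """A SIP (stochastic information packet) in Savage's sense: a stored
    vector of Monte Carlo trials that downstream models combine
    trial-by-trial, rather than a flawed summary such as a single mean."""
    rng = random.Random(seed)
    return [sampler(rng) for _ in range(trials)]

# Hypothetical budget components as right-skewed lognormal SIPs
structure = make_sip(lambda rng: rng.lognormvariate(15, 0.3))
services = make_sip(lambda rng: rng.lognormvariate(14, 0.4), seed=4)

# Adding SIPs trial-by-trial preserves distribution shape; any budget
# quantile (e.g. P80) can then be read directly off the combined vector.
total = [a + b for a, b in zip(structure, services)]
p80 = sorted(total)[int(0.8 * len(total))]
print(p80)
```

Budgeting to a quantile of the combined trial vector, rather than to the sum of component averages, is precisely what avoids the "flaw of averages" that this line of work targets.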

Relevance:

30.00%

Publisher:

Abstract:

Predictions about sensory input exert a dominant effect on what we perceive, and this is particularly true for the experience of pain. However, it remains unclear what component of prediction, from an information-theoretic perspective, controls this effect. We used a vicarious pain observation paradigm to study how the underlying statistics of predictive information modulate experience. Subjects observed judgments that a group of people made in response to a painful thermal stimulus, before receiving the same stimulus themselves. We show that the mean observed rating exerted a strong assimilative effect on subjective pain. In addition, we show that observed uncertainty had a specific and potent hyperalgesic effect. Using computational functional magnetic resonance imaging, we found that this effect correlated with activity in the periaqueductal gray. Our results provide evidence for a novel form of cognitive hyperalgesia relating to perceptual uncertainty, induced here by vicarious observation, with control mediated by the brainstem pain modulatory system.

Relevance:

30.00%

Publisher:

Abstract:

Expectations about the magnitude of impending pain exert a substantial effect on subsequent perception. However, the neural mechanisms that underlie the predictive processes that modulate pain are poorly understood. In a combined behavioral and high-density electrophysiological study we measured anticipatory neural responses to heat stimuli to determine how predictions of pain intensity, and certainty about those predictions, modulate brain activity and subjective pain ratings. Prior to receiving randomized laser heat stimuli at different intensities (low, medium or high) subjects (n=15) viewed cues that either accurately informed them of forthcoming intensity (certain expectation) or not (uncertain expectation). Pain ratings were biased towards prior expectations of either high or low intensity. Anticipatory neural responses increased with expectations of painful vs. non-painful heat intensity, suggesting the presence of neural responses that represent predicted heat stimulus intensity. These anticipatory responses also correlated with the amplitude of the Laser-Evoked Potential (LEP) response to painful stimuli when the intensity was predictable. Source analysis (LORETA) revealed that uncertainty about expected heat intensity involves an anticipatory cortical network commonly associated with attention (left dorsolateral prefrontal, posterior cingulate and bilateral inferior parietal cortices). Relative certainty, however, involves cortical areas previously associated with semantic and prospective memory (left inferior frontal and inferior temporal cortex, and right anterior prefrontal cortex). This suggests that biasing of pain reports and LEPs by expectation involves temporally precise activity in specific cortical networks.

Relevance:

30.00%

Publisher:

Abstract:

An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in terms of probability intervals, may be too complex to interpret for a decision-maker and we, therefore, propose to compute a unique indicator of the likelihood of risk, called confidence index. It explicitly accounts for the decision-maker’s attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
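A propagation step of the kind described, Monte Carlo simulation for the aleatory variable and interval analysis for the epistemic one, can be sketched as follows. The dose model, distributions, and interval below are hypothetical, not the paper's methodology:

```python
import random

def hybrid_propagation(model, aleatory_sampler, epistemic_interval,
                       threshold, n=10000, seed=1):
    """Monte Carlo over the aleatory variable; interval analysis over the
    epistemic one. Returns an interval [P_low, P_high] for
    P(model output > threshold), assuming the model is monotone increasing
    in the epistemic parameter (so the interval endpoints bound the risk)."""
    rng = random.Random(seed)
    lo, hi = epistemic_interval
    exceed_low = exceed_high = 0
    for _ in range(n):
        x = aleatory_sampler(rng)       # one aleatory trial
        exceed_low += model(x, lo) > threshold
        exceed_high += model(x, hi) > threshold
    return exceed_low / n, exceed_high / n

# Hypothetical risk model: dose = exposure * uptake, exposure ~ Exp(1)
# (aleatory), uptake known only to lie in [0.5, 1.5] (epistemic).
p_low, p_high = hybrid_propagation(
    lambda x, u: x * u, lambda rng: rng.expovariate(1.0),
    (0.5, 1.5), threshold=1.0)
print(p_low, p_high)
```

The output is a probability interval rather than a single probability, which is exactly the kind of result the paper's confidence index is designed to condense for the decision-maker.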

Relevance:

30.00%

Publisher:

Abstract:

Previous research has shown that often there is clear inertia in individual decision making---that is, a tendency for decision makers to choose a status quo option. I conduct a laboratory experiment to investigate two potential determinants of inertia in uncertain environments: (i) regret aversion and (ii) ambiguity-driven indecisiveness. I use a between-subjects design with varying conditions to identify the effects of these two mechanisms on choice behavior. In each condition, participants choose between two simple real gambles, one of which is the status quo option. I find that inertia is quite large and that both mechanisms are equally important.

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models.
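A minimal Monte Carlo residual-valuation sketch of the kind used in such viability appraisals is shown below; all input distributions and margins are hypothetical, not the paper's scheme:

```python
import random

def viability_mc(n=20000, seed=7):
    """Hypothetical residual valuation under input uncertainty:
    residual land value = sales revenue - build cost - developer profit.
    Returns the mean residual and the 10th-percentile (downside) residual."""
    rng = random.Random(seed)
    residuals = []
    for _ in range(n):
        revenue = rng.gauss(10_000_000, 1_000_000)  # gross development value
        cost = rng.gauss(6_000_000, 500_000)        # construction cost
        profit = 0.2 * revenue                      # required profit margin
        residuals.append(revenue - cost - profit)
    residuals.sort()
    mean = sum(residuals) / n
    p10 = residuals[int(0.1 * n)]
    return mean, p10

mean, p10 = viability_mc()
print(round(mean), round(p10))
```

Comparing the output variance of a simple aggregated model like this against a disaggregated alternative (separate cost and revenue line items) is the kind of experiment the paper runs to test whether added model complexity changes the viability conclusion.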

Relevance:

30.00%

Publisher:

Abstract:

We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. And lastly, due to the improvements of these modifications, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method was able to significantly outperform traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
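The core computation of a GMM recognition front-end, posterior component responsibilities for an observation, can be sketched in one dimension. The two-component model below is hypothetical; the paper's front-end operates on cepstral feature vectors:

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def gmm_posteriors(x, weights, means, variances):
    """Posterior responsibility of each mixture component for observation x,
    computed with the usual max-subtraction trick for numerical stability."""
    logs = [math.log(w) + gaussian_logpdf(x, m, v)
            for w, m, v in zip(weights, means, variances)]
    peak = max(logs)
    unnorm = [math.exp(l - peak) for l in logs]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical two-component (two-"phoneme") model: the observation 1.9
# lies near the second component's mean, so that component dominates.
post = gmm_posteriors(1.9, [0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
print(post)
```

Soft responsibilities like these, rather than a vector quantizer's single hard codebook index, are what allow the uncertainty modeling in the state decoding stage that the abstract describes.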

Relevance:

30.00%

Publisher:

Abstract:

Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists’ classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification.
Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
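The probabilistic-consensus idea, averaging the label distributions of the training interneurons most similar to the one being classified, can be sketched with plain k-nearest-neighbour averaging. The features, labels, and distances below are hypothetical, and the paper's LBN encoding is not reproduced:

```python
def knn_label_consensus(query, training, k=3):
    """training: list of (feature_vector, label_distribution) pairs, where
    a label distribution maps class names to probabilities. Returns the
    average label distribution of the k nearest neighbours of `query`
    in (Euclidean) morphometric feature space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda item: dist(query, item[0]))[:k]
    labels = set()
    for _, d in nearest:
        labels |= set(d)
    return {lab: sum(d.get(lab, 0.0) for _, d in nearest) / len(nearest)
            for lab in labels}

# Hypothetical 2-D morphometric features and expert label distributions
training = [
    ((1.0, 0.2), {"basket": 0.8, "Martinotti": 0.2}),
    ((0.9, 0.3), {"basket": 0.6, "Martinotti": 0.4}),
    ((0.2, 0.9), {"chandelier": 1.0}),
    ((1.1, 0.1), {"basket": 1.0}),
]
consensus = knn_label_consensus((1.0, 0.15), training, k=3)
print(consensus)
```

A crisp prediction can then be extracted by taking the argmax of the consensus distribution, mirroring the paper's distinction between probabilistic and crisp outputs.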