Abstract:
This panel will discuss the research being conducted, and the models being used, in three current coastal EPA studies of ecosystem services in Tampa Bay, the Chesapeake Bay, and the Coastal Carolinas. These studies are intended to provide a broader and more comprehensive approach to policy and decision-making affecting coastal ecosystems, as well as to provide an account of valued services that have heretofore been largely unrecognized. Interim research products, including updated and integrated spatial data, models and model frameworks, and interactive decision support systems, will be demonstrated to engage potential users and to elicit feedback. It is anticipated that the near-term impact of the projects will be to increase awareness among coastal communities and coastal managers of the implications of their actions and to foster partnerships for ecosystem services research and applications. (PDF contains 4 pages)
Abstract:
Population pressure in coastal New Hampshire challenges land use decision-making and threatens the ecological health and functioning of Great Bay, an estuary designated as both a NOAA National Estuarine Research Reserve and an EPA National Estuary Program site. The regional population in the seacoast has quadrupled in four decades, resulting in sprawl, increased impervious surface cover, and larger-lot rural development (Zankel et al., 2006). All of Great Bay’s contributing watersheds face these challenges, resulting in calls for strategies addressing growth, development, and land use planning. The communities within the Lamprey River watershed comprise this case study. Do these towns communicate upstream and downstream when making land use decisions? Are cumulative effects considered while debating development? Do town land use groups consider the Bay or the coasts in their decision-making? This presentation, a follow-up from the TCS 2008 conference and a completed dissertation, will discuss a novel social science approach to analyzing and understanding the social landscape of land use decision-making in the towns of the Lamprey River watershed. The methods include semi-structured interviews with GIS-based maps in a grounded theory analytical strategy. The discussion will include key findings, opportunities, and challenges in moving towards a watershed approach to land use planning. This presentation reviews the results of the case study and the developed methodology, which can be used in watersheds elsewhere to map out the potential for moving towns towards ecosystem-based management (EBM) and watershed-scaled land use planning. (PDF contains 4 pages)
Abstract:
Coastal managers need accessible, trusted, tailored resources to help them interpret climate information, identify vulnerabilities, and apply climate information to decisions about adaptation at regional and local levels. For decades, climate scientists have studied the impacts that short-term natural climate variability and long-term climate change will have on coastal systems. For example, recent estimates based on Intergovernmental Panel on Climate Change (IPCC) warming scenarios suggest that global sea levels may rise 0.5 to 1.4 meters above 1990 levels by 2100 (Rahmstorf 2007; Grinsted, Moore, and Jevrejeva 2009). Many low-lying coastal ecosystems and communities will experience more frequent salt water intrusion events, more frequent coastal flooding, and accelerated erosion rates before they experience significant inundation. These changes will affect the ways coastal managers make decisions, such as timing surface and groundwater withdrawals, replacing infrastructure, and planning for changing land use at local and regional levels. Despite its potential value, managers’ use of scientific information about climate variability and change remains limited in environmental decision-making (Dow and Carbone 2007). The traditional methods scientists use to disseminate climate information, such as peer-reviewed journal articles and conference presentations, are ill-suited to decision-makers’ need for accessible, relevant climate information that can be applied to decision-making. General guides that help managers scope out vulnerabilities and risks are becoming more common; for example, Snover et al. (2007) outlines a basic process for local and state governments to assess climate change vulnerability and preparedness. However, there are few tools available to support more specific decision-making needs. A recent survey of coastal managers in California suggests that boundary institutions can help to fill the gaps between climate science and the coastal decision-making community (Tribbia and Moser 2008). The National Sea Grant College Program, the National Oceanic and Atmospheric Administration's (NOAA) university-based program for supporting research and outreach on coastal resource use and conservation, is one such institution working to bridge these gaps through outreach. Over 80% of Sea Grant’s 32 programs are addressing climate issues, and over 60% of programs increased their climate outreach programming between 2006 and 2008 (National Sea Grant Office 2008). One way that Sea Grant is working to assist coastal decision-makers in using climate information is by developing effective methods for coastal climate extension. The purpose of this paper is to discuss climate extension methodologies on regional scales, using the Carolinas Coastal Climate Outreach Initiative (CCCOI) as an example of Sea Grant’s growing capacity for climate outreach and extension. (PDF contains 3 pages)
Abstract:
When hazardous storms threaten coastal communities, people need information to decide how to respond to the potential emergency. NOAA and NC Sea Grant are funding a two-year project (Risk Perceptions and Emergency Communication Effectiveness in Coastal Zones) to learn how residents, government officials, businesses, and other organizations are informed about, and use information regarding, hurricanes and tropical storms. (PDF contains 4 pages)
Abstract:
Humans are particularly adept at modifying their behavior in accordance with changing environmental demands. Through various mechanisms of cognitive control, individuals are able to tailor actions to fit complex short- and long-term goals. The research described in this thesis uses functional magnetic resonance imaging to characterize the neural correlates of cognitive control at two levels of complexity: response inhibition and self-control in intertemporal choice. First, we examined changes in neural response associated with increased experience and skill in response inhibition; successful response inhibition was associated with decreased neural response over time in the right ventrolateral prefrontal cortex, a region widely implicated in cognitive control, providing evidence for increased neural efficiency with learned automaticity. We also examined a more abstract form of cognitive control using intertemporal choice. In two experiments, we identified putative neural substrates for individual differences in temporal discounting, or the tendency to prefer immediate to delayed rewards. Using dynamic causal models, we characterized the neural circuit between ventromedial prefrontal cortex, an area involved in valuation, and dorsolateral prefrontal cortex, a region implicated in self-control in intertemporal and dietary choice, and found that connectivity from dorsolateral prefrontal cortex to ventromedial prefrontal cortex increases at the time of choice, particularly when delayed rewards are chosen. Moreover, estimates of the strength of connectivity predicted out-of-sample individual rates of temporal discounting, suggesting a neurocomputational mechanism for variation in the ability to delay gratification. Next, we interrogated the hypothesis that individual differences in temporal discounting are in part explained by the ability to imagine future reward outcomes. Using a novel paradigm, we imaged neural response during the imagining of primary rewards, and identified negative correlations between activity in regions associated with the processing of both real and imagined rewards (lateral orbitofrontal cortex and ventromedial prefrontal cortex, respectively) and the individual temporal discounting parameters estimated in the previous experiment. These data suggest that individuals who are better able to represent reward outcomes neurally are less susceptible to temporal discounting. Together, these findings provide further insight into the role of the prefrontal cortex in implementing cognitive control, and propose neurobiological substrates for individual variation.
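As a concrete illustration of the temporal discounting construct mentioned above, the sketch below fits a standard hyperbolic discount rate and a softmax choice rule to a handful of hypothetical intertemporal choices. The model form, variable names, and data are illustrative assumptions, not the estimation procedure used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def discounted_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def neg_log_likelihood(params, choices, immediate, delayed, delays):
    """Softmax choice between an immediate and a delayed reward."""
    k, beta = np.exp(params)                 # keep k and beta positive
    v_now = immediate                         # immediate reward is undiscounted
    v_later = discounted_value(delayed, delays, k)
    p_later = 1.0 / (1.0 + np.exp(-beta * (v_later - v_now)))
    p_choice = np.where(choices == 1, p_later, 1.0 - p_later)
    return -np.sum(np.log(p_choice + 1e-12))

# Hypothetical data for one subject (1 = chose the delayed reward)
choices   = np.array([1, 1, 1, 0, 0, 0, 1, 0])
immediate = np.array([20, 20, 15, 18, 25, 30, 5, 18], dtype=float)
delayed   = np.array([40, 25, 40, 40, 30, 35, 40, 20], dtype=float)
delays    = np.array([30, 30, 14, 30, 60, 90, 7, 30], dtype=float)  # days

fit = minimize(neg_log_likelihood, x0=np.log([0.01, 0.5]),
               args=(choices, immediate, delayed, delays))
k_hat, beta_hat = np.exp(fit.x)
print(f"estimated discount rate k = {k_hat:.4f}, inverse temperature = {beta_hat:.2f}")
```

Smaller estimated k corresponds to greater patience; subject-level discounting parameters of this general kind are the quantities that the experiments above relate to prefrontal connectivity and to neural responses during reward imagery.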
Abstract:
These studies explore how, where, and when representations of variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that will select an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular regarding the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of the type of stimulus. This confirms that value is represented in the abstract, a key tenet of value-based decision-making. However, I show that stimulus-dependent value representations are also present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.
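To make the idea of a stimulus-independent ("abstract") value code concrete, here is a simulated sketch of cross-category decoding: a decoder trained to read out value from ROI voxel patterns for one stimulus category is tested on a different category, where above-chance transfer would indicate a value representation that does not depend on stimulus type. The data are synthetic and the decoder choice (scikit-learn ridge regression) is an assumption for illustration, not the analysis reported in the chapter.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical trial-by-voxel BOLD patterns from a vmPFC ROI, with
# subjective values for two stimulus categories (e.g., food vs. trinkets).
n_trials, n_voxels = 120, 200
true_weights = rng.normal(size=n_voxels)       # shared "value code" across categories

def simulate(n):
    value = rng.uniform(0, 10, n)               # subjective value on each trial
    bold = np.outer(value, true_weights) + rng.normal(scale=5.0, size=(n, n_voxels))
    return bold, value

bold_food, value_food = simulate(n_trials)
bold_trinket, value_trinket = simulate(n_trials)

# Train a value decoder on one stimulus category...
decoder = RidgeCV(alphas=np.logspace(-2, 4, 20)).fit(bold_food, value_food)
# ...and test it on the other: successful transfer is evidence for a
# stimulus-independent (abstract) value code in the ROI.
r, p = pearsonr(decoder.predict(bold_trinket), value_trinket)
print(f"cross-category decoding accuracy: r = {r:.2f} (p = {p:.3g})")
```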
More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
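The differing informational requirements of the two systems can be sketched in code: a hypothetical model-free ("habitual") learner caches stimulus-action values from reward prediction errors, whereas a hypothetical model-based ("goal-directed") learner additionally represents which outcome each stimulus-action pair produces and how much each outcome is currently worth. The class names and update rules below are illustrative assumptions, not the models fit in the chapter.

```python
import numpy as np

class HabitualLearner:
    """Model-free: caches stimulus-action values via reward prediction errors.
    Needs only stimuli and actions; outcomes enter solely as scalar reward."""
    def __init__(self, n_stimuli, n_actions, alpha=0.1):
        self.q = np.zeros((n_stimuli, n_actions))
        self.alpha = alpha
    def update(self, stimulus, action, reward):
        self.q[stimulus, action] += self.alpha * (reward - self.q[stimulus, action])
    def value(self, stimulus, action):
        return self.q[stimulus, action]

class GoalDirectedLearner:
    """Model-based: learns which outcome each (stimulus, action) yields and
    values actions through current outcome values, so it must represent
    stimuli, actions, and outcome identities."""
    def __init__(self, n_stimuli, n_actions, n_outcomes, alpha=0.1):
        self.transition = np.ones((n_stimuli, n_actions, n_outcomes)) / n_outcomes
        self.outcome_value = np.zeros(n_outcomes)
        self.alpha = alpha
    def update(self, stimulus, action, outcome, reward):
        one_hot = np.eye(self.transition.shape[2])[outcome]
        self.transition[stimulus, action] += self.alpha * (one_hot - self.transition[stimulus, action])
        self.outcome_value[outcome] += self.alpha * (reward - self.outcome_value[outcome])
    def value(self, stimulus, action):
        return self.transition[stimulus, action] @ self.outcome_value
```

One behavioral consequence of the difference: if an outcome is devalued (its entry in outcome_value set to zero), the goal-directed learner's action values change immediately, while the habitual learner's cached values change only after further direct experience.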
The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
Knowledge of the state of hidden variables in an environment is required for optimal inference about the abstract decision structure of a given environment and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses about the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
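A minimal sketch of the belief-thresholding idea, assuming a discrete hypothesis space and an arbitrary pruning threshold (both illustrative choices, not the fitted model from the chapter):

```python
import numpy as np

def update_beliefs(prior, likelihoods, threshold=0.05):
    """One step of incremental Bayesian updating with belief thresholding:
    hypotheses whose posterior falls below `threshold` are pruned (set to zero,
    no longer updated) and the remaining beliefs are renormalized."""
    posterior = prior * likelihoods           # Bayes' rule, unnormalized
    posterior /= posterior.sum()
    posterior[posterior < threshold] = 0.0    # eliminate implausible hypotheses
    return posterior / posterior.sum()

# Example: three hypotheses about a hidden state, two successive observations
beliefs = np.array([1/3, 1/3, 1/3])
for lik in [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.35, 0.05])]:
    beliefs = update_beliefs(beliefs, lik)
    print(beliefs)
```

After the second observation the least likely hypothesis drops below the threshold, is removed from the model, and receives no further updates, mirroring the strategy described above.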
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results: people exhibit systematic biases in information processing and are often averse to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
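Schematically, and using illustrative notation that is not taken from the chapter (Bayesian posterior $\mu_B$, past action $a$, state utilities $u(a,s)$, a distortion function $f$, and a dissonance-sensitivity parameter $\delta \in [0,1]$), the two special cases can be written as
\[
\frac{\pi(s)}{\pi(s')} \;=\; \frac{\mu_B(s)}{\mu_B(s')}\, f\big(u(a,s),\, u(a,s')\big)
\qquad\text{and}\qquad
\pi \;=\; (1-\delta)\,\mu_B \;+\; \delta\,\mu^{*}_{a},
\]
where $\mu^{*}_{a}$ denotes the belief that maximizes the conditional value of the past action $a$, and a larger $\delta$ corresponds to greater sensitivity to dissonance.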
The second chapter characterizes a decision maker with sticky beliefs, that is, one who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
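In the same spirit, the sticky-beliefs representation can be sketched with assumed symbols (prior $\mu$, observed event $E$, Bayesian posterior $\mu_B(\cdot \mid E)$, stickiness parameter $\lambda \in [0,1]$) as
\[
\pi_E \;=\; \lambda\,\mu \;+\; (1-\lambda)\,\mu_B(\cdot \mid E),
\]
so that $\lambda = 0$ recovers Bayesian updating, $\lambda = 1$ means beliefs never move, and intermediate values quantify the conservatism bias.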
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
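One natural way to formalize the threshold rule consistent with this description, in assumed notation (set of priors $C$, observed event $E$, threshold $\alpha \in [0,1]$), is
\[
C_E \;=\; \Big\{\, p \in C \;:\; p(E) \,\ge\, \alpha \,\max_{q \in C} q(E) \,\Big\},
\]
after which each retained prior in $C_E$ is updated by Bayes' rule. Under this reading, $\alpha \to 0$ keeps every prior that assigns positive probability to $E$ (generalized Bayesian updating), while $\alpha = 1$ keeps only the priors that maximize the likelihood of $E$ (maximum likelihood updating).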