749 results for decision-making model
Abstract:
This panel will discuss the research and models being used in three current coastal EPA studies of ecosystem services in Tampa Bay, the Chesapeake Bay, and the Coastal Carolinas. These studies are intended to provide a broader and more comprehensive approach to policy and decision-making affecting coastal ecosystems, and to provide an account of valued services that have heretofore been largely unrecognized. Interim research products, including updated and integrated spatial data, models and model frameworks, and interactive decision support systems, will be demonstrated to engage potential users and elicit feedback. The anticipated near-term impact of the projects is to increase awareness among coastal communities and coastal managers of the implications of their actions and to foster partnerships for ecosystem services research and applications.
Abstract:
A decision is a commitment to a proposition or plan of action based on evidence and the expected costs and benefits associated with the outcome. Progress in a variety of fields has led to a quantitative understanding of the mechanisms that evaluate evidence and reach a decision. Several formalisms propose that a representation of noisy evidence is evaluated against a criterion to produce a decision. Without additional evidence, however, these formalisms fail to explain why a decision-maker would change their mind. Here we extend a model, developed to account for both the timing and the accuracy of the initial decision, to explain subsequent changes of mind. Subjects made decisions about a noisy visual stimulus and indicated them by moving a handle. Although they received no additional information after initiating their movement, their hand trajectories betrayed a change of mind in some trials. We propose that noisy evidence is accumulated over time until it reaches a criterion level, or bound, which determines the initial decision, and that the brain exploits information that is in the processing pipeline when the initial decision is made to subsequently either reverse or reaffirm that decision. The model explains both the frequency of changes of mind and their dependence on task difficulty and on whether the initial decision was accurate or erroneous. The theoretical and experimental findings advance the understanding of decision-making to the highly flexible and cognitive acts of vacillation and self-correction.
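The proposed mechanism lends itself to a compact simulation. Below is a minimal sketch, not the authors' fitted model: noisy evidence is accumulated to a bound that fixes the initial choice, and evidence still in the processing pipeline can subsequently cross an opposite change-of-mind bound. All parameter values (bounds, noise level, pipeline duration) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(drift, bound=0.6, com_bound=0.3, dt=1e-3, noise=1.0, pipeline_steps=300):
    """One simulated trial: accumulate to a bound, then let pipeline
    evidence either reaffirm or reverse the initial decision.
    All parameter values are illustrative assumptions."""
    x, t = 0.0, 0.0
    while abs(x) < bound:                        # initial decision stage
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    initial = 1.0 if x > 0 else -1.0
    for _ in range(pipeline_steps):              # evidence still in the pipeline
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if x * initial <= -com_bound:            # crossed the opposite bound
            return initial, -initial, t          # change of mind
    return initial, initial, t                   # initial choice reaffirmed

results = [trial(drift=0.2) for _ in range(500)]
changed = sum(init != final for init, final, _ in results)
print(f"changes of mind in {changed} of {len(results)} trials")
```

As in the study, changes of mind in this sketch are rare and become rarer as the drift (task easiness) increases.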
Abstract:
In a typical experiment on decision making, one out of two possible stimuli is displayed and observers decide which one was presented. Recently, Stanford and colleagues (2010) introduced a new variant of this classical one-stimulus presentation paradigm to investigate the speed of decision making. They found evidence for "perceptual decision making in less than 30 ms". Here, we extended this one-stimulus compelled-response paradigm to a two-stimulus compelled-response paradigm in which a vernier was followed immediately by a second vernier with the opposite offset direction. The two verniers and their offsets fuse, so only one vernier is perceived. When observers are asked to indicate the offset direction of the fused vernier, the offset of the second vernier dominates perception. Even for long vernier durations, the second vernier dominates decisions, indicating that decision making can take substantial time. In accordance with previous studies, we suggest that our results are best explained with a two-stage model of decision making in which a leaky evidence integration stage precedes a race-to-threshold process. © 2013 Rüter et al.
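The two-stage account can be sketched as follows: a leaky integrator tracks the momentary vernier offset, and its output drives a race to threshold between the two response alternatives. The leak, gain, threshold, and noise values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_stage(stim, dt=1e-3, leak=20.0, gain=50.0, thresh=0.6, noise=0.3):
    """Stage 1: leaky integration of the offset signal.
    Stage 2: race to threshold between the two response alternatives."""
    v = 0.0                                     # leaky integrator state
    race = np.zeros(2)                          # accumulators: first vs second
    for s in stim:
        v += (-leak * v + gain * s) * dt        # stage 1: leaky integration
        drive = np.array([max(v, 0.0), max(-v, 0.0)])
        race += drive * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        race = np.maximum(race, 0.0)            # accumulators stay non-negative
        if race.max() >= thresh:
            return "first" if race.argmax() == 0 else "second"
    return "no decision"

# 30 ms of the first vernier, then 400 ms of the opposite offset:
stim = np.concatenate([np.ones(30), -np.ones(400)])
print(two_stage(stim))   # the later, longer vernier dominates the decision
```

Because the first stage is leaky, late-arriving evidence overwrites early evidence before the race commits, which is why the second vernier dominates even at long durations.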
Abstract:
Bistable dynamical switches are frequently encountered in mathematical modeling of biological systems because binary decisions are at the core of many cellular processes. Bistable switches present two stable steady-states, each of them corresponding to a distinct decision. In response to a transient signal, the system can flip back and forth between these two stable steady-states, switching between both decisions. Understanding which parameters and states affect this switch between stable states may shed light on the mechanisms underlying the decision-making process. Yet, answering such a question involves analyzing the global dynamical (i.e., transient) behavior of a nonlinear, possibly high-dimensional model. In this paper, we show how a local analysis at a particular equilibrium point of bistable systems is highly relevant to understanding the global properties of the switching system. The local analysis is performed at the saddle point, an often disregarded equilibrium point of bistable models which is nevertheless shown to be a key ruler of the decision-making process. Results are illustrated on three previously published models of biological switches: two models of apoptosis (programmed cell death) and one model of long-term potentiation (a phenomenon underlying synaptic plasticity). © 2012 Trotta et al.
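The saddle-point perspective is easy to illustrate on a generic two-variable bistable switch (a symmetric genetic toggle, not one of the three cited models): linearizing at each equilibrium reveals two stable nodes and, between them, a saddle whose single unstable direction governs which decision a transient lands on. The rate constants below are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

A, N = 3.0, 2          # synthesis rate and Hill coefficient (illustrative)

def f(z):
    """Symmetric toggle switch: two mutually repressing species."""
    x, y = z
    return [A / (1 + y**N) - x, A / (1 + x**N) - y]

def jacobian(z):
    x, y = z
    return np.array([[-1.0, -A * N * y**(N - 1) / (1 + y**N)**2],
                     [-A * N * x**(N - 1) / (1 + x**N)**2, -1.0]])

# Three equilibria: two stable decisions and the saddle between them.
for guess in [(3.0, 0.3), (0.3, 3.0), (1.2, 1.2)]:
    eq = fsolve(f, guess)
    eigvals = np.linalg.eigvals(jacobian(eq))
    kind = "saddle" if (eigvals > 0).any() else "stable"
    print(f"equilibrium {eq.round(3)}: eigenvalues {eigvals.round(3)} -> {kind}")
```

The positive eigenvalue at the symmetric equilibrium confirms it is a saddle; its unstable eigenvector points toward the two stable states, which is what makes the local analysis there informative about the global switch.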
Abstract:
We consider challenges associated with application domains in which a large number of distributed, networked sensors must perform a sensing task repeatedly over time. For the tasks we consider, there are three significant challenges to address. First, nodes have resource constraints imposed by their finite power supply, which motivates computations that are energy-conserving. Second, for the applications we describe, the utility derived from a sensing task may vary depending on the placement and size of the set of nodes that participate, which often involves complex objective functions for nodes to target. Finally, nodes must attempt to realize these global objectives with only local information. We present a model for such applications, in which we define appropriate global objectives based on utility functions and specify a cost model for energy consumption. Then, for an important class of utility functions, we present distributed algorithms which attempt to maximize the utility derived from the sensor network over its lifetime. The algorithms and experimental results we present enable nodes to adaptively change their roles over time and use dynamic reconfiguration of routes to load-balance energy consumption in the network.
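The flavour of such energy-aware, locally informed role selection can be conveyed with a small sketch. The utility function, the sleep threshold, and every constant below are hypothetical illustrations of the idea, not the algorithms presented in the paper.

```python
import random

random.seed(0)

# Each node knows only its own position, battery, and active neighbours.
NODES = {i: {"battery": 100.0,
             "pos": (random.random(), random.random())} for i in range(20)}
SENSE_COST, RADIUS = 1.0, 0.25

def marginal_utility(i, active):
    """Local estimate of the coverage a node adds; redundancy reduces it."""
    xi, yi = NODES[i]["pos"]
    overlap = sum(1 for j in active if j != i and
                  (NODES[j]["pos"][0] - xi) ** 2 +
                  (NODES[j]["pos"][1] - yi) ** 2 < RADIUS ** 2)
    return 1.0 / (1 + overlap)

for _round in range(150):
    active = {i for i, n in NODES.items() if n["battery"] >= SENSE_COST}
    for i in list(active):
        # Sleep when the marginal utility no longer justifies the draw on
        # the remaining battery (this threshold rule is an assumption).
        if marginal_utility(i, active) < SENSE_COST / NODES[i]["battery"]:
            active.discard(i)
    for i in active:
        NODES[i]["battery"] -= SENSE_COST
print("surviving nodes:", sum(n["battery"] >= SENSE_COST for n in NODES.values()))
```

Nodes in densely covered regions sleep as their batteries drain and rejoin when neighbours die, illustrating the adaptive role changes the paper targets.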
Abstract:
How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas that estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
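The self-normalizing choice mechanism can be illustrated with a generic shunting recurrent competitive field, a standard building block of this class of models rather than the paper's full multi-area circuit; the signal function and all parameters are illustrative.

```python
import numpy as np

def f(x):
    """Faster-than-linear signal function: promotes winner-take-all choice."""
    return x ** 2

def recurrent_competitive_field(I, A=1.0, B=1.0, dt=1e-3, steps=5000):
    """Generic shunting on-center off-surround network (illustrative sketch).
    Activities stay bounded in [0, B] by the shunting terms."""
    x = np.zeros_like(I, dtype=float)
    for _ in range(steps):
        exc = f(x) + I                      # self-excitation plus held input
        inh = f(x).sum() - f(x)             # recurrent inhibition from rivals
        x += dt * (-A * x + (B - x) * exc - x * inh)
    return x

# Two 'decision' populations driven by weakly unequal motion evidence:
print(recurrent_competitive_field(np.array([0.52, 0.48])))
```

The shunting terms normalize total activity while the faster-than-linear signal function amplifies the small input difference into a categorical choice, which is the contrast-enhancing property the abstract refers to.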
Abstract:
How do brain mechanisms carry out the motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer along its trajectory. Form and motion processes are needed to accomplish this, using feedforward and feedback interactions both within and across cortical processing streams. Cortical areas V1, V2, MT, and MST are all involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of the occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals that determine global object motion percepts in the motion stream through MT. Sparse but unambiguous feature-tracking signals are amplified before they propagate across position and are integrated with the far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized concerning motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays.
Abstract:
Political drivers such as the Kyoto Protocol, the EU Energy Performance of Buildings Directive, and the Energy End-use and Services Directive have been implemented in response to an identified need for a reduction in human-related CO2 emissions. Buildings account for a significant portion of global CO2 emissions, approximately 25-30%, and it is widely acknowledged by industry and research organisations that they operate inefficiently. In parallel, unsatisfactory indoor environmental conditions have been shown to negatively impact occupant productivity. Legislative drivers and client education are seen as the key motivating factors for an improvement in the holistic environmental and energy performance of a building. A symbiotic relationship exists between building indoor environmental conditions and building energy consumption; however, traditional Building Management Systems and Energy Management Systems treat these separately. Conventional performance analysis compares building energy consumption with a previously recorded value or with the consumption of a similar building, and does not recognise that all buildings are unique. What is required, therefore, is a new framework which incorporates performance comparison against a theoretical, building-specific ideal benchmark. Traditionally, Energy Managers, who work at the operational level of organisations with respect to building performance, do not have access to ideal performance benchmark information and as a result cannot optimally operate buildings. This thesis systematically defines Holistic Environmental and Energy Management and specifies the Scenario Modelling Technique, which in turn uses an ideal performance benchmark. The holistic technique uses quantified expressions of building performance and thereby enables the profiled Energy Manager to visualise his actions and their downstream consequences in the context of overall building operation. The Ideal Building Framework facilitates the use of this technique by acting as a Building Life Cycle (BLC) data repository through which ideal building performance benchmarks are systematically structured and stored in parallel with actual performance data. The Ideal Building Framework utilises transformed data in the form of the Ideal Set of Performance Objectives and Metrics, which are capable of defining the performance of any building at any stage of the BLC. It is proposed that the union of Scenario Models for an individual building would result in a building-specific Combination of Performance Metrics, which would in turn be stored in the BLC data repository. The Ideal Data Set underpins the Ideal Set of Performance Objectives and Metrics and is the set of measurements required to monitor the performance of the Ideal Building. A Model View describes the unique building-specific data relevant to a particular project stakeholder. The energy management data and information exchange requirements that underlie a Model View implementation are detailed, and incorporate both traditional and proposed energy management. This thesis also specifies the Model View Methodology, which complements the Ideal Building Framework. The developed Model View and Rule Set methodology utilises stakeholder-specific rule sets to define the environmental and energy performance data pertinent to each stakeholder. This generic process further enables each stakeholder to define the desired resolution of the data: for example basic, intermediate, or detailed.
The Model View methodology is applicable to all project stakeholders, each requiring its own customised rule set. Two rule sets are defined in detail: the Energy Manager rule set and the LEED Accreditor rule set. This measurement generation process, accompanied by the defined Model View, would filter and expedite data access for all stakeholders involved in building performance. Information presentation is critical for effective use of the data provided by the Ideal Building Framework and the Energy Management View definition. The specifications for a customised Information Delivery Tool account for the established profile of Energy Managers and best-practice user interface design. Components of the developed tool could also be used by Facility Managers working at the tactical and strategic levels of organisations. Informed decision making is made possible through specified decision assistance processes which incorporate the Scenario Modelling and Benchmarking techniques, the Ideal Building Framework, the Energy Manager Model View, the Information Delivery Tool, and the established profile of Energy Managers. The Model View and Rule Set Methodology is demonstrated on an appropriate existing mixed-use 'green' building, the Environmental Research Institute at University College Cork, using the Energy Management and LEED rule sets, and informed decision making is demonstrated using a prototype scenario for the demonstration building.
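The rule-set mechanism can be conveyed with a small sketch. The metric names, rule-set contents, and resolution levels below are hypothetical illustrations of the filtering idea, not the thesis's actual data schema.

```python
# Hypothetical sketch: stakeholder rule sets filter a shared BLC data
# repository into Model Views at a chosen resolution.
REPOSITORY = [
    {"metric": "zone_air_temperature", "domain": "environment", "level": "basic"},
    {"metric": "chiller_electricity_kwh", "domain": "energy", "level": "basic"},
    {"metric": "ahu_supply_air_flow", "domain": "energy", "level": "detailed"},
    {"metric": "daylight_autonomy_pct", "domain": "environment", "level": "intermediate"},
]

RULE_SETS = {
    # Each stakeholder's rule set selects the domains it cares about.
    "energy_manager": {"domains": {"energy", "environment"}},
    "leed_accreditor": {"domains": {"environment"}},
}

LEVELS = ["basic", "intermediate", "detailed"]

def model_view(stakeholder, resolution):
    """Apply a stakeholder rule set, truncated at the requested resolution."""
    rules = RULE_SETS[stakeholder]
    depth = LEVELS.index(resolution)
    return [r["metric"] for r in REPOSITORY
            if r["domain"] in rules["domains"]
            and LEVELS.index(r["level"]) <= depth]

print(model_view("energy_manager", "detailed"))
print(model_view("leed_accreditor", "basic"))
```

The same repository thus yields different, appropriately scoped views for the Energy Manager and the LEED Accreditor, each at its own chosen level of detail.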
Abstract:
This paper addresses the problems often faced by social workers and their supervisors in decision making where human rights considerations and child protection concerns collide. High-profile court cases in the United Kingdom and Europe have consistently called for social workers to convey more clarity when justifying their reasons for interfering with human rights in child protection cases. The themes emerging from these case law decisions imply that social workers need to be better at giving reasons and evidence in more explicit ways to support any actions they propose which interfere with Convention Rights. Toulmin (1958, 1985) offers a structured approach to argumentation which may have relevance to the supervision of child protection cases when social workers and managers are required to balance these human rights considerations. One of the key challenges in this balancing act is the need for decision makers to feel confident that any interventions resulting in the interference of human rights are both justified and proportionate. Toulmin's work has already been shown to have relevance for assisting social workers in navigating pathways through cases involving competing ethical and moral demands (Osmo and Landau, 2001) and, more recently, to human rights and decision making in child protection (Duffy et al., 2006). Toulmin's model takes the practitioner through a series of stages in which any argument or proposed recommendation (claim) is subjected to intense critical analysis involving exposition of its strengths and weaknesses. The author therefore proposes that explicit argumentation (Osmo and Landau, 2001) may help supervisors and practitioners towards safer and more confident decision making in child protection cases involving interference with the human rights of children and parents. In addition to highlighting the broader context of human rights currently permeating child protection decision making, the paper includes case material to demonstrate in practical terms the application of Toulmin's model of argumentation to the supervision context. In this way the paper adopts a strong practice approach, assisting practitioners with the problems and dilemmas they may come up against in decision making in complex cases.
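For readers unfamiliar with the scheme, Toulmin's six standard components can be represented as a simple record; the case content below is invented purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """The six standard components of Toulmin's argumentation scheme."""
    claim: str                                   # proposed action or recommendation
    grounds: list = field(default_factory=list)  # evidence supporting the claim
    warrant: str = ""                            # why the grounds justify the claim
    backing: str = ""                            # authority supporting the warrant
    qualifier: str = ""                          # strength and limits of the claim
    rebuttal: str = ""                           # conditions under which the claim fails

# Invented example, of the kind a supervisor might use to test whether an
# interference with Convention Rights is justified and proportionate:
arg = ToulminArgument(
    claim="Seek an interim care order",
    grounds=["documented injuries", "prior incidents on record"],
    warrant="The evidence indicates a likelihood of significant harm",
    backing="Threshold criteria in the relevant children legislation",
    qualifier="Presumably, unless in-home safeguards can be put in place",
    rebuttal="A viable kinship placement would make removal disproportionate",
)
print(arg.claim, "-", arg.qualifier)
```

Making the warrant, qualifier, and rebuttal explicit is precisely what forces the justification and proportionality of an intervention into the open during supervision.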
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. This feature makes the model particularly suited for the implementation of classifiers and knowledge-based systems. When working with sets of (instead of single) probability distributions, the identification of the optimal option can be based on different criteria, some of which may lead to multiple choices. Yet, most of the inference algorithms for credal nets are designed to compute only the bounds of the posterior probabilities. This prevents some of the existing criteria from being used. To overcome this limitation, we present two simple transformations for credal nets which make it possible to compute decisions based on the maximality and E-admissibility criteria without any modification to the inference algorithms. We also prove that these decision problems have the same complexity as standard inference, being NP^PP-hard for general credal nets and NP-hard for polytrees.
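What the two criteria compute can be illustrated on a small, explicitly enumerated credal set; the paper's transformations operate on the net itself, so this sketch only shows the decision rules, with invented utilities and distributions.

```python
import numpy as np

# Invented example: a finite credal set over two states, and utilities
# for three candidate decisions.
CREDAL_SET = [np.array([0.3, 0.7]), np.array([0.5, 0.5]), np.array([0.6, 0.4])]
UTILITY = {"a": np.array([10.0, 0.0]),
           "b": np.array([4.0, 4.0]),
           "c": np.array([0.0, 5.0])}

def expected(option, p):
    return float(UTILITY[option] @ p)

def e_admissible():
    """Options optimal under at least one distribution in the credal set."""
    return {max(UTILITY, key=lambda o: expected(o, p)) for p in CREDAL_SET}

def maximal():
    """Options not strictly dominated under every distribution."""
    keep = set(UTILITY)
    for a in UTILITY:
        for b in UTILITY:
            if a != b and all(expected(b, p) > expected(a, p) for p in CREDAL_SET):
                keep.discard(a)
    return keep

print("E-admissible:", e_admissible())   # {'a', 'b'}
print("maximal:", maximal())             # {'a', 'b'}
```

Every E-admissible option is also maximal, so maximality never returns fewer options than E-admissibility; in this small example the two sets happen to coincide.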
Abstract:
Child welfare professionals regularly make crucial decisions that have a significant impact on children and their families. The present study presents the Judgments and Decision Processes in Context (JUDPIC) model and uses it to examine the relationships between three independent domains: a case characteristic (the mother's wish with regard to removal), a practitioner characteristic (child welfare attitudes), and the protective system context (four countries: Israel, the Netherlands, Northern Ireland, and Spain); and three dependent factors: substantiation of maltreatment, risk assessment, and intervention recommendation.
The sample consisted of 828 practitioners from four countries. Participants were presented with a vignette of a case of alleged child maltreatment and were asked to determine whether maltreatment was substantiated, assess risk and recommend an intervention using structured instruments. Participants’ child welfare attitudes were assessed.
The case characteristic of mother’s wish with regard to removal had no impact on judgments and decisions. In contrast, practitioners’ child welfare attitudes were associated with substantiation, risk assessments and recommendations. There were significant country differences on most measures.
The findings support most of the predictions derived from the JUDPIC model. The significant differences between practitioners from different countries underscore the importance of context in child protection decision making. Training should enhance practitioners’ awareness of the impact that their attitudes and the context in which they are embedded have on their judgments and decisions.
Abstract:
Purpose: As resident work hours policies evolve, residents’ off-duty time remains poorly understood. Despite assumptions about how residents should be using their postcall, off-duty time, there is little research on how residents actually use this time and the reasoning underpinning their activities. This study sought to understand residents’ nonclinical postcall activities when they leave the hospital, their decision-making processes, and their perspectives on the relationship between these activities and their well-being or recovery.
Method: The study took place at a Liaison Committee on Medical Education–accredited Canadian medical school from 2012 to 2014. The authors recruited a purposive and convenience sample of postgraduate year 1–5 residents from six surgical and nonsurgical specialties at three hospitals affiliated with the medical school. Using a constructivist grounded theory approach, the authors conducted semistructured interviews, which were audiotaped, transcribed, anonymized, and combined with field notes. They analyzed the interview transcripts using constant comparative analysis and performed post hoc member checking.
Results: Twenty-four residents participated. Residents characterized their predominant approach to postcall decision making as one of making trade-offs between multiple, competing, seemingly incompatible, but equally valuable, activities. Participants exhibited two different trade-off orientations: being oriented toward maintaining a normal life or toward mitigating fatigue.
Conclusions: The authors’ findings on residents’ trade-off orientations suggest a dual recovery model with postcall trade-offs motivated by the recovery of sleep or of self. This model challenges the dominant viewpoint in the current duty hours literature and suggests that the duty hours discussion must be broadened to include other recovery processes.
Abstract:
The advent of novel genomic technologies that enable the evaluation of genomic alterations on a genome-wide scale has significantly altered the field of genomic marker research in solid tumors. Researchers have moved away from the traditional model of identifying a particular genomic alteration and evaluating the association between this finding and a clinical outcome measure, towards a new approach involving the identification and measurement of multiple genomic markers simultaneously within clinical studies. This in turn has presented additional challenges for the use of genomic markers in oncology, such as clinical study design, reproducibility, and the interpretation and reporting of results. This Review explores these challenges, focusing on microarray-based gene-expression profiling, and highlights some common failings in study design that have impeded the use of putative genomic markers in the clinic. Despite these rapid technological advances, there is still a paucity of genomic markers in routine clinical use at present. A rational and focused approach to the evaluation and validation of genomic markers is needed, whereby analytically validated markers are investigated in clinical studies that are adequately powered and have pre-defined patient populations and study endpoints. Furthermore, novel adaptive clinical trial designs, incorporating putative genomic markers into prospective clinical trials, will enable the evaluation of these markers in a rigorous and timely fashion. Such approaches have the potential to facilitate the implementation of such markers into routine clinical practice and consequently enable the rational and tailored use of cancer therapies for individual patients. © 2010 Macmillan Publishers Limited. All rights reserved.
Abstract:
Shared decision-making (SDM) is a high priority in healthcare policy and is complementary to the recovery philosophy in mental health care. This agenda has been operationalised within the Values-Based Practice (VBP) framework, which offers a theoretical and practical model to promote democratic interprofessional approaches to decision-making. However, these are limited by a lack of recognition of the implications of power implicit within the mental health system. This study considers issues of power within the context of decision-making and examines to what extent decisions about patients' care on acute in-patient wards are perceived to be shared. Focus groups were conducted with 46 mental health professionals, service users, and carers. The data were analysed using the framework of critical narrative analysis (CNA). The findings of the study suggested each group constructed different identity positions, which placed them as inside or outside of the decision-making process. This reflected their view of themselves as best placed to influence a decision on behalf of the service user. In conclusion, the discourse of VBP and SDM needs to take account of how differentials of power and the positioning of speakers affect the context in which decisions take place.
Abstract:
A wastewater monitoring program aimed at supporting decision making is presented. The program includes traditional and on-board sensor sampling, hydrodynamic and water quality modeling, and a GIS-based database to support the decision making of managing authorities. The focus is on the quality of waters receiving discharges from wastewater treatment plants. Data were used to feed model simulations and produce hydrodynamic, effluent dispersion, and ecological results. The system was then used to run different scenarios of discharge flow, concentration, and location. The results make it possible to assess the current water quality state of the lagoon and are being used as a decision-making tool by the wastewater managers in the evaluation phase of the treatment plant project, to decide the location and the level of treatment of the discharge.
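The scenario-running idea can be sketched with a complete-mix mass balance standing in for the full hydrodynamic and water quality models; all site names, flows, concentrations, and limits below are invented for illustration.

```python
from itertools import product

# Complete-mix mass balance standing in for the full model chain;
# every name and number below is invented for illustration.
FLOWS = [0.1, 0.5]                     # candidate discharge flows (m3/s)
CONCS = [25.0, 100.0]                  # candidate effluent concentrations (mg/L)
SITES = {"inner_lagoon": 5.0, "outlet_channel": 40.0}  # ambient flow (m3/s)
LIMIT = 2.0                            # receiving-water target (mg/L)

def mixed_concentration(q_eff, c_eff, q_amb, c_amb=0.0):
    """Concentration after complete mixing of effluent with ambient water."""
    return (q_eff * c_eff + q_amb * c_amb) / (q_eff + q_amb)

for site, q_amb in SITES.items():
    for q_eff, c_eff in product(FLOWS, CONCS):
        c = mixed_concentration(q_eff, c_eff, q_amb)
        verdict = "ok" if c <= LIMIT else "exceeds target"
        print(f"{site}: Q={q_eff} m3/s, C={c_eff} mg/L -> {c:.2f} mg/L ({verdict})")
```

Running the candidate flow, concentration, and location combinations through even this crude screen shows how scenario comparison supports the choice of discharge location and treatment level before the full models are invoked.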