985 results for "Multiple decisions"
Abstract:
This paper describes and analyzes research on the dynamics of long-term care and the policy relevance of identifying the sources of persistence in caregiving arrangements (including the effect of dynamics on parameter estimates; implications for family, parent, and child welfare; and the cost of government programs). We discuss sources of observed persistence in caregiving arrangements, including inertia/state dependence (confounded by unobserved heterogeneity) and the costs of changing caregivers. We comment on causes of dynamics, including learning/human capital accumulation, burnout, and game-playing. We suggest how to deal with endogenous geography, dynamics in discrete and continuous choices, and equilibrium issues (multiple equilibria, dynamic equilibria). We also present an overview of commonly used longitudinal data sets, evaluate their relative advantages and disadvantages, and discuss other data issues related to noisy measures of wealth and family structure. Finally, we suggest methods to handle econometric problems such as endogenous geography. © 2014 Springer Science+Business Media New York.
Abstract:
This chapter describes decentralized data fusion (DDF) algorithms for a team of multiple autonomous platforms. DDF provides a useful foundation on which to build cooperative information-gathering tasks for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Two probabilistic representations, Gaussian estimates and Gaussian mixtures, are compared using a common data set. The overall system design is detailed, providing insight into the complexity of implementing a robust DDF system for information-gathering tasks in outdoor UAV applications.
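As a concrete illustration of the Gaussian variant, the sketch below shows the standard information-form fusion step used in DDF: each platform converts its local Gaussian estimate to information form, adds the information received from a neighbor, and subtracts the information the two already share so it is not double-counted. This is a minimal sketch under assumed notation, not the chapter's implementation, and the Gaussian-mixture variant is not shown.

```python
import numpy as np

def to_info(mean, cov):
    """Gaussian (mean, covariance) -> information form (y, Y)."""
    Y = np.linalg.inv(cov)      # information matrix
    return Y @ mean, Y          # information vector, matrix

def to_moments(y, Y):
    """Information form -> (mean, covariance)."""
    cov = np.linalg.inv(Y)
    return cov @ y, cov

def ddf_fuse(local, received, common):
    """Fuse a neighbour's estimate into the local one, subtracting the
    information the two platforms already share (e.g. a common prior)
    so it is not counted twice."""
    (y_i, Y_i), (y_j, Y_j), (y_c, Y_c) = local, received, common
    return y_i + y_j - y_c, Y_i + Y_j - Y_c

# Two platforms estimating the same 2-D state, sharing a broad prior.
prior = to_info(np.zeros(2), 100.0 * np.eye(2))
est_a = to_info(np.array([1.0, 2.1]), 0.5 * np.eye(2))
est_b = to_info(np.array([1.2, 1.9]), 0.8 * np.eye(2))

mean, cov = to_moments(*ddf_fuse(est_a, est_b, prior))
print(mean)   # fused estimate, tighter than either platform's alone
```

Because fusion in information form is just addition, each platform can apply the same update to messages from any neighbor and still converge on a consistent global estimate, which is what makes the approach attractive for decentralized teams.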
Abstract:
Maintenance decisions for large-scale asset systems are often beyond an asset manager's capacity to handle. The presence of a number of possibly conflicting decision criteria, the large number of possible maintenance policies, and the reality of budget constraints often produce complex problems where the underlying trade-offs are not apparent to the asset manager. This paper presents the decision support tool "JOB" (Justification and Optimisation of Budgets), which has been designed to help asset managers of large systems assess, select, interpret and optimise the effects of their maintenance policies in the presence of limited budgets. This decision support capability is realized through an efficient, scalable backtracking-based algorithm for the optimisation of maintenance policies, while enabling the user to view a number of solutions near the optimum and explore trade-offs with other decision criteria. To assist the asset manager in selecting between various policies, JOB also provides a Multiple Criteria Decision Making capability. In this paper, the JOB tool is presented and its applicability is demonstrated for the maintenance of a complex power plant system.
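JOB's algorithm is only characterized above as backtracking-based; the sketch below shows one way such a search can work, choosing one maintenance policy per asset to maximize total benefit within a budget, with an optimistic bound used for pruning. All names and numbers are illustrative assumptions, not the tool's actual formulation.

```python
def best_policies(assets, budget):
    """Pick one (cost, benefit) policy per asset to maximise total
    benefit within the budget, by depth-first backtracking with a
    simple optimistic bound for pruning."""
    n = len(assets)
    # Optimistic bound: best possible benefit from asset i onward,
    # ignoring cost entirely.
    bound = [0.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        bound[i] = bound[i + 1] + max(b for _, b in assets[i])

    best = {"value": -1.0, "choice": None}

    def search(i, spent, value, choice):
        if spent > budget:
            return                      # over budget: abandon branch
        if value + bound[i] <= best["value"]:
            return                      # cannot beat incumbent: prune
        if i == n:
            best["value"], best["choice"] = value, list(choice)
            return
        for k, (cost, benefit) in enumerate(assets[i]):
            choice.append(k)
            search(i + 1, spent + cost, value + benefit, choice)
            choice.pop()

    search(0, 0.0, 0.0, [])
    return best["value"], best["choice"]

# Example: three assets, each with 'do nothing' / 'repair' / 'replace'.
assets = [[(0, 0.0), (3, 4.0), (7, 9.0)],
          [(0, 0.0), (2, 3.5), (5, 6.0)],
          [(0, 0.0), (4, 5.0), (8, 8.5)]]
print(best_policies(assets, budget=10))
```

Keeping the search tree explicit like this also makes it cheap to record near-optimal leaves along the way, which is how a tool can show the user several solutions close to the optimum for trade-off exploration.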
Abstract:
This paper addresses real-time decision making for autonomous city vehicles, i.e. the vehicles' ability to make appropriate driving decisions in city road traffic situations. After decomposing the problem into two consecutive decision-making stages and giving a short overview of previous work, the paper explains how Multiple Criteria Decision Making (MCDM) can be used to select the most appropriate driving maneuver.
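A common MCDM scheme for this kind of selection (the abstract does not say which variant the paper uses) is weighted-sum scoring of candidate maneuvers against normalized criteria. The criteria, weights, and scores below are illustrative assumptions only.

```python
# Candidate maneuvers scored against weighted criteria (hypothetical
# names and values; a real system would derive scores from perception
# and planning modules).
criteria_weights = {"safety": 0.5, "progress": 0.3, "comfort": 0.2}

maneuvers = {
    "keep_lane":   {"safety": 0.9, "progress": 0.6, "comfort": 0.9},
    "change_left": {"safety": 0.6, "progress": 0.9, "comfort": 0.6},
    "stop":        {"safety": 1.0, "progress": 0.0, "comfort": 0.8},
}

def weighted_score(scores):
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

best = max(maneuvers, key=lambda m: weighted_score(maneuvers[m]))
print(best)  # maneuver with the highest aggregate score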
Abstract:
This paper proposes a recommendation system that supports process participants in taking risk-informed decisions, with the goal of reducing risks that may arise during process execution. Risk reduction involves decreasing the likelihood and severity of a process fault from occurring. Given a business process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process and whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we suggest to the participant the action to perform which minimizes the predicted process risk. Risks are predicted by traversing decision trees generated from the logs of past process executions, which consider process data, involved resources, task durations and other information elements like task frequencies. When applied in the context of multiple process instances running concurrently, a second technique is employed that uses integer linear programming to compute the optimal assignment of resources to tasks to be performed, in order to deal with the interplay between risks relative to different instances. The recommendation system has been implemented as a set of components on top of the YAWL BPM system and its effectiveness has been evaluated using a real-life scenario, in collaboration with risk analysts of a large insurance company. The results, based on a simulation of the real-life scenario and its comparison with the event data provided by the company, show that the process instances executed concurrently complete with significantly fewer faults and with lower fault severities, when the recommendations provided by our recommendation system are taken into account.
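For the multi-instance case, the abstract describes an integer linear program that assigns resources to pending tasks so as to minimize predicted risk. For a one-to-one assignment, the same optimum can be computed with the Hungarian method, as in this illustrative sketch (the risk values are placeholders standing in for the decision-tree predictions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# risk[i, j]: predicted process risk if resource i performs task j
# (in the paper such predictions come from decision trees trained on
# past execution logs; these numbers are invented).
risk = np.array([[0.20, 0.45, 0.10],
                 [0.35, 0.15, 0.40],
                 [0.25, 0.30, 0.05]])

rows, cols = linear_sum_assignment(risk)   # minimise total predicted risk
for r, c in zip(rows, cols):
    print(f"resource {r} -> task {c} (risk {risk[r, c]:.2f})")
print("total risk:", risk[rows, cols].sum())
```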
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system, and actions can be chosen strategically to obtain specific kinds of information. Formal decision-making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives: a pure management objective, a pure learning objective, and an objective that is a weighted mixture of the two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision-making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision-making tools can be improved. © 2010 Elsevier Ltd.
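The weighted-mixture objective can be made concrete with a toy example. Below, two competing models predict the growth effect of each action; the management value of an action is its belief-weighted expected growth, and model disagreement serves as a crude stand-in for expected information gain. None of this reproduces the paper's eight algorithms; it only illustrates the mixed objective.

```python
import numpy as np

# Two competing models of how each action affects population growth.
# Rows: models, columns: actions (illustrative numbers).
growth = np.array([[0.02, 0.05, 0.01],    # model A predictions
                   [0.04, 0.00, 0.01]])   # model B predictions
belief = np.array([0.5, 0.5])             # current weight on each model

management_value = belief @ growth        # expected growth per action
# Learning value: how much the models disagree about an action's
# outcome; observing a high-disagreement action best discriminates
# between the models.
learning_value = np.abs(growth[0] - growth[1])

w = 0.3                                   # weight on the learning objective
objective = (1 - w) * management_value + w * learning_value
print("chosen action:", int(np.argmax(objective)))
```

Sliding w from 0 to 1 recovers the pure management and pure learning objectives as the two endpoints of the mixture.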
Abstract:
Major infrastructure and construction (MIC) projects are those with significant traffic or environmental impact, of strategic and regional significance, and of high sensitivity. The decision-making process for schemes of this type is becoming ever more complicated, especially with the increasing number of stakeholders involved and their growing tendency to defend their own varied interests. Failing to address and meet the concerns and expectations of stakeholders may result in project failure, so a systematic participatory approach is needed to facilitate decision-making. Though numerous decision models have been established in previous studies (e.g. ELECTRE methods, the analytic hierarchy process and the analytic network process), their applicability to the decision process during stakeholder participation in contemporary MIC projects is still uncertain. To resolve this, the decision rule approach is employed for modeling multi-stakeholder, multi-objective project decisions. The result is obtained naturally according to "rules" accepted by every stakeholder involved, so consensus is more likely to be achieved: the process is more convincing and the result is easier for all concerned to accept. Appropriate "rules", comprehensive enough to address multiple objectives yet straightforward enough to be understood by multiple stakeholders, are set for resolving conflict and facilitating consensus during the project decision process. The West Kowloon Cultural District (WKCD) project is used as a demonstration case, and a focus group meeting is conducted to confirm the validity of the model. The results indicate that the model is objective, reliable and practical enough to cope with real-world problems. Finally, a suggested future research agenda is provided.
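The abstract does not spell out the rules used; one simple rule of the kind described, which aggregates stakeholder rankings so transparently that every party can verify the outcome, is a Borda count. The stakeholders, schemes, and rankings below are invented for illustration.

```python
from collections import defaultdict

# Each stakeholder ranks the candidate schemes, best first
# (hypothetical data).
rankings = {
    "government": ["scheme_A", "scheme_B", "scheme_C"],
    "residents":  ["scheme_B", "scheme_C", "scheme_A"],
    "developers": ["scheme_A", "scheme_C", "scheme_B"],
}

# Borda count: an option scores (n - position) points per ranking.
scores = defaultdict(int)
for ranking in rankings.values():
    n = len(ranking)
    for pos, option in enumerate(ranking):
        scores[option] += n - pos

print(max(scores, key=scores.get), dict(scores))
```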
Abstract:
We conducted an experiment to investigate the impact of sport scandal on consumer attitudes toward a range of sport stakeholders. We examined the effects of fans' social identity (fan of scandalized team vs. fan of rival team), scandal severity (single perpetrator vs. multiple perpetrators) and the sponsor brand's response to the scandal (sponsorship retention vs. termination) on consumers' attitudes toward the implicated team, the scandal perpetrators, the sport, and the sponsor brand. We find evidence of differential reactions to scandal reflecting social identity, such that fans support their own team despite increased scandal severity but judge a rival team's transgressions negatively. Results suggest that where fans are concerned, sponsors may be better served by continuing a sponsorship following scandal than by terminating it, even for some forms of severe scandal. However, termination may be evaluated more positively by rival team fans; hence, continuing a sponsorship needs to be accompanied by a tempered approach.
Abstract:
This paper considers the problem of identifying the footprints of communication of multiple transmitters in a given geographical area. To do this, a number of sensors are deployed at arbitrary but known locations in the area, and their individual decisions regarding the presence or absence of the transmitters' signal are combined at a fusion center to reconstruct the spatial spectral usage map. One straightforward scheme to construct this map is to query each of the sensors in round-robin fashion and cluster the sensors that detect the primary's signal. However, exploiting the fact that a typical transmitter footprint map is a sparse image, two novel compressive sensing based schemes are proposed, which require significantly fewer transmissions than the querying scheme. A key feature of the proposed schemes is that the measurement matrix is constructed from a pseudo-random binary phase shift applied to the decision of each sensor prior to transmission. The measurement matrix is thus a binary ensemble which satisfies the restricted isometry property. The number of measurements needed for accurate footprint reconstruction is determined using compressive sampling theory. The three schemes are compared through simulations in terms of a performance measure that quantifies the accuracy of the reconstructed spatial spectral usage map. The proposed sparse reconstruction schemes are found to significantly outperform the round-robin querying scheme.
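The abstract does not name the reconstruction algorithm; orthogonal matching pursuit (OMP) is one standard sparse-recovery method that works with ±1 measurement ensembles of the kind described. The sketch below builds such a matrix, simulates the aggregated sensor transmissions, and recovers the sparse footprint; all sizes are illustrative and the sensing model is simplified to a noiseless one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                 # sensors, measurements, sparsity

# Sparse 0/1 "footprint" vector: few sensors detect the transmitter.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0

# Binary +/-1 measurement matrix: each sensor's decision is multiplied
# by a pseudo-random phase before transmission to the fusion center.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x                          # aggregate measurements

# Orthogonal matching pursuit: greedily grow the support set, then
# re-fit the coefficients by least squares on that support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("support recovered:", sorted(support) == sorted(np.flatnonzero(x)))
```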
Abstract:
A disadvantage of multiple-choice tests is that students have incentives to guess. To discourage guessing, it is common to use scoring rules that either penalize wrong answers or reward omissions. These scoring rules are considered equivalent in psychometrics, although experimental evidence has not always been consistent with this claim. We model students' decisions and show, first, that equivalence holds only under risk neutrality and, second, that the two rules can be modified so that they become equivalent even under risk aversion. This paper presents the results of a field experiment in which we analyze the decisions of subjects taking multiple-choice exams. The evidence suggests that differences between scoring rules are due to risk aversion as theory predicts. We also find that the number of omitted items depends on the scoring rule, knowledge, gender and other covariates.
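To make the risk-neutrality claim concrete, here is one standard way to set up the comparison (the notation is ours, not necessarily the paper's): a correct answer scores 1; a penalty rule scores a wrong answer $-w$ and an omission 0; a reward rule scores a wrong answer 0 and an omission $r$.

```latex
% A student who is correct with probability p answers (rather than
% omits) iff the expected score of answering is higher:
\text{penalty rule:}\quad p\cdot 1 + (1-p)(-w) > 0
  \;\Longleftrightarrow\; p > \frac{w}{1+w},
\qquad
\text{reward rule:}\quad p > r .
% The two rules induce identical risk-neutral behaviour iff
% r = w/(1+w); e.g. with k options, the familiar penalty w = 1/(k-1)
% corresponds to the omission reward r = 1/k. Under risk aversion the
% omission payoff is certain while answering is a gamble, so replacing
% expected score with expected utility breaks the equivalence.
```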
Abstract:
Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) EEW system was operating during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges to becoming practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention in activating mitigation actions and must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to concurrent events, which are often observed during the aftershock sequence following a large earthquake.
To overcome the challenges of uncertain information and short lead times, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions in EEW mitigation applications. A cost-benefit model that captures the uncertainties in both the EEW information and the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Surrogate models are suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying activation of a mitigation action in exchange for a possible reduction in the uncertainty of the EEW information at the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
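At its core, a cost-benefit decision rule of the kind ePAD automates compares the expected loss with and without mitigation, given the posterior over shaking intensity implied by the uncertain EEW estimate. The sketch below shows that bare comparison; the lead-time and value-of-information terms described above are omitted, and all numbers are invented.

```python
import numpy as np

# Posterior over peak ground shaking at the site, derived from the
# (uncertain) EEW magnitude/location estimate (coarse illustrative
# discretisation; units are peak ground acceleration in g).
intensity = np.array([0.05, 0.10, 0.20, 0.40])
prob = np.array([0.55, 0.25, 0.15, 0.05])

loss_no_action = np.array([0.0, 1.0, 10.0, 60.0])  # loss per level if idle
loss_mitigated = 0.3 * loss_no_action              # mitigation reduces loss
action_cost = 2.0                                  # e.g. stopping elevators

exp_idle = prob @ loss_no_action
exp_act = prob @ loss_mitigated + action_cost
print("act" if exp_act < exp_idle else "wait",
      f"(E[loss] act={exp_act:.2f}, idle={exp_idle:.2f})")
```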
Abstract:
Knowing where pinnipeds forage is vital to managing and protecting their populations, and for assessing potential interactions with fisheries. We assessed the spatial relationship between the seasonal distribution of Pacific harbor seals (Phoca vitulina richardii) outfitted with satellite transmitters and the seasonal distributions of potential harbor seal prey species in San Francisco Bay, California. Pearson’s correlation coefficients were calculated between the number of harbor seal locations in an area of the San Francisco Bay and the abundance of specific prey species in the same area. The influence of scale on the analyses was assessed by varying the scale of analysis from 1 to 10 km. There was consistency in the prey species targeted by harbor seals year-round, although there were seasonal differences between the most important prey species. The highest correlations between harbor seals and their prey were found for seasonally abundant benthic species, located within about 10 km of the primary haul-out site. Probable foraging habitat for harbor seals was identified, based on areas with high abundances of prey species that were strongly correlated with harbor seal distribution. With comparable local data inputs, this approach has potential application to pinniped management in other areas, and to decisions about the location of marine reserves designed to protect these species.
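The core computation described, correlating gridded seal-location counts with gridded prey abundance while varying the grid scale, can be sketched as follows. The data here are synthetic stand-ins; the study's actual survey data and study-area geometry differ.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Point locations (km) of seal fixes and prey records in a notional
# 20 x 20 km area; prey points are loosely co-located with seals.
seals = rng.uniform(0, 20, size=(500, 2))
prey = seals + rng.normal(0, 3, size=(500, 2))

for cell in (1, 2, 5, 10):                        # grid scale in km
    bins = np.arange(0, 20 + cell, cell)
    s, *_ = np.histogram2d(seals[:, 0], seals[:, 1], bins=[bins, bins])
    p, *_ = np.histogram2d(prey[:, 0], prey[:, 1], bins=[bins, bins])
    r, _ = pearsonr(s.ravel(), p.ravel())         # correlation of counts
    print(f"{cell:>2} km cells: r = {r:.2f}")
```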
Abstract:
Decision making in an uncertain environment poses a conflict between the opposing demands of gathering and exploiting information. In a classic illustration of this 'exploration-exploitation' dilemma, a gambler choosing between multiple slot machines balances the desire to select what seems, on the basis of accumulated experience, the richest option, against the desire to choose a less familiar option that might turn out more advantageous (and thereby provide information for improving future decisions). Far from representing idle curiosity, such exploration is often critical for organisms to discover how best to harvest resources such as food and water. In appetitive choice, substantial experimental evidence, underpinned by computational reinforcement learning (RL) theory, indicates that a dopaminergic, striatal and medial prefrontal network mediates learning to exploit. In contrast, although exploration has been well studied from both theoretical and ethological perspectives, its neural substrates are much less clear. Here we show, in a gambling task, that human subjects' choices can be characterized by a computationally well-regarded strategy for addressing the explore/exploit dilemma. Furthermore, using this characterization to classify decisions as exploratory or exploitative, we employ functional magnetic resonance imaging to show that the frontopolar cortex and intraparietal sulcus are preferentially active during exploratory decisions. In contrast, regions of striatum and ventromedial prefrontal cortex exhibit activity characteristic of an involvement in value-based exploitative decision making. The results suggest a model of action selection under uncertainty that involves switching between exploratory and exploitative behavioural modes, and provide a computationally precise characterization of the contribution of key decision-related brain systems to each of these functions.
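A standard computational strategy of the kind fitted to subjects' choices in such tasks is softmax (Boltzmann) action selection over learned values, with each trial classified as exploitative when the chosen machine has the highest current value estimate and exploratory otherwise. The sketch below is illustrative; the parameter values are arbitrary assumptions, not the paper's fits.

```python
import numpy as np

rng = np.random.default_rng(2)
true_payoff = np.array([0.3, 0.5, 0.7, 0.4])   # four "slot machines"
Q = np.zeros(4)                                 # learned value estimates
alpha, beta = 0.1, 5.0                          # learning rate, inverse temperature

for t in range(1000):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax policy
    a = rng.choice(4, p=p)
    # A choice is "exploitative" if it has the highest current value,
    # "exploratory" otherwise; this is the kind of trial-by-trial
    # classification used for the imaging contrast described above.
    exploratory = a != np.argmax(Q)
    reward = rng.random() < true_payoff[a]
    Q[a] += alpha * (reward - Q[a])                 # delta-rule update
```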
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult, as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
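The two-tier idea, a per-stream reservation plus a shared slack sized so that aggregate overflow beyond it is rare, can be illustrated as follows. The traces are synthetic, and the paper's actual scheme additionally uses three per-stream parameters and analytic drop-probability bounds that are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Per-slot bandwidth traces for 50 VBR streams (synthetic stand-ins;
# real inputs would be encoder rate traces).
traces = rng.gamma(shape=2.0, scale=1.0, size=(50, 10_000))

per_stream = traces.mean(axis=1)           # tier 1: per-stream reservations
excess = np.clip(traces - per_stream[:, None], 0, None).sum(axis=0)
shared = np.quantile(excess, 0.999)        # tier 2: shared slack sized so
                                           # aggregate overflow is rare
total = per_stream.sum() + shared
peak_based = traces.max(axis=1).sum()      # deterministic per-stream peaks
print(f"two-tier: {total:.0f}  vs  per-stream peaks: {peak_based:.0f}")
```

Because the streams' peaks rarely coincide, the shared slack needed for a high quantile of the aggregate excess is far smaller than the sum of individual peaks, which is the statistical multiplexing gain the abstract describes.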
Abstract:
Some luxury goods manufacturers offer limited editions of their products, whereas others market multiple product lines. Researchers have found that reference groups shape consumer evaluations of these product categories, yet little empirical research has examined how reference groups affect firms' product line decisions. Indeed, in a field setting it is quite a challenge to isolate reference group effects from contextual effects and correlated effects. In this paper, we propose a parsimonious model that allows us to study how reference groups influence firm behavior and that lends itself to experimental analysis. With the aid of the model, we investigate the behavior of consumers in a laboratory setting where we can focus on reference group effects after controlling for contextual and correlated effects. The experimental results show that in the presence of strong reference group effects, limited editions and multiple product lines can help improve firms' profits. Furthermore, the trends in participants' purchase decisions suggest that they are capable of close to two steps of strategic thinking at the outset of the game and then learn through reinforcement mechanisms. © 2010 INFORMS.