957 results for expected utility theory


Relevance:

80.00%

Publisher:

Abstract:

Dominance measuring methods are an approach for dealing with complex decision-making problems with imprecise information within multi-attribute value/utility theory. These methods are based on the computation of pairwise dominance values and exploit the information in the dominance matrix in different ways to derive measures of dominance intensity and rank the alternatives under consideration. In this paper we review dominance measuring methods proposed in the literature for dealing with imprecise information (intervals, ordinal information or fuzzy numbers) about decision-makers' preferences, and compare their performance with other existing approaches, such as SMAA and SMAA-II or Sarabando and Dias' method.
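
As an illustration of the scheme these methods share, the sketch below (Python, hypothetical interval data) computes a pairwise dominance matrix from lower and upper bounds on overall alternative values and ranks the alternatives by a simple net dominance intensity. It is a minimal variant for intuition only, not any of the specific methods reviewed.

    # Minimal sketch of a dominance measuring method for interval-valued
    # alternatives. D[k][l] is the worst-case margin of alternative k over
    # alternative l under imprecision; the intensity measure used here
    # (row sums of D) is one simple, illustrative choice.
    intervals = {"a1": (0.55, 0.70), "a2": (0.40, 0.80),
                 "a3": (0.30, 0.50), "a4": (0.60, 0.65)}

    names = list(intervals)
    D = {k: {l: intervals[k][0] - intervals[l][1]
             for l in names if l != k} for k in names}

    intensity = {k: sum(D[k].values()) for k in names}
    ranking = sorted(names, key=intensity.get, reverse=True)
    print(ranking)  # alternatives ordered by dominance intensity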

Relevance:

80.00%

Publisher:

Abstract:

We consider a group decision-making problem within multi-attribute utility theory, in which the relative importance of decision-makers (DMs) is known and their preferences are represented by means of an additive function. We allow DMs to provide veto values for the attributes under consideration and build veto and adjust functions that are incorporated into the additive model. Veto functions check whether alternative performances are within the respective veto intervals, making the overall utility of the alternative equal to 0, whereas adjust functions reduce the utility of the alternative performance to match the preferences of other DMs. Dominance measuring methods are used to account for imprecise information in the decision-making scenario and to derive a ranking of alternatives for each DM. Specifically, ordinal information about the relative importance of criteria is provided by each DM. Finally, an extension of Kemeny's method is used to aggregate the alternative rankings from the DMs, accounting for their relative importance.
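
A minimal sketch of how veto and adjust functions might enter the additive model follows; the interval bounds, weights and component utilities are hypothetical, and the adjust rule shown (scaling the component utility by a fixed factor) is just one plausible realization of "reducing the utility".

    # Additive utility with veto and adjust functions (illustrative).
    # A performance inside a veto interval drives the overall utility to 0;
    # one inside an adjust interval has its component utility scaled down
    # to reflect the preferences of the other DMs.
    def overall_utility(x, weights, u_funcs, veto, adjust, penalty=0.5):
        total = 0.0
        for i, xi in enumerate(x):
            lo, hi = veto[i]
            if lo <= xi <= hi:       # veto function fires
                return 0.0
            ui = u_funcs[i](xi)
            alo, ahi = adjust[i]
            if alo <= xi <= ahi:     # adjust function reduces the utility
                ui *= penalty
            total += weights[i] * ui
        return total

    weights = [0.6, 0.4]
    u_funcs = [lambda v: v / 100.0, lambda v: 1.0 - v / 10.0]
    veto   = [(0, 20), (8, 10)]      # vetoed performance ranges
    adjust = [(20, 35), (6, 8)]      # partially penalized ranges
    print(overall_utility([50, 3], weights, u_funcs, veto, adjust))  # 0.58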

Relevance:

80.00%

Publisher:

Abstract:

In most decision-making problems, the best choice is unclear because of their complexity. This complexity is mainly associated with the existence of multiple conflicting objectives; besides, in many cases only incomplete or imprecise information on the various decision model parameters is available. Additionally, the decision-making process may be performed by a group, in which case the model must account for the individual preferences of each decision-maker (DM), which then have to be aggregated to reach a final consensus. This makes the decision process even more difficult. The decision analysis (DA) methodology is a systematic and logical procedure for structuring and simplifying the decision-making task. It takes advantage of existing information, collected data, models and professional opinions to quantify the probability of the alternative values or impacts, and of utility theory to quantify the DMs' preferences concerning the possible alternative values. This PhD thesis focuses on developing extensions of the multicriteria additive utility model for group decision-making with vetoes, based on DA and on the concept of dominance intensity, in order to exploit the incomplete or imprecise information associated with the parameters of the decision-making model. We consider the possibility that the relative importance of criteria for DMs is represented by value intervals, by ordinal information, or by trapezoidal fuzzy numbers. Additionally, we consider that DMs are allowed to provide veto values for the criteria under consideration, of which only a subset are effective, whereas the remainder are only partially taken into account.
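
One of the weight representations the thesis considers, the trapezoidal fuzzy number, can be made concrete with a short sketch; the numbers are hypothetical, and centroid defuzzification is just one standard way to reduce such a weight to a crisp value.

    # A trapezoidal fuzzy weight (a, b, c, d): membership rises linearly
    # on [a, b], equals 1 on [b, c], and falls linearly on [c, d]. The
    # centroid below is the standard defuzzified value of the trapezoid.
    def trapezoid_centroid(a, b, c, d):
        num = (d ** 2 + c ** 2 + c * d) - (a ** 2 + b ** 2 + a * b)
        den = 3.0 * ((d + c) - (a + b))
        return num / den

    # hypothetical weight "around 0.3, certainly between 0.20 and 0.45"
    print(trapezoid_centroid(0.20, 0.28, 0.33, 0.45))  # ~0.317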

Relevance:

80.00%

Publisher:

Abstract:

Although many new diseases have emerged within the past 2 decades [Cohen, M. L. (1998) Brit. Med. Bull. 54, 523–532], attributing low numbers of animal hosts to the existence of even a new pathogen is problematic. This is because very rarely does one have data on host abundance before and after the epizootic as well as detailed descriptions of pathogen prevalence [Dobson, A. P. & Hudson, P. J. (1995) in Ecology of Infectious Diseases in Natural Populations, eds. Grenfell, B. T. & Dobson, A. P. (Cambridge Univ. Press, Cambridge, U.K.), pp. 52–89]. Month by month we tracked the spread of the epizootic of an apparently novel strain of a widespread poultry pathogen, Mycoplasma gallisepticum, through a previously unknown host, the house finch, whose abundance has been monitored over past decades. Here we are able to demonstrate a causal relationship between high disease prevalence and declining house finch abundance throughout the eastern half of North America because the epizootic reached different parts of the house finch range at different times. Three years after the epizootic arrived, house finch abundance stabilized at similar levels, although house finch abundance had been high and stable in some areas but low and rapidly increasing in others. This result, not previously documented in wild populations, is as expected from theory if transmission of the disease was density dependent.
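
The theoretical expectation invoked in the last sentence can be illustrated with a toy susceptible-infected model under density-dependent (mass-action) transmission, in which susceptible host density equilibrates at a level set by the pathogen's parameters alone, independent of initial abundance. The sketch below uses invented parameters and is not a model of the finch data.

    # Toy SI model with density-dependent transmission (illustrative):
    #   dS/dt = r*S - beta*S*I + gamma*I
    #   dI/dt = beta*S*I - (gamma + alpha)*I
    # Susceptibles equilibrate at S* = (gamma + alpha)/beta regardless of
    # the initial host density -- the signature of density dependence.
    r, beta, gamma, alpha = 0.5, 0.002, 0.5, 1.0

    def simulate(S0, I0=10.0, dt=0.001, steps=300_000):
        S, I = S0, I0
        for _ in range(steps):
            dS = r * S - beta * S * I + gamma * I
            dI = beta * S * I - (gamma + alpha) * I
            S, I = S + dt * dS, I + dt * dI
        return S

    # high- and low-density populations both settle near 1.5/0.002 = 750
    print(simulate(2000.0), simulate(900.0))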

Relevance:

80.00%

Publisher:

Abstract:

The emulsion polymerization of styrene in a 250 µL Syrris microreactor with a static T-junction mixer was studied in two stages. First, only the fluid dynamics of this unconventional device were evaluated; then, the polymerization reaction was carried out in order to observe how this factor influences the system. The experiments sought to reach higher conversions while maintaining the stability of the emulsion. This was exploratory work, and so it more closely resembles an evolutionary process. We determined the ratio of the two fluid flow rates at which droplet formation occurs, and found that increasing the flow rate of the continuous aqueous phase (Qc) while keeping the dispersed-phase flow rate (Qd) constant reduced the droplet diameter, with the flow remaining laminar. Subsequently, the emulsion polymerization of styrene was carried out in the microreactor, although with restrictions at high flow rates. The process parameters tested were the Qc:Qd ratio, the temperature, and the initiator concentration, in order to verify the effect of their variation on monomer conversion, on particle diameter and number, and on the average molecular weights. The polymerization was performed with the sum of the Qc and Qd flow rates on the order of 100 µL/min, with 15% monomer in the formulation and the longest possible residence time of 2.5 minutes. For monomer concentrations above 15%, clogging of the microreactor channel was observed. The monomer conversion increased with increasing temperature and increasing initiator concentration, but the highest value reached was only 37%, owing to the short residence time. In the runs with the highest conversion, the molecular weights obtained were the lowest, as expected from theory. Finally, the polydispersity indices (PDI) obtained were in the range of 2.5 to 3.5.
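
The 2.5-minute residence time quoted above follows directly from the reactor volume and the total flow rate; a one-line check with the values from the abstract:

    # mean residence time = reactor volume / total volumetric flow rate
    V_uL = 250.0            # microreactor volume, in µL
    Q_uL_min = 100.0        # Qc + Qd, in µL/min
    print(V_uL / Q_uL_min)  # -> 2.5 minutes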

Relevance:

80.00%

Publisher:

Abstract:

Traditionally, the literature estimates the equity of a brand or its extension, but it pays little attention to collective brand equity, even though collective branding is increasingly used to differentiate the homogeneous products of different firms or organizations. We propose an approach that estimates the incremental effect of individual brands (i.e., the contribution of individual brands) on collective brand equity through the various stages of a consumer's hierarchical buying choice process in which decisions are nested: "whether to buy", "which collective brand to buy" and "which individual brand to buy". This proposal follows the approach of the Random Utility Theory, and it is theoretically argued through the Associative Networks Theory and the cybernetic model of decision making. The empirical analysis, carried out on collective brands in Spanish tourism, finds a three-stage hierarchical sequence and estimates the contribution of individual brands to the equity of the collective brands of "Sun, Sea and Sand" and of "World Heritage Cities".
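
Within the Random Utility framework, a hierarchical sequence of this kind is commonly operationalized as a nested logit, with the inclusive value (logsum) carrying lower-level attractiveness up to the level above. The sketch below uses invented utilities and nesting parameters purely to show how the three stages multiply into an unconditional brand-choice probability; it is not the paper's estimated model.

    import math

    def logsum(utilities, lam):
        # inclusive value of a nest with scale parameter lam (0 < lam <= 1)
        return lam * math.log(sum(math.exp(u / lam) for u in utilities))

    # individual-brand utilities within two hypothetical collective brands
    v = {"sun_sea_sand": [1.2, 0.8, 0.5], "heritage_cities": [1.0, 0.9]}
    w = {"sun_sea_sand": 0.4, "heritage_cities": 0.2}  # collective-brand utility
    lam = 0.6

    iv = {c: logsum(us, lam) for c, us in v.items()}

    # middle level: which collective brand, conditional on buying
    denom = sum(math.exp(w[c] + iv[c]) for c in v)
    p_collective = {c: math.exp(w[c] + iv[c]) / denom for c in v}

    # top level: whether to buy, as a binary logit against an outside option
    p_buy = denom / (denom + 1.0)

    # unconditional probability of one individual brand
    c = "sun_sea_sand"
    p_first = math.exp(v[c][0] / lam) / sum(math.exp(u / lam) for u in v[c])
    print(p_buy * p_collective[c] * p_first)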

Relevance:

80.00%

Publisher:

Abstract:

We model social choices as acts mapping states of the world to (social) outcomes. A (social choice) rule assigns an act to every profile of subjective expected utility preferences over acts. A rule is strategy-proof if no agent ever has an incentive to misrepresent her beliefs about the world or her valuation of the outcomes; it is ex-post efficient if the act selected at any given preference profile picks a Pareto-efficient outcome in every state of the world. We show that every two-agent ex-post efficient and strategy-proof rule is a top selection: the chosen act picks the most preferred outcome of some (possibly different) agent in every state of the world. The states in which an agent’s top outcome is selected cannot vary with the reported valuations of the outcomes but may change with the reported beliefs. We give a complete characterization of the ex-post efficient and strategy-proof rules in the two-agent, two-state case, and we identify a rich class of such rules in the two-agent case.
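
A concrete toy instance of a top selection may help fix ideas. In the sketch below (illustrative only, not the paper's characterization), each state is assigned to an agent as a function of the reported beliefs alone, here to whichever agent deems the state more likely, and the chosen act picks that agent's top outcome in that state; reported valuations never affect the assignment, as the result above requires.

    # Illustrative top-selection rule for two agents and two states.
    # beliefs[i][s]: agent i's reported probability of state s
    # tops[i]: agent i's most preferred outcome (state-independent here)
    def top_selection(beliefs, tops):
        act = {}
        for s in (0, 1):
            # state assignment depends on reported beliefs only ...
            dictator = 0 if beliefs[0][s] >= beliefs[1][s] else 1
            # ... and the act picks that agent's top outcome in the state
            act[s] = tops[dictator]
        return act

    beliefs = [(0.7, 0.3), (0.4, 0.6)]   # hypothetical reported beliefs
    tops = ["x", "y"]
    print(top_selection(beliefs, tops))  # {0: 'x', 1: 'y'}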

Relevance:

80.00%

Publisher:

Abstract:

Four variations on the Two Envelope Paradox are stated and compared. The variations are employed to provide a diagnosis and an explanation of what has gone awry in the paradoxical modeling of the decision problem that the paradox poses. The canonical formulation of the paradox underdescribes the ways in which one envelope can come to have twice the amount that is in the other. Some of those ways make it rational to prefer the envelope that was originally rejected; some do not, and it is a mistake to treat them alike. The nature of the mistake is diagnosed via the different roles that rigid designators and definite descriptions play in unproblematic and in untoward formulations of the decision tables employed in setting out the decision problem that gives rise to the paradox. The decision maker's knowledge or ignorance of how one envelope came to have twice the amount that is in the other determines which of the different ways of modeling the decision problem is correct. Under this diagnosis, the paradoxical modeling of the Two Envelope problem is incoherent.
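
The underdescription at issue shows up in the canonical expectation step. Writing $x$ for the amount in the chosen envelope, the paradoxical argument computes

    E[\text{other}] = \tfrac{1}{2}(2x) + \tfrac{1}{2}\left(\tfrac{x}{2}\right) = \tfrac{5}{4}x > x,

treating $x$ as if it rigidly designated one amount across two cases in which the chosen envelope in fact holds different amounts. Bookkeeping the pair as $(y, 2y)$ instead, the two equiprobable cases give the other envelope $2y$ when you hold $y$ and $y$ when you hold $2y$, so

    E[\text{other}] = \tfrac{1}{2}(2y) + \tfrac{1}{2}(y) = \tfrac{3}{2}y = E[\text{chosen}],

and the incentive to switch disappears. (This worked step is the standard one; it is included here only to make the diagnosis above concrete.)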

Relevance:

80.00%

Publisher:

Abstract:

This paper introduces the rank-dependent quality-adjusted life-years (QALY) model, a new method to aggregate QALYs in economic evaluations of health care. The rank-dependent QALY model permits the formalization of influential concepts of equity in the allocation of health care, such as the fair innings approach, and it includes as special cases many of the social welfare functions that have been proposed in the literature. An important advantage of the rank-dependent QALY model is that it offers a straightforward procedure to estimate equity weights for QALYs. We characterize the rank-dependent QALY model and argue that its central condition has normative appeal.
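
A minimal sketch of the aggregation the model performs, with hypothetical equity weights: QALYs are ordered from worst-off to best-off and each rank receives a (typically decreasing) weight, so a given quantity of health counts for more when it accrues to those with fewer QALYs, as in fair-innings arguments.

    # Rank-dependent aggregation of QALYs (illustrative weights).
    def rank_dependent_welfare(qalys, rank_weights):
        ordered = sorted(qalys)          # worst-off first
        return sum(w * q for w, q in zip(rank_weights, ordered))

    weights = [0.5, 0.3, 0.2]            # decreasing equity weights
    print(rank_dependent_welfare([70, 40, 55], weights))
    # = 0.5*40 + 0.3*55 + 0.2*70 = 50.5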

Relevance:

80.00%

Publisher:

Abstract:

We present a definition of increasing uncertainty, in which an elementary increase in the uncertainty of any act corresponds to the addition of an 'elementary bet' that increases consumption by a fixed amount in (relatively) 'good' states and decreases consumption by a fixed (and possibly different) amount in (relatively) 'bad' states. This definition naturally gives rise to a dual definition of comparative aversion to uncertainty. We characterize this definition for a popular class of generalized models of choice under uncertainty.
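
A toy instance of the definition, with made-up numbers: with one (relatively) good and one (relatively) bad state, take the act $f = (10, 10)$. An elementary bet adding a fixed $2$ units of consumption in the good state and subtracting a fixed $1$ unit in the bad state yields

    g = f + (+2, -1) = (12, 9),

so $g$ differs from $f$ by exactly one elementary increase in uncertainty, and the dual comparative notion ranks one agent as more uncertainty averse than another when she dislikes such additions more.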

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we consider the relationship between supermodularity and risk aversion. We show that supermodularity of the certainty equivalent implies that the certainty equivalent of any random variable is less than its mean. We also derive conditions under which supermodularity of the certainty equivalent is equivalent to aversion to mean-preserving spreads in the sense of Rothschild and Stiglitz.
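
For reference, the central condition can be written out (a standard formulation, with $\vee$ and $\wedge$ denoting the statewise maximum and minimum): the certainty equivalent $CE$ is supermodular if

    CE(X \vee Y) + CE(X \wedge Y) \ge CE(X) + CE(Y)

for all random variables $X, Y$, and the first result above states that this implies $CE(X) \le \mathbb{E}[X]$ for every $X$.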

Relevance:

80.00%

Publisher:

Abstract:

Greater inclusion of individuals with disabilities in mainstream society is an important goal. One of the best ways to include individuals is to actively promote and encourage their participation in the labor force. Of all disabilities, it is feasible to assume that individuals with spinal cord injuries can be among the most easily mainstreamed into the labor force. However, less than fifty percent of individuals with spinal cord injuries work.

This study focuses on how disability benefit programs, such as Social Security Disability Insurance and Worker's Compensation, the Americans with Disabilities Act, and rehabilitation programs affect employment decisions. The questions were modeled using utility theory with an augmented expenditure function and indifference theory. Statistically, Probit, Logit, predicted probability, and linear regressions were used to analyze these questions. Statistical analysis was done on the probability of working, on ever attempting to work after injury, on the number of years after injury that work was first attempted, and on the number of hours worked per week. The data utilized were from the National Spinal Cord Injury Database and the Spinal Cord Injuries and Labor Database; the latter was created specifically for this study by the author. Receiving disability benefits decreased the probability of working and of ever attempting to work, increased the number of years after injury before the first work attempt was made, and decreased the number of hours worked per week for those individuals working. These results were all statistically significant. The Americans with Disabilities Act decreased the number of years before an individual made a work attempt; the decrease is statistically significant. The amount of rehabilitation had a significant positive effect for male individuals with low paraplegia, and a significant negative effect for individuals with high tetraplegia. For women, there were significant negative effects for high tetraplegia and high paraplegia.

This study finds that the financial disincentives of receiving benefits are the major determinants of whether an individual with a spinal cord injury returns to the labor force. Policies are recommended that would decrease the disincentive.
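
As an illustration of the kind of employment equation estimated, the sketch below fits a probit for the probability of working on synthetic data; the variable names are hypothetical stand-ins for the study's regressors, and the simulated coefficients merely mimic the signs reported above.

    import numpy as np
    import statsmodels.api as sm

    # Illustrative probit of P(working) on synthetic data (not the
    # study's data; names and magnitudes are invented).
    rng = np.random.default_rng(0)
    n = 500
    benefits = rng.integers(0, 2, n)       # receives disability benefits
    high_injury = rng.integers(0, 2, n)    # high tetraplegia indicator
    education = rng.normal(13, 2, n)       # years of education

    latent = 0.8 - 1.2 * benefits - 0.6 * high_injury \
             + 0.15 * (education - 13)
    working = (latent + rng.normal(0, 1, n) > 0).astype(int)

    X = sm.add_constant(np.column_stack([benefits, high_injury, education]))
    print(sm.Probit(working, X).fit(disp=0).params)
    # the negative benefits coefficient mirrors the disincentive effect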

Relevance:

80.00%

Publisher:

Abstract:

Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied for target kinematics modeling in various applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present the systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
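
A minimal example of the first model class: a Gaussian process regression over a time-invariant spatial field, here fit to synthetic measurements of one velocity component with scikit-learn (an assumed dependency; the dissertation's own formulation may differ in kernel and inference details).

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # GP model of one component of a time-invariant velocity field,
    # trained on noisy synthetic measurements at scattered locations.
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, size=(40, 2))           # measurement locations
    u = np.sin(X[:, 0] / 2) * np.cos(X[:, 1] / 3)  # true east velocity
    u += rng.normal(0, 0.05, size=40)              # sensor noise

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)
                                  + WhiteKernel(noise_level=0.05 ** 2))
    gp.fit(X, u)

    # posterior mean and uncertainty at an unvisited location
    mean, std = gp.predict(np.array([[5.0, 5.0]]), return_std=True)
    print(mean[0], std[0])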

Novel information theoretic functions are developed for the introduced Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed, which are proved to be unbiased, with the variance of the resultant approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
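
The idea behind an expected KL information function can be illustrated in one dimension, where the KL divergence between Gaussians is available in closed form and the expectation over the not-yet-taken measurement is estimated by Monte Carlo. This is a simplified stand-in for the Gaussian process construction, not the dissertation's estimator.

    import math, random

    # Prior N(m0, v0) on an unknown scalar; measurement y = value + noise
    # with variance r_var. Expected KL(posterior || prior) over the
    # predictive distribution of y is the expected information gain.
    m0, v0, r_var = 0.0, 4.0, 1.0

    def kl_gauss(ma, va, mb, vb):
        # KL( N(ma, va) || N(mb, vb) ), with variances (not std devs)
        return 0.5 * (math.log(vb / va) + (va + (ma - mb) ** 2) / vb - 1.0)

    def expected_kl(n_samples=10_000):
        v1 = 1.0 / (1.0 / v0 + 1.0 / r_var)        # posterior variance
        total = 0.0
        for _ in range(n_samples):
            y = random.gauss(m0, math.sqrt(v0 + r_var))  # predictive draw
            m1 = v1 * (m0 / v0 + y / r_var)              # posterior mean
            total += kl_gauss(m1, v1, m0, v0)
        return total / n_samples

    print(expected_kl())  # expected information gain of one measurement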

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with data on ocean currents obtained by moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information theoretic functions for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
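
For the discrete-control case, the greedy strategy amounts to evaluating an information value estimate for every admissible control at each step and applying the best one; a skeletal version follows, in which info_value and step are hypothetical placeholders for the estimator and the sensor dynamics.

    # Skeleton of greedy sensor planning over a discrete control space.
    def greedy_plan(state, controls, info_value, step, horizon):
        plan = []
        for _ in range(horizon):
            # keep the control whose estimated information value is largest
            best = max(controls, key=lambda c: info_value(state, c))
            plan.append(best)
            state = step(state, best)   # propagate the sensor state
        return plan

    # toy usage: 1-D sensor, "information" = closeness to a target at 3
    plan = greedy_plan(0, [-1, 0, 1],
                       info_value=lambda s, c: -abs((s + c) - 3),
                       step=lambda s, c: s + c, horizon=5)
    print(plan)  # -> [1, 1, 1, 0, 0]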