726 results for Subjective expected utility


Relevance:

90.00%

Publisher:

Abstract:

Adapted communication systems for efficient use in decentralised electrical supply structures - In public electricity grids, information exchange has long been handled successfully by historically evolved and adapted systems. Based on a broad range of experience and a well-developed communication infrastructure, connecting a participant in the public supply grid to the information infrastructure is generally not an obstacle. The situation is different in decentralised supply structures. Since the electrification of decentralised supply areas, through the networking of many distributed generation units and the construction of distribution grids that are not connected to the public electricity grid (minigrids), has gained popularity only in recent years, few projects have been completed to date. For the communication connection of participants in these structures, this means that experience can be drawn on only to a very limited extent when selecting a system. Within the scope of this dissertation, a decision-making process (a guideline for system selection) has therefore been developed which, alongside a direct comparison of communication systems based on derived evaluation criteria and types and a reduction of the comparison to two system values (relative gain in expected utility and increase in total cost), enables the choice of a suitable communication system for application in decentralised electrical supply structures. Following classical decision theory, the calculation of an expected utility for each communication system, as the total sum of the individual products of the utility values and the weighting factors of the system, combines the technical parameters and application-specific aspects as well as the subjective assessments into a single value. Determining the total annual expenditure required for a communication system, or for the intended communication tasks, as a function of the application provides a further decision parameter for system selection alongside the system's expected utility. The subsequent choice of suitable reference quantities allows the decision among the candidate systems to be reduced to a comparison with a reference system. What matters here is not the absolute differences in expected utility or annual total expenditure, but rather how each system compares with the norm (the reference system). That is, the relative gain in expected utility and in total cost of each system is the decisive parameter for system selection. By entering the calculated relative gains in expected utility and total cost into a newly developed four-quadrant matrix, a simple (graphical) decision on the communication system best suited to the application can be made, taking into account the position of the corresponding value pairs. An exemplary system selection, based on the results of analysing communication systems for use in decentralised electrical supply structures, illustrates and verifies the handling of the developed concept.
The concluding implementation, modification and testing of the previously selected Distribution Line Carrier system further underlines the efficiency of the developed decision-making process. Overall, the decision maker responsible for system selection is provided with a tool that permits simple and practicable decision-making. The developed concept makes possible, for the first time, a holistic assessment that takes into account the technical and application-specific as well as the economic aspects and boundary conditions; moreover, the decision-making concept is not limited to system selection for decentralised electrical energy supply structures but can, with appropriate modification of the requirements, system parameters and so on, be transferred to other applications.
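For illustration, a minimal sketch (in Python) of the selection arithmetic the abstract describes: an expected utility per communication system as the weighted sum of criterion scores, relative gains in expected utility and total annual cost against a reference system, and a reading of the resulting value pair in the four-quadrant matrix. All criterion names, weights, scores and costs below are invented, not taken from the dissertation.

# Hedged sketch of the system-selection arithmetic: expected utility per system
# as the sum of the products of utility values and weighting factors, then the
# relative gains in expected utility and annual cost versus a reference system.
weights = {"data_rate": 0.3, "range": 0.3, "robustness": 0.2, "maintenance": 0.2}

systems = {
    # name: (criterion scores on a 0-10 scale, total annual cost)
    "reference": ({"data_rate": 5, "range": 6, "robustness": 6, "maintenance": 5}, 10_000.0),
    "dlc":       ({"data_rate": 6, "range": 8, "robustness": 7, "maintenance": 6}, 11_500.0),
    "gsm_modem": ({"data_rate": 7, "range": 9, "robustness": 5, "maintenance": 4}, 14_000.0),
}

def expected_utility(scores):
    return sum(weights[c] * s for c, s in scores.items())

ref_scores, ref_cost = systems["reference"]
ref_eu = expected_utility(ref_scores)

for name, (scores, cost) in systems.items():
    if name == "reference":
        continue
    d_eu = (expected_utility(scores) - ref_eu) / ref_eu    # relative expected-utility gain
    d_cost = (cost - ref_cost) / ref_cost                  # relative total-cost increase
    quadrant = ("higher utility" if d_eu >= 0 else "lower utility") + ", " + \
               ("higher cost" if d_cost > 0 else "lower or equal cost")
    print(f"{name}: dEU = {d_eu:+.1%}, dCost = {d_cost:+.1%} ({quadrant})")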

Relevance:

90.00%

Publisher:

Abstract:

This note shows that, under appropriate conditions, preferences may be locally approximated by the linear utility or risk-neutral preference functional associated with a local probability transformation.
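As a reminder of what such a local approximation typically looks like (this is the standard Machina-style local expected-utility expansion in our own notation, not necessarily the note's formulation): for a smooth preference functional V over distributions and G close to F,

\[
V(G) - V(F) \;\approx\; \int u(x;F)\, d\big(G-F\big)(x),
\]

so that in a neighbourhood of F the preferences behave like an expected-utility functional with local utility u(·;F), and, after a suitable local probability transformation, like a linear (risk-neutral) functional in the transformed probabilities.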

Relevance:

80.00%

Publisher:

Abstract:

The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment, represented by parameters such as the probability of detection and penalties in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion affects the consumption-smoothing ability of the agent by creating two states of nature in which the agent is ‘caught’ or ‘not caught’, there is a possibility that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and in parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
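A minimal numerical sketch of the ‘evade or not’ comparison described above, contrasting the certain utility of full compliance with the expected utility over the ‘caught’/‘not caught’ states of nature when under-reporting. The CRRA utility and all parameter values are invented for illustration and are not the thesis's calibration.

import math

def crra(c, sigma=2.0):
    """Constant-relative-risk-aversion utility (log utility when sigma = 1)."""
    return math.log(c) if sigma == 1 else c**(1 - sigma) / (1 - sigma)

def evade_or_not(income, tax_rate, declared_share, p_detect, penalty_rate, sigma=2.0):
    # Utility under certainty when all income is reported
    u_comply = crra(income * (1 - tax_rate), sigma)
    # Two states of nature when under-reporting: 'not caught' and 'caught'
    declared = declared_share * income
    c_not_caught = income - tax_rate * declared
    c_caught = c_not_caught - penalty_rate * (income - declared)  # penalty on undeclared income
    eu_evade = p_detect * crra(c_caught, sigma) + (1 - p_detect) * crra(c_not_caught, sigma)
    return ("evade" if eu_evade > u_comply else "comply"), u_comply, eu_evade

# With a low detection probability, evasion still pays in expected-utility terms here
print(evade_or_not(income=100.0, tax_rate=0.3, declared_share=0.5,
                   p_detect=0.05, penalty_rate=0.6))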

Relevance:

80.00%

Publisher:

Abstract:

Despite its multiple potential contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of its market share compared to that of automobiles, particularly in affluent societies with low-density urban forms like Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals that takes into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users’ travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates research into the relationship between urban transit quality of service and user perception and behaviour. This research focused on two dimensions of transit users’ travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a ‘natural experiment’ for transit users between the CBD and UQ, as they could choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus 412, and the linear fast ferry CityCat on the Brisbane River. The population of interest was set as attendees of UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers’ perception of transit service quality and behaviour of using public transit in the study area. The first-wave survey collected behaviour and attitude data on respondents’ daily transit usage and their direct importance ratings of route-level transit quality-of-service factors. A series of statistical analyses was conducted to examine the relationships between transit users’ travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents’ importance ratings of service quality variables regarding transit route preference to explore users’ varying perspectives on transit quality of service. Based on the perceptions of service quality collected in the second-wave survey, a series of quality criteria for the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users’ route choice were estimated using route-level service quality perceptions collected in the second-wave survey. The relative importance of service quality factors, such as access and egress times, seat availability, and the busway system, was derived from the choice models’ significant parameter estimates. The parameter estimates were interpreted, particularly the equivalent in-vehicle time of access and egress times and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfer in transit trips were highlighted.
These importance ratios were applied back to the quality perceptions collected as RP data to compare satisfaction levels across service attributes and to generate an action relevance matrix that prioritises attributes for quality improvement. An empirical study on the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality data. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This motivated further investigation and modelling innovations in passengers’ access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time, the analysis of risk-averse attitudes towards missing a desired service run in passengers’ access arrival time choice, and extensions of the utility function specification for modelling the passenger access arrival distribution, using more elaborate expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers’ risk aversion. Discussions of this research’s contributions to knowledge, its limitations, and recommendations for future research are provided in the concluding section of this thesis.
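A sketch of the kind of multinomial logit route-choice model estimated in the thesis. The coefficients and attribute levels below are invented; only the structure (systematic utilities, logit probabilities, and ratios of coefficients read as equivalent in-vehicle time) mirrors the abstract.

import math

# Utility coefficients (per minute, or per 0/1 dummy); negative = disutility
beta = {"in_vehicle": -0.05, "access_egress": -0.12, "no_seat": -0.60, "busway": 0.40}

routes = {
    # in-vehicle min, access+egress min, seat unavailable (0/1), busway system (0/1)
    "busway_109": {"in_vehicle": 18, "access_egress": 8,  "no_seat": 0, "busway": 1},
    "bus_412":    {"in_vehicle": 25, "access_egress": 5,  "no_seat": 1, "busway": 0},
    "citycat":    {"in_vehicle": 30, "access_egress": 12, "no_seat": 0, "busway": 0},
}

def systematic_utility(attrs):
    return sum(beta[k] * v for k, v in attrs.items())

v = {r: systematic_utility(a) for r, a in routes.items()}
denom = sum(math.exp(u) for u in v.values())
probs = {r: math.exp(u) / denom for r, u in v.items()}
print(probs)

# Equivalent in-vehicle time of one minute of access/egress time:
print(beta["access_egress"] / beta["in_vehicle"])  # 2.4 in-vehicle minutes per access minute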

Relevance:

80.00%

Publisher:

Abstract:

We address a portfolio optimization problem in a semi-Markov modulated market. We study both terminal expected utility optimization on a finite time horizon and risk-sensitive portfolio optimization on finite and infinite time horizons. We obtain optimal portfolios in relevant cases. A numerical procedure is also developed to compute the optimal expected terminal utility for the finite-horizon problem.
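For orientation, standard forms of the two objectives (in our notation; the paper's precise setup may differ): with wealth $X_t$ driven by market coefficients modulated by a semi-Markov process, the first problem maximizes the terminal expected utility $\mathbb{E}[U(X_T)]$ over admissible portfolio strategies, while a common risk-sensitive criterion is

\[
J_\theta(T) \;=\; \frac{1}{\theta}\,\log \mathbb{E}\!\left[X_T^{\theta}\right]
\;\approx\; \mathbb{E}[\log X_T] \;+\; \frac{\theta}{2}\,\mathrm{Var}[\log X_T],
\]

with $\theta<0$ capturing risk aversion; on the infinite horizon one maximizes $\liminf_{T\to\infty} J_\theta(T)/T$.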

Relevance:

80.00%

Publisher:

Abstract:

One of the most tangled fields of research is the definition and modeling of affective concepts, i.e. concepts regarding emotions and feelings. The subject can be approached from many disciplines. The main problem is the lack of generally approved definitions. However, linguists, for example, have recently started to check the consistency of their theories with the help of computer simulations. Definitions of affective concepts are needed for performing similar simulations in the behavioral sciences. In this thesis, preliminary computational definitions of affects for a simple utility-maximizing agent are given. The definitions have been produced by synthesizing ideas from theories in several fields of research. The class of affects is defined as a superclass of emotions and feelings. An affect is defined as a process in which a change in an agent's expected utility causes a bodily change. If the process is currently under the attention of the agent (i.e. the agent is conscious of it), the process is a feeling. If it is not, but can in principle be taken into attention (i.e. it is preconscious), the process is an emotion. Thus, affects do not presuppose consciousness, but emotions and feelings do. Affects directed at unexpected materialized (i.e. past) events are delight and fright. Delight is the consequence of an unexpected positive event and fright is the consequence of an unexpected negative event. Affects directed at expected materialized (i.e. past) events are happiness (expected positive event materialized), disappointment (expected positive event did not materialize), sadness (expected negative event materialized) and relief (expected negative event did not materialize). Affects directed at expected unrealized (i.e. future) events are fear and hope. Some other affects can be defined as directed towards the originators of the events. The affect classification has also been implemented as a computer program, the purpose of which is to ensure the coherence of the definitions and also to illustrate the capabilities of the model. The exact content of the bodily changes associated with specific affects is not considered relevant from the point of view of the logical structure of affective phenomena. The utility function also need not be defined, since the target of examination is only its dynamics.
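The classification above is essentially a small decision table; a minimal sketch of it as code is given below (the encoding and function names are ours, not the thesis program's).

def classify_affect(expected, positive, resolved, occurred=None):
    """Affect directed at an event, following the definitions in the abstract.

    expected: was the event anticipated beforehand?
    positive: does the event raise (True) or lower (False) expected utility?
    resolved: is the outcome already known (past) or still pending (future)?
    occurred: for resolved expected events, did the anticipated event actually happen?
    """
    if not resolved:                      # expected but unrealized (future) events
        return "hope" if positive else "fear"
    if not expected:                      # unexpected materialized (past) events
        return "delight" if positive else "fright"
    if positive:                          # expected positive event
        return "happiness" if occurred else "disappointment"
    return "sadness" if occurred else "relief"   # expected negative event

def affect_kind(attended, preconscious):
    """Feeling if under attention, emotion if merely available to attention."""
    return "feeling" if attended else ("emotion" if preconscious else "affect")

print(classify_affect(expected=False, positive=True, resolved=True))                  # delight
print(classify_affect(expected=True, positive=False, resolved=True, occurred=False))  # relief
print(affect_kind(attended=False, preconscious=True))                                 # emotion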

Relevance:

80.00%

Publisher:

Abstract:

Optimal Punishment of Economic Crime: A Study on Bankruptcy Crime. This thesis examines whether the punishment practice for bankruptcy crimes is optimal in light of Gary S. Becker's theory of optimal punishment. According to Becker, a punishment is optimal if it eliminates the expected utility of the crime for the offender and, on the other hand, minimizes the cost of the crime to society. The offender's decision process is analysed through their expected utility of the crime. The expected utility is calculated from the offender's probability of getting caught, the cost of getting caught and the profit from the crime. All quantities, including the punishment, are measured in monetary terms. The cost of crime to society is analysed by defining the disutility the crime causes to society. The disutility is calculated from the costs of crime prevention, crime damages and punishment execution, and the probability of getting caught. If the goal is to minimize the profits of crime, the punishments for bankruptcy crimes are not optimal. If debtors decided whether or not to commit the crime solely on the basis of economic considerations, the crime rate would be many times higher than the current rate. The prospective offender relies heavily on non-economic aspects in their decision. Most probably, social pressure and a personal commitment to obey the law are major factors in the prospective criminal's decision-making. The function developed by Becker for measuring the cost to society was not useful for assessing the optimality of a punishment. The function's premise that the costs to society correlate with the offender's costs from the punishment proves unrealistic when bankruptcy crimes are examined. However, it was observed that the majority of the cost of crime to society is caused by crime damages. This finding supports preventive criminal policy.
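A minimal numerical sketch of the Becker-style calculus summarised above, with all money amounts invented: the offender's expected utility of the crime is the profit minus the expected (monetised) cost of being caught, and the punishment is optimal in the first sense when it drives that expected utility to zero or below.

# Becker-style expected utility of an offence, all quantities in money terms.
# Numbers are illustrative, not taken from the thesis.

def expected_crime_utility(gain, p_caught, punishment_cost):
    """Gain from the offence minus the expected cost of being caught."""
    return gain - p_caught * punishment_cost

def deterring_punishment(gain, p_caught):
    """Smallest monetised punishment that makes the expected utility non-positive."""
    return gain / p_caught

gain = 50_000.0        # profit from the bankruptcy offence
p_caught = 0.10        # probability of detection and conviction
sanction = 120_000.0   # monetised cost of the sanction to the offender

print(expected_crime_utility(gain, p_caught, sanction))   # 38,000 > 0: crime still 'pays'
print(deterring_punishment(gain, p_caught))               # 500,000 would be needed to deter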

Relevance:

80.00%

Publisher:

Abstract:

It is useful for postgraduate students (Master's and PhD) in Economics or Microeconomics courses that analyse decision problems under risk or uncertainty. The document begins by explaining Expected Utility Theory. It then studies risk aversion, the coefficients of absolute and relative risk aversion, the "more risk averse than" relation between economic agents, and wealth effects on decisions for some preference relations frequently used in economic analysis. Section 4 focuses on the comparison of risky alternatives in terms of return and risk, considering first- and second-order stochastic dominance and some later extensions of these order relations. The document concludes with twelve solved exercises in which the concepts and results presented in the preceding sections are applied to decision problems in various contexts.
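The stochastic dominance relations discussed in Section 4 have the usual definitions (our notation): for cumulative distribution functions F and G of the risky returns,

\[
F \succeq_{\mathrm{FSD}} G \;\iff\; F(x) \le G(x)\ \ \forall x
\;\iff\; \int u\,dF \ge \int u\,dG \ \text{ for every increasing } u,
\]
\[
F \succeq_{\mathrm{SSD}} G \;\iff\; \int_{-\infty}^{x}\!\big[G(t)-F(t)\big]\,dt \ge 0\ \ \forall x
\;\iff\; \int u\,dF \ge \int u\,dG \ \text{ for every increasing concave } u.
\]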

Relevance:

80.00%

Publisher:

Abstract:

This work presents the basic elements for the analysis of decision under uncertainty: Expected Utility Theory and its criticisms, and risk aversion and its measurement. The concepts of certainty equivalent, risk premium, absolute risk aversion and relative risk aversion, and the "more risk averse than" relation are discussed. The work is completed with several applications of decision making under uncertainty to different economic problems: investment in risky assets and portfolio selection, risk sharing, investment to reduce risk, insurance, taxes and income underreporting, deposit insurance, and the value of information.
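For reference, the measurement concepts listed here have standard definitions (notation ours): for a utility function u and risky wealth $\tilde w$,

\[
u(\mathrm{CE}) = \mathbb{E}[u(\tilde w)], \qquad
\pi = \mathbb{E}[\tilde w] - \mathrm{CE}, \qquad
A(w) = -\frac{u''(w)}{u'(w)}, \qquad
R(w) = w\,A(w),
\]

and agent 1 is "more risk averse than" agent 2 when $A_1(w) \ge A_2(w)$ for all w, equivalently when $u_1$ is an increasing concave transformation of $u_2$.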

Relevance:

80.00%

Publisher:

Abstract:

Throughout history, few phenomena have attracted as much interest from citizens, consumed as many State resources and contributed as much to delaying its development as the problem of corruption among public officials. In this thesis, I use the concept of improbity to define a particular type of phenomenon and to distinguish it from others generally covered by the concept of corruption. From there I seek to answer the following questions: which elements influence the public official's decision-making process when considering engaging in an act of improbity? Which factors are associated with the occurrence of improbity in Brazilian municipal public administration? From the perspective of new institutional economics, the first part of the thesis carries out a conceptual and methodological analysis of the phenomenon, consolidated in the analytical framework of improbities, arguing that: a) improbities correspond to a genus of opportunistic behaviour, within which a species called the corrupt transaction stands out; b) the public official's decision-making process, embedded in a context of bounded rationality, is guided equally by elements of cost-benefit analysis (maximisation of expected utility), by the dynamics of learning processes and by the individuals' own ethical-moral barrier. The remaining parts of the thesis present the results of a systematic empirical investigation based on information from a random sample of 960 Brazilian municipalities audited by the Controladoria-Geral da União. The analysis highlights the factors associated with the occurrence of improbities, both from the point of view of the traditional literature (modernisation, social capital and rent-seeking) and from the new analytical perspective proposed, based on governance mechanisms. Testing the traditional models of the literature shows: a) a negative association between the occurrence of improbities and the municipalities' indicators of institutional performance and socioeconomic development (consistent with the effects commonly attributed to improbities); b) a negative association between the occurrence of improbities and the municipalities' indicators of modernisation and social capital (consistent with the causes commonly attributed to improbities); c) no association between the occurrence of improbities in Brazilian municipalities and indicators of incentives for rent-seeking behaviour (contrary to the classical proposition that the larger the State, the greater the occurrence of improbities because of the incentives arising from its monopoly). Based on these results, I incorporate neo-institutionalist assumptions into the analysis of improbities, interpreting them as a consequence of inadequate contract governance structures. Thus, beyond the impact of the institutional arrangement, which encompasses internal and external parliamentary, administrative and judicial control of the federal public resources transferred to the municipalities, I present evidence that the observed variation in the count of improbities across Brazilian municipalities is directly related to the quality of their governance mechanisms. Among these, those of a democratic nature stand out: social control mechanisms (municipal public-policy councils); mechanisms promoting transparency (the quality of e-government); and accountability mechanisms (political-electoral competition).
According to the analytical framework of improbities, the existence and operation of these mechanisms raise the transaction costs for the public official who, even having overcome the limitations of the ethical-moral and learning barriers, still contemplates engaging in this genus of opportunistic behaviour.

Relevance:

80.00%

Publisher:

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms for solving these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
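As a concrete illustration of two of the building blocks mentioned above, the sketch below shows Pareto dominance between utility vectors (maximisation) and a naive ϵ-covering filter; it is not the thesis's implementation, which additionally exploits imprecise trade-offs via the linear-programming, distance-to-cone and matrix-multiplication tests.

from typing import List, Tuple

Vec = Tuple[float, ...]

def dominates(u: Vec, v: Vec) -> bool:
    """u Pareto-dominates v: at least as good everywhere, strictly better somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(vectors: List[Vec]) -> List[Vec]:
    return [u for u in vectors if not any(dominates(v, u) for v in vectors)]

def eps_cover(front: List[Vec], eps: float) -> List[Vec]:
    """Keep a subset C such that every front point is (1+eps)-covered by some c in C
    (assumes strictly positive utilities)."""
    cover: List[Vec] = []
    for u in sorted(front, reverse=True):
        if not any(all(c_i * (1 + eps) >= u_i for c_i, u_i in zip(c, u)) for c in cover):
            cover.append(u)
    return cover

pts = [(1.0, 9.0), (2.0, 8.5), (2.1, 8.4), (5.0, 5.0), (9.0, 1.0), (4.0, 4.0)]
front = pareto_front(pts)
print(front)                  # (4.0, 4.0) is dominated by (5.0, 5.0) and drops out
print(eps_cover(front, 0.1))  # keeps (9,1), (5,5) and (2.1, 8.4); the rest are covered within 10%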

Relevance:

80.00%

Publisher:

Abstract:

Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the more risky option is selected. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting under what circumstances a patient may select experimental or conventional therapies or select no treatment.
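The canonical parametric form behind this account is Tversky and Kahneman's value function (the article itself may use a more general specification); outcomes x are evaluated relative to the reference point r:

\[
v(x) =
\begin{cases}
(x - r)^{\alpha} & x \ge r,\\
-\lambda\,(r - x)^{\beta} & x < r,
\end{cases}
\qquad 0 < \alpha, \beta \le 1,\ \ \lambda > 1,
\]

so that v is concave over gains and convex over losses, producing risk aversion above the reference point and risk seeking below it; shifting r (for example, a slowly adjusting reference point after a new diagnosis) can therefore change which treatment option maximizes the prospect's value.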

Relevance:

80.00%

Publisher:

Abstract:

Partially ordered preferences generally lead to choices that do not abide by standard expected utility guidelines; often such preferences are revealed by imprecision in probability values. We investigate five criteria for strategy selection in decision trees with imprecision in probabilities: “extensive” Γ-maximin and Γ-maximax, interval dominance, maximality, and E-admissibility. We present algorithms that generate strategies for all these criteria; our main contribution is an algorithm for E-admissibility that runs over admissible strategies rather than over sets of probability distributions.
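A toy illustration of the five criteria on a finite set of acts and a credal set given by two probability vectors (all values invented; the paper's algorithms work on decision trees in extensive form, which this sketch does not attempt).

acts = {            # utility of each act in states s1, s2
    "a": (10.0, 0.0),
    "b": (4.0, 4.0),
    "c": (0.0, 9.0),
}
credal = [(0.3, 0.7), (0.6, 0.4)]   # extreme points of the probability set

def eu(act, p):
    return sum(pi * ui for pi, ui in zip(p, acts[act]))

lower = {a: min(eu(a, p) for p in credal) for a in acts}   # lower expectations
upper = {a: max(eu(a, p) for p in credal) for a in acts}   # upper expectations

gamma_maximin = max(lower, key=lower.get)
gamma_maximax = max(upper, key=upper.get)

# Interval dominance: a is ruled out if some b has lower[b] > upper[a]
interval_undominated = [a for a in acts
                        if not any(lower[b] > upper[a] for b in acts if b != a)]

# Maximality: a is ruled out if some b is strictly better under every p in the credal set
maximal = [a for a in acts
           if not any(all(eu(b, p) > eu(a, p) for p in credal) for b in acts if b != a)]

# E-admissibility: a is kept if it maximises expected utility under some p
# (checked here only at the listed vectors; over the full convex hull an LP would be needed)
e_admissible = [a for a in acts
                if any(eu(a, p) >= max(eu(b, p) for b in acts) for p in credal)]

# Here Gamma-maximin picks 'b', which is not E-admissible
print(gamma_maximin, gamma_maximax, interval_undominated, maximal, e_admissible)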

Relevance:

80.00%

Publisher:

Abstract:

This paper examines the role of higher-order moments in portfolio choice within an expected-utility framework. We consider two-, three-, four- and five-parameter density functions for portfolio returns and derive exact conditions under which investors would all be optimally plungers rather than diversifiers. Through comparative statics we show the importance of higher-order risk preference properties, such as riskiness, prudence and temperance, in determining plunging behaviour. Empirical estimates for the S&P500 provide evidence for the optimality of diversification.
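In expected-utility terms, the higher-order preference properties invoked here are the standard sign conditions on successive derivatives of the utility function u (general definitions, not specific to this paper):

\[
u'' < 0 \ \text{(risk aversion)}, \qquad
u''' > 0 \ \text{(prudence, with } P(w) = -\tfrac{u'''(w)}{u''(w)}\text{)}, \qquad
u'''' < 0 \ \text{(temperance)},
\]

and these signs govern how skewness, kurtosis and higher moments of the return distribution enter the Taylor expansion of expected utility that determines plunging versus diversifying.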