292 results for Discount.
Abstract:
The dissertation consists of three essays on the misplanning of wealth and health accumulation. Conventional economics assumes that individuals' intertemporal preferences are exponential (exponential preferences, EP). Recent findings in behavioural economics have shown that people actually discount the near future relatively more heavily than the distant future. This implies hyperbolic intertemporal preferences (HP). Essays I and II concentrate especially on the effects of the delayed completion of tasks, a feature of behaviour that HP enables. Essay III uses current Finnish data to analyse the evolution of quality-adjusted life years (QALYs) and inconsistencies in measuring them. Essay I studies the effects of the existence of a lucrative retirement savings program (SP) on the retirement savings of different individual types with HP. If the individual does not know that he will also have HP in the future, i.e. he is naïve, then under certain conditions he delays enrolment in SP until he abandons it altogether. A striking finding is that the naïve individual then retires poorer in the presence of SP than in its absence. Under the same conditions, the individual who knows that he will also have HP in the future, i.e. the sophisticated individual, gains from the existence of SP and retires with greater retirement savings in its presence than in its absence. Finally, the capability to learn from past behaviour and about one's intertemporal preferences improves the possibility of gaining from the existence of SP, but adequate time to learn must then be guaranteed. Essay II studies delayed doctor's visits, their effects on the costs of a public health care system, and the government's attempts to control patient behaviour and fund the system. The controlling devices are a consultation fee and a deductible on it. The deductible applies only to a patient whose diagnosis reveals a disease that would not be cured without the doctor's visit. The naïve patients delay their visits the longest, while EP patients are the quickest to visit. To control the naïve patients, the government should implement a low fee and a high deductible, while for the sophisticated patients the opposite holds. Finally, if all the types exist in an economy, then the incorrect conventional assumption that all individuals have EP leads to a worse situation and requires higher tax rates than the incorrect but unconventional assumption that only naïve individuals exist. Essay III studies the development of QALYs in Finland in 1995/96-2004. The essay concentrates on developing a consistent measure, i.e. one independent of discounting, for the age- and gender-specific QALY changes and their incidence. For the given time interval, the relative change out of an attainable change appears to be almost insensitive to discounting and reveals that the greatest gains accrue to the older age groups.
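The contrast between EP and HP can be written compactly. A standard quasi-hyperbolic sketch of the two discount functions (a common parametrisation, not necessarily the essays' exact specification) is:

    D_{EP}(t) = \delta^{t}, \qquad 0 < \delta < 1,

    D_{HP}(t) = \begin{cases} 1, & t = 0, \\ \beta\,\delta^{t}, & t \geq 1, \end{cases} \qquad 0 < \beta < 1.

With \beta < 1 the individual overweights the present, so a plan made today to act tomorrow is re-evaluated once tomorrow becomes the present; this time inconsistency is what enables the delayed completion of tasks studied in Essays I and II.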
Abstract:
Submergence of land is a major impact of large hydropower projects. Such projects are often also dogged by siltation, delays in construction and heavy debt burdens, factors that are not considered in the project planning exercise. A simple constrained optimization model for the benefit-cost analysis of large hydropower projects that considers these features is proposed. The model is then applied to two sites in India. Using the potential productivity of an energy plantation on the submergible land is suggested as a reasonable approach to estimating the opportunity cost of submergence. Optimum project dimensions are calculated for various scenarios. Results indicate that the inclusion of submergence cost may lead to a substantial reduction in net present value and hence in project viability. Parameters such as project lifespan, construction time, discount rate and external debt burden are also of significance. The designs proposed by the planners are found to be uneconomic, while even the optimal design may not be viable for more typical scenarios. The concept of energy opportunity cost is useful for preliminary screening; some projects may require more detailed calculations. The optimization approach helps identify significant trade-offs between energy generation and land availability.
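In outline, the criterion being optimized is a net-present-value objective net of an opportunity cost of submerged land; a schematic form (not the paper's exact model) is

    \mathrm{NPV} = \sum_{t=1}^{T} \frac{B_t - C_t}{(1+r)^{t}} - V_{\mathrm{sub}},

where B_t and C_t are benefits and costs in year t over the project lifespan T, r is the discount rate, and V_{sub} is the discounted value of the energy-plantation output foregone on the submergible land. Longer construction times delay the B_t stream while debt service inflates C_t, which is why these parameters matter for viability.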
Abstract:
Auction-based mechanisms have become popular in industrial procurement settings. These mechanisms minimize the cost of procurement and at the same time achieve desirable properties such as truthful bidding by the suppliers. In this paper, we investigate the design of truthful procurement auctions taking into account an additional important issue, namely carbon emissions. In particular, we focus on the following procurement problem: a buyer wishes to source multiple units of a homogeneous item from several competing suppliers who offer volume discount bids and who also provide emission curves that specify the cost of emissions as a function of volume of supply. We assume that emission curves are reported truthfully, since that information is easily verifiable through standard sources. First, we formulate the volume discount procurement auction problem with emission constraints under the assumption that the suppliers are honest (that is, they report production costs truthfully). Next, we describe a mechanism design formulation for green procurement with strategic suppliers. Our numerical experimentation shows that emission constraints can significantly alter sourcing decisions and affect procurement costs dramatically. To the best of our knowledge, this is the first effort to explicitly take carbon emissions into account in planning procurement auctions.
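A schematic statement of the honest-supplier version (notation illustrative, not the paper's exact formulation): with n suppliers, total demand Q, volume discount cost schedules C_i(.), reported emission curves E_i(.) and an emission budget \bar{E}, the buyer solves

    \min_{x \ge 0} \; \sum_{i=1}^{n} C_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = Q, \qquad \sum_{i=1}^{n} E_i(x_i) \le \bar{E}.

Tightening \bar{E} shifts volume away from cheap but emission-intensive suppliers, which is the channel through which emission constraints alter sourcing decisions and procurement costs.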
Abstract:
The migration of a metal atom in a metal olefin complex from one pi face of the olefin to the opposite pi face has been rarely documented. Gladysz and co-workers showed that such a movement is indeed possible in monosubstituted chiral Re olefin complexes, resulting in diastereomerization. Interestingly, this isomerization occurred without dissociation, and on the basis of kinetic isotope effects, the involvement of a trans C-H bond was indicated. Either oxidative addition or an agostic interaction of the vinylic C-H(D) bond with the metal could account for the experimentally observed kinetic isotope effect. In this study we compute the free energy of activation for the migration of Re from one enantioface of the olefin to the other through various pathways. On the basis of DFT calculations at the B3LYP level we show that a trans (C-H)···Re interaction and trans C-H oxidative addition provide a nondissociative path for the diastereomerization. The trans (C-H)···Re interaction path is computed to be more favorable by 2.3 kcal mol⁻¹ than the oxidative addition path. While direct experimental evidence was not able to discount the migration of the metal through the formation of an η²-arene complex (conducted tour mechanism), computational results at the B3LYP level show that it is energetically more expensive. Surprisingly, a similar analysis carried out at the M06 level computes a lower energy path for the conducted tour mechanism and is not consistent with the experimental isotope effects observed. Metal-(C-H) interactions and oxidative additions of the metal into C-H bonds are closely separated in energy and might contribute to unusual fluxional processes such as this diastereomerization.
Abstract:
Reliable estimates for the maximum available uplift resistance from the backfill soil are essential to prevent upheaval buckling of buried pipelines. The current design code DNV RP F110 does not offer guidance on how to predict the uplift resistance when the cover:pipe diameter (H/D) ratio is less than 2. Hence the current industry practice is to discount the shear contribution to uplift resistance for design scenarios with H/D ratios less than 1. The necessity of this extra conservatism is assessed through a series of full-scale and centrifuge tests, 21 in total, at the Schofield Centre, University of Cambridge. Backfill types include saturated loose sand, saturated dense sand and dry gravel. Data revealed that the Vertical Slip Surface Model remains applicable for design scenarios in loose sand, dense sand and gravel with H/D ratios less than 1, and that there is no evidence that the contribution from shear should be ignored at these low H/D ratios. For uplift events in gravel, the shear component seems reliable if the cover is more than 1-2 times the average particle size (D50), and more research effort is currently being carried out to verify this conclusion. Strain analysis from the Particle Image Velocimetry (PIV) technique proves that the Vertical Slip Surface Model is a good representation of the true uplift deformation mechanism in loose sand at H/D ratios between 0.5 and 3.5. At very low H/D ratios (H/D < 0.5), the deformation mechanism is more wedge-like, but the increased contribution from soil weight is likely to be compensated by the reduced shear contributions. Hence the design equation based on the Vertical Slip Surface Model still produces good estimates for the maximum available uplift resistance. The evolution of the shear strain field from PIV analysis provides useful insight into how uplift resistance is mobilized as the uplift event progresses. Copyright 2010, Offshore Technology Conference.
Abstract:
To be published in: Revista Internacional de Sociología (2011), Special Issue on Experimental and Behavioral Economics.
Abstract:
The purpose of this article is to characterize dynamic optimal harvesting trajectories that maximize discounted utility assuming an age-structured population model, in the same line as Tahvonen (2009). The main novelty of our study is that it uses, as the age-structured population model, the standard stochastic cohort framework applied in Virtual Population Analysis for fish stock assessment. This allows us to compare optimal harvesting in a discounted economic context with the standard reference points used by fisheries agencies for long-term management plans (e.g. Fmsy). Our main findings are the following. First, the optimal steady state is characterized, and sufficient conditions that guarantee its existence and uniqueness for the general case of n cohorts are shown. It is also proved that the optimal steady state coincides with the traditional target Fmsy when the utility function to be maximized is the yield and the discount rate is zero. Second, an algorithm to calculate the optimal path that easily drives the resource to the steady state is developed. And third, the algorithm is applied to the Northern Stock of hake. Results show that management plans based exclusively on traditional reference targets such as Fmsy may drive the fishery's economic results far from the optimum.
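A schematic statement of the planner's problem, using the standard Baranov catch equation (the paper's stochastic cohort framework and notation may differ): choose fishing mortalities F_t to solve

    \max_{\{F_t\}} \; \sum_{t=0}^{\infty} \delta^{t}\, U(Y_t)
    \quad \text{s.t.} \quad N_{a+1,t+1} = N_{a,t}\, e^{-(M_a + s_a F_t)},
    \qquad Y_t = \sum_{a} w_a \, \frac{s_a F_t}{s_a F_t + M_a}\left(1 - e^{-(s_a F_t + M_a)}\right) N_{a,t},

where N_{a,t} is the abundance of age-a fish, M_a natural mortality, s_a selectivity at age, w_a weight at age and \delta the discount factor. With U(Y_t) = Y_t and \delta = 1 (a zero discount rate), the optimal steady state reduces to the Fmsy target, which is the coincidence result stated above.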
Abstract:
The paper makes two major contributions to the theory of repeated games. First, we build a supergame oligopoly model where firms compete in supply functions, and we show how collusion sustainability is affected by the presence of a convex cost function, the slope of market demand, and the number of rivals. Then, we compare the results with those of the traditional Cournot reversion under the same structural characteristics. We find that, depending on the number of firms and the slope of the linear demand, collusion sustainability is easier under supply function competition than under Cournot competition. The conclusions of the models are simulated with data from the Spanish wholesale electricity market to predict lower bounds on the discount factors.
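The textbook benchmark behind such comparisons is the grim-trigger sustainability condition: with per-period collusive profit \pi^{C}, one-shot deviation profit \pi^{D} and punishment-phase profit \pi^{P}, collusion is sustainable whenever the common discount factor satisfies

    \delta \;\ge\; \delta^{*} \;=\; \frac{\pi^{D} - \pi^{C}}{\pi^{D} - \pi^{P}}.

Moving from Cournot to supply function competition changes both \pi^{D} and \pi^{P}, and hence the critical threshold \delta^{*}; the lower bounds on discount factors estimated from the Spanish wholesale electricity market data are bounds of this kind.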
Abstract:
I consider cooperation situations where players have network relations. Networks evolve according to a stationary transition probability matrix, and at each moment in time players receive payoffs from a stationary allocation rule. Players discount the future by a common factor. The pair formed by an allocation rule and a transition probability matrix is called expected fair if, for every link in the network, both participants gain, marginally and in discounted, expected terms, the same from it; and it is called a pairwise network formation procedure if the probability that a link is created (or eliminated) is positive whenever the discounted, expected gains to its two participants are also positive. The main result is the existence, for a small enough discount factor, of an expected fair and pairwise network formation procedure where the allocation rule is component balanced, meaning it distributes the total value of any maximal connected subnetwork among its participants. This existence result holds for all discount factors when the pairwise network formation procedure is restricted. Finally, I provide some comparison with previous models of farsighted network formation.
Abstract:
We consider cooperation situations where players have network relations. Networks evolve according to a stationary transition probability matrix and at each moment in time players receive payoffs from a stationary allocation rule. Players discount the future by a common factor. The pair formed by an allocation rule and a transition probability matrix is called a forward-looking network formation scheme if, first, the probability that a link is created is positive if the discounted, expected gains to its two participants are positive, and if, second, the probability that a link is eliminated is positive if the discounted, expected gains to at least one of its two participants are positive. The main result is the existence, for all discount factors and all value functions, of a forward-looking network formation scheme. Furthermore, we can always find a forward-looking network formation scheme such that (i) the allocation rule is component balanced and (ii) the transition probabilities increase in the difference in payoffs for the corresponding players responsible for the transition. We use this dynamic solution concept to explore the tension between efficiency and stability.
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way in order to see eigenvector centrality as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, and to pay less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process, using the eigenvector associated to the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
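The convergence claim can be illustrated numerically. The sketch below uses the closed form of Bonacich centrality on a toy network; the mapping from the discount factor and the matching technology to the decay parameter beta in the actual bargaining model is richer than this.

    import numpy as np

    def bonacich_centrality(A, beta):
        # Bonacich centrality c(beta) = (I - beta*A)^(-1) A 1,
        # well defined for beta < 1/lambda_max(A).
        n = A.shape[0]
        return np.linalg.solve(np.eye(n) - beta * A, A @ np.ones(n))

    # Toy network: four players on a line.
    A = np.array([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])

    eigvals, eigvecs = np.linalg.eigh(A)
    lam_max = eigvals[-1]
    v = np.abs(eigvecs[:, -1])          # eigenvector centrality
    v /= np.linalg.norm(v)

    # As beta approaches 1/lambda_max (the analogue of the discount
    # factor going to one), normalized Bonacich centrality lines up
    # with eigenvector centrality.
    for beta in (0.10, 0.40, 0.99 / lam_max):
        c = bonacich_centrality(A, beta)
        print(beta, np.round(c / np.linalg.norm(c), 4))
    print("eigenvector:", np.round(v, 4))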
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers' and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee the existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
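Schematically (notation illustrative, not the chapter's exact formulation): a user routing toward the destination along a path P faces total cost

    \sum_{e \in P} \left( \ell_e(f_e) + p_e \right),

where \ell_e is the latency of link e at flow f_e and p_e is the price set by the firm owning e, while each firm chooses p_e to maximize its profit p_e f_e given the flows induced in equilibrium. The Markovian traffic equilibrium pins down how the flows f_e respond to the price vector.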
Abstract:
4 p.
Abstract:
4 p.
Abstract:
29 p.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedup over other methods.
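A minimal, noiseless sketch of the edge-cutting objective follows, with hypothetical helpers prior, theory_of and outcome_of; BROAD's actual implementation handles noisy responses and uses lazy (accelerated) greedy evaluation.

    import itertools

    def ec2_score(test, hypotheses, prior, theory_of, outcome_of):
        # Expected prior-weight of edges cut by `test` (noiseless EC2).
        # Edges join hypotheses belonging to different theories; an edge
        # is cut as soon as either endpoint is refuted by an outcome.
        by_outcome = {}
        for h in hypotheses:
            by_outcome.setdefault(outcome_of(h, test), set()).add(h)
        score = 0.0
        for survivors in by_outcome.values():
            p_outcome = sum(prior[h] for h in survivors)
            cut = sum(prior[g] * prior[h]
                      for g, h in itertools.combinations(hypotheses, 2)
                      if theory_of[g] != theory_of[h]
                      and not (g in survivors and h in survivors))
            score += p_outcome * cut
        return score

    def next_test(tests, hypotheses, prior, theory_of, outcome_of):
        # Greedy adaptive rule: run the test expected to cut the most
        # edge weight; adaptive submodularity makes this near-optimal.
        return max(tests, key=lambda t: ec2_score(t, hypotheses, prior,
                                                  theory_of, outcome_of))

After observing the subject's response, the refuted hypotheses are removed, the priors are renormalized, and next_test is called again; testing stops once the surviving hypotheses all belong to one equivalence class (one theory).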
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
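For reference, the sketch below gives standard textbook forms of four of these discount families; the parameter names are illustrative (the thesis's (α, β) convention may differ) and the fixed cost model is not reproduced.

    import numpy as np

    def exponential(t, delta):
        return delta ** t

    def hyperbolic(t, k):
        return 1.0 / (1.0 + k * t)

    def quasi_hyperbolic(t, beta, delta):
        # "Present bias": full weight at t = 0, scaled down by beta afterwards.
        return np.where(t == 0, 1.0, beta * delta ** t)

    def generalized_hyperbolic(t, a, b):
        # Loewenstein-Prelec form; approaches the exponential as a -> 0
        # and reduces to the simple hyperbolic when a = b.
        return (1.0 + a * t) ** (-b / a)

    t = np.arange(12)
    print(np.round(quasi_hyperbolic(t, beta=0.7, delta=0.95), 3))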
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a distinctly different way from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
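A schematic piecewise-linear form of the reference-dependent price term used in such discrete choice models (illustrative, not the paper's estimated specification): with reference price r, posted price p and loss aversion \lambda > 1,

    v(p; r) = \begin{cases} \mu\,(r - p), & p \le r \quad \text{(a discount is a gain)}, \\ \lambda\,\mu\,(r - p), & p > r \quad \text{(paying above reference is a loss)}, \end{cases}

so the removal of a discount, which pushes p above the now-lower reference r, is felt \lambda times as strongly as the original gain, which is why demand shifts excessively toward close substitutes.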
In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.