Abstract:
Reliable estimates for the maximum available uplift resistance from the backfill soil are essential to prevent upheaval buckling of buried pipelines. The current design code DNV RP F110 does not offer guidance on how to predict the uplift resistance when the cover:pipe diameter (H/D) ratio is less than 2. Hence the current industry practice is to discount the shear contribution to uplift resistance for design scenarios with H/D ratios less than 1. The necessity of this extra conservatism is assessed through a series of full-scale and centrifuge tests, 21 in total, at the Schofield Centre, University of Cambridge. Backfill types include saturated loose sand, saturated dense sand and dry gravel. The data revealed that the Vertical Slip Surface Model remains applicable for design scenarios in loose sand, dense sand and gravel with H/D ratios less than 1, and that there is no evidence that the contribution from shear should be ignored at these low H/D ratios. For uplift events in gravel, the shear component seems reliable if the cover is more than 1-2 times the average particle size (D50), and more research is currently being carried out to verify this conclusion. Strain analysis from the Particle Image Velocimetry (PIV) technique shows that the Vertical Slip Surface Model is a good representation of the true uplift deformation mechanism in loose sand at H/D ratios between 0.5 and 3.5. At very low H/D ratios (H/D < 0.5), the deformation mechanism is more wedge-like, but the increased contribution from soil weight is likely to be compensated by the reduced shear contribution. Hence the design equation based on the Vertical Slip Surface Model still produces good estimates of the maximum available uplift resistance. The evolution of the shear strain field from PIV analysis provides useful insight into how uplift resistance is mobilized as the uplift event progresses. Copyright 2010, Offshore Technology Conference.
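For context, a minimal sketch of the kind of design calculation the Vertical Slip Surface Model implies, assuming the commonly cited form (weight of the soil block above the pipe plus shear on two vertical planes) and neglecting the pipe-shoulder correction; the exact equation used in the tests is not given in this abstract:

```python
# Illustrative sketch of the Vertical Slip Surface Model for peak uplift
# resistance (per unit length of pipe). The exact form is an assumption
# based on the commonly cited formulation, not taken from this abstract.
import math

def uplift_resistance(gamma_eff, H, D, K, phi_deg):
    """Peak uplift resistance per unit pipe length [kN/m].

    gamma_eff : effective (buoyant) unit weight of backfill [kN/m^3]
    H         : cover depth from soil surface to pipe crown [m]
    D         : pipe diameter [m]
    K         : lateral earth pressure coefficient [-]
    phi_deg   : friction angle of backfill [degrees]
    """
    phi = math.radians(phi_deg)
    weight = gamma_eff * H * D                     # soil block above the pipe
    shear = gamma_eff * H**2 * K * math.tan(phi)   # shear on two vertical planes
    return weight + shear

# Example: loose sand at H/D = 0.5, the low-cover range the tests address.
print(uplift_resistance(gamma_eff=8.5, H=0.25, D=0.5, K=0.5, phi_deg=30))
```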
Abstract:
To be published in: Revista Internacional de Sociología (2011), Special Issue on Experimental and Behavioral Economics.
Abstract:
The purpose of this article is to characterize dynamic optimal harvesting trajectories that maximize discounted utility assuming an age-structured population model, in the same line as Tahvonen (2009). The main novelty of our study is that it uses the standard stochastic cohort framework applied in Virtual Population Analysis for fish stock assessment as the age-structured population model. This allows us to compare optimal harvesting in a discounted economic context with the standard reference points used by fisheries agencies for long-term management plans (e.g. Fmsy). Our main findings are the following. First, the optimal steady state is characterized, and sufficient conditions that guarantee its existence and uniqueness for the general case of n cohorts are given. It is also proved that the optimal steady state coincides with the traditional target Fmsy when the utility function to be maximized is the yield and the discount rate is zero. Second, an algorithm is developed that easily drives the resource to the optimal steady state. Third, the algorithm is applied to the Northern Stock of hake. Results show that management plans based exclusively on traditional reference targets such as Fmsy may drive the fishery's economic results far from the optimum.
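To make the Fmsy benchmark concrete, here is a toy steady-state yield calculation for a deterministic cohort model, with Fmsy located by grid search; all parameter values and the flat selectivity are hypothetical, and the paper's stochastic VPA framework is considerably richer than this sketch:

```python
# Toy equilibrium yield for an age-structured (cohort) stock under constant
# fishing mortality F, used to locate Fmsy. Parameters are hypothetical.
import numpy as np

M = 0.2                                              # natural mortality, assumed
weights = np.array([0.1, 0.5, 1.2, 2.0, 2.8, 3.4])   # weight-at-age [kg], assumed
R0 = 1000.0                                          # constant recruitment

def steady_state_yield(F):
    """Annual equilibrium yield under constant fishing mortality F."""
    Z = M + F
    ages = np.arange(len(weights))
    N = R0 * np.exp(-Z * ages)              # survivors entering each age class
    catch = N * (F / Z) * (1.0 - np.exp(-Z))  # Baranov catch equation per cohort
    return float(np.sum(catch * weights))

F_grid = np.linspace(0.01, 1.5, 300)
F_msy = F_grid[np.argmax([steady_state_yield(F) for F in F_grid])]
print(f"Fmsy ~ {F_msy:.2f}")  # with zero discounting and yield as the utility,
                              # the optimal steady state coincides with Fmsy
```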
Abstract:
Addresses the role of the Congresso Nacional in the democratization of Brazilian foreign policy. Describes the legal and political instruments available to the Legislative Branch for the examination of international treaties signed by the Executive Branch. Discusses the possibility of introducing amendments to the text of a treaty. Analyzes legislative proposals that seek to modify the formal power of the Legislature in the formulation of foreign policy.
Abstract:
The paper makes two major contributions to the theory of repeated games. First, we build a supergame oligopoly model where firms compete in supply functions, and we show how collusion sustainability is affected by the presence of a convex cost function, the slope of market demand, and the number of rivals. Then, we compare the results with those of the traditional Cournot reversion under the same structural characteristics. We find that, depending on the number of firms and the slope of the linear demand, collusion can be easier to sustain under supply function competition than under Cournot competition. The conclusions of the models are simulated with data from the Spanish wholesale electricity market to predict lower bounds on the discount factors.
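As a concrete point of comparison, a sketch of the textbook Cournot-reversion benchmark mentioned above: the critical discount factor sustaining grim-trigger collusion under linear demand and constant marginal cost. The paper's supply-function model is not reproduced here; this is only the standard comparison case.

```python
# Critical discount factor for grim-trigger collusion with Cournot reversion,
# linear demand P = a - b*Q, constant marginal cost c, and n firms.
def critical_delta(a, b, c, n):
    m = a - c
    pi_collusive = m**2 / (4 * b * n)        # equal share of monopoly profit
    pi_cournot   = m**2 / (b * (n + 1)**2)   # Nash punishment payoff
    # one-shot best response when the other n-1 firms keep collusive output:
    pi_deviate   = m**2 * (n + 1)**2 / (16 * b * n**2)
    # collusion is sustainable iff delta >= (pi_D - pi_C) / (pi_D - pi_N):
    return (pi_deviate - pi_collusive) / (pi_deviate - pi_cournot)

for n in (2, 3, 5, 10):                      # delta* rises with n (e.g. 9/17 at n=2)
    print(n, round(critical_delta(a=100, b=1, c=20, n=n), 3))
```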
Abstract:
I consider cooperation situations where players have network relations. Networks evolve according to a stationary transition probability matrix, and at each moment in time players receive payoffs from a stationary allocation rule. Players discount the future by a common factor. The pair formed by an allocation rule and a transition probability matrix is called expected fair if, for every link in the network, both participants gain the same from it in discounted, expected, marginal terms; it is called a pairwise network formation procedure if the probability that a link is created (or eliminated) is positive whenever the discounted, expected gains to its two participants are positive. The main result is the existence, for a small enough discount factor, of an expected fair and pairwise network formation procedure where the allocation rule is component balanced, meaning that it distributes the total value of any maximal connected subnetwork among its participants. This existence result holds for all discount factors when the pairwise network formation procedure is restricted. Finally, I provide a comparison with previous models of farsighted network formation.
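A minimal sketch of the computation underlying these definitions: with a stationary transition matrix P over network states and a stationary allocation a for player i, the discounted, expected payoff stream solves V = a + δPV, i.e. V = (I − δP)⁻¹a. The three-state chain and payoffs below are hypothetical.

```python
# Discounted, expected payoffs of a player over Markov network dynamics.
import numpy as np

delta = 0.9
P = np.array([[0.6, 0.3, 0.1],     # transition probabilities between networks
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
a_i = np.array([1.0, 2.0, 0.5])    # player i's allocation in each network state

# V = sum_t delta^t P^t a  solves the linear system  (I - delta P) V = a:
V_i = np.linalg.solve(np.eye(3) - delta * P, a_i)
print(V_i)  # expected discounted payoff of player i from each starting network
```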
Abstract:
We consider cooperation situations where players have network relations. Networks evolve according to a stationary transition probability matrix and at each moment in time players receive payoffs from a stationary allocation rule. Players discount the future by a common factor. The pair formed by an allocation rule and a transition probability matrix is called a forward-looking network formation scheme if, first, the probability that a link is created is positive if the discounted, expected gains to its two participants are positive, and if, second, the probability that a link is eliminated is positive if the discounted, expected gains to at least one of its two participants are positive. The main result is the existence, for all discount factors and all value functions, of a forward-looking network formation scheme. Furthermore, we can always find a forward-looking network formation scheme such that (i) the allocation rule is component balanced and (ii) the transition probabilities increase in the difference in payoffs for the corresponding players responsible for the transition. We use this dynamic solution concept to explore the tension between efficiency and stability.
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way so that the eigenvector centrality emerges as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
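A small numerical sketch of this convergence, assuming the Bonacich form b(β) = (I − βA)⁻¹A·1 on a hypothetical five-node network; how β maps to the discount factor and matching technology is specific to the model and not reproduced here.

```python
# Bonacich centrality approaches eigenvector centrality as beta -> 1/lambda_max.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

lam_max = np.max(np.linalg.eigvalsh(A))
eigvec = np.abs(np.linalg.eigh(A)[1][:, -1])   # principal eigenvector of A
eigvec = eigvec / eigvec.sum()

for beta in (0.1, 0.25, 0.99 / lam_max):
    b = np.linalg.solve(np.eye(5) - beta * A, A @ np.ones(5))
    print(np.round(b / b.sum(), 3))            # drifts toward the eigenvector:
print(np.round(eigvec, 3))
```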
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions that guarantee the existence and uniqueness of equilibria. Given this result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
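As an illustration of the basic forces at play, a toy two-link version of price competition under congestion: users split across parallel links so that generalized costs (latency plus price) equalize, and each firm best-responds in prices. All functional forms and numbers are hypothetical; the chapter's Markovian traffic equilibrium handles general networks and latencies.

```python
# Two parallel links serve unit demand; Wardrop split plus price best responses.
import numpy as np

a = np.array([1.0, 2.0])   # free-flow latencies
b = np.array([2.0, 1.0])   # congestion slopes: latency_i = a_i + b_i * flow_i

def flow_on_link1(p1, p2):
    """Interior Wardrop condition: a1 + b1*x + p1 = a2 + b2*(1-x) + p2."""
    x = (a[1] + b[1] + p2 - a[0] - p1) / (b[0] + b[1])
    return float(np.clip(x, 0.0, 1.0))

grid = np.linspace(0.0, 5.0, 501)
p = np.array([1.0, 1.0])
for _ in range(100):                       # best-response iteration in prices
    p[0] = grid[np.argmax([q * flow_on_link1(q, p[1]) for q in grid])]
    p[1] = grid[np.argmax([q * (1 - flow_on_link1(p[0], q)) for q in grid])]
print(p, flow_on_link1(p[0], p[1]))        # equilibrium prices and flow split
```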
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
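For intuition, a stripped-down version of such a sequential design loop, using the Information Gain criterion (one of the baselines BROAD is compared against) rather than EC2 itself; the hypothesis space and tests below are hypothetical binary-choice probability tables, with noise implicit in those probabilities.

```python
# Greedy sequential experimental design by expected entropy reduction.
import numpy as np

rng = np.random.default_rng(0)
n_tests, n_hyp = 30, 4
lik = rng.uniform(0.05, 0.95, size=(n_hyp, n_tests))  # P(choose A | h, test)
posterior = np.full(n_hyp, 1.0 / n_hyp)
true_h, asked = 2, []

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for _ in range(8):
    gains = []
    for t in range(n_tests):
        pA = np.sum(posterior * lik[:, t])            # predictive prob of "A"
        postA = posterior * lik[:, t] / pA
        postB = posterior * (1 - lik[:, t]) / (1 - pA)
        gains.append(entropy(posterior)
                     - pA * entropy(postA) - (1 - pA) * entropy(postB))
    t_star = int(np.argmax(gains))                    # most informative test
    asked.append(t_star)
    choice_A = rng.random() < lik[true_h, t_star]     # simulate subject response
    update = lik[:, t_star] if choice_A else 1 - lik[:, t_star]
    posterior = posterior * update / np.sum(posterior * update)

print(asked, np.round(posterior, 3))  # posterior concentrates on true_h
```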
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out, since it is infeasible in practice and since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models of quasi-hyperbolic (α, β) discounting and fixed-cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
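For reference, the discount functions being compared, in their standard forms; parameter values are hypothetical, and the thesis writes the quasi-hyperbolic model in (α, β) notation.

```python
# Standard discount functions compared side by side on one choice.
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.25):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):    # "present bias"
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta=0.5):
    return (1.0 + alpha * t) ** (-beta / alpha)

# Smaller-sooner vs larger-later: $50 now versus $80 six periods from now.
for D in (exponential, hyperbolic, quasi_hyperbolic, generalized_hyperbolic):
    print(f"{D.__name__:23s} now: {50 * D(0):6.2f}  later: {80 * D(6):6.2f}")
```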
In these models, the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
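A toy sketch of this prediction: a reference-dependent consumer with a prospect-theory value function inside a logit choice model. All parameters, including the reference-price adaptation, are hypothetical illustrations rather than the thesis's estimates.

```python
# Loss-averse utility in a logit discrete-choice model: the return to the
# regular price is coded as a loss, shifting demand to the substitute.
import math

def gain_loss(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a monetary gain/loss x."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def logit_share(u_item, u_substitute):
    eA, eB = math.exp(u_item), math.exp(u_substitute)
    return eA / (eA + eB)

base_utility, price_coef, substitute_u = 3.0, -0.4, 1.0

scenarios = [("regular price",                     10.0, 10.0),
             ("discounted",                         8.0, 10.0),
             ("back to regular, reference now 8",  10.0,  8.0)]
for label, price, ref in scenarios:
    u = base_utility + price_coef * price + 0.3 * gain_loss(ref - price)
    print(f"{label:34s} share = {logit_share(u, substitute_u):.3f}")
```

In this sketch, demand after the discount ends falls below its pre-discount level, so the close substitute gains excessively, matching the qualitative prediction described above.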
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?
In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
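A minimal simulation sketch of an attentional drift-diffusion-style model in this spirit: evidence for the attended option accumulates faster, and the unattended option's value is discounted by a factor θ. The exact model of Chapter 1 is not reproduced here, and all parameters are hypothetical.

```python
# Attention-weighted evidence accumulation to a decision barrier.
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(v_left, v_right, d=0.002, theta=0.3, sigma=0.02,
                   barrier=1.0, max_steps=20000):
    rdv, look_left = 0.0, rng.random() < 0.5
    for step in range(max_steps):
        if step % 400 == 0:                # alternate gaze every ~400 steps
            look_left = not look_left
        drift = (d * (v_left - theta * v_right) if look_left
                 else d * (theta * v_left - v_right))
        rdv += drift + rng.normal(0.0, sigma)
        if abs(rdv) >= barrier:            # barrier crossing = choice + RT
            return ("left" if rdv > 0 else "right"), step
    return "timeout", max_steps

choices = [simulate_trial(3.0, 2.0)[0] for _ in range(500)]
print("P(choose left) =", choices.count("left") / 500)  # > 0.5, gaze-modulated
```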
Abstract:
Online communities have become a very popular meeting place for consumers, allowing them to share information. This article presents a novel research technique, netnography, and applies it to determine the market positioning of food retailers. After collecting and analyzing 506 valid messages from the online community Ciao, it was possible to identify which attributes consumers associate with the six food retailers analyzed. Mercadona is associated with the quality of its private label and a limited variety of brands/products. The discount stores, Lidl and DIA, stand out for the room for improvement in store cleanliness and in product placement. The hypermarkets, Eroski, Alcampo and Carrefour, are noted for their variety of brands/products, and for being far from home. The most direct competitors of each company were also identified, with competition found among retail formats of the same type (intra-type). The use of netnography, a relatively recent technique, is the main original contribution of this work. Moreover, the conclusions obtained, which are consistent with previous studies, show that netnography can be a source of information for determining a company's commercial image and market positioning.