Abstract:
There are many situations in which individuals have a choice of whether or not to observe eventual outcomes. In these instances, individuals often prefer to remain ignorant. These contexts are outside the scope of analysis of the standard von Neumann-Morgenstern (vNM) expected utility model, which does not distinguish between lotteries for which the agent sees the final outcome and those for which he does not. I develop a simple model that admits preferences for making an observation or for remaining in doubt. I then use this model to analyze the connection between preferences of this nature and risk attitude. This framework accommodates a wide array of behavioral patterns that violate the vNM model, and that may not seem related, prima facie. For instance, it admits self-handicapping, in which an agent chooses to impair his own performance. It also accommodates a status quo bias without having recourse to framing effects, or to an explicit definition of reference points. In a political economy context, voters have strict incentives to shield themselves from information. In settings with other-regarding preferences, this model predicts observed behavior that seems inconsistent with either altruism or self-interested behavior.
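For reference, the vNM benchmark that the abstract says cannot distinguish observed from unobserved lotteries values a lottery purely through its outcome distribution; a minimal statement of that point:

```latex
% vNM expected utility of a lottery p over outcomes x_1, ..., x_n:
U(p) = \sum_{i=1}^{n} p_i \, u(x_i)
```

Since $U(p)$ depends only on $(p_1,\dots,p_n)$, an observed and an unobserved version of the same lottery must receive the same value, so any strict preference for observing, or for remaining in doubt, lies outside this benchmark.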
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
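The abstract does not specify the learning algorithm; in this literature (following Marcet and Sargent), recursive least squares is the standard choice. A minimal sketch of one agent's belief revision, with the data-generating process and all names purely illustrative:

```python
import numpy as np

def rls_update(beta, R, x, y, gain):
    """One recursive least-squares step: revise the belief vector `beta`
    (perceived-law-of-motion coefficients) after observing regressors `x`
    and outcome `y`. `R` is the running second-moment matrix of the
    regressors and `gain` the learning gain (e.g. a decreasing 1/(t+1))."""
    R = R + gain * (np.outer(x, x) - R)                           # update moment matrix
    beta = beta + gain * np.linalg.solve(R, x) * (y - x @ beta)   # forecast-error correction
    return beta, R

# Example: an agent learning y_t = a + b*z_t from noisy data.
rng = np.random.default_rng(0)
beta, R = np.zeros(2), np.eye(2)
for t in range(1, 2001):
    z = rng.normal()
    x = np.array([1.0, z])
    y = 0.5 + 2.0 * z + 0.1 * rng.normal()    # true DGP, unknown to the agent
    beta, R = rls_update(beta, R, x, y, gain=1.0 / (t + 1))
print(beta)  # approaches (0.5, 2.0) under a decreasing gain
```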
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability, Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
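For reference, the linear-cost baseline being extended, classical Smith's rule, can be stated in a few lines; the job data here are hypothetical:

```python
def smith_order(jobs):
    """Classical Smith's rule (1956) for linear holding costs on one
    machine: serve job classes in nonincreasing order of the index
    c_i / E[p_i], i.e. holding-cost rate per unit of expected processing
    time. The paper's dynamic rule generalizes this index to
    (class, queue-length) pairs to handle convex holding costs."""
    return sorted(jobs, key=lambda j: j["cost_rate"] / j["mean_proc_time"],
                  reverse=True)

# Hypothetical job classes: holding-cost rate and expected processing time.
jobs = [
    {"name": "A", "cost_rate": 4.0, "mean_proc_time": 2.0},  # index 2.0
    {"name": "B", "cost_rate": 9.0, "mean_proc_time": 3.0},  # index 3.0
    {"name": "C", "cost_rate": 1.0, "mean_proc_time": 1.0},  # index 1.0
]
print([j["name"] for j in smith_order(jobs)])  # ['B', 'A', 'C']
```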
Abstract:
The aim of this paper is to examine the pros and cons of book and fair value accounting from the perspective of the theory of banking. We consider the implications of the two accounting methods in an overlapping generations environment. As observed by Allen and Gale (1997), in an overlapping generations model, banks have a role as intergenerational connectors, as they allow for intertemporal smoothing. Our main result is that when dividends depend on profits, book value ex ante dominates fair value, as it provides better intertemporal smoothing. This is in contrast with the standard view, which states that fair value yields a better allocation as it reflects the real opportunity cost of assets. Banking regulation plays an important role by providing the right incentives for banks to smooth intertemporal consumption, whereas market discipline improves intratemporal efficiency.
Abstract:
Many revenue management (RM) industries are characterized by (a) fixed capacities in the short term (e.g., hotel rooms, seats on an airline flight), (b) homogeneous products (e.g., two airline flights between the same cities at similar times), and (c) customer purchasing decisions largely influenced by price. Competition in these industries is also very high, even with just two or three direct competitors in a market. However, RM competition is not well understood, and practically all known implementations of RM software and most published models of RM do not explicitly model competition. For this reason, there has been considerable recent interest and research activity to understand RM competition. In this paper we study price competition for an oligopoly in a dynamic setting, where each of the sellers has a fixed number of units available for sale over a fixed number of periods. Demand is stochastic, and depending on how it evolves, sellers may change their prices at any time. This reflects the fact that firms constantly, and almost costlessly, change their prices (alternately, allocations at a price in quantity-based RM), reacting either to updates in their estimates of market demand, competitor prices, or inventory levels. We first prove existence of a unique subgame-perfect equilibrium for a duopoly. In equilibrium, in each state sellers engage in Bertrand competition, so that the seller with the lowest reservation value ends up selling a unit at a price that is equal to the equilibrium reservation value of the competitor. This structure hence extends the marginal-value concept of bid-price control, used in many RM implementations, to a competitive model. In addition, we show that the seller with the lowest capacity sells all its units first. Furthermore, we extend the results transparently to n firms and perform a number of numerical comparative statics exploiting the uniqueness of the subgame-perfect equilibrium.
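A stylized backward-induction sketch of the equilibrium structure described above; the simplifications (one buyer per period, a fixed buyer valuation `v`, no salvage value) are ours, not the paper's model. In each state the seller with the lower marginal (reservation) value sells, at the competitor's reservation value:

```python
import numpy as np

def duopoly_marginal_values(T, x1, x2, lam, v):
    """Stylized dynamic program for a pricing duopoly with fixed stocks
    (x1, x2) and T periods. One buyer arrives per period w.p. `lam`.
    Illustration only: the seller with the lower reservation value sells,
    at a price equal to the competitor's reservation value, capped at the
    buyer's valuation `v`. All parameters are hypothetical."""
    V1 = np.zeros((T + 1, x1 + 1, x2 + 1))    # V[t, i, j]: t periods left
    V2 = np.zeros((T + 1, x1 + 1, x2 + 1))
    for t in range(1, T + 1):
        for i in range(x1 + 1):
            for j in range(x2 + 1):
                c1, c2 = V1[t-1, i, j], V2[t-1, i, j]           # no-sale continuation
                d1 = c1 - V1[t-1, i-1, j] if i > 0 else np.inf  # reservation values
                d2 = c2 - V2[t-1, i, j-1] if j > 0 else np.inf
                V1[t, i, j], V2[t, i, j] = c1, c2
                if min(d1, d2) > v:           # no profitable sale in this state
                    continue
                if d1 <= d2:                  # seller 1 undercuts and sells
                    price = min(d2, v)
                    V1[t, i, j] = lam * (price + V1[t-1, i-1, j]) + (1 - lam) * c1
                    V2[t, i, j] = lam * V2[t-1, i-1, j] + (1 - lam) * c2
                else:                         # seller 2 undercuts and sells
                    price = min(d1, v)
                    V2[t, i, j] = lam * (price + V2[t-1, i, j-1]) + (1 - lam) * c2
                    V1[t, i, j] = lam * V1[t-1, i, j-1] + (1 - lam) * c1
    return V1, V2

V1, V2 = duopoly_marginal_values(T=20, x1=3, x2=5, lam=0.7, v=1.0)
print(V1[20, 3, 5], V2[20, 3, 5])   # equilibrium values in the initial state
```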
Abstract:
We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution to the single-server case: servers select preemptively customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c\mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c\mu$ rule for parallel scheduling.
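The no-feedback case reduces to the $c\mu$ rule, which is easy to check by simulation. The sketch below (single server and two classes for brevity; rates are hypothetical) exploits the memorylessness of exponential services to simulate the system as a continuous-time Markov chain:

```python
import numpy as np

def simulate_priority(lam, mu, c, priority, T=50_000, seed=1):
    """CTMC simulation of a two-class M/M/1 queue under a preemptive
    priority order. Exponential services are memoryless, so preemption
    needs no extra bookkeeping: the server always works on the
    highest-priority nonempty class. Returns the time-average holding
    cost sum_k c[k] * N_k."""
    rng = np.random.default_rng(seed)
    n, t, cost = [0, 0], 0.0, 0.0
    while t < T:
        served = next((k for k in priority if n[k] > 0), None)
        rates = [lam[0], lam[1], mu[served] if served is not None else 0.0]
        total = sum(rates)
        dt = rng.exponential(1.0 / total)
        cost += (c[0] * n[0] + c[1] * n[1]) * dt   # accumulate holding cost
        t += dt
        ev = rng.choice(3, p=[r / total for r in rates])
        if ev < 2:
            n[ev] += 1          # arrival to class ev
        else:
            n[served] -= 1      # service completion

    return cost / t

lam, mu, c = [0.3, 0.3], [1.0, 2.0], [3.0, 1.0]
cmu = sorted([0, 1], key=lambda k: c[k] * mu[k], reverse=True)  # indices 3.0, 2.0
print(simulate_priority(lam, mu, c, priority=cmu))        # c-mu priority order
print(simulate_priority(lam, mu, c, priority=cmu[::-1]))  # reversed order costs more
```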
Abstract:
The paper empirically contrasts the results of alternative methods for estimating the value and the depreciation of mineral resources. Historical data for Mexico and Venezuela, covering the 1920s through the 1980s, are used to contrast the results of several methods: the present value method, the net price method, the user cost method, and the imputed income method. The paper establishes that the net price and user cost methods are not competing methods as such, but alternative adjustments suited to different scenarios of closed and open economies. The results show that the biases of the methods, as commonly described in the theoretical literature, only hold under the most restrictive scenario of constant rents over time. It is argued that the difference between what is expected to happen and what actually did happen is for the most part due to a missing variable, namely technological change. This is an important caveat to the recommendations made based on these models.
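The abstract does not restate the estimators; the standard formulations of the two methods it treats as complementary (the user cost version in this literature being El Serafy's) are, as an illustration:

```latex
% Net price method: depreciation in year t is the unit rent
% (price minus marginal extraction cost) times the quantity extracted:
D_t^{NP} = (p_t - c_t)\, q_t

% User cost method (El Serafy): of total resource rent R_t, only the
% share X_t below counts as true income; the remainder R_t - X_t is the
% user cost (depreciation), with discount rate r and n remaining years
% of extraction:
\frac{X_t}{R_t} = 1 - \frac{1}{(1+r)^{\,n+1}}
```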
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions to stochastic scheduling problems, where multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's), taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite-difference-type approximation scheme is used on a coarse grid of low-discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
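A minimal sketch of the two ingredients named above, a coarse low-discrepancy grid plus regression at intermediate points. The Halton sequence and quadratic polynomial basis are our illustrative choices, not necessarily the paper's, and the grid values stand in for the output of the finite-difference step:

```python
import numpy as np

def halton(n, dims, primes=(2, 3, 5, 7, 11)):
    """First n points of a Halton low-discrepancy sequence in [0,1]^dims."""
    pts = np.zeros((n, dims))
    for d in range(dims):
        b = primes[d]
        for i in range(1, n + 1):
            f, x, k = 1.0, 0.0, i
            while k > 0:
                f /= b
                x += f * (k % b)
                k //= b
            pts[i - 1, d] = x
    return pts

def quad_features(X):
    """Design matrix with constant, linear, and quadratic terms."""
    cols = [np.ones(len(X))] + [X[:, d] for d in range(X.shape[1])] \
         + [X[:, d] * X[:, e] for d in range(X.shape[1]) for e in range(d, X.shape[1])]
    return np.column_stack(cols)

# Coarse grid of low-discrepancy points; pretend `v_grid` holds the value
# function computed there by the finite-difference scheme (toy stand-in here).
X = halton(200, dims=4)
v_grid = np.sum(X**2, axis=1)
beta, *_ = np.linalg.lstsq(quad_features(X), v_grid, rcond=None)

# Regression then supplies the value function at intermediate, off-grid points.
X_new = np.array([[0.3, 0.7, 0.1, 0.5]])
print(quad_features(X_new) @ beta)   # ~0.84 = 0.09 + 0.49 + 0.01 + 0.25
```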
Abstract:
This paper presents a method for the measurement of changes in health inequality and income-related health inequality over time in a population. For pure health inequality (as measured by the Gini coefficient) and income-related health inequality (as measured by the concentration index), we show how measures derived from longitudinal data can be related to the cross-section Gini and concentration indices that have typically been reported in the literature to date, along with measures of health mobility inspired by the literature on income mobility. We also show how these measures of mobility can be usefully decomposed into the contributions of different covariates. We apply these methods to investigate the degree of income-related mobility in the GHQ measure of psychological well-being in the first nine waves of the British Household Panel Survey (BHPS). This reveals that dynamics increase the absolute value of the concentration index of GHQ on income by 10%.
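Both indices named above have standard "convenient covariance" computations; a short sketch with made-up data:

```python
import numpy as np

def concentration_index(h, y):
    """Concentration index of health variable h with respect to income y,
    via the convenient-covariance formula C = 2*cov(h, r)/mean(h), where
    r is the fractional income rank. With y = h it reduces to the Gini
    coefficient of h. These are the standard cross-section formulas in
    this literature; the data below are made up."""
    order = np.argsort(y, kind="stable")
    r = np.empty(len(y))
    r[order] = (np.arange(len(y)) + 0.5) / len(y)   # fractional ranks
    h = np.asarray(h, dtype=float)
    return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

income = np.array([10, 20, 30, 40, 50], dtype=float)
health = np.array([60, 70, 75, 80, 90], dtype=float)
print(concentration_index(health, income))  # > 0: health concentrated among the rich
print(concentration_index(income, income))  # Gini coefficient of income (~0.267)
```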
Abstract:
This paper proposes an argument that explains incumbency advantage without resorting to the collective irresponsibility of legislatures. For that purpose, we exploit the informational value of incumbency: incumbency confers on voters information about governing politicians that is not available from challengers. Because there are many reasons for high reelection rates other than incumbency status, we propose a measure of incumbency advantage that improves on the use of pure reelection success. We also study the relationship between incumbency advantage and ideological and selection biases. An important implication of our analysis is that the literature linking incumbency and legislature irresponsibility most likely provides an overestimate of the latter.
Abstract:
How much information does an auctioneer want bidders to have in a private value environment? We address this question using a novel approach to ordering information structures, based on the property that in private value settings more information leads to a more disperse distribution of buyers' updated expected valuations. We define the class of precision criteria following this approach and different notions of dispersion, and relate them to existing criteria of informativeness. Using supermodular precision, we obtain three results: (1) a more precise information structure yields a more efficient allocation; (2) the auctioneer provides less than the efficient level of information, since more information increases bidders' informational rents; (3) there is a strategic complementarity between information and competition, so that both the socially efficient and the auctioneer's optimal choices of precision increase with the number of bidders, and both converge as the number of bidders goes to infinity.
Abstract:
This paper examines the value of connections between German industry and the Nazi movement in early 1933. Drawing on previously unused contemporary sources about management and supervisory board composition and stock returns, we find that one out of seven firms, and a large proportion of the biggest companies, had substantive links with the National Socialist German Workers' Party. Firms supporting the Nazi movement experienced unusually high returns, outperforming unconnected ones by 5% to 8% between January and March 1933. These results are not driven by sectoral composition and are robust to alternative estimators and definitions of affiliation.