920 results for Optimal switch allocation
Abstract:
This paper addresses the issue of the optimal behaviour of the Lender of Last Resort (LOLR) in its microeconomic role regarding individual financial institutions in distress. It has been argued that the LOLR should not intervene at the microeconomic level and should let any defaulting institution face market discipline, so that it is confronted with the consequences of the risks it has taken. By considering a simple cost-benefit analysis we show that this position may lack a sufficient foundation. We establish that, instead, under reasonable assumptions, the optimal policy has to be conditional on the amount of uninsured debt issued by the defaulting bank. Yet in equilibrium, because the rescue policy is costly, the LOLR will not rescue all the banks that fulfill the uninsured debt requirement condition, but will follow a mixed strategy. We interpret this as confirmation of the "creative ambiguity" principle, perfectly in line with central bankers' claim that it is efficient for them to have discretion in lending to individual institutions. Alternatively, in other cases, when the social cost of a bank's bankruptcy is too high, it is optimal for the LOLR to bail out the institution, and this gives support to the "too big to fail" policy.
Abstract:
This article studies the effects of interest rate restrictions on loan allocation. The British government tightened the usury laws in 1714, reducing the maximum permissible interest rate from 6% to 5%. A sample of individual loan transactions reveals that average loan size and minimum loan size increased strongly, while access to credit worsened for those with little social capital. Collateralised credits, which had accounted for a declining share of total lending, returned to their former role of prominence. Our results suggest that the usury laws distorted credit markets significantly; we find no evidence that they offered a form of Pareto-improving social insurance.
Abstract:
We explore the implications for the optimal degree of fiscal decentralization when people's preferences for goods and services, which classic treatments of fiscal federalism (Oates, 1972) place in the purview of local governments, exhibit specific egalitarianism (Tobin, 1970), or solidarity. We find that a system in which the central government provides a common minimum level of the publicly provided good, and local governments are allowed to use their own resources to provide an even higher local level, performs better from an efficiency perspective relative to all other systems analyzed for a relevant range of preferences over solidarity.
Abstract:
To recover a version of Barro's (1979) 'random walk' tax smoothing outcome, we modify Lucas and Stokey's (1983) economy to permit only risk-free debt. This imparts near unit-root-like behavior to government debt, independently of the government expenditure process, a realistic outcome in the spirit of Barro's. We show how the risk-free-debt-only economy confronts the Ramsey planner with additional constraints on equilibrium allocations that take the form of a sequence of measurability conditions. We solve the Ramsey problem by formulating it in terms of a Lagrangian, and applying a Parameterized Expectations Algorithm to the associated first-order conditions. The first-order conditions and numerical impulse response functions partially affirm Barro's random walk outcome. Though the behaviors of tax rates, government surpluses, and government debts differ, allocations are very close for computed Ramsey policies across incomplete and complete markets economies.
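The solution method referred to in the abstract above (a Lagrangian plus a Parameterized Expectations Algorithm applied to the first-order conditions) is easiest to see on a small example. The sketch below runs a generic PEA on a standard stochastic growth model, not on the paper's risk-free-debt Ramsey economy; the model, parameter values, and exponential-polynomial basis are illustrative assumptions.

```python
import numpy as np

# Parameterized Expectations Algorithm (PEA) on a standard stochastic growth model.
# Euler equation: c_t^{-gamma} = beta * E_t[ c_{t+1}^{-gamma} * (alpha*theta_{t+1}*k_{t+1}^{alpha-1} + 1 - delta) ]
# The conditional expectation is approximated by exp(psi0 + psi1*ln k_t + psi2*ln theta_t).

alpha, beta, gamma, delta = 0.36, 0.95, 2.0, 0.10   # illustrative parameters
rho, sigma = 0.90, 0.02                             # AR(1) log-productivity process
T, burn, damp = 20_000, 500, 0.5                    # simulation length, burn-in, damping

rng = np.random.default_rng(0)
ln_theta = np.zeros(T)
for t in range(1, T):
    ln_theta[t] = rho * ln_theta[t - 1] + rng.normal(0.0, sigma)
theta = np.exp(ln_theta)

# Deterministic steady state, used to initialize the expectation coefficients
k_ss = (alpha * beta / (1.0 - beta * (1.0 - delta))) ** (1.0 / (1.0 - alpha))
c_ss = k_ss ** alpha - delta * k_ss
psi = np.array([np.log(c_ss ** (-gamma) / beta), 0.0, 0.0])

for it in range(500):
    # 1) Simulate the economy given the current expectation coefficients psi
    k = np.empty(T + 1)
    k[0] = k_ss
    c = np.empty(T)
    for t in range(T):
        exp_term = np.exp(psi[0] + psi[1] * np.log(k[t]) + psi[2] * ln_theta[t])
        resources = theta[t] * k[t] ** alpha + (1.0 - delta) * k[t]
        c[t] = min((beta * exp_term) ** (-1.0 / gamma), 0.99 * resources)
        k[t + 1] = resources - c[t]

    # 2) Realized values of the term inside E_t[...] (dated t+1), for t = 0..T-2
    e = c[1:] ** (-gamma) * (alpha * theta[1:] * k[1:T] ** (alpha - 1.0) + 1.0 - delta)

    # 3) Regress log realizations on the date-t state variables to update psi
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), ln_theta[:T - 1]])
    psi_new, *_ = np.linalg.lstsq(X[burn:], np.log(e[burn:]), rcond=None)

    if np.max(np.abs(psi_new - psi)) < 1e-6:
        psi = psi_new
        break
    psi = (1.0 - damp) * psi + damp * psi_new       # damped fixed-point update

print("converged expectation coefficients:", np.round(psi, 4))
print("mean capital / deterministic steady state:", round(np.mean(k[burn:]) / k_ss, 3))
```

The damped fixed-point update is the core of the PEA: simulate under the guessed expectation function, then re-fit the guess to the realized right-hand sides until the coefficients stop moving.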
Abstract:
How much information does an auctioneer want bidders to have in a private value environment? We address this question using a novel approach to ordering information structures based on the property that, in private value settings, more information leads to a more dispersed distribution of buyers' updated expected valuations. We define the class of precision criteria following this approach and different notions of dispersion, and relate them to existing criteria of informativeness. Using supermodular precision, we obtain three results: (1) a more precise information structure yields a more efficient allocation; (2) the auctioneer provides less than the efficient level of information, since more information increases bidder informational rents; (3) there is a strategic complementarity between information and competition, so that both the socially efficient and the auctioneer's optimal choice of precision increase with the number of bidders, and both converge as the number of bidders goes to infinity.
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like, or "centralized"), whereas it is largely homogeneous (or "decentralized") for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones to ever qualify as optimal.
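To get a feel for the congestion trade-off described in the abstract above, here is a toy discrete-time simulation comparing a star ("centralized") with a ring ("homogeneous") architecture. The routing rule, Bernoulli arrivals, a service rate of one forwarding per node per period, and the parameter values are assumptions made for illustration, not the paper's model.

```python
import random
from collections import deque

def next_hop_star(node, dest, n):
    """Star with hub 0: leaves route through the hub, the hub delivers to the leaf."""
    return dest if node == 0 else 0

def next_hop_ring(node, dest, n):
    """Ring 0..n-1: step along the shorter arc toward dest."""
    d = (dest - node) % n
    return (node + 1) % n if d <= n - d else (node - 1) % n

def simulate(n, arrival_rate, next_hop, steps=5000, seed=1):
    """Average stock of 'floating' problems when each node forwards one problem per period."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(n)]
    backlog_sum = 0
    for _ in range(steps):
        # New problems: each node draws one with probability arrival_rate / n,
        # addressed to a uniformly chosen specialized solver elsewhere in the network.
        for node in range(n):
            if rng.random() < arrival_rate / n:
                dest = rng.randrange(n - 1)
                if dest >= node:
                    dest += 1
                queues[node].append(dest)
        # Each node forwards at most one queued problem per period, using local routing only.
        moves = []
        for node in range(n):
            if queues[node]:
                dest = queues[node].popleft()
                moves.append((next_hop(node, dest, n), dest))
        for hop, dest in moves:
            if hop != dest:                 # not yet at its solver: it keeps floating
                queues[hop].append(dest)
        backlog_sum += sum(len(q) for q in queues)
    return backlog_sum / steps

if __name__ == "__main__":
    n = 20
    print(f"{'arrival rate':>12} {'star backlog':>14} {'ring backlog':>14}")
    for rate in (0.3, 0.8, 1.5, 3.0):
        print(f"{rate:>12.1f} {simulate(n, rate, next_hop_star):>14.1f} "
              f"{simulate(n, rate, next_hop_ring):>14.1f}")
```

In this toy version the star wins at low arrival rates (short paths, idle hub) and collapses once the hub's load exceeds one problem per period, while the ring keeps the backlog bounded at much higher rates, which mirrors the qualitative conclusion of the abstract.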
Abstract:
This paper extends the optimal law enforcement literature to organized crime. We model the criminal organization as a vertical structure where the principal extracts some rents from the agents through extortion. Depending on the principal's information set, threats may or may not be credible. As long as threats are credible, the principal is able to fully extract rents. In that case, the results obtained by applying standard theory of optimal law enforcement are robust: we argue for a tougher policy. However, when threats are not credible, the principal is not able to fully extract rents and there is violence. Moreover, we show that it is not necessarily true that a tougher law enforcement policy should be chosen in the presence of organized crime.
Abstract:
We develop a mathematical programming approach for the classical PSPACE-hard restless bandit problem in stochastic optimization. We introduce a hierarchy of n (where n is the number of bandits) increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the (exponential size) formulation of the problem as a Markov decision chain, while the other relaxations provide bounds and are efficiently computed. We also propose a priority-index heuristic scheduling policy from the solution to the first-order relaxation, where the indices are defined in terms of optimal dual variables. In this way we propose a policy and a suboptimality guarantee. We report results of computational experiments that suggest that the proposed heuristic policy is nearly optimal. Moreover, the second-order relaxation is found to provide strong bounds on the optimal value.
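As a concrete illustration of a first-order relaxation and dual-based priority indices of the kind mentioned above, here is a hedged sketch for the discounted case. The LP below is one standard way to write the first-order relaxation (occupation-measure variables plus a single coupled activity budget); the index used here, the reduced cost of the passive variable in each state, is an illustrative choice and not necessarily the paper's exact definition.

```python
import numpy as np
from scipy.optimize import linprog

def first_order_relaxation(P, r, alpha, beta, M):
    """First-order LP relaxation of a discounted restless bandit problem.

    P[i][a]  -- transition matrix of project i under action a (0 = passive, 1 = active)
    r[i][a]  -- reward vector of project i under action a
    alpha[i] -- initial state distribution of project i
    beta     -- discount factor;  M -- projects kept active per period
    Returns an upper bound on the optimal value and dual-based priority indices.
    """
    N = len(P)
    sizes = [len(alpha[i]) for i in range(N)]
    offs = np.cumsum([0] + [2 * s for s in sizes[:-1]])
    nvar = 2 * sum(sizes)

    def col(i, s, a):                       # column of variable x^a_{i,s}
        return offs[i] + 2 * s + a

    # Objective: maximize total discounted reward  =>  minimize its negative
    c = np.zeros(nvar)
    for i in range(N):
        for s in range(sizes[i]):
            for a in (0, 1):
                c[col(i, s, a)] = -r[i][a][s]

    # Flow-balance constraints for each project/state, plus one coupling constraint
    A = np.zeros((sum(sizes) + 1, nvar))
    b = np.zeros(sum(sizes) + 1)
    row = 0
    for i in range(N):
        for s in range(sizes[i]):
            for a in (0, 1):
                A[row, col(i, s, a)] += 1.0
                for sp in range(sizes[i]):
                    A[row, col(i, sp, a)] -= beta * P[i][a][sp, s]
            b[row] = alpha[i][s]
            row += 1
    for i in range(N):                      # coupling: total discounted active time
        for s in range(sizes[i]):
            A[row, col(i, s, 1)] = 1.0
    b[row] = M / (1.0 - beta)

    res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
    duals = res.eqlin.marginals             # duals of the equality constraints
    reduced = c - A.T @ duals               # reduced costs of all variables
    # Illustrative priority index: reduced cost of the *passive* variable, i.e. how much
    # the bound deteriorates per unit of passivity in that state (larger => activate first).
    index = {(i, s): reduced[col(i, s, 0)]
             for i in range(N) for s in range(sizes[i])}
    return -res.fun, index

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def random_project(S=3):
        P0, P1 = rng.dirichlet(np.ones(S), S), rng.dirichlet(np.ones(S), S)
        r0, r1 = np.zeros(S), rng.uniform(0.0, 1.0, S)   # rewards only when active
        return (P0, P1), (r0, r1), np.full(S, 1.0 / S)
    data = [random_project() for _ in range(4)]
    P, r, alpha = [d[0] for d in data], [d[1] for d in data], [d[2] for d in data]

    bound, index = first_order_relaxation(P, r, alpha, beta=0.9, M=1)
    print("upper bound from the first-order relaxation:", round(bound, 3))
    for (i, s), g in sorted(index.items(), key=lambda kv: -kv[1]):
        print(f"project {i}, state {s}: priority index {g:+.4f}")
```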
Abstract:
In this paper, we take an organizational view of organized crime. In particular, we study the organizational consequences of product illegality, attending to the following characteristics: (i) contracts are not enforceable in court; (ii) all participants are subject to the risk of being punished; (iii) employees, having the most detailed knowledge concerning participation, present a major threat to the entrepreneur; (iv) separation between ownership and management is difficult because record-keeping and auditing augment criminal evidence.
Abstract:
We postulate a two-region world, comprised of North (calibrated after the US) and South (calibrated after China). Our optimization results show the compatibility of the following three desiderata:
(1) Global CO2 emissions follow a conservative path that leads to the stabilization of concentrations at 450 ppm.
(2) North and South converge to a path of sustained growth at 1% per year (28.2% per generation) in 2075.
(3) During the transition to the steady state, North also grows at 1% per year while South's rates of growth are markedly higher.
The transition paths require a drastic reduction of the share of emissions allocated to North, large investments in knowledge, both in North and South, as well as very large investments in education in South. Surprisingly, in order to sustain North's utility growth rate, some output must be transferred from South to North during the transition. Although undoubtedly subject to many caveats, our results support a degree of optimism by providing prima facie evidence of the possibility of tackling climate change in a way that is fair both across generations and across regions while allowing for positive rates of human development.
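As a quick back-of-the-envelope check on the growth figures quoted above (assuming a 25-year generation, which is not stated in the abstract itself):

```latex
% 1% annual growth compounded over a 25-year generation:
(1.01)^{25} \approx 1.282
```

i.e. roughly the 28.2% per-generation growth cited, consistent with generations of 25 years.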
Abstract:
In this paper, I analyze the ownership dynamics of N strategic risk-averse corporate insiders facing a moral hazard problem. A solution for the equilibrium share price and the dynamics of the aggregate insider stake is obtained in two cases: when agents can credibly commit to an optimal ownership policy and when they cannot commit (time-consistent case). In the latter case, the aggregate stake gradually adjusts towards the competitive allocation. The speed of adjustment increases with N when outside investors are risk-averse, and does not depend on it when investors are risk-neutral. Predictions of the model are consistent with recent empirical findings.
Abstract:
In this paper we address the issue of locating hierarchical facilities in the presence of congestion. Two hierarchical models are presented, where lower level servers attend requests first, and then some of the served customers are referred to higher level servers. In the first model, the objective is to find the minimum number of servers and their locations that will cover a given region with a distance or time standard. The second model is cast as a Maximal Covering Location formulation. A heuristic procedure is then presented together with computational experience. Finally, some extensions of these models that address other types of spatial configurations are offered.
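Since the abstract mentions a Maximal Covering Location formulation and a heuristic, here is a minimal single-level sketch of a greedy maximal-covering heuristic. It is a generic textbook-style procedure under assumed Euclidean distances; the paper's models are hierarchical, include congestion and referrals, and its own heuristic is not reproduced here.

```python
import math
import random

def greedy_mclp(demand_points, weights, candidate_sites, radius, p):
    """Greedy heuristic for a single-level Maximal Covering Location Problem:
    pick p sites so as to cover as much weighted demand as possible within
    the distance standard `radius`."""
    coverage = [{i for i, pt in enumerate(demand_points) if math.dist(site, pt) <= radius}
                for site in candidate_sites]
    chosen, covered = [], set()
    for _ in range(p):
        best, best_gain = None, -1.0
        for j, cov in enumerate(coverage):
            if j in chosen:
                continue
            gain = sum(weights[i] for i in cov - covered)   # newly covered demand
            if gain > best_gain:
                best, best_gain = j, gain
        chosen.append(best)
        covered |= coverage[best]
    return chosen, sum(weights[i] for i in covered) / sum(weights)

if __name__ == "__main__":
    random.seed(0)
    demand = [(random.random(), random.random()) for _ in range(200)]
    w = [random.randint(1, 10) for _ in demand]
    sites = [(random.random(), random.random()) for _ in range(30)]
    chosen, share = greedy_mclp(demand, w, sites, radius=0.2, p=5)
    print("chosen sites:", chosen)
    print("share of demand covered within the standard:", round(share, 3))
```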
Abstract:
Let there be a positive (exogenous) probability that, at each date, the human species will disappear. We postulate an Ethical Observer (EO) who maximizes intertemporal welfare under this uncertainty, with expected-utility preferences. Various social welfare criteria entail alternative von Neumann-Morgenstern utility functions for the EO: utilitarian, Rawlsian, and an extension of the latter that corrects for the size of population. Our analysis covers, first, a cake-eating economy (without production), where the utilitarian and Rawlsian recommend the same allocation. Second, a productive economy with education and capital, where it turns out that the recommendations of the two EOs are in general different. But when the utilitarian program diverges, then we prove it is optimal for the extended Rawlsian to ignore the uncertainty concerning the possible disappearance of the human species in the future. We conclude by discussing the implications for intergenerational welfare maximization in the presence of global warming.
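For concreteness, this is the standard expected-utility calculation behind such a setup, sketched under the assumption of a constant per-period survival probability 1-p (the Rawlsian and population-corrected criteria depend on the paper's specific formulation and are not shown):

```latex
% Utilitarian Ethical Observer with a random extinction date T,
% where the species survives each date with probability 1-p:
W_U \;=\; \mathbb{E}\!\left[\sum_{t=0}^{T} u(c_t)\right]
     \;=\; \sum_{t=0}^{\infty} \Pr(T \ge t)\, u(c_t)
     \;=\; \sum_{t=0}^{\infty} (1-p)^{t}\, u(c_t)
```

so the extinction hazard enters the utilitarian objective exactly like a discount factor of 1-p on per-period utility.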
Abstract:
When procurement takes place in the presence of horizontally differentiated contractors, the design of the object being procured affects the resulting degree of competition. This paper highlights the interaction between the optimal procurement mechanism and the design choice. Contrary to conventional wisdom, the sponsor's design choice, instead of homogenizing the market to generate competition, promotes heterogeneity.
Abstract:
We incorporate the process of enforcement learning by assuming that the agency's current marginal cost is a decreasing function of its past experience of detecting and convicting. The agency accumulates data and information (on criminals, on opportunities of crime), enhancing its ability to apprehend in the future at a lower marginal cost. We focus on the impact of enforcement learning on optimal stationary compliance rules. In particular, we show that the optimal stationary fine could be less-than-maximal and the optimal stationary probability of detection could be higher-than-otherwise.
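One simple way to formalize the learning assumption described above (an illustrative specification of what the abstract states in words, not necessarily the paper's functional form):

```latex
% Accumulated enforcement experience and learning-by-doing in detection costs
E_{t+1} = E_t + d_t, \qquad c_t = c(E_t), \qquad c'(E_t) < 0
```

where d_t is the number of detections and convictions obtained at date t, E_t is accumulated experience, and c(E_t) is the agency's current marginal cost of detection.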