113 results for optimal prediction
Abstract:
To recover a version of Barro's (1979) 'random walk' tax-smoothing outcome, we modify Lucas and Stokey's (1983) economy to permit only risk-free debt. This imparts near unit-root-like behavior to government debt, independently of the government expenditure process, a realistic outcome in the spirit of Barro's. We show how the risk-free-debt-only economy confronts the Ramsey planner with additional constraints on equilibrium allocations that take the form of a sequence of measurability conditions. We solve the Ramsey problem by formulating it in terms of a Lagrangian, and applying a Parameterized Expectations Algorithm to the associated first-order conditions. The first-order conditions and numerical impulse response functions partially affirm Barro's random walk outcome. Though the behaviors of tax rates, government surpluses, and government debts differ, allocations are very close for computed Ramsey policies across incomplete- and complete-markets economies.
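The abstract names the Parameterized Expectations Algorithm (PEA) but does not spell it out. The sketch below shows the PEA mechanics on a standard stochastic growth model rather than the paper's Ramsey economy; the model, parameters, and log-polynomial basis are illustrative assumptions only.

```python
# Minimal sketch of the Parameterized Expectations Algorithm (PEA) on a
# standard stochastic growth model -- NOT the paper's Ramsey economy.
# Model, parameters, and the log-polynomial basis are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, gamma, delta, rho, sigma = 0.36, 0.95, 2.0, 0.1, 0.9, 0.01
T = 2_000

def basis(k, theta):
    """Regressors for the parameterized conditional expectation."""
    return np.column_stack([np.ones_like(k), np.log(k), np.log(theta)])

b = np.zeros(3)
b[0] = np.log(1.0 / beta)          # crude initial guess: E_t[.] ~ 1/beta

for it in range(100):
    # simulate the economy under the current expectation function
    k = np.empty(T + 1); theta = np.empty(T + 1); c = np.empty(T)
    k[0] = theta[0] = 1.0
    eps = rng.normal(0.0, sigma, T)
    for t in range(T):
        E = np.exp(basis(k[t:t+1], theta[t:t+1]) @ b)[0]   # approx E_t[...]
        c[t] = (beta * E) ** (-1.0 / gamma)                # Euler equation
        k[t+1] = max(theta[t] * k[t]**alpha + (1 - delta) * k[t] - c[t], 1e-6)
        theta[t+1] = np.exp(rho * np.log(theta[t]) + eps[t])
    # realized value of the term inside the conditional expectation
    y = c[1:] ** (-gamma) * (alpha * theta[1:T] * k[1:T] ** (alpha - 1) + 1 - delta)
    b_new, *_ = np.linalg.lstsq(basis(k[:T-1], theta[:T-1]), np.log(y), rcond=None)
    if np.max(np.abs(b_new - b)) < 1e-6:
        break
    b = 0.5 * b + 0.5 * b_new      # damped fixed-point update on coefficients
```

The fixed point is a coefficient vector at which the expectations used to generate the simulated path are consistent with the realized Euler-equation terms along that path.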
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or centralized), whereas it is largely homogeneous (or decentralized) for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones to ever qualify as optimal.
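As a rough illustration of the trade-off this abstract describes, the toy simulation below routes randomly arriving problems toward randomly drawn solver nodes, with each node forwarding at most one problem per period, and compares a star (centralized) network with a ring (homogeneous) one. The routing rule, service rate, and graph choices are all assumptions, not the paper's model.

```python
# Toy discrete-time simulation of the congestion trade-off described above:
# problems arrive at random nodes, hop toward a randomly drawn solver node,
# and each node forwards at most one problem per period.
import math
import random
import networkx as nx

def poisson(rng, lam):
    """Simple Poisson sampler (Knuth); adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def avg_stock(G, lam, periods=2000, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    paths = dict(nx.all_pairs_shortest_path(G))   # next-hop lookup
    queues = {v: [] for v in nodes}               # problems waiting at each node
    stock = 0.0
    for _ in range(periods):
        for _ in range(poisson(rng, lam)):        # new problems this period
            queues[rng.choice(nodes)].append(rng.choice(nodes))  # dest = solver
        moves = []
        for v in nodes:
            if queues[v]:
                d = queues[v].pop(0)
                if d != v:                        # solved once it reaches d
                    moves.append((paths[v][d][1], d))
        for nxt, d in moves:
            queues[nxt].append(d)
        stock += sum(len(q) for q in queues.values())
    return stock / periods

n = 20
for lam in (0.5, 2.0, 5.0):
    star = avg_stock(nx.star_graph(n - 1), lam)   # centralized
    ring = avg_stock(nx.cycle_graph(n), lam)      # homogeneous
    print(f"lam={lam}: star={star:.1f}, ring={ring:.1f} floating problems")
```

Above the collapse threshold the central node's queue grows without bound, so the average stock diverges with the simulation length, which is the qualitative effect the characterization captures.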
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
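In the spirit of the procedure described (randomized prediction built on individual-sequence ideas), here is a minimal sketch: an exponentially weighted, randomized forecaster over a small pool of Markov "experts". The expert pool, loss, and learning rate are illustrative assumptions, not the paper's exact construction.

```python
# Randomized binary-sequence prediction by exponentially weighted averaging
# over Markov experts of increasing order -- an illustrative stand-in for
# the paper's procedure, not its exact construction.
import math
import random

def markov_expert(order):
    """Expert that predicts the empirical frequency of 1s after the current context."""
    counts = {}
    def predict(history):
        ctx = tuple(history[-order:]) if order else ()
        ones, total = counts.get(ctx, (0, 0))
        return 0.5 if total == 0 else ones / total
    def update(history, bit):
        ctx = tuple(history[-order:]) if order else ()
        ones, total = counts.get(ctx, (0, 0))
        counts[ctx] = (ones + bit, total + 1)
    return predict, update

def mistake_rate(bits, orders=(0, 1, 2, 3), eta=2.0, seed=0):
    rng = random.Random(seed)
    experts = [markov_expert(k) for k in orders]
    losses = [0.0] * len(experts)
    history, mistakes = [], 0
    for bit in bits:
        weights = [math.exp(-eta * L) for L in losses]
        total = sum(weights)
        # weighted probability that the next bit is 1, then randomize the guess
        p1 = sum(w * pred(history) for w, (pred, _) in zip(weights, experts)) / total
        mistakes += (1 if rng.random() < p1 else 0) != bit
        for i, (pred, upd) in enumerate(experts):
            losses[i] += abs(pred(history) - bit)   # absolute loss per expert
            upd(history, bit)
        history.append(bit)
    return mistakes / len(bits)
```

For a stationary ergodic source, growing the pool of Markov orders over time is what lets the average mistake rate approach the Bayes predictor's, which is the convergence the abstract claims.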
Abstract:
This paper extends the optimal law enforcement literature to organized crime. We model the criminal organization as a vertical structure where the principal extracts some rents from the agents through extortion. Depending on the principal's information set, threats may or may not be credible. As long as threats are credible, the principal is able to fully extract rents. In that case, the results obtained by applying standard theory of optimal law enforcement are robust: we argue for a tougher policy. However, when threats are not credible, the principal is not able to fully extract rents and there is violence. Moreover, we show that it is not necessarily true that a tougher law enforcement policy should be chosen in the presence of organized crime.
Abstract:
In this paper, we take an organizational view of organized crime. In particular, we study the organizational consequences of product illegality, attending to the following characteristics: (i) contracts are not enforceable in court; (ii) all participants are subject to the risk of being punished; (iii) employees, who have the most detailed knowledge concerning participation, present a major threat to the entrepreneur; (iv) separation between ownership and management is difficult because record-keeping and auditing augment criminal evidence.
Abstract:
When procurement takes place in the presence of horizontally differentiated contractors, the design of the object being procured affects the resulting degree of competition. This paper highlights the interaction between the optimal procurement mechanism and the design choice. Contrary to conventional wisdom, the sponsor's design choice, instead of homogenizing the market to generate competition, promotes heterogeneity.
Abstract:
We incorporate the process of enforcement learning by assuming that the agency's current marginal cost is a decreasing function of its past experience of detecting and convicting. The agency accumulates data and information (on criminals, on opportunities of crime), enhancing its ability to apprehend in the future at a lower marginal cost. We focus on the impact of enforcement learning on optimal stationary compliance rules. In particular, we show that the optimal stationary fine could be less than maximal and the optimal stationary probability of detection could be higher than otherwise.
Abstract:
In this paper, we focus on the problem created by asymmetric information about the enforcer's (agent's) costs associated with enforcement expenditure. This adverse selection problem affects optimal law enforcement because a low-cost enforcer may conceal its information by imitating a high-cost enforcer, and must then be given compensation to be induced to reveal its true costs. The government faces a trade-off between minimizing the enforcer's compensation and maximizing the net surplus of harmful acts. As a consequence, the probability of apprehension and punishment is usually reduced, leading to more offenses being committed. We show that asymmetry of information does not affect law enforcement as long as raising public funds is costless. The consideration of costly raising of public funds makes it possible to establish a positive correlation between the asymmetry of information between government and enforcers and the crime rate.
Abstract:
The aim of this work was the use of NIR technology, by direct application of a fiber optic probe on back fat, to analyze the fatty acid composition of CLA-fed boars and gilts. A total of 265 animals were fed 3 different diets, and the fatty acid profile of back fat from Gluteus medius was analyzed using gas chromatography and FT-NIR. Spectra were acquired using a Bruker Optics Matrix-F duplex spectrometer equipped with a fiber optic probe (IN-268-2). Oleic and stearic fatty acids were predicted accurately; myristic, vaccenic and linoleic fatty acids were predicted with lower accuracy, while palmitic and α-linolenic fatty acids were poorly predicted. The relative percentage of fatty acids and the NIR spectra showed differences in the fatty acid composition of back fat from pigs fed CLA, which increased the relative percentages of SFA and PUFA while MUFA decreased. Results suggest that a NIR fiber optic probe can be used to predict total saturated and unsaturated fatty acid composition, as well as the percentages of stearic and oleic acids. NIR showed potential as a rapid and easily implemented method to discriminate carcasses from animals fed different diets.
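Calibrations of this kind are typically built with partial least squares (PLS) regression; the abstract does not name the chemometric method, so the following is only an assumed sketch, with placeholder data files and an arbitrary component count.

```python
# Assumed NIR calibration sketch: PLS regression of one fatty acid's GC
# reference values on the absorbance spectra. Files and shapes are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

X = np.load("spectra.npy")        # (n_samples, n_wavelengths), placeholder
y = np.load("oleic_pct.npy")      # GC reference values, placeholder

pls = PLSRegression(n_components=10)            # chosen by CV in practice
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

# standard chemometric figures of merit
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = np.corrcoef(y, y_cv)[0, 1] ** 2
print(f"RMSECV={rmsecv:.3f}, R2={r2:.3f}")
```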
Abstract:
Development of the mathematical models needed to optimally control the existing microgrid at the laboratories of the Institut de Recerca en Energia de Catalunya. The algorithms will be implemented first to simulate the microgrid's behavior, and will then be programmed directly onto the microgrid's components to verify their correct operation.
Abstract:
The control and prediction of wastewater treatment plants poses an important goal: to avoid breaking the environmental balance by always keeping the system in stable operating conditions. It is known that qualitative information, coming from microscopic examinations and subjective remarks, has a deep influence on the activated sludge process, in particular on the total amount of effluent suspended solids, one of the measures of overall plant performance. The search for an input–output model of this variable and the prediction of sudden increases (bulking episodes) is thus a central concern to ensure the fulfillment of current discharge limitations. Unfortunately, the strong interrelation between variables, their heterogeneity and the very high amount of missing information makes the use of traditional techniques difficult, or even impossible. Through the combined use of several methods, mainly rough set theory and artificial neural networks, reasonable prediction models are found, which also serve to show the differing importance of the variables and provide insight into the process dynamics.
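A hedged sketch of the kind of pipeline this abstract suggests: impute the heavily missing data, reduce the variable set, and fit a small neural network to predict effluent suspended solids. Rough-set attribute reduction is approximated here by a generic feature filter; all names, shapes, and parameter choices are assumptions.

```python
# Assumed modeling pipeline: imputation -> scaling -> variable reduction
# (stand-in for rough-set reduction) -> neural network regressor.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: plant measurements with many NaNs; y: effluent suspended solids (placeholders)
X = np.load("plant_vars.npy")
y = np.load("effluent_ss.npy")

model = make_pipeline(
    SimpleImputer(strategy="median"),      # cope with the missing information
    StandardScaler(),
    SelectKBest(f_regression, k=10),       # stand-in for rough-set reduction
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
```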
Abstract:
[eng] Most bodily injury claims are settled through negotiation, with fewer than 5% of cases going to trial. A well-defined negotiation strategy is therefore essential for insurance companies. In this article we assume that the monetary compensation awarded at trial is the maximum amount that the insurer should offer in the negotiation process. Using a real database, we implement a log-linear model to estimate the maximum negotiation offer. Non-spherical disturbances are detected. Correlation arises when more than one claim is settled in the same judicial ruling. Groupwise heteroscedasticity is due to the influence of the forensic assessment on the final compensation.
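As a hedged illustration of the log-linear model this abstract describes, the snippet below fits OLS to the log of the awarded compensation, with standard errors clustered by judicial ruling to reflect the correlation the abstract notes. The column names and data file are hypothetical.

```python
# Illustrative log-linear model of the award, clustered by ruling.
# Dataset and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("claims.csv")          # placeholder dataset
model = smf.ols("np.log(award) ~ severity + age + forensic_score", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["ruling_id"]})
print(result.summary())
```

Groupwise heteroscedasticity of the kind described could be handled similarly, e.g. by weighting or by robust covariance estimators, depending on how the forensic assessment groups are defined.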
Abstract:
[eng] This paper provides, from a theoretical and quantitative point of view, an explanation of why taxes on capital returns are high (around 35%) by analyzing the optimal fiscal policy in an economy with intergenerational redistribution. For this purpose, the government is modeled explicitly and can choose (and commit to) an optimal tax policy in order to maximize society's welfare. In an infinitely lived economy with heterogeneous agents, the long-run optimal capital tax is zero. If heterogeneity is due to the existence of overlapping generations, this result in general no longer holds. I provide sufficient conditions for zero capital and labor taxes, and show that a general class of preferences, commonly used in the macro and public finance literature, violates these conditions. For a version of the model calibrated to the US economy, the main results are: first, if the government is restricted to a set of instruments, the observed fiscal policy cannot be disregarded as suboptimal, and capital taxes are positive and quantitatively relevant. Second, if the government can use age-specific taxes for each generation, then the age profile of capital taxes implies subsidizing the asset returns of the younger generations and taxing at higher rates the asset returns of the older ones.