807 results for Information Model
Abstract:
This paper investigates the introduction of type dynamics into Laffont and Tirole's regulation model. The regulator and the firm are engaged in a two-period relationship governed by short-term contracts, in which the regulator observes cost but cannot distinguish how much of the cost is due to effort on cost reduction and how much to the efficiency of the firm's technology, called its type. There is asymmetric information about the firm's type. Our model is developed in a framework in which the regulator learns from the firm's choice in the first period and uses that information to design the best second-period incentive scheme. The regulator is aware of the possibility of changes in types and takes that into account. We show how type dynamics build a bridge between commitment and non-commitment situations. In particular, the possibility of changing types mitigates the "ratchet effect". We show that for a small degree of type dynamics the equilibrium shows separation and the welfare achieved is close to its upper bound (given by the commitment allocation).
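For reference, a minimal sketch of the cost structure this class of models builds on (standard Laffont and Tirole notation; the two-period type dynamics are the paper's own specification):

```latex
% Observed cost confounds type and effort; the regulator sees only C.
C = \beta - e, \qquad U = t - \psi(e), \qquad \psi' > 0, \; \psi'' > 0
% \beta: the firm's type (efficiency parameter, private information);
% e: cost-reducing effort; t: the transfer; U: the firm's rent.
```

When types can change between periods, the first-period report is only imperfectly informative about the second-period type, which is the channel through which, as the abstract notes, type dynamics soften the ratchet effect.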
Abstract:
This thesis studies price-setting models and their macroeconomic implications. In the first two chapters I analyze models in which firms' pricing decisions take into account menu costs and information costs. In Chapter 1 I estimate such models using price-change statistics for the United States, and conclude that information costs are significantly larger than menu costs, and that the data clearly favor the model in which information about aggregate conditions is costly while idiosyncratic information is free. In Chapter 2 I investigate the consequences of monetary shocks and disinflation announcements using the previously estimated models. I show that the degree of monetary non-neutrality is larger in the model in which part of the information is free. Chapter 3 is a paper written jointly with Carlos Carvalho (PUC-Rio) and Antonella Tutino (Federal Reserve Bank of Dallas). In the paper we examine a price-setting model in which firms are subject to a Shannon-type information-flow constraint. We calibrate the model and study impulse-response functions to idiosyncratic and aggregate shocks. We show that firms prefer to process aggregate and idiosyncratic information jointly rather than investigating each separately. This kind of processing generates more frequent price adjustments, reducing the persistence of the real effects of monetary shocks.
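As a toy illustration of the menu-cost margin these models share (the threshold rule, parameter values, and quadratic loss below are assumptions for illustration, not the estimated model):

```python
import numpy as np

rng = np.random.default_rng(0)

kappa = 0.02                        # menu cost (assumed value)

def loss(gap):
    """Per-period profit loss from a (log) price gap, assumed quadratic."""
    return gap ** 2

def adjust(price, target):
    """Ss-type rule: pay the menu cost and reset only when it is worth it."""
    return target if loss(target - price) > kappa else price

# Simulate a random-walk frictionless target and count adjustments.
T = 10_000
target = np.cumsum(rng.normal(0.0, 0.01, T))
price, n_adjust = 0.0, 0
for t in range(T):
    new_price = adjust(price, target[t])
    n_adjust += new_price != price
    price = new_price

print(f"adjustment frequency: {n_adjust / T:.3f}")
```

An information cost enters the same calculation one step earlier: the firm must pay to learn where the target is before it can decide whether to adjust.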
Abstract:
This paper studies construction of facilities in a federal state under asymmetric information. A country consists of two regions, each ruled by a local authority. The federal government plans to construct a facility in one of the regions. The facility generates a local value in the host region and has spillover effects in the other region. The federal government does not observe the local value because it is the local authority's private information. So the federal government designs an incentive-compatible mechanism, specifying whether the facility should be constructed and a balanced scheme of interregional transfers to finance its cost. The federal government is constitutionally constrained to respect a given level of each region's welfare. We show that depending upon the facility's local value and the spillover effect, the government faces different incentive problems. Moreover, their existence depends crucially on how stringent the constitutional constraints are. Therefore, the optimal mechanism will also depend upon these three features of the model.
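A minimal sketch of the incentive-compatibility requirement involved, for a hypothetical two-type discretization (the values, decisions, and transfers below are illustrative assumptions, not the paper's optimal mechanism):

```python
# Host region reports its local value v; the mechanism fixes a build
# decision d(v) in {0, 1} and the host's transfer contribution t(v).
# Payoff to a host with true value v from reporting w: v * d(w) - t(w).
V = [1.0, 3.0]               # hypothetical low/high local values
d = {1.0: 0, 3.0: 1}         # build only when the high value is reported
t = {1.0: 0.0, 3.0: 1.2}     # host's share of the construction cost

def payoff(v, w):
    return v * d[w] - t[w]

# Incentive compatibility: truth-telling beats every misreport.
ic_ok = all(payoff(v, v) >= payoff(v, w) for v in V for w in V)

# Constitutional constraint: a floor (assumed 0 here) on host welfare.
floor_ok = all(payoff(v, v) >= 0.0 for v in V)

print("incentive compatible:", ic_ok, "| welfare floor respected:", floor_ok)
```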
Abstract:
This paper discusses distribution and the historical phases of capitalism. It assumes that technical progress and growth are taking place and, given that, asks about the functional distribution of income between labor and capital, taking as reference the classical theory of distribution and Marx's falling tendency of the rate of profit. Based on the historical experience, it first inverts the model, making the rate of profit the constant variable in the long run and the wage rate the residuum; second, it distinguishes three types of technical progress (capital-saving, neutral and capital-using) and applies them to the history of capitalism, with the UK and France as reference. Given these three types of technical progress, it distinguishes four phases of capitalist growth, of which only the second is consistent with Marx's prediction. The last phase, after World War II, should in principle be capital-saving, consistent with growth of wages above productivity. Instead, since the 1970s wages have been kept stagnant in rich countries because of, first, the fact that the Information and Communication Technology Revolution proved to be highly capital-using, opening room for a new wave of substitution of capital for labor; second, the new competition coming from developing countries; third, the emergence of the technobureaucratic or professional class; and, fourth, the new power of the neoliberal class coalition associating rentier capitalists and financiers.
Abstract:
There is strong empirical evidence that risk premia in long-term interest rates are time-varying. These risk premia critically depend on interest rate volatility, yet existing research has not examined the impact of time-varying volatility on excess returns for long-term bonds. To address this issue, we incorporate interest rate option prices, which are very sensitive to interest rate volatility, into a dynamic model for the term structure of interest rates. We estimate three-factor affine term structure models using both swap rates and interest rate cap prices. When we incorporate option prices, the model better captures interest rate volatility and is better able to predict excess returns for long-term swaps over short-term swaps, both in- and out-of-sample. Our results indicate that interest rate options contain valuable information about risk premia and interest rate dynamics that cannot be extracted from interest rates alone.
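As a minimal sketch of the affine class in question, a one-factor Vasicek model (the paper estimates three-factor specifications; the parameters here are illustrative):

```python
import numpy as np

# Vasicek short rate: dr = kappa*(theta - r) dt + sigma dW.
# Zero-coupon prices are exponential-affine: P(tau) = A(tau) * exp(-B(tau) * r).
kappa, theta, sigma = 0.5, 0.04, 0.01   # assumed risk-neutral parameters

def bond_price(r, tau):
    B = (1.0 - np.exp(-kappa * tau)) / kappa
    A = np.exp((theta - sigma**2 / (2 * kappa**2)) * (B - tau)
               - sigma**2 * B**2 / (4 * kappa))
    return A * np.exp(-B * r)

def yields(r, taus):
    """Continuously compounded zero-coupon yields for maturities taus."""
    return np.array([-np.log(bond_price(r, tau)) / tau for tau in taus])

print(yields(r=0.03, taus=[0.5, 1, 2, 5, 10]))
```

In this one-factor benchmark volatility is constant, which is precisely what the paper moves beyond: cap prices are informative about the time-varying volatility that the abstract emphasizes.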
Abstract:
This paper studies a model of a sequential auction where bidders are allowed to acquire further information about their valuations of the object in the middle of the auction. It is shown that, in any equilibrium where the distribution of the final price is atomless, a bidder's best response has a simple characterization. In particular, the optimal information acquisition point is the same, regardless of the other bidders' actions. This makes it natural to focus on symmetric, undominated equilibria, as in the Vickrey auction. An existence theorem for such a class of equilibria is presented. The paper also presents some results and numerical simulations that compare this sequential auction with the one-shot auction. Sequential auctions typically yield more expected revenue for the seller than their one-shot counterparts. So the possibility of mid-auction information acquisition can provide an explanation for why sequential procedures are more often adopted.
Abstract:
Starting from the idea that economic systems belong to complexity theory, in which many agents interact with each other without central control and these interactions are able to change the future behavior of the agents and of the entire system, similar to a chaotic system, we extend the model of Russo et al. (2014) to carry out three experiments focusing on the interaction between Banks and Firms in an artificial economy. The first experiment concerns relationship banking, where, according to the literature, the interaction over time between Banks and Firms is able to produce mutual benefits, mainly due to the reduction of information asymmetry between them. The second experiment is related to information heterogeneity in the credit market, where the larger the bank, the higher its visibility in the credit market, increasing the number of consultations for new loans. Finally, the third experiment is about the effects on the credit market of the heterogeneity of prices that Firms face in the goods market.
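A toy sketch of the visibility mechanism in the second experiment (bank sizes and the proportional-consultation rule are assumptions for illustration, not the extended Russo et al. model):

```python
import numpy as np

rng = np.random.default_rng(1)

n_banks, n_firms, consults_per_firm = 5, 100, 2
bank_size = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # assumed balance sheets

# Information heterogeneity: a firm consults banks with probability
# proportional to bank size, so larger banks see more loan requests.
p = bank_size / bank_size.sum()
consultations = np.zeros(n_banks, dtype=int)
for _ in range(n_firms):
    chosen = rng.choice(n_banks, size=consults_per_firm, replace=False, p=p)
    consultations[chosen] += 1

for i, c in enumerate(consultations):
    print(f"bank {i} (size {bank_size[i]:>4}): {c} consultations")
```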
Abstract:
My dissertation focuses on dynamic aspects of coordination processes such as reversibility of early actions, the option to delay decisions, and learning about the environment from the observation of other people's actions. This study proposes the use of tractable dynamic global games where players privately and passively learn about their actions' true payoffs and are able to adjust early investment decisions to the arrival of new information, in order to investigate the consequences of the presence of liquidity shocks for the performance of a Tobin tax as a policy intended to foster coordination success (chapter 1), and the adequacy of the use of a Tobin tax to reduce an economy's vulnerability to sudden stops (chapter 2). Then, it analyzes players' incentive to acquire costly information in a sequential decision setting (chapter 3). In chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them are hit by liquidity restrictions in a second period and are forced to withdraw early investment or are precluded from investing in the interim period, depending on the actions they chose in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions like a tax on short-term capital. In this case, a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium. A Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock. Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model proposed in chapter 1, with the exception that the liquidity shock is considered stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents to coordinate on the good outcome. The model provides a rationale for a Tobin tax in economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller. Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and information externality. When information is cheap relative to players' incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from the observation of the first player's decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the distribution of payoffs has a large variance.
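For context on the global-games machinery, a minimal numeric benchmark from the static literature (the success rule, payoffs, improper uniform prior, and noise scale below are textbook assumptions; the dissertation's dynamic models with liquidity shocks are richer):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

sigma, cost = 0.1, 0.4    # signal noise and investment cost (assumed)

# Static benchmark: investment succeeds iff the investing mass reaches the
# hurdle theta. Agents see x = theta + sigma*eps and invest iff x <= x_star.
def theta_star(x_star):
    # Critical fundamental: investing mass Phi((x* - theta)/sigma) = theta.
    return brentq(lambda th: norm.cdf((x_star - th) / sigma) - th, -5, 5)

def indifference(x_star):
    # Marginal agent at x*: probability of success must equal the cost
    # (posterior theta | x* is N(x*, sigma^2) under a uniform prior).
    return norm.cdf((theta_star(x_star) - x_star) / sigma) - cost

x_star = brentq(indifference, -5, 5)
print(f"x* = {x_star:.4f}, theta* = {theta_star(x_star):.4f} "
      f"(theory: 1 - cost = {1 - cost:.1f})")
```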
Abstract:
Developing software is still a risky business. After 60 years of experience, this community is still not able to consistently build Information Systems (IS) for organizations with predictable quality, within previously agreed budget and time constraints. Although software is changeable, we are still unable to cope with the amount and complexity of change that organizations demand for their IS. To improve results, developers have followed two alternatives: frameworks that increase productivity but constrain the flexibility of possible solutions; and agile ways of developing software that keep flexibility with fewer upfront commitments. With strict frameworks, specific hacks have to be put in place to get around the framework's construction options. In time this leads to inconsistent architectures that are harder to maintain due to incomplete documentation and human-resources turnover. The main goal of this work is to create a new way to develop flexible IS for organizations, using web technologies, in a faster, better and cheaper way that is more suited to handling organizational change. To do so, we propose an adaptive object model that uses a new ontology for data and action with strict normalizing rules. These rules should bound the effects of changes, which can then be better tested and therefore corrected. Interfaces are built with templates of resources that can be reused and extended in a flexible way. The "state of the world" for each IS is determined by all production and coordination acts that agents have performed over time, even those performed by external systems. When bugs are found during maintenance, their past cascading effects can be checked through simulation, re-running the log of transaction acts over time and checking results against previous records. This work implements a prototype with part of the proposed system in order to make a preliminary assessment of its feasibility and limitations.
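A minimal sketch of the replay idea described above (the act kinds and state shape are illustrative assumptions; the thesis defines its own ontology of production and coordination acts):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Act:
    kind: str          # e.g. "request", "promise", "execute", "accept"
    agent: str
    resource: str
    value: int = 0

def replay(log):
    """Rebuild the 'state of the world' by replaying the full act log."""
    state = {}
    for act in log:
        if act.kind == "execute":        # production acts change facts
            state[act.resource] = state.get(act.resource, 0) + act.value
    return state

log = [
    Act("request", "client", "order-42"),
    Act("promise", "supplier", "order-42"),
    Act("execute", "supplier", "stock", -3),
    Act("execute", "supplier", "delivered", 3),
    Act("accept", "client", "order-42"),
]

# After a bug fix, re-running the log and diffing against recorded state
# exposes any past cascading effects of the defect.
print(replay(log))   # {'stock': -3, 'delivered': 3}
```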
Abstract:
The scalar sector of the simplest version of the 3-3-1 electroweak model is constructed with three Higgs triplets only. We show that a relation involving two of the constants of the model, two vacuum expectation values of the neutral scalars, and the mass of the doubly charged Higgs boson leads to important information concerning the signals of this scalar particle.
Abstract:
Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere, occurring more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between approximately 50°N and approximately 80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and σφ represent a practical measure of the intensity of amplitude and phase scintillation affecting GNSS receivers. However, they do not provide sufficient information regarding the actual tracking errors that degrade GNSS receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow the computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements used by the receiver to calculate user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. That gives the least squares stochastic model used for position computation a more realistic representation, vis-a-vis the otherwise 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared to the (non-mitigated) 'equal weights' solution. An improvement of between 17% and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km. The method was then compared with alternative approaches that can be used to improve the least squares stochastic model, such as weighting according to satellite elevation angle and by the inverse of the square of the standard deviation of the code/carrier divergence (σCCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. With the use of high-rate scintillation data in addition to the scintillation indices, a carrier-phase-based mitigated solution was also implemented and compared with the conventional solution. During a period of occurrence of high phase scintillation, it was observed that problems related to ambiguity resolution can be reduced by the use of the proposed mitigated solution.
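A minimal sketch of the weighting scheme (geometry, residuals, and variances below are synthetic placeholders; in the paper the variances come from the scintillation-sensitive PLL/DLL tracking models):

```python
import numpy as np

def wls_solve(G, residuals, var):
    """Weighted least squares: inverse-variance weights per satellite."""
    W = np.diag(1.0 / var)
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ residuals)

# Toy design matrix: unit line-of-sight vectors plus a receiver-clock column.
G = np.array([[ 0.3,  0.5, 0.81, 1.0],
              [-0.6,  0.2, 0.77, 1.0],
              [ 0.1, -0.7, 0.70, 1.0],
              [ 0.5, -0.1, 0.86, 1.0],
              [-0.2, -0.4, 0.89, 1.0]])
residuals = np.array([1.2, -0.4, 0.8, 0.1, -0.9])   # metres (synthetic)

# Placeholder: satellite 0 is taken to be scintillation-affected, so its
# PLL+DLL tracking-error variance is large and its weight correspondingly low.
var = np.array([9.0, 1.0, 1.0, 1.0, 1.0])

print("equal weights:       ", wls_solve(G, residuals, np.ones(5)))
print("scintillation-aware: ", wls_solve(G, residuals, var))
```

With equal variances the weights cancel and the solution reduces to the conventional 'equal weights' model, which is why that solution is the natural baseline for comparison.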
Abstract:
A neural model for solving nonlinear optimization problems is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The network is shown to be completely stable and globally convergent to the solutions of nonlinear optimization problems. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are presented to validate the developed methodology.
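A schematic sketch in the spirit of the valid-subspace idea (the quadratic objective, the single equality constraint, and the step size are assumptions; the paper's network and convergence analysis are more general):

```python
import numpy as np

# Minimize f(v) = 0.5 * ||v||^2 subject to A v = b by alternating a gradient
# step on f with a projection onto the valid subspace {v : A v = b}.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])

AAinv = np.linalg.inv(A @ A.T)
T = np.eye(3) - A.T @ AAinv @ A     # projector onto the null space of A
s = A.T @ AAinv @ b                 # particular solution of A v = b

def grad(v):                        # gradient of the assumed objective
    return v

v, eta = np.array([5.0, -2.0, 1.0]), 0.1
for _ in range(200):
    v = T @ (v - eta * grad(v)) + s   # descend, then confine to A v = b

print(v, "| constraint residual:", A @ v - b)   # -> approx [1, 1, 1], ~0
```

The projection keeps every iterate exactly feasible, so convergence of the dynamics immediately yields a feasible optimum, mirroring the guarantee the abstract describes for the network's equilibrium points.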