127 results for dynamic theory
Abstract:
This paper surveys the recent literature on convergence across countries and regions. I discuss the main convergence and divergence mechanisms identified in the literature and develop a simple model that illustrates their implications for income dynamics. I then review the existing empirical evidence and discuss its theoretical implications. Early optimism concerning the ability of a human capital-augmented neoclassical model to explain productivity differences across economies has been questioned on the basis of more recent contributions that make use of panel data techniques and obtain theoretically implausible results. Some recent research in this area tries to reconcile these findings with sensible theoretical models by exploring the role of alternative convergence mechanisms and the possible shortcomings of panel data techniques for convergence analysis.
Abstract:
We study pair-wise decentralized trade in dynamic markets with homogeneous, non-atomic buyers and sellers who each wish to exchange one unit. Pairs of traders are randomly matched and bargain over a price under rules that offer the freedom to quit the match at any time. Market equilibria (prices and trades over time) are characterized. We explore the asymptotic behavior of prices and trades as frictions (search costs and impatience) vanish, and the conditions for (non)convergence to Walrasian prices. As a by-product of independent interest, we present a self-contained theory of non-cooperative bargaining with two-sided, time-varying outside options.
Abstract:
This paper develops a theory of the joint allocation of formal control and cash-flow rights in venture capital deals. We argue that when the need for investor support calls for very high-powered outside claims, entrepreneurs should optimally retain formal control in order to avoid excessive interference. Hence, we predict that risky claims should be negatively correlated with control rights, both along the life of a start-up and across deals. This challenges the idea that risky claims should always be associated with more formal control, and is in line with contractual terms increasingly used in venture capital, in corporate venturing, and in partnership deals between biotech start-ups and large drug companies. The paper provides a theoretical explanation for some puzzling evidence documented in Gompers (1997) and Kaplan and Stromberg (2000), namely the inclusion in venture capital contracts of contingencies that trigger both a reduction in VC control and the conversion of the VC's preferred stock into common stock.
Abstract:
We study the process by which subordinated regions of a country can obtain a more favourable political status. In our theoretical model a dominant and a dominated region first interact through a voting process that can lead to different degrees of autonomy. If this process fails, both regions engage in a costly political conflict which can only lead to the maintenance of the initial subordination of the region in question or to its complete independence. In the subgame-perfect equilibrium the voting process always leads to an intermediate arrangement acceptable to both parties. Hence, the costly political struggle never occurs. In contrast, in our experiments we observe a large amount of fighting involving high material losses, even in a case in which the possibilities for an arrangement without conflict are very salient. In our experimental environment intermediate solutions are feasible and stable, but purely emotional elements prevent them from being reached.
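The backward-induction logic behind the theoretical prediction can be sketched numerically. The parameter values below (the dominated region's win probability and the conflict cost) are hypothetical, not taken from the paper; the point is only that whenever conflict is costly, an intermediate autonomy share exists that both sides prefer to fighting:

```python
p, c = 0.5, 0.25   # hypothetical: win probability for the dominated region, conflict cost

# Continuation values of conflict, as shares of the surplus net of cost.
v_dominated_fight = p - c          # wins independence (share 1) with prob p
v_dominant_fight = (1 - p) - c     # keeps full control (share 1) with prob 1-p

# The dominant region proposes an autonomy share a for the dominated region;
# the dominated region accepts iff a is at least its conflict value.
offers = [i / 100 for i in range(101)]
accepted = [a for a in offers if a >= v_dominated_fight]

# Subgame-perfect logic: propose the lowest acceptable share, which the
# proposer itself prefers to fighting whenever conflict is costly (c > 0).
a_star = min(accepted)
assert 1 - a_star >= v_dominant_fight   # settlement also beats conflict for the proposer
print(a_star)  # → 0.25
```

On the equilibrium path the lowest acceptable offer is made and accepted, so the costly conflict subgame is never reached; any share in `[p - c, p + c]` would also be mutually acceptable, which is the sense in which intermediate solutions are feasible and stable.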
Abstract:
This paper draws on the work of the philosopher of science Gerald Holton to map some of the IR arguments and debates in an unconventional and more insightful way. From this starting point, we argue that the formerly all-pervading neorealism-neoinstitutionalism debate has lost its appeal and is attracting less and less interest among scholars. It no longer structures the approach of theoretically oriented authors, at least not with its former intensity. More specifically, we contend that the neo-neo rapprochement, even if it demonstrated that international cooperation is possible and relevant in a Realist world, has also impoverished theoretical debate by hiding some of the most significant issues that preoccupied classical transnationalists. Hence, some authors appear to be trying to rescue some of these arguments in an analytical and systematic fashion, opening up a theoretical querelle that may be the next one worth attending to.
Abstract:
This paper presents an outline of the rationale and theory of the MuSIASEM scheme (Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism). First, three points of the rationale behind our MuSIASEM scheme are discussed: (i) endosomatic and exosomatic metabolism in relation to Georgescu-Roegen's flow-fund scheme; (ii) the bioeconomic analogy of hypercycle and dissipative parts in ecosystems; (iii) the dramatic reallocation of human time and land use patterns in various sectors of the modern economy. Next, a flow-fund representation of the MuSIASEM scheme on three levels (the whole national level, the paid work sectors level, and the agricultural sector level) is illustrated to examine the structure of the human economy in relation to two primary factors: (i) human time, a fund; and (ii) exosomatic energy, a flow. The three-level representation uses extensive and intensive variables simultaneously. Key conceptual tools of the MuSIASEM scheme, mosaic effects and impredicative loop analysis, are explained using the three-level flow-fund representation. Finally, we claim that the MuSIASEM scheme can be seen as a multi-purpose grammar useful to deal with sustainability issues.
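The flow-fund bookkeeping across levels can be illustrated with a minimal numerical sketch. All figures below are invented for illustration; only the accounting structure follows the scheme: extensive variables (the fund, human activity in hours; the flow, exosomatic energy in MJ) sum bottom-up across compartments, while the intensive variable (exosomatic metabolic rate, MJ per hour) characterizes each compartment:

```python
# All numbers are hypothetical; only the accounting structure follows
# the MuSIASEM flow-fund scheme.
# Extensive variables per compartment: fund = human activity (h/year),
# flow = exosomatic energy throughput (MJ/year).
compartments = {
    "agriculture":     {"human_activity": 0.5e9,  "energy": 150e9},
    "other paid work": {"human_activity": 6.5e9,  "energy": 3350e9},
    "household":       {"human_activity": 80.6e9, "energy": 500e9},
}

# The whole-society level is the sum of its lower-level compartments
# (the closure that "mosaic effects" exploit).
total_fund = sum(c["human_activity"] for c in compartments.values())
total_flow = sum(c["energy"] for c in compartments.values())

# Intensive variable: exosomatic metabolic rate (EMR), MJ per hour of human activity.
emr = {name: c["energy"] / c["human_activity"] for name, c in compartments.items()}
emr["whole society"] = total_flow / total_fund

for name, rate in emr.items():
    print(f"{name:15s} EMR = {rate:7.1f} MJ/h")
```

The sketch shows why extensive and intensive variables must be read together: the paid-work compartments have an EMR an order of magnitude above the societal average, while the household compartment, which absorbs most of the human-time fund, pulls the average down.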
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
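A minimal sketch of the idea in Python, with a toy fully observable model standing in for a latent-variable one (the model, bandwidth, and moment condition are illustrative assumptions, not from the paper): conditional moments at a trial parameter value are estimated by Nadaraya-Watson kernel smoothing over a long simulation, and the parameter is chosen to make the resulting moment condition small.

```python
import numpy as np

def simulate(theta, n, rng):
    """Draw (x, y) pairs from a toy model y = theta * x + noise."""
    x = rng.normal(size=n)
    y = theta * x + rng.normal(size=n)
    return x, y

def kernel_conditional_mean(x_sim, y_sim, x_eval, h=0.3):
    """Nadaraya-Watson estimate of E[y | x] at the points x_eval."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / h) ** 2)
    return (w * y_sim).sum(axis=1) / w.sum(axis=1)

# "Observed" data, generated at the true parameter value.
theta_true = 1.5
x_obs, y_obs = simulate(theta_true, 500, np.random.default_rng(0))

def moment_objective(theta):
    # Common random numbers across trial values keep the objective smooth.
    x_sim, y_sim = simulate(theta, 5000, np.random.default_rng(1))
    resid = y_obs - kernel_conditional_mean(x_sim, y_sim, x_obs)
    # Moment condition: E[x * (y - E_theta[y | x])] = 0 at the true theta.
    return np.mean(resid * x_obs) ** 2

# Crude grid search stands in for a proper method-of-moments minimisation.
grid = np.linspace(0.5, 2.5, 81)
theta_hat = grid[np.argmin([moment_objective(t) for t in grid])]
print(theta_hat)  # lands near theta_true = 1.5
```

Note that nothing in `moment_objective` requires simulating *conditional on* `x_obs`: the simulation is unconditional, and the kernel smoother recovers the conditional moment afterwards, which is what makes the approach applicable to general dynamic latent variable models.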
Abstract:
The objective of this paper is to re-evaluate the attitude to effort of a risk-averse decision-maker in an evolving environment. In the classical analysis, the space of efforts is generally discretized; more realistically, this new approach employs a continuum of effort levels. The presence of multiple possible effort and performance levels provides a better basis for explaining real economic phenomena. The traditional approach (see Laffont and Tirole, 1993; Salanié, 1997; Laffont and Martimort, 2002, among others) does not take into account the potential effect of the system dynamics on the agent's effort behavior over time. In the context of a Principal-agent relationship, not only the Principal's incentives but also the evolution of the dynamic system can induce the agent to allocate a high effort. Incentives can be ineffective when the environment does not incite the agent to invest a high effort. This explains why some effici
Abstract:
A growing literature integrates theories of debt management into models of optimal fiscal policy. One promising theory argues that the composition of government debt should be chosen so that fluctuations in the market value of debt offset changes in expected future deficits. This complete market approach to debt management is valid even when the government only issues non-contingent bonds. A number of authors conclude from this approach that governments should issue long term debt and invest in short term assets. We argue that the conclusions of this approach are too fragile to serve as a basis for policy recommendations. This is because bonds at different maturities have highly correlated returns, causing the determination of the optimal portfolio to be ill-conditioned. To make this point concrete we examine the implications of this approach to debt management in various models, both analytically and using numerical methods calibrated to the US economy. We find the complete market approach recommends asset positions which are huge multiples of GDP. Introducing persistent shocks or capital accumulation only worsens this problem. Increasing the volatility of interest rates through habits partly reduces the size of these positions. In these simulations we find no presumption that governments should issue long term debt; policy recommendations can be easily reversed through small perturbations in the specification of shocks or small variations in the maturity of bonds issued. We further extend the literature by removing the assumption that governments costlessly repurchase all outstanding debt every period. This exacerbates the size of the required positions, worsens their volatility and in some cases produces instability in debt holdings. We conclude that it is very difficult to insulate fiscal policy from shocks by using the complete markets approach to debt management.
Given the limited variability of the yield curve, using maturities is a poor way to substitute for state-contingent debt. As a result, the positions recommended by this approach conflict with a number of features that we believe are important in making bond markets incomplete, e.g. transaction costs, liquidity effects, etc. Until these features are fully incorporated we remain in search of a theory of debt management capable of providing robust policy insights.
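The ill-conditioning argument can be illustrated with a two-bond toy example (all numbers hypothetical, not a calibration): when the yield curve is driven almost entirely by a common level factor, returns at different maturities are nearly collinear, and a deficit process with even a modest loading on the residual slope factor requires enormous offsetting long/short positions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

level = rng.normal(size=n)   # dominant "level" shock to the yield curve
slope = rng.normal(size=n)   # independent "slope" shock
eps = 0.01                   # bonds barely differ beyond their level exposure

# Holding-period returns on a short and a long bond: highly correlated,
# because both are dominated by the same level factor.
r_short = 1.0 * level + eps * slope
r_long = 2.0 * level - eps * slope
R = np.column_stack([r_short, r_long])

# Deficit shock the government wants its bond portfolio to offset.
deficit = level + slope

# Complete-markets logic: choose positions x so that R @ x tracks the deficit.
x, *_ = np.linalg.lstsq(R, deficit, rcond=None)

print("return correlation:", np.corrcoef(r_short, r_long)[0, 1])
print("condition number  :", np.linalg.cond(R))
print("positions         :", x)  # huge offsetting positions per unit of deficit risk
```

Because the slope factor is spanned only through the tiny `eps` wedge between the two maturities, the hedging positions scale like `1/eps`: halving `eps` roughly doubles them, which is the sense in which the recommendations are fragile to small perturbations in the specification.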
Abstract:
The first main result of the paper is a criterion for a partially commutative group G to be a domain. It allows us to reduce the study of algebraic sets over G to the study of irreducible algebraic sets, and reduce the elementary theory of G (of a coordinate group over G) to the elementary theories of the direct factors of G (to the elementary theory of coordinate groups of irreducible algebraic sets). Then we establish normal forms for quantifier-free formulas over a non-abelian directly indecomposable partially commutative group H. Analogously to the case of free groups, we introduce the notion of a generalised equation and prove that the positive theory of H has quantifier elimination and that arbitrary first-order formulas lift from H to H * F, where F is a free group of finite rank. As a consequence, the positive theory of an arbitrary partially commutative group is decidable.
Abstract:
We present a solution to the problem of defining a counterpart in Algebraic Set Theory of the construction of internal sheaves in Topos Theory. Our approach is general in that we consider sheaves as determined by Lawvere-Tierney coverages, rather than by Grothendieck coverages, and assume only a weakening of the axioms for small maps originally introduced by Joyal and Moerdijk, thus subsuming the existing topos-theoretic results.
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
We extend the theory of Quillen adjunctions by combining ideas of homotopical algebra and of enriched category theory. Our results describe how the formulas for homotopy colimits of Bousfield and Kan arise from general formulas describing the derived functor of the weighted colimit functor.
Abstract:
The demand for computational power has been driving advances in High Performance Computing (HPC), generally represented by the use of distributed systems such as clusters of computers running parallel applications. In this area, fault tolerance plays an important role in providing high availability by isolating the application from the effects of faults. For some kinds of applications, performance and availability are inseparable; fault-tolerant solutions must therefore take both constraints into consideration when they are designed. In this dissertation, we present some side-effects that fault-tolerant solutions may exhibit when recovering a failed process. These effects may degrade the system, affecting mainly the overall performance and availability. We introduce RADIC-II, a fault-tolerant architecture for message passing based on the RADIC (Redundant Array of Distributed Independent Fault Tolerance Controllers) architecture. RADIC-II preserves as much as possible the RADIC features of transparency, decentralization, flexibility, and scalability, while incorporating a flexible dynamic redundancy feature that allows it to mitigate or avoid some recovery side-effects.