859 results for Convergence (Economics)
Abstract:
Employing an embodied technological change model in which the timing decision of scrapping old vintages of capital and adopting newer ones is endogenous, we show that the elasticity of substitution between capital and labor plays a key role in determining the optimal life span of capital. In particular, in the Cobb-Douglas (CD) case the life span of capital does not depend on its relative price. Estimating the model's long-run investment function on a panel data set of 125 economies over 25 years shows that the price elasticity of investment is lower than one; we reject the CD specification. Our calibration for the US suggests a technical elasticity of substitution of 0.4. In order to obtain a theoretically consistent concept of aggregate capital, we derive the relative price profile for a shadow second-hand market for capital. The shape of the model's theoretical price curve reproduces its empirical estimate. We plug the calibrated long-run solution of the model into a cross-section data set of economies to obtain the implied TFP, that is, the part of productivity which is not explained by the model. We show that the model represents a clear improvement over the standard neoclassical growth model with CD production function and disembodied technical change in accounting for the world diversity in productivity. In addition, the model captures the fact that a very poor economy can experience fast growth based on capital accumulation until it becomes a middle-income economy; from that point on it has to rely on TFP increases in order to keep growing.
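The role the abstract assigns to the elasticity of substitution can be made concrete with the textbook CES production function, of which Cobb-Douglas is the unit-elasticity limit; the notation below is generic and is not taken from the paper itself.

    % Textbook CES production function with elasticity of substitution \sigma
    Y = \left[ \alpha K^{\frac{\sigma-1}{\sigma}} + (1-\alpha) L^{\frac{\sigma-1}{\sigma}} \right]^{\frac{\sigma}{\sigma-1}}, \qquad \sigma > 0
    % As \sigma \to 1 this tends to the Cobb-Douglas form Y = K^{\alpha} L^{1-\alpha}.

With an elasticity below one, as in the calibrated value of 0.4, capital and labor are poor substitutes and relative prices matter for the optimal scrapping decision, whereas in the Cobb-Douglas limit they drop out, which is the contrast the abstract draws.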
Abstract:
The Director of the London School of Economics and Political Science (LSE), Craig Calhoun, was received on Tuesday (29) by the President of Fundação Getulio Vargas, Carlos Ivan Simonsen Leal, during an institutional visit to FGV. In the morning, the heads of the two institutions held a meeting attended by the Executive Secretary of the LSE, Hugh Martin, the Director of DAPP, Marco Aurélio Ruediger, the Director of EPGE (Escola Brasileira de Economia e Finanças), Rubens Cysne, the Executive Director of Editora FGV, Marieta de Moraes Ferreira, and Prof. Antônio Carlos Porto Gonçalves, also of EPGE. The meeting discussed greater student exchange between LSE and FGV as well as cooperation in research projects. In the afternoon, Calhoun visited the DAPP headquarters, where he took part in a presentation of the monitoring and network-analysis methods developed by DAPP. Besides the Director of DAPP, the researchers Roberta Novis, Amaro Grassi, and Pedro Lenhard participated in the meeting.
Abstract:
Using quantitative data obtained from publicly available databases, this paper discusses the differences between the industry breakdown of the Brazilian GDP and that of the Brazilian Stock Exchange. I examined whether, and to what extent, the two industry breakdowns are similar. First, I found that the Stock Exchange industry breakdown is overwhelmingly different from that of the GDP, which may pose a problem for asset allocation and portfolio diversification in Brazil. Second, I identified important evidence of convergence between the GDP and the Stock Exchange over the last 9 years. Third, it became clear that the privatizations of the late 1990s and the IPO market from 2004 to 2008 changed the dynamics of the Brazilian Stock Exchange. Fourth, I found that the private equity and venture capital industry may play an important role in portfolio diversification in Brazil.
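One simple way to quantify the gap between the two industry breakdowns described above is a dissimilarity index over sector weights; the sketch below uses made-up sector shares purely for illustration, not the paper's data.

    import numpy as np

    # Hypothetical sector weights (each vector sums to 1); real figures would
    # come from the public GDP and stock-exchange breakdowns the paper uses.
    gdp_weights = np.array([0.25, 0.20, 0.30, 0.15, 0.10])
    exchange_weights = np.array([0.05, 0.45, 0.20, 0.25, 0.05])

    # Duncan-style dissimilarity index: half the sum of absolute weight gaps.
    # 0 means identical breakdowns; 1 means completely disjoint ones.
    dissimilarity = 0.5 * np.abs(gdp_weights - exchange_weights).sum()
    print(f"dissimilarity index: {dissimilarity:.2f}")

Computing such an index year by year and checking whether it trends toward zero is one way to operationalize the convergence the abstract reports for the last 9 years.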
Abstract:
Conditionalities, measures that a borrowing country should adopt to obtain loans from the IMF, are pervasive in IMF programs. This paper estimates the effects of political and economic factors on the number of conditionalities and on the fiscal adjustment requested by the IMF. As found in the literature, political proximity of the borrowing country to the Fund's major shareholders has an important effect on the number of conditions in an agreement. However, the fiscal adjustment requested by the IMF is strongly affected by the size of a country's fiscal deficit but not by political proximity. We also find a very small correlation between the number of conditions and the fiscal adjustment requested by the IMF.
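The abstract does not state the estimator used, but a count outcome such as the number of conditionalities is often modeled with a Poisson regression; the sketch below is a generic illustration on simulated data with hypothetical variable names, not the paper's specification.

    import numpy as np
    import statsmodels.api as sm

    # Simulated stand-in data: one row per IMF program.
    rng = np.random.default_rng(0)
    proximity = rng.uniform(0.0, 1.0, 200)   # political proximity to major shareholders
    deficit = rng.uniform(0.0, 10.0, 200)    # fiscal deficit, % of GDP
    n_conditions = rng.poisson(np.exp(1.0 + 0.5 * proximity))  # number of conditionalities

    # Poisson regression of the condition count on the two factors.
    X = sm.add_constant(np.column_stack([proximity, deficit]))
    fit = sm.GLM(n_conditions, X, family=sm.families.Poisson()).fit()
    print(fit.summary())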
Abstract:
Conventional wisdom holds that economic analysis of law is either embryonic or nonexistent outside of the United States generally and in civil law jurisdictions in particular. Existing explanations for the assumed lack of interest in the application of economic reasoning to legal problems range from the different structure of legal education and academia outside of the United States to the peculiar characteristics of civilian legal systems. This paper challenges this view by documenting and explaining the growing use of economic reasoning by Brazilian courts. We argue that, given the ever-greater role of courts in the formulation of public policies, the application of legal principles and rules increasingly calls for a theory of human behavior (such as that provided by economics) to help foresee the likely aggregate consequences of different interpretations of the law. Consistent with the traditional role of civilian legal scholarship in providing guidance for the application of law by courts, the further development of law and economics in Brazil is therefore likely to be mostly driven by judicial demand.
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment groups. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
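The source of the heteroskedasticity described above can be written out explicitly; the display below is a standard aggregate DID formulation sketching the argument, not the paper's exact model.

    % Group x time aggregate DID model, with \bar{Y}_{gt} the mean outcome of group g at time t
    \bar{Y}_{gt} = \alpha_g + \gamma_t + \beta D_{gt} + \bar{\varepsilon}_{gt},
    \qquad
    \bar{\varepsilon}_{gt} = \frac{1}{N_{gt}} \sum_{i=1}^{N_{gt}} \varepsilon_{igt}
    % With homoskedastic individual errors of variance \sigma^2, the aggregate error has
    % Var(\bar{\varepsilon}_{gt}) = \sigma^2 / N_{gt}, so larger groups mechanically have lower variance.

This is why inference methods that treat the group-level errors as homoskedastic over-reject when the treated groups are small relative to the controls and under-reject when they are large, and why knowing how the number of observations per group varies allows the heteroskedasticity to be modeled.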
Abstract:
We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
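The outer linearizations mentioned above are Benders-type cuts; in generic SDDP notation, not the paper's exact risk-averse formulation, a cut built at a trial decision bounds the stage-(t+1) recourse function from below as follows.

    % Generic SDDP cut at trial point \bar{x}_t for the recourse function \mathcal{Q}_{t+1}
    \mathcal{Q}_{t+1}(x_t) \ge \hat{\theta}_{t+1} + \langle \hat{\beta}_{t+1}, \, x_t - \bar{x}_t \rangle
    % \hat{\theta}_{t+1} is the sampled recourse value at \bar{x}_t and \hat{\beta}_{t+1} a subgradient;
    % the pointwise maximum over accumulated cuts is the outer approximation used in the backward pass.

The formula the abstract refers to specifies how such cut coefficients are computed for convex nonlinear, risk-averse stage problems, which is what allows the SDDP machinery to be applied beyond the linear case.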
Abstract:
The financial crisis and Great Recession have been followed by a jobs shortage crisis that most forecasts predict will persist for years given current policies. This paper argues that a wage-led recovery and growth program is the only way to remedy the deep causes of the crisis and escape the jobs crisis. Such a program is the polar opposite of the current policy orthodoxy, showing how much is at stake. Winning the argument for wage-led recovery will require winning the war of ideas about economics whose roots go back to Keynes' challenge to classical macroeconomics in the 1920s and 1930s. That will involve showing how the financial crisis and Great Recession were the ultimate result of three decades of neoliberal policy, which produced wage stagnation by severing the wage-productivity growth link and made asset price inflation and debt the engine of demand growth in place of wages; showing how wage-led policy resolves the current problem of global demand shortage without pricing out labor; and developing a detailed set of policy proposals that flow from these understandings. The essence of a wage-led policy approach is to rebuild the link between wages and productivity growth, combined with expansionary macroeconomic policy that fills the current demand shortfall so as to push the economy onto a recovery path. Both sets of measures are necessary. Expansionary macro policy (i.e., fiscal stimulus and easy monetary policy) without rebuilding the wage mechanism will not produce sustainable recovery and may end in fiscal crisis. Rebuilding the wage mechanism without expansionary macro policy is likely to leave the economy stuck in the orbit of stagnation.
Abstract:
This dissertation surveys the literature on economic growth. I review a substantial number of articles published by some of the most renowned researchers engaged in the study of economic growth. The literature is so vast that, before undertaking new studies, it is very important to know what has been done in the field. The dissertation has six chapters. In Chapter 1, I introduce the reader to the topic of economic growth. In Chapter 2, I present the Solow model and other contributions to the exogenous growth theory proposed in the literature. I also briefly discuss the endogenous approach to growth. In Chapter 3, I summarize the variety of econometric problems that affect cross-country regressions. The factors that contribute to economic growth are highlighted and the validity of the empirical results is discussed. In Chapter 4, the existence of convergence, whether conditional or not, is analyzed. The literature using both cross-sectional and panel data is reviewed. An analysis of convergence using a quantile-regression framework is also provided. In Chapter 5, the controversial relationship between financial development and economic growth is analyzed. In particular, I discuss the arguments in favour of and against the Schumpeterian view that considers financial development an important determinant of innovation and economic growth. Chapter 6 concludes the dissertation. Summing up, the literature appears not to be fully conclusive about the main determinants of economic growth, the existence of convergence, and the impact of finance on growth.
Abstract:
A neural model for solving nonlinear optimization problems is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The network is shown to be completely stable and globally convergent to the solutions of nonlinear optimization problems. A study of the modified Hopfield model is also developed to analyze its stability and convergence. Simulation results are presented to validate the developed methodology.
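The valid-subspace idea, keeping the network state inside the feasible set while the dynamics descend the objective, can be illustrated with a toy projected-gradient analogue for an equality-constrained problem; this is only a schematic stand-in, not the paper's modified Hopfield formulation.

    import numpy as np

    # Toy problem: minimize f(x) = 0.5 * ||x - c||^2 subject to A x = b.
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([3.0, -1.0, 0.5])

    def project(x):
        # Orthogonal projection onto the valid subspace {x : A x = b}.
        return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

    x = project(np.zeros(3))           # start from a feasible state
    eta = 0.1                          # step size of the discretized dynamics
    for _ in range(500):
        grad = x - c                   # gradient of the objective
        x = project(x - eta * grad)    # descend, then re-project onto the subspace

    print("equilibrium point:", x)     # approaches the constrained minimizer

In the paper's modified Hopfield setting, the internal parameters computed by the valid-subspace technique play the role of the projection here, confining the state to the feasible region while the network dynamics converge to an equilibrium representing an optimal feasible solution.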
Abstract:
The multilayer perceptron has become one of the most widely used neural networks for solving a wide variety of problems. Its training process is based on supervised learning, in which the inputs are presented to the network and the output is compared with a desired value. However, the algorithm presents convergence problems when the desired output has a small slope across the discrete time samples or is a quasi-constant value. This paper proposes an alternative approach to solve this convergence problem: the desired output data set is pre-conditioned before training and post-conditioned when the generalization results are obtained. Simulation results are presented to validate the proposed approach.
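The abstract does not spell out the conditioning scheme, but a minimal version of the idea, rescaling a nearly flat target signal into a range the network learns easily and inverting the transform on the predictions, can be sketched as follows; the data and the off-the-shelf MLP are placeholders.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Quasi-constant desired output: a tiny slope around a large offset.
    X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    y = 100.0 + 1e-3 * X.ravel()

    # Pre-conditioning: map the desired outputs to [0, 1] before training.
    y_min, y_max = y.min(), y.max()
    y_scaled = (y - y_min) / (y_max - y_min)

    mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    mlp.fit(X, y_scaled)

    # Post-conditioning: map the generalization results back to the original scale.
    y_pred = mlp.predict(X) * (y_max - y_min) + y_min
    print("max abs error:", np.abs(y_pred - y).max())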