914 results for Applied economics
Abstract:
We examine a method recently proposed by Hinich and Patterson (mimeo, University of Texas at Austin, 1995) for testing the validity of specifying a GARCH error structure for financial time series data in the context of a set of ten daily Sterling exchange rates. The results demonstrate that there are statistical structures present in the data that cannot be captured by a GARCH model, or any of its variants. This result has important implications for the interpretation of the recent voluminous literature which attempts to model financial asset returns using this family of models.
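For context, a minimal sketch of the GARCH(1,1) error structure whose adequacy such tests assess (illustrative notation, not taken from the paper): for a daily return series $r_t$,

```latex
r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sigma_t z_t, \qquad z_t \sim \mathrm{iid}(0,1),
\qquad \sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 .
```

A diagnostic of this kind asks whether the standardized residuals $z_t = \epsilon_t/\sigma_t$ from a fitted model of this family are serially independent; dependence remaining in $z_t$ points to structure that no GARCH variant can capture, which is the conclusion the abstract reports.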
Abstract:
An alternative procedure to that of Lo is proposed for assessing whether there is significant evidence of persistence in time series. The technique estimates the Hurst exponent itself, and significance testing is based on an application of bootstrapping using surrogate data. The method is applied to a set of 10 daily pound exchange rates. A general lack of long-term memory is found to characterize all the series tested, in sympathy with the findings of a number of other recent papers which have used Lo's techniques.
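As an illustration of the general approach (a sketch under our own assumptions, not the authors' exact estimator or bootstrap design), the Hurst exponent can be estimated by rescaled-range (R/S) analysis and its significance judged against shuffled surrogate series, which preserve the return distribution but destroy any long-term memory:

```python
# Sketch: R/S estimate of the Hurst exponent plus a surrogate-data test.
# Assumes `returns` is a 1-D array of daily log returns; function names are ours.
import numpy as np

def rs_hurst(x, min_chunk=16):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())      # cumulative deviations
            spread = dev.max() - dev.min()             # range R
            scale = chunk.std(ddof=1)                  # standard deviation S
            if scale > 0:
                rs.append(spread / scale)
        if rs:
            sizes.append(size)
            rs_means.append(np.mean(rs))
        size *= 2
    # Hurst exponent = slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

def hurst_surrogate_test(returns, n_surrogates=200, seed=0):
    """One-sided p-value of the estimated H against memory-free shuffles."""
    rng = np.random.default_rng(seed)
    h_obs = rs_hurst(returns)
    h_surr = np.array([rs_hurst(rng.permutation(returns))
                       for _ in range(n_surrogates)])
    return h_obs, float(np.mean(h_surr >= h_obs))
```

Under the no-persistence null the observed estimate should sit inside the surrogate distribution (around H ≈ 0.5); a small p-value would indicate significant long-term memory, which the abstract reports is generally absent from these series.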
Abstract:
We analyze the interaction between university professors’ teaching quality and their research and administrative activities. Our sample is a high-quality individual panel data set from a medium-sized public Spanish university that allows us to avoid several types of biases frequently encountered in the literature. Although researchers teach roughly 20% more than non-researchers, their teaching quality is also 20% higher. Instructors with no research are five times more likely than the rest to be among the worst teachers. Over much of the relevant range, we find a positive, nonlinear effect of research output and teaching quantity on teaching quality. Our conclusions may be useful for decision makers in universities and governments.
Abstract:
Recent empirical work on the within-sector impact of inward investment on domestic firms’ productivity has found rather robust evidence of no (or even negative) effects. We suggest that, among other reasons, a specification error might explain some of these results. A more general specification, which includes the usual one as a special case, is proposed. Using data on Italian manufacturing firms over 1992–2000, we find positive externalities only once we allow for the more flexible specification.
Abstract:
Existing theoretical models of house prices and credit rely on continuous rationality of consumers, an assumption that has been frequently questioned in recent years. Meanwhile, empirical investigations of the relationship between prices and credit are often based on national-level data, which is then tested for structural breaks and asymmetric responses, usually with subsamples. Earlier work argues that local markets are structurally different from one another and so the coefficients of any estimated housing market model should vary from region to region. We investigate differences in the price–credit relationship for 12 regions of the UK. Markov-switching is introduced to capture asymmetric market behaviours and turning points. Results show that credit abundance had a large impact on house prices in Greater London and nearby regions alongside a strong positive feedback effect from past house price movements. This impact is even larger in Greater London and the South East of England when house prices are falling, which are the only instances where the credit effect is more prominent than the positive feedback effect. A strong positive feedback effect from past lending activity is also present in the loan dynamics. Furthermore, bubble probabilities extracted using a discrete Kalman filter neatly capture market turning points.
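As a sketch of the kind of regime-switching specification described (our notation, not necessarily the authors' exact model), a two-state Markov-switching price–credit equation for a region could take the form

```latex
\Delta p_t = \mu_{s_t} + \beta_{s_t}\,\Delta c_t + \phi_{s_t}\,\Delta p_{t-1} + \varepsilon_t,
\qquad \varepsilon_t \sim N\!\left(0,\,\sigma_{s_t}^2\right),
```

where $p_t$ and $c_t$ are log house prices and log mortgage lending, and the latent regime $s_t \in \{\text{rising}, \text{falling}\}$ follows a first-order Markov chain with transition probabilities $p_{ij} = \Pr(s_t = j \mid s_{t-1} = i)$, allowing the credit effect $\beta_{s_t}$ and the feedback effect $\phi_{s_t}$ to differ across market phases.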
Abstract:
The aim of this article is to discuss the estimation of the systematic risk in capital asset pricing models with heavy-tailed error distributions to explain asset returns. Diagnostic methods for assessing departures from the model assumptions, as well as the influence of observations on the parameter estimates, are also presented. It may be shown that outlying observations are down-weighted in the maximum likelihood equations of linear models with heavy-tailed error distributions, such as the Student-t, power exponential, and logistic II, among others. This robustness aspect may also be extended to influential observations. An application in which the systematic risk estimate of Microsoft is compared under normal and heavy-tailed errors is presented for illustration.
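To make the down-weighting explicit in a standard case (a textbook property of the Student-t linear model, not a result specific to this article): for $y_i = x_i^\top\beta + \epsilon_i$ with $t_\nu$ errors and scale parameter $\phi$, the likelihood equations for $\beta$ can be written as

```latex
\sum_{i=1}^{n} w_i\,(y_i - x_i^\top\beta)\,x_i = 0,
\qquad
w_i = \frac{\nu + 1}{\nu + \delta_i^2},
\qquad
\delta_i = \frac{y_i - x_i^\top\beta}{\sqrt{\phi}},
```

so observations with large standardized residuals $\delta_i$ receive small weights $w_i$, which is the robustness property the abstract refers to; as $\nu \to \infty$ all weights tend to one and the normal-errors estimates are recovered.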
Abstract:
The purpose of this work is to verify the stability of the relationship between real activity and the interest rate spread. The test is based on Chen (1988) and Osorio and Galea (2006). The analysis is applied to Chile and the United States from 1980 to 1999. In general, in both cases the relationship was statistically significant in the early 1980s, but a break point is found in both countries during that decade, suggesting that the relationship depends on the monetary rule followed by the Central Bank.
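For illustration only, a Chow-type F-test for a coefficient break at a known date in a spread–growth regression conveys the flavour of such a stability check (this is a generic textbook test, not the Chen (1988) or Osorio and Galea (2006) procedures used in the paper; the column names are hypothetical):

```python
# Sketch: test whether the growth-spread relationship changes at a known break point.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def chow_test(df: pd.DataFrame, break_obs: int):
    """F-test of equal coefficients before and after observation break_obs.

    Expects columns 'growth' (real activity growth) and 'spread' (term spread).
    """
    y = df['growth']
    X = sm.add_constant(df[['spread']])
    rss_pooled = sm.OLS(y, X).fit().ssr
    rss_pre = sm.OLS(y.iloc[:break_obs], X.iloc[:break_obs]).fit().ssr
    rss_post = sm.OLS(y.iloc[break_obs:], X.iloc[break_obs:]).fit().ssr
    k = X.shape[1]                       # parameters estimated per subsample
    dof = len(y) - 2 * k
    f_stat = ((rss_pooled - rss_pre - rss_post) / k) / ((rss_pre + rss_post) / dof)
    return f_stat, stats.f.sf(f_stat, k, dof)
```

A small p-value at a candidate break date in the 1980s would be consistent with the kind of instability the abstract describes.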
Abstract:
The commitments and working requirements of abstract economics, applied economics, and the art of economics are assessed by analogy with the fields of inert matter and life. Abstract economics is the pure logic of the phenomenon. Applied positive economics presupposes many distinct abstract sciences. The art presupposes applied economics and direct knowledge of the specificities which characterize the time-space individuality of the phenomenon. This indetermination was clearly formulated by Senior and Mill; its connection with institutionalism is discussed. The Ricardian Vice is the habit of ignoring this indetermination; its prevalence in mainstream economics is exemplified and its causes analyzed.
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification to the test statistic that provided a better heteroskedasticity correction in our simulations.
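To spell out the group-size mechanism with a minimal worked equation (standard aggregation algebra, matching the explanation in the abstract): if individual-level errors $\epsilon_{igt}$ are homoskedastic with variance $\sigma^2$, the error of the group $\times$ time cell average used in the aggregate DID regression is

```latex
\bar{\epsilon}_{gt} = \frac{1}{N_{gt}} \sum_{i=1}^{N_{gt}} \epsilon_{igt},
\qquad
\operatorname{Var}\!\left(\bar{\epsilon}_{gt}\right) = \frac{\sigma^2}{N_{gt}},
```

so cells from larger groups mechanically have smaller error variance, and variation in $N_{gt}$ alone makes the aggregate errors heteroskedastic (with a common group $\times$ time shock $\eta_{gt}$ the variance becomes $\sigma_\eta^2 + \sigma^2/N_{gt}$, which still depends on group size).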
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
Abstract:
The paper shows the advantages and handicaps of implementing an inflation-targeting (IT) regime from a Post-Keynesian and, thus, an institutional stance. It is Post-Keynesian insofar as it does not perceive any benefit in the mainstream split between monetary and fiscal policies. And it is institutional insofar as it assumes that there are several ways of implementing a policy, such that the chosen one is determined by historical factors, as illustrated by the Brazilian case. One could even support IT policies if their targets were seen merely as “focusing devices” guiding economic policy, without prejudice to other targets such as, in the short run, output growth and employment and, in the long run, technology and human development. Nevertheless, an inflation target is not necessary, although it can be admitted, mainly if the target is hidden from the public, in order to increase the flexibility of the Central Bank.
Abstract:
The aim of this article is to evaluate whether there is an association between decentralization and corruption. To do so, we analyse Brazilian health-care programmes that are run locally. To construct objective measures of corruption, we use information from the reports of the auditing programme of the local governments of Brazil. The results indicate that there is no relationship between decentralization and corruption, whatever measure of decentralization is used.