936 results for ECONOMIC MODELS


Relevance: 30.00%

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become increasingly popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account.

In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests, and also how the tests and related graphical tools based on residuals are applied in practice.
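As a concrete illustration of the construction, the sketch below (not the thesis's code; the two-component Gaussian mixture and all parameter values are assumptions of this example) computes quantile residuals by evaluating the model CDF at each observation and then applying the inverse standard normal CDF:

```python
import numpy as np
from scipy import stats

# Minimal sketch: quantile residuals for a model based on a
# two-component Gaussian mixture (parameter values are illustrative).
rng = np.random.default_rng(0)

# "True" mixture: with prob p draw from N(mu1, s1^2), else N(mu2, s2^2).
p, mu1, s1, mu2, s2 = 0.7, 0.0, 1.0, 3.0, 2.0
comp = rng.random(1000) < p
y = np.where(comp, rng.normal(mu1, s1, 1000), rng.normal(mu2, s2, 1000))

# Model CDF evaluated at each observation (here at the true parameters;
# in practice the estimated parameters would be plugged in).
F = p * stats.norm.cdf(y, mu1, s1) + (1 - p) * stats.norm.cdf(y, mu2, s2)

# Quantile residuals: probability integral transform followed by the
# inverse standard normal CDF. Under a correct specification they are
# approximately independent N(0, 1).
r = stats.norm.ppf(F)
print(r.mean(), r.std())          # should be near 0 and 1
print(stats.shapiro(r).pvalue)    # informal normality check
```

Under a correct specification the transformed residuals are close to i.i.d. N(0, 1), which is the property the tests described above build on; Pearson residuals of a mixture, by contrast, need not behave this way even when the model is correct.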

Relevance: 30.00%

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts, and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
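To make the autoregressive structure concrete, the following sketch (notation, parameter values, and data are assumptions of this illustration, not taken from the thesis) writes the log-likelihood of a simple autoregressive probit in which the linear index feeds back on its own lag:

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of an autoregressive probit, assuming the recursion
# pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1} and
# P(y_t = 1 | past) = Phi(pi_t). Names and values are illustrative.
def ar_probit_loglik(params, y, x):
    omega, alpha, beta = params
    pi = 0.0                      # initial value of the linear index
    ll = 0.0
    for t in range(1, len(y)):
        pi = omega + alpha * pi + beta * x[t - 1]
        p = norm.cdf(pi)
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p)
    return ll

rng = np.random.default_rng(1)
x = rng.normal(size=200)                    # stand-in predictor (e.g. a spread)
y = (rng.random(200) < 0.2).astype(float)   # stand-in recession indicator
print(ar_probit_loglik((-1.0, 0.5, -0.3), y, x))
```

In practice the parameters would be estimated by maximizing this log-likelihood numerically, e.g. with scipy.optimize.minimize.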

Relevance: 30.00%

Abstract:

In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small sample correction for the likelihood ratio (LR) test of cointegrating rank with the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden in 1970-2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen's LR tests for cointegration. In all papers we work with two data sets. The first data set is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)-2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
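The sketch below illustrates only the raw ingredient of the first paper's test, the eigenvalues of the LS estimate of the VAR(1) matrix in levels; the simulated bivariate system and the informal read-off are assumptions of this example, as the formal statistic and its critical values are developed in the paper:

```python
import numpy as np

# Two series sharing one common stochastic trend, so the
# cointegrating rank is 1 (the setup is illustrative).
rng = np.random.default_rng(2)
T = 500
trend = np.cumsum(rng.normal(size=T))            # common random walk
y = np.column_stack([trend + rng.normal(size=T),
                     0.5 * trend + rng.normal(size=T)])

# OLS estimate of A in y_t = A y_{t-1} + e_t.
Y, X = y[1:], y[:-1]
A = np.linalg.lstsq(X, Y, rcond=None)[0].T

# One eigenvalue modulus close to 1 (the common trend), one well
# inside the unit circle; the cointegrating rank is n minus the
# number of roots near unity.
print(np.abs(np.linalg.eigvals(A)))
```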

Relevance: 30.00%

Abstract:

Western countries have financed development cooperation projects for nearly six decades, yet no consensus has been reached on the effectiveness of development aid. One way for donor countries to influence aid effectiveness, that is, the impact of aid as an accelerator of the recipient country's economic growth, is to tie aid to public-sector infrastructure projects. In some cases this affects the recipient's behavior and attitudes toward development aid. The thesis examines the effectiveness of development aid in a situation where it is tied to public-sector investment in the developing country. The study builds on the model of Kalaitzidakis and Kalyvitis (2008), in which part of the developing country's public investment is financed by development aid. The effect of rent-seeking behavior on aid effectiveness is then examined, drawing on the model of Economides, Kalyvitis and Philippopoulos (2008). The thesis also reviews empirical studies related to the research question, first presents the most common forms of development cooperation, and introduces economic-theoretic perspectives on defining the effectiveness of development cooperation. The thesis is based purely on theoretical models, and the methods applied in them are mathematical. The thesis first considers the case in which development aid finances public-sector investment projects. In some cases an increase in development aid shifts the recipient country's spending from public investment to consumption, so that the size of the projects partly financed with aid funds decreases and their relative effectiveness falls. Next, the situation in which only part of the aid funds ends up financing the project is examined, and it is shown that the effectiveness of aid, and its impact on the growth of national income, is further reduced by the rent-seeking behavior (including corruption) of economic agents. Based on this study, it can be concluded that development aid affects the economic growth of a developing country in the case where public infrastructure projects are financed partly with the country's own tax revenue and partly with development aid funds. Rent-seeking behavior affects aid effectiveness negatively, diminishing the positive growth effects of development aid.

Relevance: 30.00%

Abstract:

This thesis is composed of an introductory chapter and four applications, each constituting a chapter of its own. The common element underlying the chapters is the econometric methodology: the applications rely mostly on the leading econometric techniques for estimating causal effects. The first chapter introduces the econometric techniques that are employed in the remaining chapters. Chapter 2 studies the effects of shocking news on student performance. It exploits the fact that the school shooting in Kauhajoki in 2008 coincided with the matriculation examination period of that fall. It shows that the performance of men declined due to the news of the school shooting; for women, no similar pattern is observed. Chapter 3 studies the effects of the minimum wage on employment by employing the original Card and Krueger (1994; CK) and Neumark and Wascher (2000; NW) data together with the changes-in-changes (CIC) estimator. As the main result, it shows that the employment effect of an increase in the minimum wage is positive for small fast-food restaurants and negative for big fast-food restaurants. The controversial positive employment effect reported by CK is thus overturned for big fast-food restaurants, and the NW data, in contrast to their original results, are shown to support the positive employment effect. Chapter 4 employs the state-specific U.S. data on traffic fatalities (collected by Cohen and Einav [2003; CE]) to re-evaluate the effects of seat belt laws on traffic fatalities using the CIC estimator. It confirms the CE results that, on average, the implementation of a mandatory seat belt law results in an increase in the seat belt usage rate and a decrease in the total fatality rate. In contrast to CE, it also finds evidence of compensating behavior, observed especially in states along the U.S. border. Chapter 5 studies life cycle consumption in Finland, with special interest in the baby boomers and older households. It shows that the baby boomers smooth their consumption over the life cycle more than other generations. It also shows that old households smoothed their life cycle consumption more, as a result of the recession of the 1990s, than young households did.
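For the CIC estimator used in Chapters 3 and 4, the following is a minimal sketch of the point estimate of Athey and Imbens (2006); the simulated data, group means, and effect size are assumptions of this illustration:

```python
import numpy as np

# Groups: 00 control/before, 01 control/after,
#         10 treated/before, 11 treated/after.
rng = np.random.default_rng(3)
y00 = rng.normal(10.0, 2, 400)
y01 = rng.normal(11.0, 2, 400)   # control trend: +1
y10 = rng.normal(12.0, 2, 400)
y11 = rng.normal(13.5, 2, 400)   # same trend plus a 0.5 treatment effect

def ecdf(sample, x):
    # Empirical CDF of `sample` evaluated at each point of `x`.
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

# Counterfactual treated/after outcomes: map each treated/before
# observation through F_00, then through the quantile function of F_01.
u = ecdf(y00, y10)
counterfactual = np.quantile(y01, u)
print(y11.mean() - counterfactual.mean())   # roughly 0.5
```

Unlike difference-in-differences, this construction transforms the whole distribution of the untreated outcome, which is what allows the chapter to distinguish effects for small and big restaurants.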

Relevance: 30.00%

Abstract:

The aim of this dissertation is to model economic variables with a mixture autoregressive (MAR) model. The MAR model is a generalization of the linear autoregressive (AR) model: it consists of K linear autoregressive components, and at any given point in time one of these components is randomly selected to generate a new observation for the time series. The mixture probability can be constant over time or a direct function of some observable variable. Many economic time series have properties that cannot be described by linear and stationary time series models, and a nonlinear autoregressive model such as the MAR model can be a plausible alternative for these series. In this dissertation the MAR model is used to model stock market bubbles and the relationship between inflation and the interest rate. In the case of the inflation rate, we arrive at a MAR model in which the inflation process is less mean reverting when inflation is high than when it is normal, and the interest rate moves one-for-one with expected inflation. We use data from the Livingston survey as a proxy for inflation expectations and find that survey inflation expectations are not perfectly rational. According to our results, information stickiness plays an important role in expectation formation, and survey participants have a tendency to underestimate inflation. A MAR model is also used to model stock market bubbles and crashes. This model has two regimes: the bubble regime and the error correction regime. In the error correction regime the price depends on a fundamental factor, the price-dividend ratio, while in the bubble regime the price is independent of fundamentals. In this model a stock market crash is usually caused by a regime switch from the bubble regime to the error correction regime. According to our empirical results, bubbles are related to low inflation. Our model also implies that bubbles influence the investment return distribution in both the short and the long run.
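A minimal sketch of the data-generating process just described, with two AR(1) components and a constant mixing probability (all parameter values are illustrative, not estimates from the dissertation):

```python
import numpy as np

# Simulating a two-component mixture autoregressive (MAR) process:
# at each date one AR(1) component is picked at random to generate
# the next observation.
rng = np.random.default_rng(4)
T = 500
p = 0.8                           # probability of component 1
phi = (0.5, 0.95)                 # AR coefficients of the components
sigma = (1.0, 2.0)                # innovation standard deviations
y = np.zeros(T)
for t in range(1, T):
    k = 0 if rng.random() < p else 1
    y[t] = phi[k] * y[t - 1] + sigma[k] * rng.normal()
# Component 2 (phi near 1, high variance) creates bubble-like episodes.
```

Making the mixture probability a function of an observable variable amounts to replacing the constant p with, for example, a logistic function of that variable.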

Relevance: 30.00%

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, such as vector autoregressions, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on the previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work opens multiple lines of further research. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and in the creation of flash estimates of macroeconomic indicators (nowcasting).
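The two-step procedure can be sketched as follows, with a synthetic panel standing in for the Finnish datasets; the factor count r, the horizon h, and all dimensions are assumptions of this illustration:

```python
import numpy as np

# Step 0: synthetic data -- a panel X driven by r latent factors,
# and a target series y driven by the first factor.
rng = np.random.default_rng(5)
T, N, r, h = 200, 60, 2, 1
F = rng.normal(size=(T, r))                    # latent factors
X = F @ rng.normal(size=(r, N)) + rng.normal(size=(T, N))
y = F[:, 0] + 0.1 * rng.normal(size=T)

# Step 1: estimate factors as principal components of the
# standardized panel.
Z = (X - X.mean(0)) / X.std(0)
eigvecs = np.linalg.eigh(Z.T @ Z / T)[1][:, ::-1][:, :r]
Fhat = Z @ eigvecs

# Step 2: forecasting regression y_{t+h} = beta' Fhat_t + error,
# then forecast from the last available factor estimate.
W = np.column_stack([np.ones(T - h), Fhat[:-h]])
beta = np.linalg.lstsq(W, y[h:], rcond=None)[0]
forecast = np.array([1.0, *Fhat[-1]]) @ beta
print(forecast)
```

The relative mean squared forecast error reported in the thesis is then the MSFE of such forecasts divided by the MSFE of the univariate autoregressive benchmark.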

Relevance: 30.00%

Abstract:

Tradeoffs between mitigating black carbon (BC) and carbon dioxide (CO2) for limiting peak global mean warming are examined using the following set of methods. A two-box climate model is used to simulate temperatures of the atmosphere and ocean for different rates of mitigation. Mitigation rates for BC and CO2 are characterized by respective timescales for e-folding reduction in the emissions intensity of gross global product, and respective emissions models force the box model. Finally, a simple economics model is used in which the cost of mitigation varies inversely with emission intensity. A constant mitigation timescale corresponds to mitigation at a constant annual rate; for example, an e-folding timescale of 40 years corresponds to a 2.5% reduction each year. The discounted present cost depends only on the respective mitigation timescale and the respective mitigation cost at present levels of emission intensity. Least-cost mitigation is posed as choosing the respective e-folding timescales so as to minimize total mitigation cost under a temperature constraint (e.g. staying within 2 degrees C above preindustrial). Peak warming is more sensitive to the mitigation timescale for CO2 than for BC. Therefore rapid mitigation of CO2 emission intensity is essential to limiting peak warming, but simultaneous mitigation of BC can reduce total mitigation expenditure.
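A schematic sketch of two of these building blocks is given below; the parameter values are placeholders rather than the paper's calibration, and tying the forcing directly to emission intensity is a simplification made only for illustration:

```python
import numpy as np

# E-folding reduction in emission intensity at timescale tau:
# a 40-year timescale means a 1/40 = 2.5% reduction per year.
tau = 40.0
years = np.arange(2015, 2101)
intensity = np.exp(-(years - years[0]) / tau)   # relative to today

# Two-box model: a fast atmospheric box coupled to a slow ocean box,
# stepped forward one year at a time (placeholder parameters).
c_a, c_o, lam, gamma = 8.0, 100.0, 1.2, 0.7
Ta = To = 0.0
for Fr in 2.0 * intensity:        # forcing tied to intensity (illustrative)
    dTa = (Fr - lam * Ta - gamma * (Ta - To)) / c_a
    dTo = gamma * (Ta - To) / c_o
    Ta, To = Ta + dTa, To + dTo
print(Ta)                         # atmospheric warming at end, deg C
```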

Relevance: 30.00%

Abstract:

An economic expert working group (STECF/SGBRE-07-05) was convened in 2007 to evaluate the potential economic consequences of a long-term management plan for northern hake. Analyzing all the scenarios proposed by the biological assessment, the group found that keeping F at the status quo level was the best policy in terms of net present values of both yield and profits. This result is counterintuitive because it may indicate that effort costs do not affect the economic reference points; however, it is well accepted that the inclusion of costs negatively affects the economic reference points. In this paper, applying a dynamic age-structured model to northern hake, we show that the optimal fishing mortality, the one that maximizes the net present value of profits, is lower than Fmax.
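The mechanism can be illustrated with a simple per-recruit calculation; this is a sketch under assumed weight-at-age, price, and cost parameters, not the paper's model:

```python
import numpy as np

# With a positive cost per unit of fishing mortality, the
# profit-maximizing F falls below the yield-maximizing Fmax.
M, ages = 0.2, np.arange(1, 11)          # natural mortality, age classes
w = ages ** 0.7                          # weight-at-age (placeholder)
price, cost = 1.0, 0.4                   # price per unit yield, cost per unit F

def equilibrium(F):
    N = np.exp(-(M + F) * (ages - 1))    # survivors per recruit at age
    catch = F / (F + M) * N * (1 - np.exp(-(M + F)))  # Baranov catch
    yield_ = np.sum(w * catch)
    return yield_, price * yield_ - cost * F

Fgrid = np.linspace(0.01, 1.5, 300)
ys, profits = zip(*(equilibrium(F) for F in Fgrid))
print("Fmax          =", Fgrid[np.argmax(ys)])
print("F at max prof =", Fgrid[np.argmax(profits)])   # lower than Fmax
```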

Relevance: 30.00%

Abstract:

The purpose of this article is to characterize dynamic optimal harvesting trajectories that maximize discounted utility assuming an age-structured population model, along the same lines as Tahvonen (2009). The main novelty of our study is that it uses as the age-structured population model the standard stochastic cohort framework applied in Virtual Population Analysis (VPA) for fish stock assessment. This allows us to compare optimal harvesting in a discounted economic context with the standard reference points used by fisheries agencies for long-term management plans (e.g. Fmsy). Our main findings are the following. First, the optimal steady state is characterized, and sufficient conditions that guarantee its existence and uniqueness for the general case of n cohorts are given. It is also proved that the optimal steady state coincides with the traditional target Fmsy when the utility function to be maximized is the yield and the discount rate is zero. Second, an algorithm is developed to calculate the optimal path that drives the resource to the steady state. Third, the algorithm is applied to the northern stock of hake. The results show that management plans based exclusively on traditional reference targets such as Fmsy may drive the economic results of the fishery far from the optimum.
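Schematically, and with notation assumed here rather than taken from the article, the problem is to choose fishing mortalities that maximize discounted utility of harvest subject to VPA-style cohort dynamics:

```latex
\max_{\{F_t\}} \; \sum_{t=0}^{\infty} \beta^{t}\, U(H_t)
\quad \text{s.t.} \quad
N_{a+1,\,t+1} = N_{a,t}\, e^{-(M_a + s_a F_t)}, \qquad
H_t = \sum_{a=1}^{n} w_a\, \frac{s_a F_t}{M_a + s_a F_t}
      \left(1 - e^{-(M_a + s_a F_t)}\right) N_{a,t}
```

Here N_{a,t} is the abundance of age class a, M_a natural mortality, s_a selectivity at age, w_a weight at age, and beta the discount factor; with U(H) = H and zero discounting, the optimal steady state reduces to the Fmsy target, as stated above.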

Relevance: 30.00%

Abstract:

Published as an article in: Investigaciones Economicas, 2005, vol. 29, issue 3, pages 483-523.

Relevance: 30.00%

Abstract:

This paper analyzes the trend processes characterized by two standard growth models using simple econometrics. The first model is the basic neoclassical growth model, which postulates a deterministic trend for output. The second is the Uzawa-Lucas model, which postulates a stochastic trend for output. The aim is to understand how the different trend processes for output assumed by these two models determine each model's ability to explain the observed trend processes of other macroeconomic variables, such as consumption and investment. The results show that both models reproduce the output trend process. Moreover, the basic growth model properly captures the consumption trend process but fails to characterize the investment trend process, while the reverse is true for the Uzawa-Lucas model.
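The kind of "simple econometrics" involved can be illustrated with a unit-root test that discriminates between the two trend specifications; the sketch below (simulated series with illustrative parameters, not the paper's data) uses an augmented Dickey-Fuller test with a trend term:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# A deterministic (trend-stationary) series versus a stochastic trend
# (random walk with drift).
rng = np.random.default_rng(6)
t = np.arange(200)
det = 0.02 * t + rng.normal(size=200)
sto = np.cumsum(0.02 + rng.normal(size=200))

for name, y in [("deterministic", det), ("stochastic", sto)]:
    stat, pval = adfuller(y, regression="ct")[:2]   # constant + trend
    print(name, round(stat, 2), round(pval, 3))
# Rejection (small p-value) favors the trend-stationary specification.
```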