865 results for DNA Error Correction
Abstract:
This paper examines the effects of liquidity during the 2007–09 crisis, focusing on the Senior Tranche of the CDX.NA.IG Index and on Moody's AAA Corporate Bond Index. It aims to understand whether the sharp increase in the credit spreads of these AAA-rated credit indices can be explained by worse credit fundamentals alone or whether it also reflects a lack of depth in the relevant markets, the scarcity of risk capital, and the liquidity preference exhibited by investors. Using cointegration analysis and error correction models, the paper shows that during the crisis lower market and funding liquidity are important drivers of the increase in the credit spread of the AAA-rated structured product, whilst they are less significant in explaining credit spread changes for a portfolio of unstructured credit instruments. Looking at the experience of the subprime crisis, the study shows that when the conditions under which securitisation can work properly (liquidity, transparency and tradability) suddenly disappear, investors are left highly exposed to systemic risk.
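The cointegration-and-error-correction approach this abstract refers to can be sketched generically as the two-step Engle-Granger procedure. The sketch below uses synthetic data and plain least squares, not the paper's data or exact specification: a long-run (cointegrating) regression first, then a regression of the differenced series on the lagged equilibrium error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: two series sharing a common stochastic trend, hence cointegrated.
n = 500
trend = np.cumsum(rng.normal(size=n))                   # common random-walk trend
x = trend + rng.normal(scale=0.5, size=n)               # e.g. a fundamentals proxy
y = 2.0 + 1.5 * trend + rng.normal(scale=0.5, size=n)   # e.g. a credit spread

# Step 1: long-run cointegrating regression y_t = a + b*x_t + u_t.
X = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - (a + b * x)                                     # equilibrium error

# Step 2: error correction regression on first differences:
# dy_t = c + gamma*u_{t-1} + phi*dx_t + e_t; gamma < 0 means reversion to equilibrium.
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(n - 1), u[:-1], dx])
c, gamma, phi = np.linalg.lstsq(Z, dy, rcond=None)[0]

print(round(b, 2), gamma < 0)
```

The speed-of-adjustment coefficient `gamma` is the quantity of interest in studies like this one: a significantly negative value indicates that deviations from the long-run relationship are corrected over time.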
Abstract:
In the present study, to shed light on the roles of the positional error correction mechanism and the prediction mechanism in the proactive control discovered earlier, we carried out a visual tracking experiment in which the region where the target was visible was restricted on a circular orbit. The main findings are as follows. Recognition of a time step, obtained from the environmental stimuli, is required for the predictive function. The period of the rhythm in the brain obtained from environmental stimuli shortens by about 10% when the visual information is cut off. This shortening of the period accelerates the motion as soon as the visual information is cut off, so that the hand motion comes to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand on average precedes the target when the predictive mechanism dominates the error-corrective mechanism.
Abstract:
Reading aloud is apparently an indispensable part of teaching. Nevertheless, little is known about reading aloud across the curriculum by students and teachers in high schools. Nor do we understand teachers' attitudes towards issues such as error correction, rehearsal time, and selecting students to read. A survey of 360 teachers in England shows that, although they have little training in reading aloud, they are extremely confident. Reading aloud by students and teachers is strongly related, and serves to further understanding rather than administrative purposes or pupils' enjoyment. Unexpectedly, Modern Language teachers express views that set them apart from teachers of other subjects.
Abstract:
Unemployment as an unintended consequence of social assistance recipiency: results from a time-series analysis of aggregated population data. Does the frequency of unemployment tend to increase the number of social assistance recipients, or does the relationship work the other way around? This article utilizes Swedish annual data on aggregated unemployment and means-tested social assistance recipiency in the period 1946–1990 and proposes a multiple time-series approach based on vector error-correction modelling to establish the direction of influence. First, we show that rates of unemployment and receipt of social assistance are cointegrated. Second, we demonstrate that adjustments toward the long-run equilibrium are made through changes in unemployment. This indicates that the level of unemployment reacts to changes in rates of social assistance recipiency rather than vice versa. It is also shown that lagged changes in the level of unemployment do not predict changes in rates of social assistance recipiency in the short term. Together these findings suggest that the number of social assistance recipients increases the number of unemployed in a period characterized by low unemployment and high employment.
Abstract:
This paper examines the relationships among per capita CO2 emissions, per capita GDP and international trade based on panel data sets spanning the period 1960-2008: one for 150 countries and the others for sub-samples comprising OECD and Non-OECD economies. We apply panel unit root and cointegration tests, and estimate a panel error correction model. The results from the error correction model suggest that there are long-term relationships between the variables for the whole sample and for Non-OECD countries. Finally, Granger causality tests show that there is bi-directional short-term causality between per capita GDP and international trade for the whole sample and between per capita GDP and CO2 emissions for OECD countries.
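The Granger causality step used in this abstract can be illustrated with a minimal bivariate version: regress a series on its own lags with and without lags of the other series, and compare the fits with an F statistic. This is a generic sketch on synthetic data, not the paper's panel specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data in which x Granger-causes y (y depends on lagged x), not vice versa.
n, p = 400, 2                                   # sample size, lag order
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def lagmat(v, p):
    """Columns v_{t-1}, ..., v_{t-p}, rows aligned with v[p:]."""
    return np.column_stack([v[p - k:len(v) - k] for k in range(1, p + 1)])

def granger_f(y, x, p):
    """F statistic for H0: lags of x add nothing to an AR(p) model of y."""
    yy = y[p:]
    R = np.column_stack([np.ones(len(yy)), lagmat(y, p)])   # restricted model
    U = np.column_stack([R, lagmat(x, p)])                  # unrestricted model
    rss = lambda A: np.sum((yy - A @ np.linalg.lstsq(A, yy, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(R), rss(U)
    return ((rss_r - rss_u) / p) / (rss_u / (len(yy) - U.shape[1]))

f_xy = granger_f(y, x, p)   # x -> y: large by construction
f_yx = granger_f(x, y, p)   # y -> x: should be small
print(f_xy > f_yx)
```

"Bi-directional causality", as reported for GDP and trade in the abstract, would correspond to both F statistics being significant against the appropriate F distribution.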
Abstract:
This thesis consists of a summary and four self-contained papers. Paper [I] Following the 1987 report by The World Commission on Environment and Development, genuine saving has come to play a key role in the context of sustainable development, and the World Bank regularly publishes numbers for genuine saving on a national basis. However, these numbers are typically calculated as if the tax system is non-distortionary. This paper presents an analogue to genuine saving in a second best economy, where the government raises revenue by means of distortionary taxation. We show how the social cost of public debt, which depends on the marginal excess burden, ought to be reflected in the genuine saving. We also illustrate this by presenting calculations for Greece, Japan, Portugal, the U.K., the U.S. and the OECD average, showing that the numbers published by the World Bank are likely to be biased and may even give incorrect information as to whether the economy is locally sustainable. Paper [II] This paper examines the relationships among per capita CO2 emissions, per capita GDP and international trade based on panel data spanning the period 1960-2008 for 150 countries. A distinction is also made between OECD and Non-OECD countries to capture the differences in this relationship between developed and developing economies. We apply panel unit root and cointegration tests, and estimate a panel error correction model. The results from the error correction model suggest that there are long-term relationships between the variables for the whole sample and for Non-OECD countries. Finally, Granger causality tests show that there is bi-directional short-term causality between per capita GDP and international trade for the whole sample and between per capita GDP and CO2 emissions for OECD countries.
Paper [III] Fundamental questions in economics are why some regions are richer than others, why their growth rates differ, whether their growth rates tend to converge, and what key factors contribute to explain economic growth. This paper deals with the average income growth, net migration, and changes in unemployment rates at the municipal level in Sweden. The aim is to explore in depth the effects of possible underlying determinants with a particular focus on local policy variables. The analysis is based on a three-equation model. Our results show, among other things, that increases in the local public expenditure and income tax rate have negative effects on subsequent income growth. In addition, the results show conditional convergence, i.e. that the average income among the municipal residents tends to grow more rapidly in relatively poor local jurisdictions than in initially “richer” jurisdictions, conditional on the other explanatory variables. Paper [IV] This paper explores the relationship between income growth and income inequality using data at the municipal level in Sweden for the period 1992-2007. We estimate a fixed effects panel data growth model, where the within-municipality income inequality is one of the explanatory variables. Different inequality measures (Gini coefficient, top income shares, and measures of inequality in the lower and upper part of the income distribution) are examined. We find a positive and significant relationship between income growth and income inequality measured as the Gini coefficient and top income shares, respectively. In addition, while inequality in the upper part of the income distribution is positively associated with the income growth rate, inequality in the lower part of the income distribution seems to be negatively related to the income growth.
Our findings also suggest that increased income inequality enhances growth more in municipalities with a high level of average income than in municipalities with a low level of average income.
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the access arrangement a researcher prefers. By decoupling data model and data persistence, it is much easier to interchangeably use, for instance, relational databases to provide stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate. A schema derived from CF conventions has been designed to efficiently handle time series for SWIFT.
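The decoupling of data model from persistence described in this abstract can be sketched as follows. The class and field names here are hypothetical stand-ins, not SWIFT's actual API: the point is only that the configuration record knows nothing about how it is stored, so JSON, legacy tab-separated text, or a database backend can be swapped freely.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical, minimal stand-in for one model-configuration record;
# the real SWIFT data model is far richer.
@dataclass
class SubareaConfig:
    name: str
    area_km2: float
    routing: str

class JsonStore:
    """JSON persistence backend (preferred for research use)."""
    def dumps(self, cfg):
        return json.dumps(asdict(cfg), sort_keys=True)
    def loads(self, text):
        return SubareaConfig(**json.loads(text))

class TsvStore:
    """Legacy-style tab-separated persistence for the same data model."""
    def dumps(self, cfg):
        return "\t".join([cfg.name, str(cfg.area_km2), cfg.routing])
    def loads(self, text):
        name, area, routing = text.split("\t")
        return SubareaConfig(name, float(area), routing)

cfg = SubareaConfig("upper_catchment", 12.5, "muskingum")
for store in (JsonStore(), TsvStore()):
    # The same in-memory record round-trips through either backend.
    assert store.loads(store.dumps(cfg)) == cfg
```

An operational deployment could add, say, a relational-database store implementing the same `dumps`/`loads` pair, gaining provenance and audit trails without touching the data model.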
Abstract:
Reduced form estimation of multivariate data sets currently takes into account long-run co-movement restrictions by using Vector Error Correction Models (VECMs). However, short-run co-movement restrictions are completely ignored. This paper proposes a way of taking into account short- and long-run co-movement restrictions in multivariate data sets, leading to efficient estimation of VECMs. It enables a more precise trend-cycle decomposition of the data which imposes no untested restrictions to recover these two components. The proposed methodology is applied to a multivariate data set containing U.S. per-capita output, consumption and investment. Based on the results of a post-sample forecasting comparison between restricted and unrestricted VECMs, we show that a non-trivial loss of efficiency results whenever short-run co-movement restrictions are ignored. While permanent shocks to consumption still play a very important role in explaining consumption’s variation, it seems that the improved estimates of trends and cycles of output, consumption, and investment show evidence of a more important role for transitory shocks than previously suspected. Furthermore, contrary to previous evidence, it seems that permanent shocks to output play a much more important role in explaining unemployment fluctuations.
Abstract:
This dissertation proposes a bivariate Markov switching dynamic conditional correlation model for estimating the optimal hedge ratio between spot and futures contracts. It accounts for the cointegration between the series and captures the leverage effect in the return equation. The model is applied to daily futures and spot prices of the Bovespa Index and the R$/US$ exchange rate. The results, in terms of variance reduction and utility, show that the bivariate Markov switching model outperforms strategies based on ordinary least squares and error correction models.
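The ordinary-least-squares benchmark that such hedging studies compare against is the static minimum-variance hedge ratio, Cov(spot, futures)/Var(futures). A minimal sketch on synthetic returns (not the Bovespa or exchange-rate data used in the dissertation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spot/futures returns sharing a common component.
n = 2000
common = rng.normal(size=n)
fut = common + 0.3 * rng.normal(size=n)            # futures return
spot = 0.9 * common + 0.3 * rng.normal(size=n)     # spot return

# Static OLS minimum-variance hedge ratio: Cov(spot, fut) / Var(fut).
h = np.cov(spot, fut)[0, 1] / np.var(fut, ddof=1)
hedged = spot - h * fut                            # hedged portfolio return

# Hedging effectiveness: fraction of spot variance removed by the hedge.
eff = 1 - np.var(hedged, ddof=1) / np.var(spot, ddof=1)
print(round(h, 2), eff > 0)
```

Regime-switching and error-correction variants refine this benchmark by letting the ratio vary with market state and with deviations from the spot-futures long-run equilibrium, which is where the variance-reduction gains reported in the abstract come from.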
Abstract:
The law of one price states that the same asset traded in different markets should have equivalent prices. This study seeks to verify whether Brazilian sovereign credit risk traded in the international market is priced similarly in the traditional bond markets and in the new and growing credit derivatives market. Additionally, a price discovery analysis is used to examine which of the markets moves faster in response to changes in the credit conditions of the Brazilian economy. The empirical analysis relies on time-series models, specifically cointegration analysis and a vector error correction model. The results confirm the theoretical prediction of the law of one price: Brazilian credit risk in the bond markets and in the credit derivatives market moves together in the long run. Finally, most of the price discovery occurs in the credit derivatives market.
Abstract:
This dissertation addresses administered prices in Brazil, arguing that they exhibit more pronounced persistence than the other prices in the economy. To this end, several tests of inflation persistence were carried out. Next, the vector error correction (VEC) methodology was used to study the relationship between administered prices and the most important variables of the Brazilian economy, such as output, the exchange rate, market prices and the Selic interest rate. Finally, the framework of Mankiw and Reis (2003) was used to determine which price index would be most suitable for keeping Brazilian economic activity closest to its potential level. The findings were as follows: 1) persistence of the IPCA, driven by administered prices, was observed; 2) the monetary authority responds more forcefully to shocks to monitored prices than to shocks to market prices; 3) the Mankiw and Reis (2003) exercise indicated that the share of monitored prices in a stabilizing price index should be smaller than their current share in the IPCA. These results show that the presence of administered prices markedly complicates the conduct of monetary policy in Brazil.
Abstract:
The paper aims to investigate, on empirical and theoretical grounds, the Brazilian exchange rate dynamics under floating exchange rates. The empirical analysis examines the short- and long-term behavior of the exchange rate, interest rates (domestic and foreign) and country risk using econometric techniques such as variance decomposition, Granger causality, cointegration tests, error correction models, and a GARCH model to estimate the exchange rate volatility. The empirical findings suggest that one can argue in favor of a certain degree of endogeneity of the exchange rate, and that flexible rates have not been able to insulate the Brazilian economy in the way predicted by the literature, owing to the regime's own specificities (managed floating with the use of international reserves and domestic interest rates set according to an inflation target) and to externally determined variables such as country risk. Another important outcome is the lack of a closer association between domestic and foreign interest rates since the new exchange regime was adopted. That is, from January 1999 to May 2004, US monetary policy had no significant impact on the Brazilian exchange rate dynamics, which have been essentially endogenous, primarily when we consider the fiscal dominance expressed by the probability of default.
Abstract:
With the adoption of the floating exchange rate regime in Brazil in 1999, the currency derivatives market developed considerably. The growing demand from firms and financial institutions for currency hedging products, together with a new global economic landscape, drove this development. This study seeks to identify trends in the Brazilian currency derivatives market by estimating parameters through regressions between non-stationary but cointegrated series. An error correction model is used to produce the forecasts. The results show that the market's growth is driven by foreign trade flows and GDP, that the most widely used products for short- and long-term operations tend to be US dollar futures and currency options, and that, in the future, some other currencies will have a significant share in the Brazilian market.
Abstract:
This paper investigates whether there is evidence of structural change in the Brazilian term structure of interest rates. Multivariate cointegration techniques are used to verify this evidence. Two econometric models are estimated. The first is a Vector Autoregressive Model with Error Correction Mechanism (VECM) with smooth transition in the deterministic coefficients (Ripatti and Saikkonen [25]). The second is a VECM with abrupt structural change formulated by Hansen [13]. Two datasets were analysed. The first contains nominal interest rates with maturities up to three years and covers a sample period from 1995 to 2010; the second focuses on maturities up to one year and covers 1998 to 2010. The frequency is monthly. The estimated models suggest the existence of structural change in the Brazilian term structure. It was possible to document the existence of multiple regimes using both techniques for both databases. The risk premium for different spreads varied considerably during the earliest period of both samples and seemed to converge to stable and lower values at the end of the sample period. Long-term risk premiums seemed to converge to international standards, although the Brazilian term structure is still subject to liquidity problems for longer maturities.
Abstract:
The thesis at hand adds to the existing literature by investigating the relationship between economic growth and outward foreign direct investment (OFDI) in a set of 16 emerging countries. Two different econometric techniques are employed: a panel data regression analysis and a time-series causality analysis. Results from the regression analysis indicate a positive and significant correlation between OFDI and economic growth. Additionally, the coefficient for the OFDI variable is robust in the sense specified by the Extreme Bound Analysis (EBA). On the other hand, the findings of the causality analysis are particularly heterogeneous. The vector autoregression (VAR) and vector error correction model (VECM) approaches identify unidirectional Granger causality running either from OFDI to GDP or from GDP to OFDI in six countries. In four economies causality between the two variables is bidirectional, whereas in five countries no causality relationship between OFDI and GDP seems to be present.