838 results for Vector error correction model
Abstract:
This study applies econometric time-series models to forecast the behavior of aggregate loan delinquency using a broad information set, through the FAVAR (Factor-Augmented Vector Autoregressive) method of Bernanke, Boivin and Eliasz (2005) and the FAVECM (Factor-Augmented Error Correction Model) method of Banerjee and Marcellino (2008). Out-of-sample forecasts were then constructed to compare the projection accuracy of these models against simpler univariate models: ARIMA (autoregressive integrated moving average) and SARIMA (seasonal autoregressive integrated moving average). Predictive accuracy was evaluated with the MCS (Model Confidence Set) methodology of Hansen, Lunde and Nason (2011), which allows the superiority of time-series models to be compared vis-à-vis other models.
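As a rough illustration of the univariate benchmarks named in this abstract, the sketch below fits ARIMA and SARIMA models to a simulated stand-in for the delinquency series and compares out-of-sample errors; the orders, the 12-period holdout, and the series itself are illustrative assumptions, and the FAVAR/FAVECM and MCS steps are omitted.

```python
# Minimal sketch: out-of-sample comparison of the univariate benchmarks
# (ARIMA vs. SARIMA) described in the abstract. The series, lag orders,
# and holdout split are illustrative assumptions, not the paper's choices.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=120)) + 10   # stand-in for aggregate delinquency

train, test = y[:108], y[108:]             # hold out the last 12 observations

# ARIMA(1,1,1) benchmark
arima = ARIMA(train, order=(1, 1, 1)).fit()
arima_fc = arima.forecast(steps=len(test))

# SARIMA(1,1,1)x(1,0,1,12) benchmark with a monthly seasonal component
sarima = ARIMA(train, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit()
sarima_fc = sarima.forecast(steps=len(test))

rmse = lambda fc: np.sqrt(np.mean((fc - test) ** 2))
print(f"ARIMA RMSE:  {rmse(arima_fc):.3f}")
print(f"SARIMA RMSE: {rmse(sarima_fc):.3f}")
```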
Abstract:
This study aims to contribute to the forecasting literature on stock returns in emerging markets. We use Autometrics to select relevant predictors among macroeconomic, microeconomic and technical variables. We develop predictive models for the Brazilian market premium, measured as the excess return over the Selic interest rate, and for Itaú SA, Itaú-Unibanco and Bradesco stock returns. We find that for the market premium, an ADL with error correction is able to outperform the benchmarks in terms of economic performance. For individual stock returns, there is a trade-off between the statistical properties and the out-of-sample performance of the model.
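A minimal sketch of an ADL-with-error-correction specification of the kind this abstract describes, built in two Engle-Granger style steps; the simulated variables, lag orders, and long-run relation are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of an autoregressive distributed lag (ADL) model with an
# error-correction term. Variable names and lags are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
fundamental = np.cumsum(rng.normal(size=n))          # e.g. a (log) dividend proxy
price = fundamental + rng.normal(scale=0.5, size=n)  # cointegrated (log) price

# Step 1: long-run relation; the lagged residual is the error-correction term.
long_run = sm.OLS(price, sm.add_constant(fundamental)).fit()
ec = long_run.resid

df = pd.DataFrame({
    "d_price": np.diff(price),
    "d_price_lag": pd.Series(np.diff(price)).shift(1),
    "d_fund_lag": pd.Series(np.diff(fundamental)).shift(1),
    "ec_lag": ec[:-1],                               # ec_{t-1} aligned with d_price_t
}).dropna()

# Step 2: ADL in differences plus the error-correction term.
adl_ec = sm.OLS(df["d_price"],
                sm.add_constant(df[["d_price_lag", "d_fund_lag", "ec_lag"]])).fit()
print(adl_ec.params)
```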
Abstract:
For more than a decade, price-level control in the Brazilian economy has been conducted within the scope of the Inflation Targeting Regime, which uses macroeconomic models as instruments to guide monetary policy decisions. After a period of relative success (2006-2009), in recent years inflation has proved resistant despite the monetary authorities' efforts to apply the containment policies prescribed by the targeting regime, prompting a debate over the factors that may be causing this behavior. In the international literature, some studies have credited supply shocks, especially those triggered by variations in commodity prices, with a significant share of inflation, particularly in economies whose export baskets are dominated by primary products. In the Brazilian literature, some studies already point in the same direction. The main objective of this study is therefore to assess how supply shocks, more specifically those originating from commodity prices, have affected Brazilian inflation, and how, and how efficiently, the country's monetary policy has reacted. To that end, a semi-structural model was estimated containing a Phillips curve, an IS curve, and two versions of the Central Bank's reaction function, in order to examine how monetary policy decisions are made. The estimation method employed was the Vector Error Correction (VEC) model in its structural version, which allows a dynamic evaluation of the interdependence relations among the variables of the proposed model. Estimation of the Phillips curve showed that supply shocks, from commodities as well as from labor productivity and the exchange rate, do not affect inflation immediately, but their relevance grows over time, eventually prevailing over the observed autoregressive (indexation) effect. These shocks also proved important for the behavior of inflation expectations, indicating that their impacts tend to spread to the other sectors of the economy. The IS curve results revealed a strong interrelation between the output gap and the interest rate, indicating that monetary policy, by setting that rate, strongly influences aggregate demand. Estimation of the first reaction function showed a relevant contemporaneous relation between the deviation of inflation expectations from the target and the Selic rate, whereas the contemporaneous relation of the output gap to the Selic rate proved small. Finally, the results obtained with the second reaction function confirmed that the monetary authorities react more strongly to inflationary signals in the economy than to movements in economic activity, and showed that an increase in commodity prices does not, by itself, directly cause an increase in the economy's base interest rate.
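To make the estimation method concrete, the sketch below fits a reduced-form VEC model on simulated stand-ins for the study's variables and inspects impulse responses; the structural identification used in the study is not reproduced, and all series and lag choices are illustrative assumptions.

```python
# Minimal sketch: fitting a reduced-form VEC model and inspecting impulse
# responses, as a simplified stand-in for the structural VEC the abstract
# describes. The simulated variables are illustrative assumptions.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(2)
n = 300
trend = np.cumsum(rng.normal(size=n))          # shared stochastic trend
data = np.column_stack([
    trend + rng.normal(size=n),                # inflation
    rng.normal(size=n),                        # output gap (stationary stand-in)
    trend + rng.normal(size=n),                # Selic rate
    trend + rng.normal(size=n),                # commodity price index
])

# Johansen trace test to pick the cointegration rank
rank = select_coint_rank(data, det_order=0, k_ar_diff=2)

model = VECM(data, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci")
res = model.fit()
print(res.alpha)       # adjustment (loading) coefficients
res.irf(24).plot()     # impulse responses over 24 periods
```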
Abstract:
Objective: Biomedical event extraction concerns events that describe changes in the state of bio-molecules reported in the literature. Compared to the protein-protein interaction (PPI) extraction task, which often involves only the extraction of binary relations between two proteins, biomedical event extraction is much harder, since it must deal with complex events consisting of embedded or hierarchical relations among proteins, events, and their textual triggers. In this paper, we propose an information extraction system based on the hidden vector state (HVS) model, called HVS-BioEvent, for biomedical event extraction, and investigate its capability in extracting complex events. Methods and material: HVS has previously been employed for extracting PPIs. In HVS-BioEvent, we propose an automated way to generate abstract annotations for HVS training, and further propose novel machine learning approaches for event trigger word identification and for biomedical event extraction from the HVS parse results. Results: Our proposed system achieves an F-score of 49.57% on the corpus used in the BioNLP'09 shared task, only 2.38% lower than the best-performing system, by UTurku, in the BioNLP'09 shared task. Nevertheless, HVS-BioEvent outperforms UTurku's system on complex event extraction, with 36.57% vs. 30.52% achieved for extracting regulation events and 40.61% vs. 38.99% for negative regulation events. Conclusions: The results suggest that the HVS model, with its hierarchical hidden state structure, is indeed more suitable for complex event extraction, since it can naturally model embedded structural context in sentences.
Abstract:
A major challenge in text mining for biomedicine is automatically extracting protein-protein interactions from the vast amount of biomedical literature. We have constructed an information extraction system based on the Hidden Vector State (HVS) model for protein-protein interactions. The HVS model can be trained using only lightly annotated data whilst simultaneously retaining sufficient ability to capture hierarchical structure. When applied to extracting protein-protein interactions, it performed better than other established statistical methods, achieving an F-score of 61.5% with balanced recall and precision values. Moreover, the statistical nature of the purely data-driven HVS model makes it intrinsically robust, and it can be easily adapted to other domains.
Abstract:
This paper proposes a novel framework for incorporating protein-protein interaction (PPI) ontology knowledge into PPI extraction from the biomedical literature, in order to address the emerging challenges of deep natural language understanding. It is built upon existing work on relation extraction using the Hidden Vector State (HVS) model. The HVS model belongs to the category of statistical learning methods. It can be trained directly from unannotated data in a constrained way whilst at the same time being able to capture the underlying named entity relationships. However, it is difficult to incorporate background knowledge or non-local information into the HVS model. This paper proposes representing the HVS model as a conditionally trained undirected graphical model into which non-local features derived from the PPI ontology through inference can easily be incorporated. The seamless fusion of ontology inference with statistical learning produces a new paradigm for information extraction.
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
Abstract:
Bangla OCR (Optical Character Recognition) is long-awaited software for the Bengali community all over the world. Numerous efforts suggest that, due to the inherently complex nature of the Bangla alphabet and its word-formation process, developing a high-fidelity OCR that produces reasonably acceptable output remains a challenge. One possible avenue of improvement is post-processing of the OCR's output; algorithms such as edit distance and n-gram statistical information have been used to rectify misspelled words in language processing. This work presents the first known approach that uses these algorithms to replace misrecognized words produced by Bangla OCR. The assessment is made on a set of fifty documents written in Bangla script and uses a dictionary of 541,167 words. The proposed correction model lowers the recognition error rate by 2.87% and 3.18% for the character-based n-gram and edit distance algorithms, respectively. The developed system suggests a list of five alternatives for a misspelled word. In 33.82% of cases the correct word is the topmost of the five suggestions for the n-gram algorithm, while for the edit distance algorithm the first suggestion is the proper match in 36.31% of cases. This work opens room for further improvements in character recognition.
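The edit-distance half of this post-processing pipeline is easy to sketch: rank dictionary words by Levenshtein distance to the misrecognized token and return the top five suggestions, mirroring the five-alternative list the abstract describes. The toy dictionary below is an illustrative assumption (the paper uses 541,167 Bangla words), and the n-gram ranking is omitted.

```python
# Minimal sketch of edit-distance post-correction: rank dictionary words by
# Levenshtein distance to a misrecognized OCR token, return top suggestions.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def suggest(token: str, dictionary: list[str], k: int = 5) -> list[str]:
    """Return the k dictionary words closest to the OCR token."""
    return sorted(dictionary, key=lambda w: levenshtein(token, w))[:k]

# Toy English dictionary for illustration only.
words = ["correction", "connection", "collection", "direction", "correcting"]
print(suggest("corection", words))  # -> ['correction', ...]
```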
Abstract:
This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah [1989. The dynamic effects of aggregate demand and supply disturbances. The American Economic Review 79, 655–673], and shows that structural equations with known permanent shocks cannot contain error correction terms, thereby freeing up the latter to be used as instruments in estimating their parameters. The approach is illustrated by a re-examination of the identification schemes used by Wickens and Motto [2001. Estimating shocks and impulse response functions. Journal of Applied Econometrics 16, 371–387], Shapiro and Watson [1988. Sources of business cycle fluctuations. NBER Macroeconomics Annual 3, 111–148], King et al. [1991. Stochastic trends and economic fluctuations. American Economic Review 81, 819–840], Gali [1992. How well does the ISLM model fit postwar US data? Quarterly Journal of Economics 107, 709–735; 1999. Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations? American Economic Review 89, 249–271] and Fisher [2006. The dynamic effects of neutral and investment-specific technology shocks. Journal of Political Economy 114, 413–451].
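For reference, the permanent/transitory decomposition this abstract builds on can be stated in common-trends form; the notation below is a standard textbook rendering, not taken from the paper:

```latex
% Wold / common-trends form of the permanent/transitory decomposition;
% notation is a standard rendering, not the paper's.
\Delta y_t = C(L)\,\varepsilon_t, \qquad \varepsilon_t \sim \mathrm{iid}(0,\Sigma)
\qquad\Longrightarrow\qquad
y_t = C(1)\sum_{s \le t}\varepsilon_s \;+\; C^{*}(L)\,\varepsilon_t .
```

Shocks with a nonzero column in the long-run matrix $C(1)$ are permanent, the remainder transitory; Blanchard and Quah's identification amounts to zero restrictions on $C(1)$, and the paper's generalization concerns the case with more than one such permanent shock.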
Abstract:
The world we live in is well labeled for the benefit of humans but to date robots have made little use of this resource. In this paper we describe a system that allows robots to read and interpret visible text and use it to understand the content of the scene. We use a generative probabilistic model that explains spotted text in terms of arbitrary search terms. This allows the robot to understand the underlying function of the scene it is looking at, such as whether it is a bank or a restaurant. We describe the text spotting engine at the heart of our system that is able to detect and parse wild text in images, and the generative model, and present results from images obtained with a robot in a busy city setting.
Abstract:
At Eurocrypt'04, Freedman, Nissim and Pinkas introduced a fuzzy private matching problem. The problem is defined as follows. Given two parties, each of them having a set of vectors where each vector has T integer components, the fuzzy private matching is to securely test if each vector of one set matches any vector of another set for at least t components where t < T. In the conclusion of their paper, they asked whether it was possible to design a fuzzy private matching protocol without incurring a communication complexity with the factor $\binom{T}{t}$. We answer their question in the affirmative by presenting a protocol based on homomorphic encryption, combined with the novel notion of a share-hiding error-correcting secret sharing scheme, which we show how to implement with efficient decoding using interleaved Reed-Solomon codes. This scheme may be of independent interest. Our protocol is provably secure against passive adversaries, and has better efficiency than previous protocols for certain parameter values.
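Stripped of its cryptography, the matching predicate at the heart of the problem is simple; the sketch below states it directly (the homomorphic encryption and secret-sharing machinery that makes the test private is deliberately omitted).

```python
# Minimal sketch of the (non-private) matching predicate underlying fuzzy
# private matching: two T-component integer vectors "match" if they agree
# on at least t < T components. The privacy machinery is omitted.
def fuzzy_match(x: list[int], y: list[int], t: int) -> bool:
    """True if x and y agree on at least t of their T components."""
    assert len(x) == len(y), "vectors must have the same length T"
    agreements = sum(a == b for a, b in zip(x, y))
    return agreements >= t

print(fuzzy_match([3, 1, 4, 1, 5], [3, 1, 4, 2, 6], t=3))  # True: 3 agreements
print(fuzzy_match([3, 1, 4, 1, 5], [2, 7, 1, 8, 2], t=3))  # False
```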
Abstract:
Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as to the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Therefore, error control mechanisms must be adopted to ensure that the fuzziness of biometric inputs can be sufficiently countered. In this paper, we outline the existing techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to consider when choosing an appropriate error correction mechanism for a particular biometric-based solution.
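As one concrete example of the error-control mechanisms surveyed here, a fuzzy-commitment-style construction (Juels and Wattenberg) binds a key to a biometric template via an error-correcting code; the 3x repetition code and bit-level templates below are illustrative assumptions, not a recommendation for practice.

```python
# Minimal sketch of a fuzzy commitment: a key is encoded with an
# error-correcting code (3x repetition here) and XOR-masked with the
# enrollment template; a noisy probe recovers the key if few bits flipped.
import numpy as np

def encode(bits: np.ndarray) -> np.ndarray:
    return np.repeat(bits, 3)                      # 3x repetition code

def decode(bits: np.ndarray) -> np.ndarray:
    # Majority vote within each 3-bit block corrects one flip per block.
    return (bits.reshape(-1, 3).sum(axis=1) >= 2).astype(np.uint8)

rng = np.random.default_rng(3)
key = rng.integers(0, 2, 16, dtype=np.uint8)       # secret to bind
template = rng.integers(0, 2, 48, dtype=np.uint8)  # enrollment biometric bits

commitment = encode(key) ^ template                # stored helper data

noise = np.zeros(48, dtype=np.uint8)
noise[[5, 20, 33]] = 1                             # flips land in distinct blocks
probe = template ^ noise                           # noisy re-measurement

recovered = decode(commitment ^ probe)             # = decode(encode(key) ^ noise)
print("key recovered:", np.array_equal(recovered, key))  # True
```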