928 results for Sparse time-varying VAR models


Relevance: 100.00%

Publisher:

Abstract:

This paper investigates the relationship between consumer demand and corporate performance in several consumer industries in the UK, using two independent datasets. It uses data on consumer expenditures and the retail price index to estimate Almost Ideal Demand Systems on micro-data and to compute time-varying price elasticities of demand for disaggregated commodity groups. It then matches the product definitions to the Standard Industry Classification and uses the estimated elasticities to investigate the impact of consumer behaviour on firm-level profitability equations. The time-varying household characteristics are ideal instruments for the demand effects in the firms' supply equation. The paper concludes that demand elasticities have a significant and tangible impact on the profitability of UK firms and that this impact can shed some light on the relationship between market structure and economic performance.
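
For readers unfamiliar with the framework, the share equations of an Almost Ideal Demand System and the implied Marshallian price elasticities (under the Stone-index approximation) take the following textbook Deaton-Muellbauer form; this is the standard specification, not necessarily the exact one estimated in the paper:

    w_{it} = \alpha_i + \sum_j \gamma_{ij} \ln p_{jt} + \beta_i \ln(x_t / P_t) + u_{it},
    \qquad \ln P_t \approx \sum_k w_{kt} \ln p_{kt}

    \varepsilon_{ij,t} = -\delta_{ij} + \frac{\gamma_{ij} - \beta_i w_{jt}}{w_{it}}

where w_{it} is the budget share of good i, p_{jt} are prices, x_t is total expenditure, P_t is the Stone price index and \delta_{ij} the Kronecker delta; the elasticities are time-varying because they depend on the observed shares in each period.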

Relevance: 100.00%

Publisher:

Abstract:

This work evaluates empirically the Taylor rule for the US and Brazil using the Kalman filter and Markov-switching regimes. We show that the parameters of the rule change significantly with variations in both output and output gap proxies, considering hidden variables and states. Such conclusions call naturally for robust optimal monetary rules. We also show that Brazil and the US have very contrasting parameters: first, because Brazil presents a time-varying intercept; second, because of the rigidity of the parameters of the Brazilian Taylor rule, regardless of the output gap proxy, data frequency or sample period. Finally, we show that the long-run inflation parameter of the US Taylor rule is less than one in many periods, contrasting strongly with Orphanides (forthcoming) and Clarida, Galí and Gertler (2000); the same happens with Brazilian monthly data.
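
A minimal sketch of how time-varying Taylor-rule coefficients can be filtered with the Kalman filter, assuming random-walk coefficients; the data here are simulated, the noise variances are arbitrary, and the Markov-switching layer of the paper is not implemented:

    import numpy as np

    def tvp_kalman(y, X, sigma_eps=0.5, sigma_beta=0.05):
        """Kalman filter for a regression with random-walk coefficients:
           y_t = X_t beta_t + eps_t,  beta_t = beta_{t-1} + w_t."""
        n, k = X.shape
        beta = np.zeros(k)              # filtered state
        P = np.eye(k) * 10.0            # diffuse-ish initial covariance
        Q = np.eye(k) * sigma_beta**2   # state innovation covariance
        R = sigma_eps**2                # measurement noise variance
        path = np.zeros((n, k))
        for t in range(n):
            x = X[t]
            P = P + Q                           # prediction (state mean unchanged)
            f = x @ P @ x + R                   # forecast variance
            K = (P @ x) / f                     # Kalman gain
            beta = beta + K * (y[t] - x @ beta)
            P = P - np.outer(K, x @ P)
            path[t] = beta
        return path

    # illustrative use: policy rate regressed on [1, inflation, output gap]
    T = 200
    rng = np.random.default_rng(0)
    infl = rng.normal(4, 1, T)
    gap = rng.normal(0, 1, T)
    true_phi_pi = np.linspace(0.8, 1.6, T)       # drifting inflation response
    i_rate = 2 + true_phi_pi * infl + 0.5 * gap + rng.normal(0, 0.3, T)
    X = np.column_stack([np.ones(T), infl, gap])
    betas = tvp_kalman(i_rate, X)
    print(betas[-1])   # last filtered [intercept, phi_pi, phi_y]

The filtered path in betas traces how the inflation and output-gap responses drift over the sample, which is the kind of object the paper confronts across output-gap proxies and data frequencies.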

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we show that widely used stationarity tests such as the KPSS test have power close to size in the presence of time-varying unconditional variance. We propose a new test as a complement to the existing ones. Monte Carlo experiments show that the proposed test has the following characteristics: (i) in the presence of a unit root or a structural change in the mean, the proposed test is as powerful as the KPSS and other tests; (ii) in the presence of a changing variance, the traditional tests perform badly whereas the proposed test has high power compared to the existing tests; (iii) the proposed test has the same size as traditional stationarity tests under the null hypothesis of covariance stationarity. An application to daily observations of the return on the US Dollar/Euro exchange rate reveals instability in the unconditional variance when the entire sample is considered, but stability is found in sub-samples.
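
The power problem described above is straightforward to reproduce. The sketch below applies the standard KPSS test from statsmodels to simulated series that are mean-stationary but have a break in the unconditional variance; the break point and variances are arbitrary, and the authors' proposed test is not implemented here:

    import warnings
    import numpy as np
    from statsmodels.tsa.stattools import kpss

    rng = np.random.default_rng(42)
    n, reps = 500, 200
    rejections = 0
    for _ in range(reps):
        # white noise whose standard deviation doubles halfway through the sample
        sigma = np.where(np.arange(n) < n // 2, 1.0, 2.0)
        x = rng.normal(0, sigma)
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")        # p-values outside lookup table
            stat, pval, _, _ = kpss(x, regression="c", nlags="auto")
        rejections += pval < 0.05

    # a rate close to the 5% nominal size illustrates the lack of power
    print(f"KPSS rejection rate under a variance break: {rejections / reps:.2f}")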

Relevance: 100.00%

Publisher:

Abstract:

This thesis consists of three essays. The first essay analyzes the publicly available information on the credit portfolio risk of Brazilian banks and is divided into two chapters. The first chapter examines the limitations of the public information disclosed by banks and by the Central Bank when compared with the managerial information available internally to banks. It concludes that there is room for greater transparency in disclosure, something that has been happening gradually in Brazil through new rules related to Pillar 3 of Basel II and to the release of more detailed information by the Central Bank (Bacen), such as the "Top50" reports. The second part of the first essay shows the discrepancy between the accounting non-performing loan ratio (NPL) and the probability of default (PD) and also discusses the relationship between provisions and expected loss. Using migration matrices and a simulation based on overlapping vintages of the credit portfolios of large banks, it concludes that the NPL ratio underestimates the PD and that the provisions set aside by banks are lower than the expected loss of the Brazilian financial system (SFN). The second essay relates risk management to price discrimination. A model was developed consisting of a Cournot duopoly in a retail credit market in which banks can practice third-degree price discrimination. In this model, potential borrowers can be of two types, low or high risk, with low-risk borrowers having more elastic demand. According to the model, if the cost of observing the customer's type is high, the banks' strategy will be not to discriminate (pooling equilibrium). But if this cost is sufficiently low, it will be optimal for the banks to charge different rates to each group. It is argued that the Basel II Accord acted as an exogenous shock that moved the equilibrium towards a situation with greater discrimination. The third essay is divided into two chapters. The first discusses the application of the concepts of subjective probability and Knightian uncertainty to VaR models and the importance of assessing "model risk", which comprises estimation, specification and identification risks. The essay proposes that the "four elements" methodology of operational risk (internal data, external data, business environment and scenarios) be extended to the measurement of other risks (market risk and credit risk). The second part of this last essay deals with applying the scenario-analysis element to the measurement of conditional volatility on dates of relevant economic releases, specifically on the days of Copom (Monetary Policy Committee) meetings.
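
To illustrate the first essay's point that a point-in-time delinquency ratio and a cumulative default probability are different objects, the sketch below computes one-year PDs from a hypothetical monthly rating migration matrix by raising it to the twelfth power; the matrix entries are invented and do not come from the thesis:

    import numpy as np

    # hypothetical monthly migration matrix over states [A, B, C, Default];
    # rows sum to 1 and "Default" is absorbing
    M = np.array([
        [0.97, 0.02, 0.008, 0.002],
        [0.03, 0.93, 0.03,  0.01 ],
        [0.01, 0.05, 0.90,  0.04 ],
        [0.00, 0.00, 0.00,  1.00 ],
    ])

    M12 = np.linalg.matrix_power(M, 12)   # 12-month transition probabilities
    pd_1y = M12[:3, 3]                    # probability of ending in Default
    for rating, p in zip("ABC", pd_1y):
        print(f"1-year PD, rating {rating}: {p:.2%}")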

Relevance: 100.00%

Publisher:

Abstract:

This work analyzes the relationship between monetary policy and inflation persistence in the recent period, after the introduction of the inflation-targeting regime in Brazil. Using a simplified New Keynesian model, the degree of persistence of the inflation gap is modeled as a function of the weights in the monetary policy rule. The evolution over time of the Taylor rule is compared with the estimated persistence curve of the inflation gap, showing that changes in the conduct of monetary policy lead to changes in the level of inflation persistence in the economy. An adaptation of the model, with a Taylor rule that incorporates expectations of the output gap, reaches the same results with greater precision.
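
As an illustration of the setup (not necessarily the dissertation's exact model), a standard hybrid New Keynesian Phillips curve combined with a smoothed Taylor rule already delivers the mechanism:

    \pi_t = \gamma_b \pi_{t-1} + \gamma_f E_t \pi_{t+1} + \kappa x_t + \varepsilon_t
    i_t = \rho i_{t-1} + (1 - \rho)(\phi_\pi \pi_t + \phi_x x_t)

In the solved model the autocorrelation of the inflation gap is a function of the rule's weights (\rho, \phi_\pi, \phi_x), so shifts in the estimated Taylor rule translate into shifts in measured inflation persistence, which is the comparison the work carries out.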

Relevance: 100.00%

Publisher:

Abstract:

Taking into account previous research, one could assume that it is beneficial to diversify investments into emerging economies. In the paper "International Portfolio Diversification: evidence from Emerging Markets" we investigate whether this still holds true under the assumption of greater world market integration. Our results suggest widespread positive time-varying correlations between emerging and developed markets. However, pair-wise cross-country correlations give evidence that emerging markets have low integration with developed markets. Consequently, we evaluate the out-of-sample performance of a portfolio including emerging-market equities, confirming the initial statement that it has better risk-adjusted performance than a purely developed-markets portfolio.
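
A compact sketch of the two computations involved, rolling (time-varying) correlations and a risk-adjusted out-of-sample comparison, using pandas; the return series, window length and portfolio weights below are placeholders rather than the paper's data:

    import numpy as np
    import pandas as pd

    def sharpe(returns, periods_per_year=252):
        """Annualized Sharpe ratio (risk-free rate assumed zero)."""
        return returns.mean() / returns.std() * np.sqrt(periods_per_year)

    # `returns` stands in for a DataFrame of daily returns on a developed-market
    # index and an emerging-market index
    rng = np.random.default_rng(1)
    returns = pd.DataFrame(
        rng.multivariate_normal([0.0003, 0.0005],
                                [[1e-4, 3e-5], [3e-5, 2.5e-4]], size=1000),
        columns=["developed", "emerging"],
    )

    # time-varying correlation over a 126-day rolling window
    rolling_corr = returns["developed"].rolling(126).corr(returns["emerging"])

    # out-of-sample comparison: pure developed vs. 70/30 developed/emerging mix
    split = len(returns) // 2
    oos = returns.iloc[split:]
    pure = oos["developed"]
    mixed = 0.7 * oos["developed"] + 0.3 * oos["emerging"]
    print("Sharpe (developed only):", round(sharpe(pure), 2))
    print("Sharpe (with emerging):", round(sharpe(mixed), 2))
    print("Mean rolling correlation:", round(rolling_corr.mean(), 2))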

Relevance: 100.00%

Publisher:

Abstract:

Forecasting is the basis for making strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in the task of time series forecasting; however, conventional modeling techniques such as statistical models and those based on theoretical mathematical models have produced unsatisfactory predictions, increasing the number of studies on more advanced methods of prediction. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting, a technique that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving its superiority over statistical ARIMA-GARCH models. In this context, this study examined whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional time series methods. For this purpose we developed a quantitative study based on financial and economic indices and built two supervised-learning feedforward ANN models, whose structures consisted of 20 inputs, 90 neurons in a single hidden layer and one output (the Ibovespa). These models used backpropagation, a hyperbolic tangent (sigmoid) activation function in the hidden layer and a linear output function. Since the aim was to analyze the suitability of the artificial neural network method for forecasting the Ibovespa, we compared its results with those of a GARCH(1,1) time series model. Once both methods (ANN and GARCH) were applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U and forecast-encompassing tests. The models developed with ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that all three models have smaller errors than a naïve forecast. Although the ANN based on returns has lower values of the precision indicators than the ANN based on prices, the forecast-encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide more appropriate Ibovespa forecasts than traditional time series models, represented by the GARCH model.
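
A rough sketch of the comparison, using scikit-learn's MLPRegressor as a stand-in for the 20-input, 90-hidden-neuron feedforward network and the arch package for the GARCH(1,1) benchmark; the data are synthetic placeholders for the Ibovespa series, and the encompassing and Theil's U tests are omitted:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from arch import arch_model

    rng = np.random.default_rng(0)
    returns = rng.normal(0, 1, 1500)          # placeholder for Ibovespa returns

    # ANN: 20 lagged returns in, 90 hidden neurons (tanh), linear output
    lags = 20
    X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
    y = returns[lags:]
    split = len(y) - 250                       # hold out the last 250 observations
    ann = MLPRegressor(hidden_layer_sizes=(90,), activation="tanh",
                       max_iter=2000, random_state=0)
    ann.fit(X[:split], y[:split])
    ann_pred = ann.predict(X[split:])

    # GARCH(1,1) benchmark: constant-mean one-step-ahead forecast
    garch = arch_model(returns[:lags + split], mean="Constant",
                       vol="Garch", p=1, q=1).fit(disp="off")
    garch_pred = np.full_like(ann_pred, garch.params["mu"])

    def report(name, pred, actual):
        mse = mean_squared_error(actual, pred)
        print(f"{name}: MSE={mse:.4f} RMSE={np.sqrt(mse):.4f} "
              f"MAE={mean_absolute_error(actual, pred):.4f}")

    report("ANN  ", ann_pred, y[split:])
    report("GARCH", garch_pred, y[split:])

Theil's U and the forecast-encompassing tests reported in the study would be computed from the same out-of-sample prediction errors.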

Relevance: 100.00%

Publisher:

Abstract:

Composite resins have been subjected to structural modifications aiming at improved optical and mechanical properties. The present study consisted of an in vitro evaluation of the staining behavior of two nanohybrid resins (NH1 and NH2), a nanoparticulate resin (NP) and a microhybrid resin (MH). Samples of these materials were prepared and immersed in commonly ingested drinks, i.e., coffee, red wine and açaí berry, for periods of time varying from 1 to 60 days. Cylindrical samples of each resin were shaped using a metallic die and polymerized for 30 s on both the bottom and the top of each disk. All samples were polished and immersed in the staining solutions. After 24 hours, three samples of each resin immersed in each solution were removed and placed in a spectrophotometer for analysis. To that end, the samples were previously diluted in HCl at 50%. Tukey tests were carried out in the statistical analysis of the results. The results revealed a clear difference in the staining behavior of each material. The nanoparticulate resin did not show better color stability than the microhybrid resin. Moreover, all resins stained with time. The degree of staining decreased in the sequence nanoparticulate, microhybrid, nanohybrid NH2 and NH1. Wine was the most aggressive drink, followed by coffee and açaí berry. SEM and image analysis revealed significant porosity on the surface of the MH resin and relatively large pores in an NP sample. The NH2 resin was characterized by a homogeneous dispersion of particles and limited porosity. Finally, the NH1 resin showed the lowest porosity level. The results indicate that staining is likely related to the concentration of inorganic particles and to surface porosity.

Relevance: 100.00%

Publisher:

Abstract:

Postsurgical hypertension is a complication that may occur in cardiac patients. To decrease the chance of complications it is necessary to reduce elevated blood pressure as soon as possible. Continuous infusion of vasodilator drugs, such as sodium nitroprusside (Nipride), quickly lowers the blood pressure in most patients. However, each patient has a different sensitivity to the infusion of Nipride. The parameters and the time delays of the system are initially unknown. Moreover, the parameters of the transfer function associated with a particular patient are time-varying. The objective of the study is to develop a procedure for blood pressure control in the presence of parameter uncertainty and considerable time delays. Thus, a multi-model methodology was developed, and for each model a predictive controller can be designed a priori. An adaptive mechanism is then needed to decide which controller should be dominant for a given plant.
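
A minimal sketch of the multi-model idea, assuming a bank of discrete first-order models with different gains and delays and a supervisor that selects the model with the smallest recent prediction error; the candidate parameters are invented and the predictive controllers themselves are not implemented:

    import numpy as np
    from collections import deque

    class CandidateModel:
        """Discrete first-order model with delay: y[k] = a*y[k-1] + b*u[k-1-d]."""
        def __init__(self, a, b, delay):
            self.a, self.b, self.delay = a, b, delay
            self.errors = deque(maxlen=20)      # recent squared prediction errors

        def predict(self, y_prev, u_hist):
            u_delayed = u_hist[-(self.delay + 1)]
            return self.a * y_prev + self.b * u_delayed

    # bank of models spanning different patient sensitivities to nitroprusside
    bank = [CandidateModel(a=0.9, b=g, delay=d)
            for g in (-0.5, -1.0, -2.0) for d in (2, 4)]

    def best_model(bank):
        """Model with the smallest mean recent squared prediction error."""
        return min(bank, key=lambda m: np.mean(m.errors) if m.errors else np.inf)

    # supervisory loop (measurements and inputs would come from the real plant;
    # random numbers are used here only to make the sketch runnable)
    y_prev, u_hist = 0.0, [0.0] * 10
    for y_meas, u_applied in zip(np.random.randn(50), np.random.randn(50)):
        for m in bank:
            m.errors.append((y_meas - m.predict(y_prev, u_hist)) ** 2)
        dominant = best_model(bank)             # its controller would act next
        y_prev = y_meas
        u_hist.append(u_applied)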

Relevance: 100.00%

Publisher:

Abstract:

Complex network analysis is a powerful tool for the study of complex systems such as brain networks. This work aims to describe the topological changes in neural functional connectivity networks of the neocortex and hippocampus during slow-wave sleep (SWS) in animals submitted to exposure to a novel experience. Slow-wave sleep is an important sleep stage in which reverberation of the electrical activity patterns of wakefulness occurs, playing a fundamental role in memory consolidation. Despite its importance, there is a lack of studies characterizing the topological dynamics of functional connectivity networks during this sleep stage, and no studies describing the topological modifications that novel exposure induces in these networks. We observed that several topological properties are modified after novel exposure and that this modification remains for a long time. Most of these changes in topological properties caused by novel exposure are related to fault tolerance.
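
A small sketch of the kind of topological characterization involved, using networkx on a random graph that merely stands in for a functional connectivity network (e.g., thresholded pairwise correlations between recorded neurons during SWS); the fault-tolerance proxy based on random node removal is one common choice, not necessarily the measure used in the work:

    import random
    import networkx as nx

    # placeholder for a functional connectivity network
    G = nx.erdos_renyi_graph(n=100, p=0.08, seed=1)

    print("mean degree:", sum(dict(G.degree()).values()) / G.number_of_nodes())
    print("average clustering:", nx.average_clustering(G))
    print("global efficiency:", nx.global_efficiency(G))

    def fault_tolerance(G, fraction=0.2, seed=1):
        """Fraction of nodes still in the giant component after removing a
        random subset of nodes (a crude robustness proxy)."""
        rng = random.Random(seed)
        H = G.copy()
        n_remove = int(fraction * H.number_of_nodes())
        H.remove_nodes_from(rng.sample(list(H.nodes()), n_remove))
        giant = max(nx.connected_components(H), key=len)
        return len(giant) / G.number_of_nodes()

    print("fault tolerance (20% random failures):", fault_tolerance(G))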

Relevance: 100.00%

Publisher:

Abstract:

Wavelet coding has emerged as an alternative coding technique to mitigate the fading effects of wireless channels. This work evaluates the performance of wavelet coding, in terms of bit error probability, over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST 207 norm, the main international standards reference for GSM, UMTS and EDGE applications. The results show wavelet coding's effectiveness against the intersymbol interference that characterizes these communication scenarios. This robustness enables the technique's use in different environments, bringing it one step closer to application in practical wireless communication systems.
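
For context on the channel model only, the sketch below estimates the bit error rate of uncoded BPSK over a flat Rayleigh fading channel by Monte Carlo simulation; it deliberately omits both the wavelet coding and the COST 207 frequency-selective profile and serves merely as the uncoded baseline such a scheme is compared against:

    import numpy as np

    rng = np.random.default_rng(7)
    n_bits = 200_000

    for ebn0_db in (0, 5, 10, 15, 20):
        ebn0 = 10 ** (ebn0_db / 10)
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                            # BPSK: 0 -> -1, 1 -> +1
        # flat Rayleigh fading: complex Gaussian channel gain per symbol
        h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)
        noise = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) \
                * np.sqrt(1 / (2 * ebn0))
        received = h * symbols + noise
        # coherent detection with perfect channel knowledge
        detected = (np.real(received * np.conj(h)) > 0).astype(int)
        ber = np.mean(detected != bits)
        print(f"Eb/N0 = {ebn0_db:2d} dB  BER = {ber:.4f}")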

Relevance: 100.00%

Publisher:

Abstract:

Navigation based on visual feedback for robots working in a closed environment can be obtained by installing a camera on each robot (local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution to this problem. In this case, one or a few cameras covering the whole workspace can be shared by the entire team of robots, saving the cost of a large number of cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber and a computer (PC) for image processing. The PC is responsible for the team motion control based on the visual feedback, sending commands to the robots through a radio link. In order for the system to be able to unequivocally recognize each robot, each one has a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of the robots. A major problem was labeling the color, in real time, of each colored point of the image under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method guarantees that similar pixels are clustered around a unique color class. The experimental results showed that the position and orientation of each robot can be obtained with a precision of a few millimeters. The position and orientation were updated in real time, analyzing 30 frames per second.
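
A minimal sketch of the calibration step, assuming pixels are clustered directly in RGB with scikit-learn's KMeans; the number of color classes, the subsample size and the random placeholder frame are illustrative assumptions, not the system's actual parameters:

    import numpy as np
    from sklearn.cluster import KMeans

    # `frame` stands in for a captured image of the field (height x width x 3);
    # a real system would grab it from the camera
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

    # calibration: cluster a subsample of pixels into k color classes
    k = 8                                        # e.g., field, ball, team/ID colors
    pixels = frame.reshape(-1, 3).astype(float)
    sample = pixels[rng.choice(len(pixels), 20_000, replace=False)]
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sample)

    # run time: each pixel is assigned to the nearest cluster centre, which keeps
    # the color label stable under moderate illumination changes between frames
    labels = km.predict(pixels).reshape(frame.shape[:2])
    print("calibrated color-class centres:")
    print(km.cluster_centers_.round(1))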

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

We investigate the cosmology of vacuum energy decaying into cold dark matter according to the thermodynamic description of Alcaniz & Lima. We apply this model to analyze the evolution of primordial density perturbations in the matter that gave rise to the first generation of gravitationally bound structures in the Universe, called Population III objects. The analysis of the dynamics of those systems involves solving a system of differential equations governing the evolution of perturbations for two coupled fluids (dark matter and baryonic matter), modeled with a top-hat profile based on the perturbation of the hydrodynamic equations, an efficient analytical tool to study the properties of dark energy models, such as the behavior of the linear growth factor and the linear growth index, physical quantities closely related to the peculiar velocity fields at any time, for different dark energy models. The properties and dynamics of the current Universe are analyzed through the exact analytical form of the linear growth factor of density fluctuations, taking into account the influence of several physical cooling mechanisms acting on the density fluctuations of the baryonic component during the evolution of the matter clouds, studied from primordial hydrogen recombination onwards. This study is naturally extended to more general dark energy models with a constant equation-of-state parameter in a flat Universe.
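
For reference, in the standard (uncoupled) treatment the linear evolution of matter density perturbations and the associated growth quantities read:

    \ddot{\delta} + 2H\dot{\delta} - 4\pi G \rho_m \delta = 0,
    \qquad f(a) \equiv \frac{d \ln D}{d \ln a} \approx \Omega_m(a)^{\gamma}

where D is the linear growth factor and f the growth rate, with growth index \gamma \approx 0.55 in ΛCDM; in a decaying-vacuum model such as Alcaniz & Lima's the friction and source terms are modified, so both D and \gamma acquire corrections, which is what the analysis of the coupled dark matter-baryon system tracks.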

Relevance: 100.00%

Publisher:

Abstract:

In this paper an alternative method based on artificial neural networks is presented to determine the harmonic components in the load current of a single-phase electric power system with nonlinear loads, whose parameters can vary considerably both because of the loads' characteristic behavior and because of human intervention. The first six components of the load current are determined using the information contained in the time-varying waveforms. The effectiveness of the method is verified by using it in a single-phase active power filter with selective compensation of the current drained by an AC controller. The proposed method is compared with the fast Fourier transform.
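
As a point of comparison for the neural estimator, the sketch below extracts the amplitudes of the first six harmonics of a sampled load current with the FFT, the benchmark mentioned above; the fundamental frequency, sampling rate and synthetic waveform are illustrative:

    import numpy as np

    f0 = 60.0                      # fundamental frequency (Hz), assumed
    fs = 7680.0                    # sampling rate: 128 samples per cycle
    cycles = 10
    t = np.arange(int(cycles * fs / f0)) / fs

    # synthetic distorted load current: fundamental plus 3rd and 5th harmonics
    i_load = (10.0 * np.sin(2 * np.pi * f0 * t)
              + 3.0 * np.sin(2 * np.pi * 3 * f0 * t)
              + 1.5 * np.sin(2 * np.pi * 5 * f0 * t))

    spectrum = np.fft.rfft(i_load)
    freqs = np.fft.rfftfreq(len(i_load), d=1 / fs)
    amplitudes = 2 * np.abs(spectrum) / len(i_load)   # single-sided amplitude

    for n in range(1, 7):                              # first six harmonics
        idx = np.argmin(np.abs(freqs - n * f0))
        print(f"harmonic {n} ({n * f0:.0f} Hz): {amplitudes[idx]:.2f} A")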