927 results for Time-series Analysis
Abstract:
In this work we compared estimates of the parameters of ARCH models obtained with a complete Bayesian method and with an empirical Bayesian method, adopting a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of these models that maps the parameter space into real space, which allows normal prior distributions to be chosen for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated on the Telebras series from the Brazilian financial market. The results show that both methods are able to fit ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
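A minimal sketch of the reparameterization idea described above, assuming an ARCH(1) model: the positivity-constrained parameters are log-transformed so that normal priors can be placed on the transformed values, and a plain Metropolis sampler produces posterior draws. The series, prior means and variances, and proposal scale below are placeholders, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(500) * 0.01            # placeholder return series

def log_likelihood(theta, y):
    """ARCH(1) log-likelihood; theta = (log omega, log alpha1)."""
    omega, alpha1 = np.exp(theta)              # back-transform to the constrained scale
    h = np.empty_like(y)
    h[0] = np.var(y)                           # initial conditional variance
    h[1:] = omega + alpha1 * y[:-1] ** 2       # ARCH(1) conditional variances
    return -0.5 * np.sum(np.log(2 * np.pi * h) + y ** 2 / h)

def log_prior(theta):
    """Independent normal priors on the transformed parameters (placeholder hyperparameters)."""
    return -0.5 * np.sum((theta - np.array([-9.0, -1.0])) ** 2 / 4.0)

def metropolis(y, n_iter=5000, step=0.1):
    theta = np.array([-9.0, -1.0])             # starting point on the real line
    samples = np.empty((n_iter, 2))
    log_post = log_likelihood(theta, y) + log_prior(theta)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        log_post_prop = log_likelihood(prop, y) + log_prior(prop)
        if np.log(rng.uniform()) < log_post_prop - log_post:
            theta, log_post = prop, log_post_prop
        samples[i] = np.exp(theta)             # store draws on the (omega, alpha1) scale
    return samples

posterior = metropolis(y)
print("posterior means (omega, alpha1):", posterior[2500:].mean(axis=0))
```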
Abstract:
Following the thermodynamic formulation of a multifractal measure that was shown to enable the detection of large fluctuations at an early stage, here we propose a new index which permits us to distinguish events such as financial crises in real time. We calculate the partition function, from which we can obtain thermodynamic quantities analogous to the free energy and specific heat. The index is defined as the normalized energy variation and can be used to study the behavior of stochastic time series, such as daily financial market data. Famous financial market crashes, namely Black Thursday (1929), Black Monday (1987) and the subprime crisis (2008), are identified with clear and robust results. The method is also applied to the market fluctuations of 2011. These results suggest that the apparent crisis of 2011 is different in nature from the other three. We also show that the analysis has forecasting capabilities. © 2012 Elsevier B.V. All rights reserved.
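A rough illustration of the kind of computation described, under placeholder definitions: a rolling partition function over absolute returns, a free-energy analogue, and a normalized variation between consecutive windows standing in for the proposed index. The exact definitions in the paper may differ, and the window length and q value are arbitrary choices.

```python
import numpy as np

def partition_function(returns, q_values):
    """Sum of the normalized measure raised to the power q."""
    abs_r = np.abs(returns) + 1e-12            # avoid division/log problems
    p = abs_r / abs_r.sum()                    # normalized measure
    return np.array([np.sum(p ** q) for q in q_values])

def energy_variation_index(prices, window=250, q=2.0):
    """Normalized change of a free-energy analogue between consecutive windows."""
    returns = np.diff(np.log(prices))
    index, prev_f = [], None
    for start in range(0, len(returns) - window):
        z = partition_function(returns[start:start + window], [q])[0]
        f = np.log(z)                          # free-energy analogue
        if prev_f is not None:
            index.append((f - prev_f) / abs(prev_f))
        prev_f = f
    return np.array(index)

# Synthetic example; abrupt changes in fluctuation structure show up as spikes.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(2000)))
print(energy_variation_index(prices)[:5])
```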
Abstract:
Nowadays the envelope demodulation method is widely applied in industry as a technique for evaluating bearings and other components of rotating machinery. In recent years the application of wavelets to fault diagnosis in machinery has also advanced considerably. This article demonstrates the effectiveness of the combined application of wavelets and the envelope technique (also known as HFRT, the High-Frequency Resonance Technique) to remove background noise from signals collected from defective bearings and to identify the characteristic frequencies of the defects. A comparison of the results obtained with each method applied in isolation against the combined technique is performed, showing the increased capacity of the combined approach to detect faults in rolling bearings. © (2013) Trans Tech Publications, Switzerland.
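A sketch of the combined procedure, assuming PyWavelets and SciPy are available: wavelet thresholding suppresses background noise, the Hilbert transform yields the envelope, and an FFT of the envelope exposes the defect frequency. The synthetic signal, sampling rate, wavelet, and fault frequency are placeholders, not the paper's experimental setup.

```python
import numpy as np
import pywt                        # PyWavelets, assumed available
from scipy.signal import hilbert

fs = 20_000                                    # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 120.0                             # hypothetical defect frequency [Hz]
carrier = np.sin(2 * np.pi * 3000 * t)         # high-frequency resonance
impacts = (np.sin(2 * np.pi * fault_freq * t) > 0.99).astype(float)
rng = np.random.default_rng(2)
signal = impacts * carrier + 0.5 * rng.standard_normal(t.size)

# 1) Wavelet denoising: soft-threshold the detail coefficients (universal threshold).
coeffs = pywt.wavedec(signal, "db8", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
threshold = sigma * np.sqrt(2 * np.log(signal.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db8")[: signal.size]

# 2) Envelope demodulation (HFRT): magnitude of the analytic signal.
envelope = np.abs(hilbert(denoised))

# 3) Envelope spectrum: the defect frequency should appear as a clear peak.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency [Hz]:", freqs[np.argmax(spectrum)])
```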
Abstract:
The Box-Cox transformation is a technique widely used to make the probability distribution of time series data approximately normal, which helps statistical and neural models produce more accurate forecasts. However, it introduces a bias when the transformation is reversed on the predicted data. Statistical methods for a bias-free reversion necessarily assume Gaussianity of the transformed data, which is rarely the case in real-world time series. The aim of this study was therefore to provide an effective method of removing the bias when the Box-Cox transformation is reversed. The developed method is based on a focused time-lagged feedforward neural network, which does not require any assumption about the transformed data distribution. To evaluate the performance of the proposed method, numerical simulations were conducted in which the Mean Absolute Percentage Error, the Theil Inequality Index and the Signal-to-Noise Ratio of 20-step-ahead forecasts of 40 time series were compared; the results indicate that the proposed reversion method is valid and justifies further studies. (C) 2014 Elsevier B.V. All rights reserved.
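The bias the study targets can be seen with a few lines of SciPy; this is only an illustration of the problem, not the paper's neural-network correction. Naively back-transforming a forecast made on the Box-Cox scale biases the mean of a skewed series because the inverse transform is nonlinear (Jensen's inequality); the data and forecast error model below are synthetic.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(3)
data = rng.lognormal(mean=1.0, sigma=0.6, size=5000)     # skewed series values

transformed, lam = boxcox(data)                           # fitted lambda
mean_t, std_t = transformed.mean(), transformed.std()

# Suppose a model forecasts the mean on the transformed scale with Gaussian error;
# compare the naive reversion against the mean of the reversed predictive
# distribution (approximated by simulation).
naive = inv_boxcox(mean_t, lam)
simulated = inv_boxcox(rng.normal(mean_t, std_t, 100_000), lam).mean()

print(f"true mean        : {data.mean():.3f}")
print(f"naive reversion  : {naive:.3f}")      # typically biased for skewed data
print(f"simulated mean   : {simulated:.3f}")  # closer to the true mean
```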
Abstract:
The leaf area index (LAI) is a key characteristic of forest ecosystems. Estimations of LAI from satellite images generally rely on spectral vegetation indices (SVIs) or radiative transfer model (RTM) inversions. We have developed a new and precise method suitable for practical application, consisting of building a species-specific SVI that is best suited to both sensor and vegetation characteristics. Such an SVI requires calibration on a large number of representative vegetation conditions. We developed a two-step approach: (1) estimation of LAI on a subset of satellite data through RTM inversion; and (2) calibration of a vegetation index on these estimated LAI. We applied this methodology to Eucalyptus plantations, which have highly variable LAI in time and space. Previous results showed that an RTM inversion of Moderate Resolution Imaging Spectroradiometer (MODIS) near-infrared and red reflectance allowed good retrieval performance (R² = 0.80, RMSE = 0.41), but was computationally difficult. Here, the RTM results were used to calibrate a dedicated vegetation index (called "EucVI") which gave similar LAI retrieval results but in a simpler way. The R² of the regression between measured and EucVI-simulated LAI values on a validation dataset was 0.68, and the RMSE was 0.49. The additional use of stand age and day of year in the SVI equation slightly increased the performance of the index (R² = 0.77 and RMSE = 0.41). This simple index opens the way to an easily applicable retrieval of Eucalyptus LAI from MODIS data, which could be used in an operational way.
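A schematic of the two-step calibration, with everything synthetic: since the abstract does not give the EucVI formula, an NDVI-like index stands in for it, simulated values play the role of RTM-inverted LAI, and stand age plus day of year enter as extra regressors, as in the augmented version of the index.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
nir = rng.uniform(0.25, 0.45, n)             # MODIS near-infrared reflectance
red = rng.uniform(0.02, 0.08, n)             # MODIS red reflectance
age = rng.uniform(0.5, 7.0, n)               # stand age [years]
doy = rng.uniform(1, 365, n)                 # day of year

ndvi = (nir - red) / (nir + red)             # stand-in spectral vegetation index
lai_rtm = 2.0 + 3.5 * ndvi + 0.1 * age + rng.normal(0, 0.3, n)   # synthetic "RTM-inverted" LAI

# Step 2: calibrate the index (plus age and a seasonal term) on the RTM estimates.
X = np.column_stack([np.ones(n), ndvi, age, np.cos(2 * np.pi * doy / 365)])
coef, *_ = np.linalg.lstsq(X, lai_rtm, rcond=None)
lai_pred = X @ coef

rmse = np.sqrt(np.mean((lai_pred - lai_rtm) ** 2))
r2 = 1 - np.sum((lai_rtm - lai_pred) ** 2) / np.sum((lai_rtm - lai_rtm.mean()) ** 2)
print(f"calibration R2 = {r2:.2f}, RMSE = {rmse:.2f}")
```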
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Monitoring water-level networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring-network coverage, such as the GAS.
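A schematic sketch of the general idea, not the paper's methodology: a simple trend-plus-seasonal model is fitted to each monitoring well, the forecasts are then interpolated in space, and inverse-distance weighting stands in for a proper geostatistical (kriging) step. Well coordinates and levels are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n_wells, n_months = 10, 120
coords = rng.uniform(0, 50, size=(n_wells, 2))           # well locations [km]
t = np.arange(n_months)
levels = (700 - 0.02 * t                                  # regional trend
          + 1.5 * np.sin(2 * np.pi * t / 12)              # seasonal cycle
          + rng.normal(0, 0.3, size=(n_wells, n_months))
          + rng.normal(0, 5, size=(n_wells, 1)))          # well-specific offset [m]

def forecast_well(y, t, horizon=12):
    """Least-squares trend plus annual harmonic, extrapolated `horizon` months ahead."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    tf = t[-1] + horizon
    return np.array([1.0, tf, np.sin(2 * np.pi * tf / 12),
                     np.cos(2 * np.pi * tf / 12)]) @ beta

forecasts = np.array([forecast_well(levels[i], t) for i in range(n_wells)])

def idw(point, coords, values, power=2.0):
    """Inverse-distance-weighted interpolation of the well forecasts to a point."""
    d = np.linalg.norm(coords - point, axis=1) + 1e-9
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

print("predicted level at (25, 25) km:", round(idw(np.array([25.0, 25.0]), coords, forecasts), 2))
```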
Abstract:
The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used.
In this work we make a somewhat surprising claim. There is an invariance that the community seems to have missed: complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by "suggesting" to the clustering algorithm that subjectively similar, but complex, objects belong in a sparser and larger-diameter cluster than is truly warranted.
We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
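A short sketch of a complexity-invariant distance in the spirit of the measure described: the Euclidean distance between two series is multiplied by the ratio of their complexity estimates, so that pairs of complex series are not pushed apart merely for being complex. The complexity estimate used here (the root sum of squared consecutive differences) is one common choice and may differ in detail from the paper's definition.

```python
import numpy as np

def complexity_estimate(x):
    """Root sum of squared consecutive differences of a series."""
    return np.sqrt(np.sum(np.diff(x) ** 2))

def complexity_invariant_distance(q, c):
    """Euclidean distance scaled by a complexity correction factor."""
    ed = np.sqrt(np.sum((q - c) ** 2))
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    correction = max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-12)
    return ed * correction

# Example: a smooth sine versus a noisier (more complex) sine of equal length.
t = np.linspace(0, 2 * np.pi, 200)
smooth = np.sin(t)
noisy = np.sin(t) + 0.3 * np.random.default_rng(6).standard_normal(t.size)
print("Euclidean distance     :", round(np.sqrt(np.sum((smooth - noisy) ** 2)), 3))
print("Complexity-invariant   :", round(complexity_invariant_distance(smooth, noisy), 3))
```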