17 results for Non-linear time series
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
In this paper, we present approximate distributions for the ratio of the cumulative wavelet periodograms of stationary and non-stationary time series generated from independent Gaussian processes. We also adapt an existing procedure to use this statistic and its approximate distribution to test whether two regularly or irregularly spaced time series are realizations of the same generating process. Simulation studies show good size and power properties for the test statistic. An application to financial microdata illustrates the usefulness of the test. We conclude by advocating the use of these approximate distributions instead of the ones obtained through randomizations, especially in the case of irregular time series.
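A minimal sketch of the statistic above, assuming the PyWavelets (pywt) package, a Haar wavelet, and a maximum-deviation form of the ratio statistic; the paper's approximate distributions are not reproduced here, so this illustrates the statistic only, not the full test.

```python
import numpy as np
import pywt  # PyWavelets

def cumulative_wavelet_periodogram(x, wavelet="haar"):
    # Squared DWT detail coefficients across all levels, accumulated and
    # normalized so the final value equals 1.
    details = pywt.wavedec(x, wavelet)[1:]
    energy = np.concatenate([d**2 for d in details])
    cum = np.cumsum(energy)
    return cum / cum[-1]

def ratio_statistic(x, y, wavelet="haar"):
    # Ratio of the two cumulative periodograms, summarized by its maximum
    # deviation from 1 (an assumed form of the statistic).
    px = cumulative_wavelet_periodogram(x, wavelet)
    py = cumulative_wavelet_periodogram(y, wavelet)
    n = min(len(px), len(py))
    return float(np.max(np.abs(px[:n] / py[:n] - 1.0)))

rng = np.random.default_rng(0)
x, y = rng.normal(size=512), rng.normal(size=512)
print(ratio_statistic(x, y))  # small when both series share one generating process
```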
Abstract:
This work proposes a system for the classification of industrial steel pieces by means of a magnetic nondestructive device. The proposed classification system has two main stages: an online stage and an off-line optimization stage. In the online stage, the system classifies inputs and saves misclassification information for posterior analysis. In the off-line optimization stage, the topology of a Probabilistic Neural Network is optimized by a Feature Selection algorithm combined with the Probabilistic Neural Network to increase the classification rate. The proposed Feature Selection algorithm searches the signal spectrogram by combining three basic elements: a Sequential Forward Selection algorithm, a Feature Cluster Grow algorithm with classification-rate gradient analysis, and a Sequential Backward Selection algorithm. In addition, a trash-data recycling algorithm is proposed to obtain optimal feedback samples selected from the misclassified ones.
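A minimal sketch of two of the ingredients named above: a Parzen-window Probabilistic Neural Network classifier and a Sequential Forward Selection loop driven by its classification rate. The smoothing parameter sigma, the stopping rule, and the synthetic data are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    # Parzen-window PNN: each class score is the mean Gaussian kernel
    # between a test point and that class's training points.
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma**2))
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

def sequential_forward_selection(X, y, X_val, y_val, max_feats=10):
    # Greedily add the spectrogram feature (column) that most improves
    # the PNN's validation accuracy; stop when the rate no longer grows.
    selected, remaining, best_acc = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_feats:
        gains = [(np.mean(pnn_predict(X[:, selected + [j]], y,
                                      X_val[:, selected + [j]]) == y_val), j)
                 for j in remaining]
        acc, j = max(gains)
        if acc <= best_acc:
            break  # classification-rate gradient stopped improving
        best_acc = acc
        selected.append(j)
        remaining.remove(j)
    return selected, best_acc

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 20)); y = (X[:, 3] + X[:, 7] > 0).astype(int)
Xv = rng.normal(size=(60, 20)); yv = (Xv[:, 3] + Xv[:, 7] > 0).astype(int)
print(sequential_forward_selection(X, y, Xv, yv))
```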
Abstract:
Further advances in magnetic hyperthermia might be limited by biological constraints, such as the need for sufficiently low frequencies and low field amplitudes to avoid harmful eddy currents inside the patient's body. These constraints make it necessary to optimize the heating efficiency of the nanoparticles, referred to as the specific absorption rate (SAR). Among the several properties currently under research, one of particular importance is the transition from the linear to the non-linear regime that takes place as the field amplitude is increased, an aspect in which the magnetic anisotropy is expected to play a fundamental role. In this paper we investigate the heating properties of cobalt ferrite and maghemite nanoparticles under the influence of a 500 kHz sinusoidal magnetic field with amplitude varying up to 134 Oe. The particles were characterized by TEM, XRD, FMR and VSM, from which the most relevant morphological, structural and magnetic properties were inferred. Both materials have similar size distributions and saturation magnetizations, but strikingly different magnetic anisotropies. From magnetic hyperthermia experiments we found that, while at low fields maghemite is the better nanomaterial for hyperthermia applications, above a critical field, close to the transition from the linear to the non-linear regime, cobalt ferrite becomes more efficient. The results were also analyzed with respect to the energy conversion efficiency and compared with dynamic hysteresis simulations. Additional analysis with nickel, zinc and copper ferrite nanoparticles of similar sizes confirmed the importance of the magnetic anisotropy and the damping factor. Further, analysis of the characterization parameters suggested core-shell nanostructures, probably due to a surface passivation process during nanoparticle synthesis. Finally, we discuss the effect of particle-particle interactions and their consequences, in particular regarding discrepancies between estimated parameters and theoretical predictions. [http://dx.doi.org/10.1063/1.4739533]
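SAR is conventionally estimated from the initial slope of the heating curve; below is a minimal sketch under that convention. The masses, specific heat, and heating curve are placeholders, not the paper's measurements.

```python
import numpy as np

def specific_absorption_rate(t, T, c_fluid=4186.0, m_fluid=1e-3, m_np=1e-5,
                             fit_window=10):
    """SAR (W/g) from the initial heating slope:
    SAR = c * (m_fluid / m_np) * dT/dt at t ~ 0.
    c_fluid in J/(kg K), masses in kg; all values here are illustrative."""
    slope = np.polyfit(t[:fit_window], T[:fit_window], 1)[0]  # K/s
    return c_fluid * (m_fluid / m_np) * slope / 1000.0        # W per gram

t = np.linspace(0, 60, 61)            # time, s
T = 300 + 0.05 * t - 2e-4 * t**2      # synthetic heating curve, K
print(f"SAR = {specific_absorption_rate(t, T):.1f} W/g")
```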
Abstract:
We consider modifications of the nonlinear Schrödinger (NLS) model to examine the recently introduced concept of quasi-integrability. We show that such models possess an infinite number of quasi-conserved charges which present intriguing properties in relation to very specific space-time parity transformations. For two-soliton solutions in which the fields are eigenstates of this parity, those charges are asymptotically conserved in the scattering process of the solitons: even though the charges vary in time, their values in the far past and the far future are the same. These results are obtained through analytical and numerical methods, and employ adaptations of algebraic techniques used in integrable field theories. Our findings may have important consequences for the applications of these models in several areas of non-linear science. We make a detailed numerical study of the modified NLS potential of the form V ∼ (|ψ|²)^(2+ε), with ε being a perturbation parameter. We perform numerical simulations of soliton scattering for this model and find good agreement with the results predicted by the analytical considerations. Our paper shows that the quasi-integrability concepts recently proposed in the context of modifications of the sine-Gordon model remain valid for perturbations of the NLS model.
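A compact split-step Fourier sketch of a modified NLS with the nonlinearity obtained from V ∼ (|ψ|²)^(2+ε). Signs, normalization, and the soliton parameters follow a generic focusing-NLS convention and are assumptions, not the paper's exact setup.

```python
import numpy as np

def split_step_nls(psi0, L=40.0, T=5.0, nt=2000, eps=0.06):
    """Solve i psi_t = -psi_xx - (2+eps)|psi|^(2(1+eps)) psi by Strang
    splitting; eps = 0 recovers the usual cubic nonlinearity."""
    n = len(psi0)
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    dt = T / nt
    psi = psi0.astype(complex)
    lin = np.exp(-1j * k**2 * dt)   # exact linear step in Fourier space
    for _ in range(nt):
        psi *= np.exp(1j*(2+eps)*np.abs(psi)**(2*(1+eps))*dt/2)  # half nonlinear
        psi = np.fft.ifft(lin * np.fft.fft(psi))                 # full linear
        psi *= np.exp(1j*(2+eps)*np.abs(psi)**(2*(1+eps))*dt/2)  # half nonlinear
    return psi

# Two counter-propagating bright solitons set up to collide
n, L = 1024, 40.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
psi0 = (1/np.cosh(x+8))*np.exp(2j*x) + (1/np.cosh(x-8))*np.exp(-2j*x)
psi = split_step_nls(psi0, L=L)
print(np.trapz(np.abs(psi)**2, x))  # norm, conserved up to splitting error
```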
Abstract:
In this work we compare the parameter estimates of ARCH models obtained by a complete Bayesian method and by an empirical Bayesian method, adopting a non-informative prior distribution and an informative prior distribution, respectively. We also consider a reparameterization of these models that maps the parameter space onto the real line, which permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated on the Telebras series from the Brazilian financial market. The results show that both methods are able to fit ARCH models with different numbers of parameters, with the empirical Bayesian method providing a more parsimonious model and a better fit to the data than the complete Bayesian method.
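A hedged sketch of the reparameterization idea for an ARCH(1) model: (ω, α) are mapped to the real line by log and logit transforms, normal priors are placed on the transformed parameters, and a random-walk Metropolis sampler draws from the posterior. Priors, tuning constants, and the simulated series are illustrative.

```python
import numpy as np

def arch1_loglik(y, omega, alpha):
    # Gaussian ARCH(1): sigma2_t = omega + alpha * y_{t-1}^2
    s2 = omega + alpha * y[:-1]**2
    r = y[1:]
    return -0.5 * np.sum(np.log(2*np.pi*s2) + r**2/s2)

def log_post(theta, y, prior_sd=10.0):
    # theta = (log omega, logit alpha) lives on R^2, so normal priors
    # can be used; the Jacobian maps back to the original scale.
    omega = np.exp(theta[0])
    alpha = 1/(1 + np.exp(-theta[1]))
    log_prior = -0.5*np.sum(theta**2)/prior_sd**2
    log_jac = theta[0] + np.log(alpha*(1-alpha))
    return arch1_loglik(y, omega, alpha) + log_prior + log_jac

rng = np.random.default_rng(1)
y = np.zeros(1000)                      # simulate ARCH(1), omega=0.2, alpha=0.5
for t in range(1, 1000):
    y[t] = rng.normal(scale=np.sqrt(0.2 + 0.5*y[t-1]**2))

theta, lp, draws = np.zeros(2), None, []
lp = log_post(theta, y)
for _ in range(5000):                   # random-walk Metropolis
    prop = theta + 0.1*rng.normal(size=2)
    lp_prop = log_post(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(theta.copy())
post = np.array(draws)[1000:]           # drop burn-in
print(np.exp(post[:, 0]).mean(), (1/(1+np.exp(-post[:, 1]))).mean())
```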
Abstract:
The scope of this paper was to analyze the association between homicides and public security indicators in São Paulo between 1996 and 2008, after controlling for the unemployment rate and the proportion of youths in the population. A time-series ecological study for 1996-2008 was conducted with São Paulo as the unit of analysis. Dependent variable: number of deaths by homicide per year. Main independent variables: arrest-incarceration rate, access to firearms, and police activity. Data analysis was conducted using Stata IC 10.0 software. Simple and multivariate negative binomial regression models were created. Deaths by homicide and arrest-incarceration, as well as police activity, were significantly associated in simple regression analysis. Access to firearms was not significantly associated with the reduction in the number of deaths by homicide (p > 0.05). After adjustment, the associations with both public security indicators were not significant. In São Paulo, public security indicators are less important as explanatory factors for the reduction in homicide rates after adjustment for the unemployment rate and the reduction in the proportion of youths. The results reinforce the importance of socioeconomic and demographic factors for the change in the public security scenario in São Paulo.
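A minimal sketch of a negative binomial regression of yearly homicide counts on public security and socioeconomic indicators, using statsmodels; the column names and simulated values are hypothetical stand-ins for the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative yearly panel (1996-2008); data are simulated placeholders.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "homicides": rng.poisson(500, 13),
    "incarceration_rate": rng.normal(300, 30, 13),
    "police_activity": rng.normal(50, 5, 13),
    "unemployment": rng.normal(10, 1, 13),
    "youth_share": rng.normal(0.25, 0.01, 13),
})

X = sm.add_constant(df[["incarceration_rate", "police_activity",
                        "unemployment", "youth_share"]])
result = sm.GLM(df["homicides"], X,
                family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(result.params)  # adjusted associations, as in the multivariate model
```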
Abstract:
The plasma density evolution in the sawtooth regime on the Tore Supra tokamak is analyzed. The density is measured using fast-sweeping X-mode reflectometry, which allows tomographic reconstructions. There is evidence that the density is governed by perpendicular electric flows, while the temperature evolution is dominated by parallel diffusion. Postcursor oscillations sometimes lead to the formation of a density plateau, which is explained in terms of convection cells associated with the kink mode. A crescent-shaped density structure located inside q = 1 is often visible just after the crash and indicates that some part of the density withstands the crash. 3D full MHD nonlinear simulations with the code XTOR-2F recover this structure and show that it arises from the perpendicular flows emerging from the reconnection layer. The proportion of density reinjected inside the q = 1 surface is determined, and the implications in terms of helium ash transport are discussed. [http://dx.doi.org/10.1063/1.4766893]
Abstract:
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for the identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series from stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) at q ≠ 1. The values of q at which the maximum occurs and at which qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10 × 10⁻³, 1.11 × 10⁻⁷, and 5.50 × 10⁻⁷ for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis. [http://dx.doi.org/10.1063/1.4758815]
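A sketch of the construction described above: standard sample entropy with the natural logarithm replaced by the Tsallis q-logarithm, evaluated for the original series and a shuffled surrogate. The exact qSampEn and qSDiff definitions, including the sign convention, are assumptions based on the abstract, not the paper's formulas.

```python
import numpy as np

def q_log(x, q):
    # Tsallis q-logarithm; recovers ln(x) as q -> 1.
    return np.log(x) if abs(q - 1) < 1e-12 else (x**(1-q) - 1)/(1 - q)

def q_sampen(x, m=2, r_factor=0.2, q=1.0):
    # Sample entropy with -ln replaced by -ln_q (assumed generalization).
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def match_rate(mm):
        templ = np.array([x[i:i+mm] for i in range(len(x)-mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        n = len(templ)
        return (np.sum(d <= r) - n) / (n*(n-1))  # exclude self-matches
    B, A = match_rate(m), match_rate(m+1)
    return -q_log(A/B, q)

def q_sdiff(x, qs, rng):
    # Surrogate-minus-original qSampEn curve (sign convention assumed).
    surr = rng.permutation(x)
    return np.array([q_sampen(surr, q=q) - q_sampen(x, q=q) for q in qs])

rng = np.random.default_rng(3)
rr = np.cumsum(rng.normal(size=300)) * 0.01 + 0.8   # toy correlated series
qs = np.linspace(0.2, 3.0, 15)
curve = q_sdiff(rr, qs, rng)
print("qSDiff_max at q =", qs[np.argmax(curve)])
```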
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of the segmentation algorithm known as Iterative Relative Fuzzy Connectedness (IRFC). However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range Z of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖₁, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖₁ when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖₁ minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖∞-minimization problem (the fact that ‖F_P‖∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce this). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
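The w ↦ w^q reduction noted above can be illustrated in a few lines: raising each weight to the power q turns the ‖F_P‖_q problem into an ordinary ℓ1 min-cut, solvable with any max-flow code (networkx here). The toy graph is schematic, and GC_max itself is not implemented.

```python
import networkx as nx

def min_cut_lq(edges, s, t, q=1.0):
    # edges: (u, v, w) with w > 0. Replacing w by w**q turns the
    # ||F_P||_q minimization into the classic ||.||_1 min-cut problem.
    G = nx.DiGraph()
    for u, v, w in edges:
        G.add_edge(u, v, capacity=w**q)
        G.add_edge(v, u, capacity=w**q)  # undirected affinity graph
    cut_value, (S, T) = nx.minimum_cut(G, s, t)
    return cut_value**(1.0/q), sorted(S)  # cut reported on the l_q scale

edges = [("s", "a", 3.0), ("a", "t", 2.0), ("s", "b", 1.0), ("b", "t", 4.0)]
for q in (1, 2, 8, 32):  # as q grows, the cut approaches the GC_max output
    print(q, min_cut_lq(edges, "s", "t", q=q))
```

For q = 1 the optimal cut has total weight 3; as q grows, the ℓ_q value approaches 2, the smallest achievable maximum edge weight, illustrating the convergence statement in the abstract.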
Abstract:
The leaf area index (LAI) is a key characteristic of forest ecosystems. Estimates of LAI from satellite images generally rely on spectral vegetation indices (SVIs) or radiative transfer model (RTM) inversions. We have developed a new and precise method suitable for practical application, consisting of building a species-specific SVI that is best suited to both the sensor and the vegetation characteristics. Such an SVI requires calibration on a large number of representative vegetation conditions. We developed a two-step approach: (1) estimation of LAI on a subset of satellite data through RTM inversion; and (2) calibration of a vegetation index on these estimated LAI values. We applied this methodology to Eucalyptus plantations, which have highly variable LAI in time and space. Previous results showed that an RTM inversion of Moderate Resolution Imaging Spectroradiometer (MODIS) near-infrared and red reflectance allowed good retrieval performance (R² = 0.80, RMSE = 0.41) but was computationally demanding. Here, the RTM results were used to calibrate a dedicated vegetation index (called "EucVI") which gave similar LAI retrieval results in a simpler way. The R² of the regression between measured and EucVI-simulated LAI values on a validation dataset was 0.68, and the RMSE was 0.49. The additional use of stand age and day of year in the SVI equation slightly increased the performance of the index (R² = 0.77 and RMSE = 0.41). This simple index opens the way to an easily applicable retrieval of Eucalyptus LAI from MODIS data, which could be used in an operational way.
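A hedged sketch of step (2), calibrating an index against RTM-estimated LAI with stand age and day-of-year terms; the functional form and the synthetic data are illustrative, not the published EucVI.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(4)
n = 500
nir = rng.uniform(0.2, 0.5, n)              # near-infrared reflectance
red = rng.uniform(0.02, 0.1, n)             # red reflectance
age = rng.uniform(0.5, 7.0, n)              # stand age, years
doy = rng.integers(1, 366, n)               # day of year
ndvi = (nir - red) / (nir + red)
lai_rtm = 3.5*ndvi + 0.1*age + rng.normal(0, 0.4, n)  # stand-in for RTM output

# EucVI-style calibration: index plus auxiliary predictors -> LAI
X = np.column_stack([ndvi, age,
                     np.cos(2*np.pi*doy/365), np.sin(2*np.pi*doy/365)])
reg = LinearRegression().fit(X, lai_rtm)
pred = reg.predict(X)
print("R2 =", round(r2_score(lai_rtm, pred), 2),
      "RMSE =", round(mean_squared_error(lai_rtm, pred)**0.5, 2))
```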
Abstract:
Brazil is the largest sugarcane producer in the world and is well positioned to supply national and international markets. To maintain high sugarcane production, it is fundamental to improve crop-season forecasting models through the use of alternative technologies, such as remote sensing. The main purpose of this article is therefore to assess the results of two different statistical forecasting methods applied to an agroclimatic index (the water requirement satisfaction index, WRSI) and to the sugarcane spectral response (the normalized difference vegetation index, NDVI) registered on National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA-AVHRR) satellite images. We also evaluated the cross-correlation between these two indices. According to the results obtained, there are meaningful correlations between NDVI and WRSI at certain time lags. Additionally, the adjusted model for NDVI produced more accurate results than the forecasting models for WRSI. Finally, the analyses indicate that NDVI is more predictable due to its seasonality, whereas the WRSI values are more variable, making them harder to forecast.
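The lagged cross-correlation between the two indices can be sketched directly; the series below are synthetic monthly stand-ins in which NDVI responds to WRSI with a two-step delay.

```python
import numpy as np

def lagged_crosscorr(x, y, max_lag=12):
    # Pearson correlation between x_{t+lag} and y_t for each lag;
    # a positive peak lag means x responds to y with that delay.
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y)-lag]
        else:
            a, b = x[:lag], y[-lag:]
        out[lag] = float(np.mean(a * b))
    return out

rng = np.random.default_rng(5)
wrsi = rng.normal(size=120)                          # synthetic monthly WRSI
ndvi = np.roll(wrsi, 2) + 0.5*rng.normal(size=120)   # NDVI lags by ~2 months
cc = lagged_crosscorr(ndvi, wrsi)
best = max(cc, key=cc.get)
print("peak lag:", best, "correlation:", round(cc[best], 2))
```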
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Monitoring water-level networks can provide information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of the results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
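A minimal sketch of the framework: a simple AR(1) time-series model fitted per monitoring well, with its parameters interpolated spatially. Inverse-distance weighting stands in for the geostatistical (kriging) step, and all coordinates and levels are synthetic.

```python
import numpy as np

def fit_ar1(levels):
    # AR(1) water-level model h_t = c + phi*h_{t-1}, fitted by least squares.
    X = np.column_stack([np.ones(len(levels)-1), levels[:-1]])
    c, phi = np.linalg.lstsq(X, levels[1:], rcond=None)[0]
    return c, phi

def idw(values_at_wells, well_xy, grid_xy, power=2.0):
    # Inverse-distance weighting: a simple stand-in for kriging.
    d = np.linalg.norm(grid_xy[:, None] - well_xy[None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9)**power
    return (w @ values_at_wells) / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(6)
well_xy = rng.uniform(0, 10, (8, 2))                   # well locations, km
series = [np.cumsum(rng.normal(size=60))*0.1 + 650     # monthly levels, m
          for _ in range(8)]
params = np.array([fit_ar1(s) for s in series])        # (c, phi) per well
grid_xy = np.array([[2.0, 3.0], [7.5, 8.0]])           # unmonitored points
print(idw(params, well_xy, grid_xy))                   # interpolated (c, phi)
```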
Abstract:
This work investigates the sunspot number and Southern Oscillation Index (SOI) signals recorded in tree ring time series from three different locations in Brazil: Humaitá in Amazonas State, Porto Ferreira in São Paulo State, and Passo Fundo in Rio Grande do Sul State, using wavelet and cross-wavelet analysis techniques. The wavelet spectra of the tree ring time series showed signals with periods of 11 and 22 years, possibly related to solar activity, and periods of 2-8 years, possibly related to El Niño events. The cross-wavelet spectra for all tree ring time series from Brazil present a significant response to the 11-year solar cycle in the interval from 1921 until after 1981. These tree ring time series also respond to the second harmonic of the solar cycle (5.5 years), but in different time intervals. The cross-wavelet maps also showed that the relationship between the SOI and the tree ring time series is strongest for oscillations in the 4-8 year range.
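A sketch of the cross-wavelet computation, assuming PyWavelets' continuous wavelet transform with a Morlet wavelet; the synthetic series share an 11-year cycle as a stand-in for the sunspot and tree ring records, and no significance testing is done here.

```python
import numpy as np
import pywt  # PyWavelets

def cross_wavelet(x, y, scales, wavelet="morl", dt=1.0):
    # Cross-wavelet spectrum W_xy = W_x * conj(W_y); its modulus is large
    # where the two series share power at the same period.
    wx, _ = pywt.cwt(x, scales, wavelet, sampling_period=dt)
    wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=dt)
    return wx * np.conj(wy)

rng = np.random.default_rng(7)
t = np.arange(120)                                   # years
sunspots = np.sin(2*np.pi*t/11) + 0.3*rng.normal(size=120)
rings = 0.8*np.sin(2*np.pi*t/11 + 0.5) + 0.5*rng.normal(size=120)

scales = np.arange(2, 40)
Wxy = cross_wavelet(sunspots, rings, scales)
periods = 1.0 / pywt.scale2frequency("morl", scales)  # years, for dt = 1
peak = periods[np.argmax(np.abs(Wxy).mean(axis=1))]
print("peak shared period (years):", round(float(peak), 1))
```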
Abstract:
Background: A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to analyze gene regulatory interactions using the Boolean network model and time-series data. The Boolean network is restricted in the sense that only a subset of all possible Boolean functions is considered. We explore some mathematical properties of the restricted Boolean networks in order to avoid a full search approach. The problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it. Results: We applied the proposed algorithm to two data sets. First, we used an artificial dataset obtained from a model of the budding yeast cell cycle. The second data set is derived from experiments performed using HeLa cells. The results show that some interactions can be fully or, at least, partially determined under the Boolean model considered. Conclusions: The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series gene expression data, and this inference process can be aided by a priori knowledge where available.
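A toy version of the restricted-search idea: instead of all 2^(2^k) Boolean functions of k regulators, only monotone AND/OR rules over signed regulators are enumerated and checked against the time series. The restriction class is an assumption for illustration; the paper's CSP formulation is not reproduced. When more than one rule survives, the interaction is only partially determined, as in the abstract.

```python
from itertools import product

def consistent_functions(states, target, regulators):
    """Enumerate restricted Boolean rules for gene `target` that reproduce
    the time series `states` (a list of 0/1 tuples). The restriction
    (assumed here): each regulator acts as activator (+1) or inhibitor
    (-1), and literals combine by AND or OR."""
    found = []
    for signs in product((1, -1), repeat=len(regulators)):
        for combine in (all, any):
            def rule(s, signs=signs, combine=combine):
                lits = [(s[r] == 1) if sg == 1 else (s[r] == 0)
                        for r, sg in zip(regulators, signs)]
                return 1 if combine(lits) else 0
            if all(rule(states[t]) == states[t+1][target]
                   for t in range(len(states) - 1)):
                found.append((signs, combine.__name__))
    return found

# Toy 3-gene time series in which gene 2 behaves like AND(gene0, NOT gene1)
states = [(1, 0, 0), (1, 0, 1), (1, 1, 1), (1, 1, 0), (1, 0, 0), (1, 0, 1)]
print(consistent_functions(states, target=2, regulators=[0, 1]))
# Two rules survive, so the interaction is only partially determined.
```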
Abstract:
This work was supported by the Brazilian agencies FAPESP, CAPES and CNPq.