108 results for Non Linear Time Series
Abstract:
Sub-pixel classification is essential for the successful description of many land cover (LC) features with spatial resolution less than the size of the image pixels. A commonly used approach for sub-pixel classification is the linear mixture model (LMM). Although LMMs have shown acceptable results, in practice truly linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process in which the endmembers are first derived from an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimation in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed as input to a multi-layer perceptron (MLP) architecture to train the neurons, which further refine the abundance estimates to account for the non-linear nature of the mixing classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared to actual class proportions, indicating that the individual class abundances obtained from AL-NLMM are very close to the real observations.
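The linear second step of AL-NLMM (abundance estimation via the LMM) can be sketched as follows. This is a minimal illustration with synthetic two-class data and a crude constraint-enforcement step, not the paper's actual pipeline; the automated endmember extraction and the MLP refinement stage are not shown.

```python
import numpy as np

def lmm_abundances(pixel, endmembers):
    """Estimate per-class abundance fractions for one pixel under a
    linear mixture model (LMM): pixel ~ endmembers @ fractions.
    `endmembers` has shape (n_bands, n_classes)."""
    # Unconstrained least-squares estimate of the fractions.
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    # Crude enforcement of the physical constraints (non-negativity,
    # sum-to-one); real unmixing codes use constrained solvers.
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# Two synthetic endmember spectra over 5 bands (stand-ins for the
# paper's 200-band hyperspectral data).
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.7, 0.3],
              [0.2, 0.8],
              [0.1, 0.9]])
true_f = np.array([0.3, 0.7])
pixel = E @ true_f          # a perfectly linear mixture
est = lmm_abundances(pixel, E)
```

For a perfectly linear mixture the least-squares step recovers the true fractions exactly; the paper's MLP stage exists precisely to correct the residual error this step leaves on real, non-linearly mixed pixels.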
Abstract:
DC testing of parametric faults in non-linear analog circuits, based on a new transformation termed the V-Transform that acts on the polynomial coefficient expansion of the circuit function, is presented. The V-Transform serves the dual purpose of monotonizing the polynomial coefficients of the circuit function expansion and increasing the sensitivity of these coefficients to circuit parameters. The sensitivity of V-Transform Coefficients (VTC) to circuit parameters is up to 3x-5x higher than that of the polynomial coefficients. As a case study, we consider a benchmark elliptic filter to validate our method. The technique is shown to uncover hitherto untestable parametric faults whose sizes are smaller than 10% of the nominal values.
Abstract:
In the present article we take up the study of nonlinear-localization-induced base isolation of a 3-degree-of-freedom system having cubic nonlinearities under sinusoidal base excitation. The damping forces in the system are described by functions of the fractional derivative of the instantaneous displacements; linear and quadratic damping are considered here separately. Under the assumption of smallness of certain system parameters and nonlinear terms, an approximate estimate of the response at each degree of freedom of the system is obtained by the Method of Multiple Scales. We then consider a similar system in which the nonlinear terms and certain other parameters are no longer small. Direct numerical simulation is used to obtain the amplitude plot in the frequency domain for this case, which helps us establish the efficacy of this method of base isolation for a broad class of systems. Base isolation obtained this way has no counterpart in the linear theory.
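The direct-numerical-simulation route mentioned above can be illustrated on a much simpler stand-in: a single-degree-of-freedom Duffing-type oscillator with cubic stiffness under sinusoidal forcing, integrated with RK4 and swept over frequency to build an amplitude plot. All parameter values here are hypothetical, and the fractional-derivative damping of the actual 3-DOF system is replaced by plain viscous damping.

```python
import numpy as np

def duffing_response(omega, c=0.1, k=1.0, alpha=0.5, A=0.3,
                     dt=0.01, n_steps=60000):
    """Steady-state amplitude of a single-DOF cubic (Duffing-type)
    oscillator under sinusoidal excitation, integrated with RK4:
        x'' + c x' + k x + alpha x^3 = A sin(omega t)
    A 1-DOF stand-in for the paper's 3-DOF base-excited system."""
    def f(t, y):
        x, v = y
        return np.array([v, A*np.sin(omega*t) - c*v - k*x - alpha*x**3])
    y = np.zeros(2)
    xs = []
    for i in range(n_steps):
        t = i * dt
        k1 = f(t, y)
        k2 = f(t + dt/2, y + dt/2*k1)
        k3 = f(t + dt/2, y + dt/2*k2)
        k4 = f(t + dt, y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        if i > n_steps // 2:          # discard the transient
            xs.append(y[0])
    return max(abs(min(xs)), abs(max(xs)))

# Sweep the excitation frequency to build a coarse amplitude plot.
amps = {w: duffing_response(w) for w in (0.6, 1.0, 1.4)}
```

Refining the frequency grid around resonance reproduces the bent (hardening) response curve typical of cubic nonlinearities, which is the kind of frequency-domain amplitude plot the abstract refers to.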
Abstract:
A class of linear time-varying discrete systems is considered, and closed-form solutions are obtained in different cases. Some comments on stability are also included.
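For the discrete-time case, the generic closed-form solution is built from the state-transition matrix, the ordered product of the time-varying system matrices. The sketch below is a general illustration of that construction, not the specific system classes treated in the paper.

```python
import numpy as np

def state_transition(A_seq):
    """State-transition matrix Phi(k, 0) = A[k-1] ... A[1] A[0] for the
    linear time-varying discrete system x[k+1] = A[k] x[k], so that
    x[k] = Phi(k, 0) x[0] in closed form."""
    Phi = np.eye(A_seq[0].shape[0])
    for Ak in A_seq:
        Phi = Ak @ Phi          # left-multiply in time order
    return Phi

# Example: a 1-D time-varying system x[k+1] = (1 + 0.1 k) x[k].
A_seq = [np.array([[1.0 + 0.1 * k]]) for k in range(4)]
x0 = np.array([2.0])
xk = state_transition(A_seq) @ x0   # closed-form solution at k = 4
```

In the scalar example the product telescopes to 2 · 1.0 · 1.1 · 1.2 · 1.3 = 3.432, matching a direct step-by-step iteration of the recursion.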
Abstract:
Investigations of the switching behaviour of arsenic-tellurium glasses with Ge or Al additives yield interesting information about the dependence of switching on network rigidity, the coordination of the constituents, the glass transition and ambient temperatures, and glass-forming ability.
Abstract:
Diffuse optical tomography (DOT) is one of the ways to probe highly scattering media such as tissue using low-energy near-infrared (NIR) light to reconstruct a map of the optical property distribution. The interaction of photons with biological tissue is a non-linear process, and photon transport through the tissue is modelled using diffusion theory. The inversion problem is often solved through iterative methods based on nonlinear optimization for the minimization of a data-model misfit function. The solution of the non-linear problem can be improved by modeling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, and after minimization the cost functional reduces to the linear system Ax = b. The spatial distribution of the optical parameter can be obtained by solving the above equation iteratively for x. As the problem is non-linear, ill-posed and ill-conditioned, there will be an error or correction term for x at each iteration. A linearization strategy is proposed for the solution of the nonlinear ill-posed inverse problem by a linear combination of the system matrix and the error in the solution. By propagating the error (e) information (obtained from the previous iteration) to the minimization function f(x), we can rewrite the minimization function as f(x; e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x; e) = f(x) + e^T A e. The self-guided spatially weighted prior error term (e^T A e, where e is the error in estimating x) along the principal nodes facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes the side lobes, thereby improving the contrast, localization and resolution of the reconstructed image, which has not been possible with conventional linear and regularization algorithms.
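The inner step the abstract describes (minimizing a quadratic cost by solving Ax = b iteratively) is commonly done with conjugate gradients. Below is a minimal CG sketch on a tiny symmetric positive-definite stand-in for the (huge, ill-conditioned) DOT system matrix; the error-propagation term e^T A e and the spatial prior are not modelled here.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=50, tol=1e-10):
    """Solve the reduced linear system A x = b iteratively (A symmetric
    positive definite), i.e. minimize the quadratic cost functional from
    which it arises, as in each linearized DOT update."""
    x = np.zeros_like(b)
    r = b - A @ x               # residual
    p = r.copy()                # search direction
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# A small SPD stand-in for the DOT system matrix and data vector.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In the full DOT problem this linear solve is wrapped in an outer nonlinear iteration that re-linearizes the forward model and, in the abstract's scheme, feeds the previous iteration's error e back into the cost functional.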
Abstract:
Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called `early warning signals', and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data.
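Two of the most widely used early warning indicators are rising variance and rising lag-1 autocorrelation in a sliding window, both consequences of critical slowing down. The sketch below computes them on a simulated series whose recovery rate decays toward zero; the window length, noise level, and drift schedule are all illustrative choices, not values from the study.

```python
import numpy as np

def rolling_ews(x, window=100):
    """Two classic early-warning indicators computed over a sliding
    window: variance and lag-1 autocorrelation. Both tend to rise as a
    system approaches a critical transition (critical slowing down)."""
    var, ac1 = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# Simulated series whose recovery rate shrinks to zero, mimicking a
# system drifting toward a tipping point.
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    lam = 0.5 * (1 - t / n)              # recovery rate -> 0
    x[t] = (1 - lam) * x[t - 1] + rng.normal(0, 0.1)
var, ac1 = rolling_ews(x)
```

Plotting `var` and `ac1` against time shows both indicators trending upward well before the recovery rate reaches zero, which is exactly the kind of signal the methods surveyed in the abstract look for.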
Abstract:
This work aims at the dimensional reduction of non-linear isotropic hyperelastic plates in an asymptotically accurate manner. The problem is both geometrically and materially non-linear. The geometric non-linearity is handled by allowing for finite deformations and generalized warping, while the material non-linearity is incorporated through a hyperelastic material model. The development, based on the Variational Asymptotic Method (VAM) with moderate strains and a very small ratio of thickness to the shortest wavelength of the deformation along the plate reference surface as small parameters, begins with three-dimensional (3-D) non-linear elasticity and mathematically splits the analysis into a one-dimensional (1-D) through-the-thickness analysis and a two-dimensional (2-D) plate analysis. The major contributions of this paper are the derivation of closed-form analytical expressions for warping functions and stiffness coefficients and a set of recovery relations to express approximately the 3-D displacement, strain and stress fields. Consistent with the 2-D non-linear constitutive laws, a 2-D plate theory and a corresponding finite element program have been developed. The present theory is validated against a standard test case and the results match well. Distributions of 3-D results are provided for another test case.
Abstract:
Time series classification deals with the classification of data that is multivariate in nature, meaning that one or more of the attributes is in the form of a sequence. The notion of similarity or distance used in time series data is significant and affects the accuracy, time, and space complexity of the classification algorithm. Numerous similarity measures exist for time series data, but each has its own disadvantages. Instead of relying upon a single similarity measure, our aim is to find a near-optimal solution to the classification problem by combining different similarity measures. In this work, we use genetic algorithms to combine the similarity measures so as to get the best performance. The weights given to the different similarity measures evolve over a number of generations so as to obtain the best combination. We test our approach on a number of benchmark time series datasets and present promising results.
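The idea of evolving a weighted combination of similarity measures can be sketched in a toy form: two simple distance measures, a leave-one-out 1-NN fitness, and a bare-bones selection-plus-mutation loop. Everything here is a hypothetical stand-in (the measures, the dataset, the GA parameters), not the paper's actual algorithm or benchmarks.

```python
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a - b)

def diff_dist(a, b):
    # Distance between first differences: a crude shape-based measure
    # standing in for measures such as DTW.
    return np.linalg.norm(np.diff(a) - np.diff(b))

def combined(a, b, w):
    return w[0] * euclidean(a, b) + w[1] * diff_dist(a, b)

def nn_accuracy(X, y, w):
    """Leave-one-out 1-NN accuracy under the weighted distance: the
    fitness function the GA optimizes."""
    correct = 0
    for i in range(len(X)):
        idx = [j for j in range(len(X)) if j != i]
        d = [combined(X[i], X[j], w) for j in idx]
        correct += y[idx[int(np.argmin(d))]] == y[i]
    return correct / len(X)

def evolve_weights(X, y, pop=20, gens=10, seed=0):
    """Toy genetic loop: keep the fittest weight vectors, mutate them."""
    rng = np.random.default_rng(seed)
    w = rng.random((pop, 2))
    for _ in range(gens):
        fit = np.array([nn_accuracy(X, y, wi) for wi in w])
        parents = w[np.argsort(fit)[-pop // 2:]]                # selection
        children = parents + rng.normal(0, 0.1, parents.shape)  # mutation
        w = np.abs(np.vstack([parents, children]))
    fit = np.array([nn_accuracy(X, y, wi) for wi in w])
    return w[int(np.argmax(fit))]

# Tiny two-class dataset: noisy sine waves vs. noisy ramps.
gen = np.random.default_rng(1)
t = np.linspace(0, 1, 30)
X = np.array([np.sin(2*np.pi*t) + gen.normal(0, 0.05, 30) for _ in range(6)]
             + [t + gen.normal(0, 0.05, 30) for _ in range(6)])
y = np.array([0]*6 + [1]*6)
best_w = evolve_weights(X, y)
```

A real implementation would use crossover as well as mutation, more measures (DTW, edit-based distances, etc.), and proper train/test splits on benchmark datasets, but the structure — weights as the genome, classifier accuracy as fitness — is the same.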