917 results for system parameter identification
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used, owing to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
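For linear-in-the-parameter models, model selection criteria such as cross-validation can be computed cheaply in closed form. As a minimal sketch (toy data, Gaussian RBF regressors and a ridge regulariser, all hypothetical, not taken from the article), the leave-one-out PRESS statistic can be used to pick a regularisation parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: y = sin(u) observed with noise
u = rng.uniform(-3, 3, 60)
y = np.sin(u) + 0.1 * rng.standard_normal(60)

# Linear-in-the-parameter model: Gaussian RBF regressors on a fixed grid
centres = np.linspace(-3, 3, 10)
Phi = np.exp(-(u[:, None] - centres[None, :]) ** 2 / 0.5)

def loo_press(Phi, y, lam):
    """Leave-one-out PRESS for a ridge-regularised least-squares fit."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    H = Phi @ np.linalg.solve(A, Phi.T)          # smoother ("hat") matrix
    resid = y - H @ y
    # LOO residual shortcut: e_i / (1 - H_ii)
    return float(np.sum((resid / (1.0 - np.diag(H))) ** 2))

lambdas = [1e-6, 1e-4, 1e-2, 1.0, 100.0]
press = [loo_press(Phi, y, lam) for lam in lambdas]
best_lam = lambdas[int(np.argmin(press))]        # criterion-selected regulariser
```

The hat-matrix shortcut avoids refitting the model once per left-out point, which is what makes cross-validation-based criteria practical for this model class.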
Abstract:
The recursive least-squares algorithm with a forgetting factor has been extensively applied and studied for the on-line parameter estimation of linear dynamic systems. This paper explores the use of genetic algorithms to improve the performance of the recursive least-squares algorithm in the parameter estimation of time-varying systems. Simulation results show that the hybrid recursive algorithm (GARLS), combining recursive least-squares with genetic algorithms, can achieve better results than the standard recursive least-squares algorithm using only a forgetting factor.
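A minimal sketch of the underlying recursive least-squares update with a forgetting factor (the genetic-algorithm hybridisation of GARLS is not shown; the first-order example system and all values are hypothetical):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)      # gain vector
    e = y - phi @ theta                # a-priori prediction error
    theta = theta + k * e
    P = (P - np.outer(k, Pphi)) / lam  # covariance update with forgetting
    return theta, P

# Hypothetical test system: y(t) = 0.8 y(t-1) + 0.5 u(t-1) + noise
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(501)
for t in range(500):
    y[t + 1] = 0.8 * y[t] + 0.5 * u[t] + 0.01 * rng.standard_normal()

theta, P = np.zeros(2), 1e4 * np.eye(2)
for t in range(500):
    theta, P = rls_step(theta, P, np.array([y[t], u[t]]), y[t + 1])
```

The forgetting factor lam < 1 discounts old data so the estimator can track time-varying parameters; the hybrid in the paper uses a genetic algorithm on top of this recursion.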
Abstract:
A novel partitioned least squares (PLS) algorithm is presented, in which estimates from several simple system models are combined by means of a Bayesian methodology of pooling partial knowledge. The method has the added advantage that, when the simple models are of a similar structure, it lends itself directly to parallel processing procedures, thereby speeding up the entire parameter estimation process by several factors.
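The pooling step can be illustrated by combining partial estimates with weights given by their inverse covariances, a standard Bayesian combination rule; the data, noise level and partition sizes below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([2.0, -1.0])

def partial_estimate(n):
    """Least-squares estimate and its covariance from one data partition."""
    X = rng.standard_normal((n, 2))
    y = X @ theta_true + 0.2 * rng.standard_normal(n)
    XtX = X.T @ X
    theta_hat = np.linalg.solve(XtX, X.T @ y)
    cov = 0.2**2 * np.linalg.inv(XtX)   # noise variance assumed known here
    return theta_hat, cov

# Estimates from three partitions (these could run in parallel)
estimates = [partial_estimate(100) for _ in range(3)]

# Bayesian pooling: precision-weighted combination of the partial estimates
precisions = [np.linalg.inv(c) for _, c in estimates]
P_total = sum(precisions)
theta_pooled = np.linalg.solve(
    P_total, sum(Pi @ th for (th, _), Pi in zip(estimates, precisions)))
```

Because each partition is processed independently, the per-partition fits parallelise trivially, which is the speed-up the abstract refers to.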
Abstract:
A simple and effective algorithm is introduced for the system identification of the Wiener system based on observational input/output data. The B-spline neural network is used to approximate the nonlinear static function in the Wiener system. We combine the Gauss-Newton algorithm with the De Boor algorithm (for both the curve and its first-order derivatives) for the parameter estimation of the Wiener model, together with the use of a parameter initialization scheme. The efficacy of the proposed approach is demonstrated using an illustrative example.
Abstract:
In this article a simple and effective algorithm is introduced for the system identification of the Wiener system using observational input/output data. The nonlinear static function in the Wiener system is modelled using a B-spline neural network. The Gauss–Newton algorithm is combined with the De Boor algorithm (for both the curve and its first-order derivatives) for the parameter estimation of the Wiener model, together with the use of a parameter initialisation scheme. Numerical examples are utilised to demonstrate the efficacy of the proposed approach.
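A rough sketch of the forward (simulation) direction of such a Wiener model: a first-order linear dynamic block followed by a static nonlinearity expressed in a B-spline basis, evaluated here with the Cox-de Boor recursion (the basis recursion only, not the full De Boor algorithm with derivatives used in the papers; the knot vector, weights and linear block are arbitrary placeholders):

```python
import numpy as np

def bspline_basis(knots, degree, i, x):
    """Cox-de Boor recursion for the i-th B-spline basis of given degree."""
    if degree == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + degree] > knots[i]:
        left = ((x - knots[i]) / (knots[i + degree] - knots[i])
                * bspline_basis(knots, degree - 1, i, x))
    right = 0.0
    if knots[i + degree + 1] > knots[i + 1]:
        right = ((knots[i + degree + 1] - x) / (knots[i + degree + 1] - knots[i + 1])
                 * bspline_basis(knots, degree - 1, i + 1, x))
    return left + right

# Wiener model: linear dynamic block, then a static B-spline nonlinearity.
degree = 2
knots = np.linspace(-4, 4, 12)          # uniform knot vector (placeholder)
n_basis = len(knots) - degree - 1
w = np.sin(np.arange(n_basis))          # illustrative spline weights

def wiener_output(u):
    v, y = 0.0, []
    for ut in u:
        v = 0.5 * v + ut                # hypothetical linear dynamic block
        y.append(sum(w[i] * bspline_basis(knots, degree, i, v)
                     for i in range(n_basis)))
    return np.array(y)

y = wiener_output(np.random.default_rng(3).uniform(-1, 1, 50))
```

Identification would then adjust both the linear-block coefficients and the spline weights w, which is where the Gauss-Newton iteration in the papers comes in.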
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. The techniques of identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, owing to a number of difficulties, mainly: (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iteratively updating the initial estimates, via sensitivity analysis, using the eigenvalues, or both the eigenvalues and eigenvectors, of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined from the frequency response data of the unmodified structure by a structural modification technique; thus, mass or stiffness does not have to be added physically.
The mass-stiffness addition technique is demonstrated by simulation examples and Laboratory experiments on beams and an H-frame.
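The core idea of the perturbation approach can be seen on a single-degree-of-freedom toy problem: measuring the natural frequency before and after adding a known mass yields two equations that determine both unknown parameters exactly (all values here are hypothetical, and the thesis of course treats multi-degree-of-freedom models):

```python
import math

# Unknown "true" parameters of a hypothetical single-DOF structure
m_true, k_true = 2.0, 800.0
dm = 0.5                                  # known added mass

# "Measured" natural frequencies (rad/s) before and after the mass addition
w1 = math.sqrt(k_true / m_true)
w2 = math.sqrt(k_true / (m_true + dm))

# k = m*w1^2 and k = (m + dm)*w2^2  =>  two equations in the two unknowns
m_id = dm * w2**2 / (w1**2 - w2**2)
k_id = m_id * w1**2
```

With exact data the recovery is exact, mirroring the thesis's claim that a suitable choice of perturbation identifies exact parameters when data and model structure are exact.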
Abstract:
With the main focus on safety, design of structures for vibration serviceability is often overlooked or mismanaged, resulting in some high-profile structures failing publicly to perform adequately under human dynamic loading due to walking, running or jumping. A standard tool to inform better design, to prove fitness for purpose before entering service, and to design retrofits is modal testing, a procedure that typically involves acceleration measurements using an array of wired sensors and force generation using a mechanical shaker. A critical but often overlooked aspect is using input (force) to output (response) relationships to enable estimation of modal mass, a key parameter directly controlling vibration levels in service.
This paper describes the use of wireless inertial measurement units (IMUs), designed for biomechanics motion capture applications, for the modal testing of a 109 m footbridge. The IMUs were first used for an output-only vibration survey to identify mode frequencies, shapes and damping ratios, and then for simultaneous measurement of the body accelerations of a human subject jumping to excite specific vibration modes and of the bridge deck accelerations at the jumping location. Using the mode shapes and the vertical acceleration data from a suitable body landmark scaled by body mass, thus providing jumping force data, it was possible to create frequency response functions and estimate modal masses.
The modal mass estimates for this bridge were checked against estimates obtained using an instrumented hammer and known mass distributions, showing consistency among the experimental estimates. Finally, the method was used in an applied research application on a short-span footbridge, where the logistical and operational simplicity afforded by the highly portable and easy-to-use IMUs proved extremely useful for an efficient evaluation of vibration serviceability, including estimation of modal masses.
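The modal-mass estimation step rests on a standard single-mode result: at resonance, the magnitude of a mode's accelerance frequency response function is approximately 1/(2*zeta*m). A sketch with hypothetical modal parameters (not the values from the tested footbridge):

```python
import numpy as np

# Hypothetical SDOF mode: modal mass m, damping ratio zeta, natural freq wn
m, zeta, wn = 40_000.0, 0.01, 2 * np.pi * 2.0   # e.g. a mode near 2 Hz
k, c = m * wn**2, 2 * zeta * m * wn

# Accelerance FRF: A(w) = -w^2 / (k - m w^2 + i c w)
w = np.linspace(0.5 * wn, 1.5 * wn, 20001)
A = -w**2 / (k - m * w**2 + 1j * c * w)

# Modal mass from the resonance peak: |A(wn)| ~= 1 / (2 zeta m)
peak = np.max(np.abs(A))
m_est = 1.0 / (2 * zeta * peak)
```

In practice the FRF is estimated from measured jumping force and deck acceleration rather than computed analytically, but the peak-magnitude relation used to extract modal mass is the same.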
Abstract:
A susceptible-infective-recovered (SIR) epidemiological model based on a probabilistic cellular automaton (PCA) is employed for simulating the temporal evolution of the registered cases of chickenpox in Arizona, USA, between 1994 and 2004. At each time step, every individual is in one of the states S, I, or R. The parameters of this model are the probabilities of each individual (each cell forming the PCA lattice) passing from one state to another. Here, the values of these probabilities are identified by using a genetic algorithm. If the parameters are allowed to take unrealistic values, the predictions agree better with the historical series than if the parameters are forced to take realistic values. A discussion of how the size of the PCA lattice affects the quality of the model predictions is presented. Copyright (C) 2009 L. H. A. Monteiro et al.
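The identification step pairs a simulation model with a genetic algorithm that searches for the transition probabilities best reproducing a target series. As a minimal stand-in (a mean-field SIR recursion rather than a PCA lattice, with a toy GA; all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the identification task: a mean-field SIR recursion whose
# two probabilities (p_inf, p_cure) are to be recovered from a target series.
def run_model(p_inf, p_cure, steps=30):
    s, i = 0.99, 0.01
    series = []
    for _ in range(steps):
        new_inf = p_inf * s * i
        s, i = s - new_inf, i + new_inf - p_cure * i
        series.append(i)
    return np.array(series)

target = run_model(0.9, 0.2)            # hypothetical "historical series"

def fitness(ind):                       # negative squared prediction error
    return -float(np.sum((run_model(ind[0], ind[1]) - target) ** 2))

# Minimal genetic algorithm: tournament selection plus Gaussian mutation
lo, hi = np.array([0.05, 0.01]), np.array([1.0, 0.9])
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    idx = rng.integers(0, 40, size=(40, 2))
    winners = np.where(scores[idx[:, 0]] >= scores[idx[:, 1]],
                       idx[:, 0], idx[:, 1])
    pop = np.clip(pop[winners] + rng.normal(0, 0.05, (40, 2)), lo, hi)

best = pop[np.argmax([fitness(ind) for ind in pop])]
```

The papers evaluate candidate probabilities on the PCA lattice itself; the GA machinery (a population of candidate parameter vectors, selection by fitness, mutation) is the same.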
Abstract:
Identification, prediction, and control of a system are engineering subjects, regardless of the nature of the system. Here, the temporal evolution of the number of individuals with dengue fever recorded weekly in the city of Rio de Janeiro, Brazil, during 2007 is used to identify SIS (susceptible-infective-susceptible) and SIR (susceptible-infective-removed) models formulated in terms of cellular automata (CA). In the identification process, a genetic algorithm (GA) is utilized to find the probabilities of the state transition S -> I capable of reproducing, in the CA lattice, the historical series of 2007. These probabilities depend on the number of infective neighbors. Time-varying and non-time-varying probabilities, three different lattice sizes, and two kinds of coupling topology among the cells are taken into consideration. These epidemiological models, built by combining CA and GA, are then employed for predicting the cases of sick persons in 2008. Such models can be useful for forecasting and controlling the spread of this infectious disease.
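One synchronous update of a probabilistic CA lattice of this kind might look as follows (von Neumann neighbourhood with periodic boundaries; the fixed probabilities are arbitrary placeholders standing in for the GA-identified, possibly time-varying ones in the papers):

```python
import numpy as np

rng = np.random.default_rng(4)
S, I, R = 0, 1, 2
N = 50
lattice = np.zeros((N, N), dtype=int)
lattice[rng.random((N, N)) < 0.01] = I          # seed a few infectives

def step(lat, p_cure=0.1, p_per_neighbour=0.2):
    """One synchronous PCA update (placeholder probabilities)."""
    new = lat.copy()
    inf = (lat == I).astype(int)
    # number of infective von Neumann neighbours, periodic boundaries
    n_inf = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0) +
             np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
    r = rng.random(lat.shape)
    p_inf = 1.0 - (1.0 - p_per_neighbour) ** n_inf   # S -> I probability
    new[(lat == S) & (r < p_inf)] = I
    new[(lat == I) & (r < p_cure)] = R
    return new

history = [int((lattice == I).sum())]
for _ in range(30):
    lattice = step(lattice)
    history.append(int((lattice == I).sum()))
```

The S -> I probability grows with the number of infective neighbours, which is the neighbour-dependence the identification targets; an SIS variant would send cured cells back to S instead of R.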
Abstract:
The dynamics of a dissipative vibro-impact system called the impact-pair is investigated. This system is similar to the Fermi-Ulam accelerator model and consists of an oscillating one-dimensional box containing a point mass moving freely between successive inelastic collisions with the rigid walls of the box. In our numerical simulations, we observed multistable regimes for which the corresponding basins of attraction present a quite complicated structure with smooth boundaries. In addition, we characterize the system in a two-dimensional parameter space by using the largest Lyapunov exponents, identifying self-similar periodic sets. Copyright (C) 2009 Silvio L.T. de Souza et al.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1 the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator which augments the V-theta state vector to include the suspicious parameters. Stage 3 enables the validation of the estimation obtained in Stage 2, and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
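The Identification Index of a branch is a simple ratio and can be sketched directly (the threshold and the residual values below are hypothetical):

```python
import numpy as np

def identification_index(adjacent_normalized_residuals, threshold=3.0):
    """II of a branch: fraction of measurements adjacent to the branch whose
    normalized residual magnitude exceeds the threshold."""
    r = np.abs(np.asarray(adjacent_normalized_residuals, dtype=float))
    return float(np.count_nonzero(r > threshold)) / r.size

# Hypothetical normalized residuals of measurements adjacent to two branches
ii_suspect = identification_index([4.1, 3.5, 0.8, 5.0])   # likely parameter error
ii_clean = identification_index([0.4, 1.2, 0.9, 2.1])
```

Branches with a high II would then be flagged for the Stage 2 augmented-state estimation.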
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other, and as a consequence part of the measurement errors is masked. For that purpose an innovation index (II), which quantifies the new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
Abstract:
This paper presents both the theoretical and the experimental approaches to the development of a mathematical model to be used in multi-variable control system designs of an active suspension for a sport utility vehicle (SUV), in this case a light pickup truck. A complete seven-degree-of-freedom model is quickly and successfully identified, with very satisfactory results in simulations and in real experiments conducted with the pickup truck. The novelty of the proposed methodology is the use of commercial software in the early stages of the identification to speed up the process and to minimize the need for a large number of costly experiments. The paper also presents major contributions to the identification of uncertainties in vehicle suspension models and to the development of identification methods using sequential quadratic programming, where an innovation regarding the calculation of the objective function is proposed and implemented. Results from simulations of, and practical experiments with, the real SUV are presented, analysed, and compared, showing the potential of the method.
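As a rough stand-in for the paper's sequential-quadratic-programming identification, the sketch below fits the parameters of a hypothetical single-degree-of-freedom suspension model by Gauss-Newton minimisation of the same kind of output-error objective (the real model has seven degrees of freedom, and all values here are invented):

```python
import numpy as np

# Hypothetical 1-DOF mass-spring-damper stand-in for the suspension model;
# identify k and c from a noiseless simulated free-decay response.
m, k_true, c_true = 300.0, 2.0e4, 1.2e3
dt, n = 0.001, 2000

def simulate(k, c):
    x, v, out = 1.0, 0.0, []
    for _ in range(n):
        a = -(k * x + c * v) / m       # free response, semi-implicit Euler
        v += a * dt
        x += v * dt
        out.append(x)
    return np.array(out)

y_meas = simulate(k_true, c_true)      # "measured" response

def identify(p, n_iter=15):
    """Gauss-Newton output-error fit with a finite-difference Jacobian."""
    for _ in range(n_iter):
        base = simulate(p[0], p[1])
        r = base - y_meas
        J = np.column_stack([
            (simulate(p[0] * 1.000001, p[1]) - base) / (p[0] * 1e-6),
            (simulate(p[0], p[1] * 1.000001) - base) / (p[1] * 1e-6)])
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

k_id, c_id = identify(np.array([1.95e4, 1.15e3]))
```

An SQP solver would handle the same sum-of-squares objective with explicit bounds and constraints on the parameters; the simulate-compare-update loop is the shared core of both approaches.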
Abstract:
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome as revealed by the RIKEN FANTOM2 project to identify novel human disease-related candidate genes. We define a new term, patholog, to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than just focus on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human-disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardio-vascular (4%), or other (14%) disorders. Conclusions: Large scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.