32 results for Neural networks training
Abstract:
This experimental study focuses on a detection system at the seismic station level that is intended to play a role similar to that of detection algorithms based on the STA/LTA ratio. We tested two types of neural network, Multi-Layer Perceptrons and Support Vector Machines, trained in supervised mode. The data set consisted of 2903 patterns extracted from records of the PVAQ station of the seismographic network of the Institute of Meteorology of Portugal. The spectral characteristics of the records and their variation in time were reflected in the input patterns, consisting of a set of power spectral density values at selected frequencies, extracted from a spectrogram computed over a record segment of pre-determined duration. The data set was divided, with about 60% used for training and the remainder reserved for testing and validation. To ensure that all patterns in the data set were within the range of variation of the training set, we used an algorithm that partitions the data by hyper-convex polyhedrons, thereby determining a set of patterns that must necessarily be part of the training set. Additionally, an active learning strategy was followed, iteratively incorporating poorly classified cases into the training set. The best results, in terms of sensitivity and selectivity over the whole data set, ranged between 98% and 100%, comparing very favorably with the roughly 50% obtained by the existing detection system.
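For context, here is a minimal sketch of the classical STA/LTA trigger that such a system is compared against; the window lengths, threshold and synthetic trace are illustrative choices, not values from the study.

import numpy as np

def sta_lta_trigger(signal, sta_len, lta_len, threshold):
    """Classical short-term/long-term average ratio trigger (illustrative sketch)."""
    energy = np.asarray(signal, dtype=float) ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)   # guard against division by zero
    return ratio > threshold

# Synthetic example: a short oscillatory burst buried in background noise
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 5000)
trace[3000:3200] += np.sin(np.linspace(0.0, 40.0 * np.pi, 200))
print(sta_lta_trigger(trace, sta_len=50, lta_len=1000, threshold=3.0).any())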
Abstract:
Multilayer perceptrons (MLPs) (1) are the most common artificial neural networks, employed in a wide range of applications. In control and signal processing applications, MLPs are mainly used as nonlinear mapping approximators. The most common training algorithm used with MLPs is the error back-propagation (BP) algorithm (1).
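As an illustration of the BP algorithm mentioned above, the sketch below trains a one-hidden-layer MLP by error back-propagation on the XOR problem; the layer sizes, learning rate and toy data are arbitrary choices for the example.

import numpy as np

# Minimal one-hidden-layer MLP trained by error back-propagation (illustrative).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]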
Abstract:
The normal design process for neural networks or fuzzy systems involves two different phases: the determination of the best topology, which can be seen as a system identification problem, and the determination of its parameters, which can be viewed as a parameter estimation problem. The latter, the determination of the model parameters (linear weights and interior knots), is the simpler task and is usually solved using gradient or hybrid schemes. The former, the determination of the topology, is an extremely complex task, especially when dealing with real-world problems.
Abstract:
In previous papers by the authors, fuzzy model identification methods were discussed. A bacterial algorithm for extracting a fuzzy rule base from a training set was presented, and the Levenberg-Marquardt algorithm was proposed for determining membership functions in fuzzy systems. In this paper the Levenberg-Marquardt technique is improved to optimise the membership functions in the fuzzy rules without a Ruspini partition. The class of membership functions investigated is the trapezoidal one, as it is sufficiently general and widely used. The method can easily be extended to arbitrary piecewise-linear functions as well.
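As a small illustration of the parameterisation being optimised, the sketch below implements a trapezoidal membership function; the breakpoints a, b, c, d are the quantities such a Levenberg-Marquardt scheme would adjust, and the numbers used here are illustrative only.

import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with breakpoints a <= b <= c <= d.

    Rises linearly on [a, b], equals 1 on [b, c], falls linearly on [c, d].
    """
    x = np.asarray(x, dtype=float)
    rise = (x - a) / max(b - a, 1e-12)
    fall = (d - x) / max(d - c, 1e-12)
    return np.clip(np.minimum(rise, fall), 0.0, 1.0)

# Membership degrees of a few crisp inputs in an illustrative fuzzy set
print(trapezoid([1.0, 2.5, 4.0, 6.0], a=1.0, b=2.0, c=4.0, d=5.0))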
Abstract:
This paper presents a comparison between a physical model and an artificial neural network (NN) model for temperature estimation inside a building room. Despite the obvious advantages of the physical model for structure optimisation purposes, this paper tests the performance of neural models for inside temperature estimation. The great advantage of the NN model is a substantial reduction of human effort and time, since there is no need to model the structural geometry and thermal capacities and then run simulations, which demand considerable human effort and computation time. The NN model treats this as a “black box” problem. We describe the use of Radial Basis Function (RBF) networks, the training method, and a multi-objective genetic algorithm for the optimisation/selection of the RBF neural network inputs and number of neurons.
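For illustration, a minimal Gaussian RBF regressor is sketched below; the centres are chosen at random and the linear output weights are fitted by least squares, while the multi-objective genetic selection of inputs and neurons described in the abstract is not reproduced. All data and parameters are made up for the example.

import numpy as np

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix: one column per centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (200, 1))              # illustrative scalar input
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.05, 200)  # toy target signal

centres = X[rng.choice(len(X), 10, replace=False)]  # randomly picked centres
Phi = rbf_design(X, centres, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # linear output weights

y_hat = Phi @ w
print(np.sqrt(np.mean((y - y_hat) ** 2)))           # training RMSE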
Abstract:
In modern measurement and control systems, the available time and resources are often not only limited but may also change during the operation of the system. In such cases, so-called anytime algorithms can be used advantageously. While different soft computing methods are widely used in system modeling, their usability in these cases is limited.
Abstract:
A gas turbine is made up of three basic components: a compressor, a combustion chamber and a turbine. Air is drawn into the engine by the compressor, which compresses it and delivers it to the combustion chamber. There, the air is mixed with the fuel and the mixture is ignited, producing a rise in temperature and therefore an expansion of the gases. These are expelled through the engine nozzle, but first pass through the turbine, which is designed to extract energy to keep the compressor rotating [1]. The work described here uses data recorded from a Rolls Royce Spey MK 202 turbine, whose simplified diagram can be seen in Fig. 1. Both the compressor and the turbine are split into low-pressure (LP) and high-pressure (HP) stages. The HP turbine drives the HP compressor and the LP turbine drives the LP compressor; they are connected by concentric shafts that rotate at different speeds, denoted NH and NL.
Abstract:
The presence of circulating cerebral emboli represents an increased risk of stroke. The detection of such emboli is possible with the use of a transcranial Doppler ultrasound (TCD) system.
Abstract:
This paper describes an extension of previous work on neural network proportional, integral and derivative (PID) autotuning. Basically, neural networks are employed to supply the three PID parameters, according to the integral of time multiplied by the absolute error (ITAE) criterion, to a standard PID controller.
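To make the ITAE criterion concrete, the sketch below simulates a discrete PID controller on a simple first-order plant and accumulates the integral of time multiplied by the absolute error; the plant model, gains and horizon are illustrative assumptions, not taken from the paper.

def itae_for_gains(kp, ki, kd, dt=0.01, horizon=10.0, tau=1.0, gain=2.0):
    """ITAE cost of a parallel-form PID on an illustrative first-order plant."""
    n = int(horizon / dt)
    y, integ, prev_e, itae = 0.0, 0.0, 0.0, 0.0
    for k in range(n):
        e = 1.0 - y                           # error against a unit step set-point
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv  # standard parallel-form PID law
        prev_e = e
        y += dt * (-y + gain * u) / tau       # first-order plant: tau*dy/dt = -y + K*u
        itae += (k * dt) * abs(e) * dt        # accumulate integral of t * |e(t)|
    return itae

# Illustrative gains only
print(itae_for_gains(kp=2.0, ki=1.5, kd=0.1))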
Abstract:
Proportional, Integral and Derivative (PID) controller autotuning is an important problem, in both practical and theoretical terms. The autotuning procedure must take place in real time, and therefore the corresponding optimisation procedure must also be executed in real time and without disturbing on-line control.
Abstract:
PID controllers are widely used in industrial applications. Because the plant can be time-variant, methods for autotuning this type of controller are of great economic importance, see (Astrom, 1996). Since 1942, with the work of Ziegler and Nichols (Ziegler and Nichols, 1942), several methods have been proposed in the literature. Recently, a new technique using neural networks was proposed (Ruano et al., 1992). This technique has been shown to produce good tunings as long as certain conditions are met.
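For reference, the classical Ziegler-Nichols closed-loop (ultimate-cycle) tuning rules cited above can be written as a small helper; ziegler_nichols_pid and its example inputs are illustrative, with ku and tu denoting the ultimate gain and oscillation period measured at the stability limit.

def ziegler_nichols_pid(ku, tu):
    """Classical Ziegler-Nichols closed-loop PID tuning rules (textbook values)."""
    kp = 0.6 * ku
    ti = 0.5 * tu                  # integral time
    td = 0.125 * tu                # derivative time
    return kp, kp / ti, kp * td    # parallel-form gains (Kp, Ki, Kd)

# Illustrative numbers only
print(ziegler_nichols_pid(ku=4.0, tu=2.0))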
Abstract:
A recent survey (1) has reported that the majority of industrial loops are controlled by PID-type controllers and that many of the PID controllers in operation are poorly tuned. Poor PID tuning is due to the lack of a simple and practical tuning method for average users, and to the tedious procedures involved in the tuning and retuning of PID controllers.
Abstract:
In this paper, a scheme for the automatic tuning of PID controllers on-line, with the assistance of trained neural networks, is proposed. The alternative approaches are presented and compared.
Abstract:
Proportional, Integral and Derivative (PID) regulators are standard building blocks for industrial automation. Their popularity comes from their robust performance and also from their functional simplicity. Whether because the plant is time-varying or because of component ageing, these controllers need to be retuned regularly.