769 results for Probabilistic neural network
Abstract:
Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low-frequency sounds, i.e., in the range 270 Hz to 1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference (ITD) between the sound signals received by the left and right ears. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained with the Spike Timing Dependent Plasticity learning rule on experimentally observed Head Related Transfer Function (HRTF) data from an adult domestic cat. The results presented demonstrate that the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of +/-10 degrees is used. For angular resolutions down to 2.5 degrees, it will be demonstrated how software-based simulations of the model incur significant computation times. The paper therefore also addresses a preliminary implementation on a Field Programmable Gate Array (FPGA) based hardware platform to accelerate system performance.
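The ITD cue described in this abstract can be illustrated with a minimal sketch: estimating the delay between the two ear signals from the peak of their cross-correlation. This is not the paper's SNN/STDP method, just a conventional signal-processing illustration of the cue itself; the sampling rate and delay below are hypothetical.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds from the peak
    of the cross-correlation (positive when the left ear leads)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag of left relative to right
    return -lag / fs

# Toy check: broadband noise, with the right-ear copy delayed by 13 samples.
fs = 44100
rng = np.random.default_rng(0)
sig = rng.standard_normal(2048)
d = 13                                          # ~0.29 ms at 44.1 kHz
left, right = sig, np.concatenate([np.zeros(d), sig[:-d]])
itd_ms = estimate_itd(left, right, fs) * 1000
print(round(itd_ms, 2))  # ≈ 0.29 ms
```

An SNN model such as the one in the paper effectively learns this lag-to-azimuth mapping from spike timing rather than computing the correlation explicitly.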
Abstract:
In this paper, a spiking neural network (SNN) architecture that simulates the sound localization ability of the mammalian auditory pathways using the interaural intensity difference (IID) cue is presented. The lateral superior olive was the inspiration for the architecture, which required the integration of an auditory periphery (cochlea) model and a model of the medial nucleus of the trapezoid body. The SNN uses leaky integrate-and-fire excitatory and inhibitory spiking neurons, facilitating synapses, and receptive fields. Experimentally derived head-related transfer function (HRTF) acoustical data from adult domestic cats were employed to train and validate the localization ability of the architecture; training used the supervised learning algorithm known as the remote supervision method to determine the azimuthal angles. The experimental results demonstrate that the architecture performs best when localizing high-frequency sound data, in agreement with the biology, and also show a high degree of robustness when the HRTF acoustical data are corrupted by noise.
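For contrast with the ITD cue above, the IID cue this abstract relies on is simply the level difference between the two ears. A minimal illustrative computation (not the paper's LSO architecture; the signals are synthetic):

```python
import numpy as np

def interaural_intensity_difference(left, right):
    """IID in dB between the two ear signals: the level cue the lateral
    superior olive is thought to exploit at high frequencies."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(left) / rms(right))

# Toy check: the left signal is twice the amplitude of the right one.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
iid_db = interaural_intensity_difference(2.0 * s, s)
print(round(iid_db, 1))  # 6.0 dB, i.e. 20*log10(2)
```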
Abstract:
The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electrochemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly complex understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically in limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modeling of neural circuits found in the brain.
Abstract:
The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electrochemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly complex understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically in limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modelling of neural circuits found in the brain. In recent years, much of the focus in neuron modelling has moved to the study of the connectivity of spiking neural networks. Spiking neural networks provide a vehicle to understand, from a computational perspective, aspects of the brain's neural circuitry. This understanding can then be used to tackle some of the historically intractable issues with artificial neurons, such as scalability and the lack of variable binding. Current knowledge of the feed-forward, lateral, and recurrent connectivity of spiking neurons, and of the interplay between excitatory and inhibitory neurons, is beginning to shed light on these issues through an improved understanding of the temporal processing capabilities and synchronous behaviour of biological neurons. This research topic aims to amalgamate current research tackling these phenomena.
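The "computationally efficient" end of the model spectrum described above is usually the leaky integrate-and-fire neuron, also used by the localization papers earlier in this list. A minimal Euler-integration sketch, with all constants chosen for illustration rather than fitted to any biology:

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Euler simulation of a leaky integrate-and-fire neuron.
    `current` is the input current per time step (amperes); returns the
    membrane-potential trace (volts) and the spike times (seconds)."""
    v = v_rest
    spikes = []
    trace = np.empty(len(current))
    for i, i_in in enumerate(current):
        # Leak toward rest plus input drive, integrated over one step.
        v += (-(v - v_rest) + r * i_in) * dt / tau
        if v >= v_thresh:          # threshold reached: emit a spike
            spikes.append(i * dt)
            v = v_reset            # and reset the membrane potential
        trace[i] = v
    return trace, spikes

# A constant 2 nA input drives regular firing.
i_ext = np.full(5000, 2e-9)        # 0.5 s of input at dt = 0.1 ms
trace, spikes = simulate_lif(i_ext)
print(len(spikes))
```

Richer models (synaptic facilitation, limited neurotransmitter supply, Hodgkin-Huxley channel dynamics) add state variables to this loop at increasing computational cost.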
Abstract:
This experimental study focuses on a detection system at the seismic station level intended to play a role similar to that of detection algorithms based on the STA/LTA ratio. We tested two types of neural network, Multi-Layer Perceptrons and Support Vector Machines, trained in supervised mode. The universe of data consisted of 2903 patterns extracted from records of the PVAQ station, part of the seismographic network of the Institute of Meteorology of Portugal. The spectral characteristics of the records and their variation in time were reflected in the input patterns, consisting of a set of power spectral density values at selected frequencies, extracted from a spectrogram computed over a record segment of pre-determined duration. The universe of data was divided, with about 60% used for training and the remainder reserved for testing and validation. To ensure that all patterns in the universe of data were within the range of variation of the training set, we used an algorithm that separates the universe of data by hyper-convex polyhedrons, thereby determining a set of patterns that must form part of the training set. Additionally, an active learning strategy was conducted, by iteratively incorporating poorly classified cases into the training set. The best results, in terms of sensitivity and selectivity over the whole data set, ranged between 98% and 100%. These results compare very favorably with the 50% obtained by the existing detection system.
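The baseline the study compares against, the STA/LTA ratio trigger, can be sketched in a few lines: a short-term energy average divided by a long-term one spikes when an event arrives. The window lengths, threshold, and synthetic trace below are illustrative, not the PVAQ station's actual settings.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
    """Short-term-average / long-term-average energy ratio, the classic
    baseline seismic event detector (window lengths in seconds)."""
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    energy = trace ** 2
    csum = np.cumsum(np.insert(energy, 0, 0.0))
    sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
    lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
    n = min(len(sta), len(lta))
    # Align the two moving averages at the end of the trace.
    return sta[-n:] / (lta[-n:] + 1e-12)

fs = 100                                       # 100 Hz sampling
rng = np.random.default_rng(1)
trace = 0.1 * rng.standard_normal(6000)        # 60 s of background noise
trace[4000:4200] += rng.standard_normal(200)   # a 2 s "event"
ratio = sta_lta(trace, fs)
print(ratio.max() > 3.0)  # a simple trigger threshold would fire here
```

The neural approach in the abstract replaces this fixed-threshold energy statistic with a classifier over spectrogram-derived features.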
Abstract:
In modern measurement and control systems, the available time and resources are often not only limited but may also change during the operation of the system. In such cases, the so-called anytime algorithms can be used advantageously. While different soft computing methods are widely used in system modeling, their usability in these cases is limited.
Abstract:
The presence of circulating cerebral emboli represents an increased risk of stroke. The detection of such emboli is possible with the use of a transcranial Doppler ultrasound (TCD) system.
Abstract:
This paper describes an extension of previous work on the subject of neural network proportional, integral and derivative (PID) autotuning. Basically, neural networks are employed to supply the three PID parameters, according to the integral of time multiplied by the absolute error (ITAE) criterion, to a standard PID controller.
Abstract:
The Proportional, Integral and Derivative (PID) controller autotuning is an important problem, both in practical and theoretical terms. The autotuning procedure must take place in real time, and therefore the corresponding optimisation procedure must also be executed in real time and without disturbing on-line control.
Abstract:
PID controllers are widely used in industrial applications. Because the plant can be time variant, methods for the autotuning of this type of controller are of great economic importance, see (Astrom, 1996). Since 1942, with the work of Ziegler and Nichols (Ziegler and Nichols, 1942), several methods have been proposed in the literature. Recently, a new technique using neural networks was proposed (Ruano et al., 1992). This technique has been shown to produce good tunings as long as certain limitations are met.
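The PID control law and the classic Ziegler-Nichols closed-loop rules referenced in these abstracts can be sketched as follows. The relay-experiment numbers (k_u, t_u) are hypothetical, and this is the textbook baseline the neural autotuners aim to improve on, not the neural method itself.

```python
def ziegler_nichols_pid(k_u, t_u):
    """Classic Ziegler-Nichols closed-loop tuning: given the ultimate
    gain k_u and oscillation period t_u, return (Kp, Ti, Td)."""
    return 0.6 * k_u, 0.5 * t_u, 0.125 * t_u

class PID:
    """Textbook discrete PID law: u = Kp * (e + (1/Ti)*integral(e) + Td*de/dt)."""
    def __init__(self, kp, ti, td, dt):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * (error + self.integral / self.ti
                          + self.td * derivative)

# Tune from a hypothetical relay experiment (k_u = 4.0, t_u = 2.0 s).
kp, ti, td = ziegler_nichols_pid(4.0, 2.0)
pid = PID(kp, ti, td, dt=0.01)
print(kp, ti, td)  # 2.4 1.0 0.25
```

A neural autotuner replaces the fixed Ziegler-Nichols mapping with a trained network that outputs (Kp, Ti, Td) for the current plant, e.g. to minimise the ITAE criterion.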
Abstract:
A recent survey (1) has reported that the majority of industrial loops are controlled by PID-type controllers and that many of the PID controllers in operation are poorly tuned. Poor PID tuning is due to the lack of a simple and practical tuning method for average users, and to the tedious procedures involved in the tuning and retuning of PID controllers.
Abstract:
In this paper, a scheme for the automatic tuning of PID controllers on-line, with the assistance of trained neural networks, is proposed. The alternative approaches are presented and compared.
Abstract:
Proportional, Integral and Derivative (PID) regulators are standard building blocks for industrial automation. Their popularity comes from their robust performance and also from their functional simplicity. Whether because the plant is time-varying or because of component ageing, these controllers need to be regularly retuned.
Abstract:
Proportional, Integral and Derivative (PID) regulators are standard building blocks for industrial automation. The popularity of these regulators comes from their robust performance in a wide range of operating conditions, and also from their functional simplicity, which makes them suitable for manual tuning.
Abstract:
In this paper we consider the learning problem for a class of multilayer perceptrons which is practically relevant in control systems applications. By reformulating this problem, a new criterion is developed, which reduces the number of iterations required for the learning phase.