993 results for Neural compensation
Abstract:
In this work, physical and behavioral models of a bulk Reflective Semiconductor Optical Amplifier (RSOA) modulator in Radio over Fiber (RoF) links are proposed, and the transmission performance of the RSOA modulator is predicted under broadband signal drive. First, a simplified physical model of the RSOA modulator in RoF links is proposed, based on the rate equation and traveling-wave equations with several simplifying assumptions. The model is implemented with Symbolically Defined Devices (SDD) in Advanced Design System (ADS) and validated against experimental results. A detailed analysis of the optical gain, the harmonic and intermodulation distortions, and the transmission performance is carried out, and the distributions of the carrier density and the Amplified Spontaneous Emission (ASE) are also demonstrated. Behavioral modeling of the RSOA modulator enables the nonlinear distortion of the modulator to be investigated from another perspective, at the system level. The Amplitude-to-Amplitude Conversion (AM-AM) and Amplitude-to-Phase Conversion (AM-PM) distortions of the RSOA modulator are demonstrated based on an Artificial Neural Network (ANN) and a generalized polynomial model. Another behavioral model, based on X-parameters, was obtained from the physical model. Compensation of the nonlinearity of the RSOA modulator is carried out based on a memory polynomial model, and the nonlinear distortion is reduced successfully: the improvement of the third-order intermodulation distortion is up to 17 dB, and the Error Vector Magnitude (EVM) is improved from 6.1% to 2.0%. In the last part of this work, the performance of Fibre Optic Networks for Distributed and Extendible Heterogeneous Radio Architectures and Service Provisioning (FUTON) systems, i.e., four-channel virtual Multiple Input Multiple Output (MIMO), is predicted using the developed physical model. Based on Subcarrier Multiplexing (SCM) techniques, four-channel signals with 100 MHz bandwidth per channel are generated and used to drive the RSOA modulator. The transmission performance of the RSOA modulator under this broadband multi-channel drive is reported in terms of the EVM figure of merit, for Quadrature Amplitude Modulation (QAM) levels of 64 and 256 and for 64, 512, 1024 and 2048 Orthogonal Frequency Division Multiplexing (OFDM) subcarriers.
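To make the compensation step concrete, the following is a minimal sketch of a memory polynomial model of the kind named in the abstract, applied in an indirect-learning fashion to linearize a toy nonlinearity. The nonlinearity orders, memory depth, synthetic signal and all variable names are illustrative assumptions, not values taken from the work.

```python
# Hedged sketch: memory polynomial model used as a predistorter.
# Orders, data and names below are illustrative, not from the paper.
import numpy as np

def mp_basis(x, K=5, M=3):
    """Memory polynomial regression matrix: columns are
    x[n-m] * |x[n-m]|**(k-1) for odd orders k = 1..K and delays m = 0..M."""
    N = len(x)
    cols = []
    for m in range(M + 1):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(1, K + 1, 2):          # odd nonlinearity orders only
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def fit_mp(x, y, K=5, M=3):
    """Least-squares fit of the coefficients a such that mp_basis(x) @ a ~ y."""
    Phi = mp_basis(x, K, M)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return a

# Indirect learning: fit a post-inverse from the measured output y back to the
# drive x, then apply it as a predistorter in front of the modulator.
rng = np.random.default_rng(0)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
y = x + 0.1 * x * np.abs(x) ** 2          # toy stand-in for the RSOA nonlinearity
a = fit_mp(y, x)                          # post-inverse coefficients
x_pd = mp_basis(x) @ a                    # predistorted drive signal
```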
Abstract:
Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low-frequency sounds, i.e., in the range 270 Hz-1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between the sound signals received by the left and right ears. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained using the Spike Timing Dependent Plasticity learning rule with experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of +/-10 degrees is used. For angular resolutions down to 2.5 degrees, it is demonstrated that software-based simulations of the model incur significant computation times. The paper therefore also addresses a preliminary implementation on a Field Programmable Gate Array (FPGA) based hardware platform to accelerate system performance.
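As a concrete illustration of the learning rule named above, here is a minimal sketch of a pair-based Spike Timing Dependent Plasticity update; the time constants, amplitudes and spike times are generic placeholders rather than the parameters used to train the MSO model.

```python
# Hedged sketch of pair-based STDP; parameter values are illustrative only.
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Potentiate when the presynaptic spike precedes the postsynaptic one,
    depress otherwise, with exponentially decaying windows."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate updates over all pre/post pairs of one presentation,
    then clip the weight to a bounded range."""
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp_dw(tp, tq)
    return float(np.clip(w, w_min, w_max))

w = apply_stdp(0.5, pre_spikes=[10.0, 30.0], post_spikes=[12.0, 28.0])
print(w)
```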
Abstract:
In this paper, a spiking neural network (SNN) architecture that simulates the sound localization ability of the mammalian auditory pathways using the interaural intensity difference cue is presented. The lateral superior olive was the inspiration for the architecture, which required the integration of an auditory periphery (cochlea) model and a model of the medial nucleus of the trapezoid body. The SNN uses leaky integrate-and-fire excitatory and inhibitory spiking neurons, facilitating synapses and receptive fields. Experimentally derived head-related transfer function (HRTF) acoustical data from adult domestic cats were employed to train and validate the localization ability of the architecture; training used the supervised learning algorithm known as the remote supervision method to determine the azimuthal angles. The experimental results demonstrate that the architecture performs best when localizing high-frequency sound data, in agreement with the biology, and also shows a high degree of robustness when the HRTF acoustical data are corrupted by noise.
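The excitatory and inhibitory units mentioned above are leaky integrate-and-fire neurons; the following is a minimal, generic sketch of that dynamics. The membrane parameters and drive are textbook placeholder values, not those of the published architecture.

```python
# Hedged sketch of a leaky integrate-and-fire neuron; parameters are generic.
import numpy as np

def lif_run(i_syn, dt=0.1, tau_m=20.0, v_rest=-65.0, v_reset=-70.0,
            v_thresh=-50.0, r_m=10.0):
    """Simulate a leaky integrate-and-fire neuron driven by a synaptic
    current trace i_syn (nA); times in ms, voltages in mV.
    Returns the membrane trace and the spike times."""
    v = v_rest
    v_trace, spikes = [], []
    for n, i in enumerate(i_syn):
        v += (-(v - v_rest) + r_m * i) * dt / tau_m   # leaky integration
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(n * dt)
            v = v_reset            # hard reset after the spike
        v_trace.append(v)
    return np.array(v_trace), spikes

# Constant 2 nA drive for 200 ms: enough to make this neuron fire regularly.
v_trace, spikes = lif_run(np.full(2000, 2.0))
print(spikes[:5])
```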
Abstract:
The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since this time, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly complex understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically of limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of different neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modeling of neural circuits found in the brain.
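The neurotransmitter-depletion behaviour described above can be illustrated with a short-term depressing synapse in the style of the Tsodyks-Markram model; this is a generic sketch with illustrative parameters, not a model taken from the text.

```python
# Hedged sketch: each spike consumes a fraction of a finite resource pool,
# which then recovers slowly; parameter values are illustrative only.
import numpy as np

def depressing_synapse(spike_times, U=0.4, tau_rec=300.0, w=1.0):
    """Per-spike synaptic efficacies for a presynaptic spike train (times in ms).
    x tracks the fraction of neurotransmitter still available."""
    x, t_last = 1.0, None
    efficacies = []
    for ts in sorted(spike_times):
        if t_last is not None:
            # the resource pool recovers exponentially between spikes
            x = 1.0 - (1.0 - x) * np.exp(-(ts - t_last) / tau_rec)
        efficacies.append(w * U * x)   # the response shrinks as x is depleted
        x -= U * x                     # a fraction U of what remains is used
        t_last = ts
    return efficacies

# A fast burst followed by a late spike: successive responses weaken, then
# the pool has time to recover and the response grows back.
print(depressing_synapse([10, 20, 30, 40, 50, 400]))
```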
Abstract:
The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since this time, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly complex understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically of limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of different neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modelling of neural circuits found in the brain. In recent years, much of the focus in neuron modelling has moved to the study of the connectivity of spiking neural networks. Spiking neural networks provide a vehicle for understanding, from a computational perspective, aspects of the brain's neural circuitry. This understanding can then be used to tackle some of the historically intractable issues with artificial neurons, such as scalability and the lack of variable binding. Current knowledge of the feed-forward, lateral, and recurrent connectivity of spiking neurons, and of the interplay between excitatory and inhibitory neurons, is beginning to shed light on these issues through an improved understanding of the temporal processing capabilities and synchronous behaviour of biological neurons. This Research Topic aims to amalgamate current research tackling these phenomena.
Abstract:
In the field of control systems it is common to use techniques based on model adaptation to carry out the control of plants whose mathematical analysis may be intricate. Interest in biologically inspired learning algorithms for control techniques such as Artificial Neural Networks and Fuzzy Systems is increasing. Along these lines, this paper gives a perspective on the quality of the results given by two different biologically inspired learning algorithms for the design of B-spline neural networks (BNN) and fuzzy systems (FS): Genetic Programming (GP) for BNN design, and the Bacterial Evolutionary Algorithm (BEA) applied to fuzzy rule extraction. The facility to incorporate a multi-objective approach into the GP algorithm is also outlined, enabling the designer to obtain models better suited to their intended use.
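For readers unfamiliar with the models being designed, the following sketch shows the building block of a B-spline neural network: the output is a weighted sum of B-spline basis functions evaluated with the Cox-de Boor recursion. The knot vector, spline order and weights are illustrative assumptions; the evolutionary algorithms discussed in these papers search over such structures.

```python
# Hedged sketch of a single-input B-spline neural network output.
def bspline_basis(i, k, knots, x):
    """Cox-de Boor recursion: i-th B-spline basis function of order k
    (degree k - 1) evaluated at x on the non-decreasing knot vector."""
    if k == 1:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    den = knots[i + k - 1] - knots[i]
    if den > 0.0:
        left = (x - knots[i]) / den * bspline_basis(i, k - 1, knots, x)
    den = knots[i + k] - knots[i + 1]
    if den > 0.0:
        right = (knots[i + k] - x) / den * bspline_basis(i + 1, k - 1, knots, x)
    return left + right

def bnn_output(x, knots, order, weights):
    """B-spline network output: a weighted sum of all basis functions."""
    n_basis = len(knots) - order
    assert len(weights) == n_basis
    return sum(w * bspline_basis(i, order, knots, x)
               for i, w in enumerate(weights))

knots = [0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0]   # clamped knot vector
weights = [0.1, 0.4, 0.9, 0.7, 0.3, 0.0]                  # 9 - 3 = 6 basis functions
print(bnn_output(0.6, knots, order=3, weights=weights))
```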
Abstract:
The design phase of B-spline neural networks is a highly computationally complex task, and existing heuristics have been found to be highly dependent on the initial conditions employed. Interest in biologically inspired learning algorithms for control techniques such as Artificial Neural Networks and Fuzzy Systems is increasing. In this paper, the Bacterial Programming approach is presented, which is based on the replication of the microbial evolution phenomenon. This technique produces an efficient topology search and additionally yields more consistent solutions.
Abstract:
Current and past research has brought up new views on the optimization of neural networks. For a fixed structure, second-order methods are seen as the most promising. In previous work we have shown how easily second-order methods can be applied to a neural network. Namely, we have proved that the Levenberg-Marquardt method not only has better convergence but can also assure convergence to a local minimum. However, as with any gradient-based method, the results obtained depend on the starting point. In this work, a reformulated evolutionary algorithm, Bacterial Programming for Levenberg-Marquardt, is proposed as a heuristic that can be used to determine the most suitable starting points, therefore achieving, in most cases, the global optimum.
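For reference, a minimal sketch of the Levenberg-Marquardt step discussed above is given below, with a generic damping schedule and a toy curve-fitting example; it illustrates the method's dependence on the starting point rather than reproducing the papers' training setup, and all names and values are illustrative.

```python
# Hedged sketch of Levenberg-Marquardt (damped Gauss-Newton) optimization.
import numpy as np

def lm_step(w, jac, errors, mu):
    """One LM step: solve (J^T J + mu*I) dw = J^T e and return w + dw.

    jac(w)    -> Jacobian of the model outputs w.r.t. the weights (N x P)
    errors(w) -> vector of target - output errors (N,)
    """
    J, e = jac(w), errors(w)
    H = J.T @ J + mu * np.eye(len(w))        # damped approximate Hessian
    return w + np.linalg.solve(H, J.T @ e)

def lm_train(w, jac, errors, mu=1e-2, n_iter=50):
    """Adapt mu: accept steps that reduce the sum-of-squares cost, otherwise
    raise the damping so the step behaves more like gradient descent."""
    cost = lambda w: 0.5 * float(np.sum(errors(w) ** 2))
    for _ in range(n_iter):
        w_new = lm_step(w, jac, errors, mu)
        if cost(w_new) < cost(w):
            w, mu = w_new, mu * 0.5          # good step: trust the model more
        else:
            mu *= 2.0                        # bad step: increase the damping
    return w

# Toy use: fit y = a * exp(b * x) starting from an arbitrary initial point.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
errors = lambda w: y - w[0] * np.exp(w[1] * x)
jac = lambda w: np.column_stack([np.exp(w[1] * x),
                                 w[0] * x * np.exp(w[1] * x)])
print(lm_train(np.array([1.0, 1.0]), jac, errors))   # converges near [2.0, 1.5]
```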
Abstract:
The design phase of B-spline neural networks represents a very demanding computational task. Heuristics have been developed for this purpose, but they have been shown to depend on the initial conditions employed. In this paper a new technique, Bacterial Programming, is proposed, whose principles are based on the replication of the microbial evolution phenomenon. The performance of this approach is illustrated and compared with existing alternatives.
Abstract:
This experimental study focuses on a detection system at the seismic station level, intended to play a role similar to that of detection algorithms based on the STA/LTA ratio. We tested two types of machine learning classifiers, Multi-Layer Perceptrons and Support Vector Machines, trained in supervised mode. The universe of data consisted of 2903 patterns extracted from records of the PVAQ station of the seismographic network of the Institute of Meteorology of Portugal. The spectral characteristics of the records and their variation in time were reflected in the input patterns, which consist of a set of power spectral density values at selected frequencies, extracted from a spectrogram computed over a record segment of predetermined duration. The universe of data was divided with about 60% for training and the remainder reserved for testing and validation. To ensure that all patterns in the universe of data were within the range of variation of the training set, we used an algorithm that separates the universe of data by hyper-convex polyhedra, in this manner determining a set of patterns that must form part of the training set. Additionally, an active learning strategy was conducted by iteratively incorporating poorly classified cases into the training set. The best results, in terms of sensitivity and selectivity over the whole data set, ranged between 98% and 100%. These results compare very favorably with the 50% obtained by the existing detection system.
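For comparison with the neural detectors, the following is a minimal sketch of the classic STA/LTA trigger mentioned in the abstract; the window lengths, threshold and synthetic trace are illustrative assumptions, not the station's operational settings.

```python
# Hedged sketch of a windowed STA/LTA event trigger; parameters are illustrative.
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
    """Ratio of short-term to long-term average of the squared signal.

    trace: 1-D seismogram samples; fs: sampling rate in Hz."""
    n_sta = int(sta_win * fs)
    n_lta = int(lta_win * fs)
    energy = trace.astype(float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(energy)])
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    # align the two moving averages on their common, fully covered samples
    n = min(len(sta), len(lta))
    return sta[-n:] / np.maximum(lta[-n:], 1e-12)

def detect(trace, fs, threshold=4.0):
    """Declare an event when the STA/LTA ratio exceeds the trigger threshold."""
    return bool(np.any(sta_lta(trace, fs) >= threshold))

# 60 s of noise at 100 Hz with a short energetic burst: the trigger fires.
rng = np.random.default_rng(1)
noise = rng.standard_normal(60 * 100)
noise[4000:4200] += 10 * rng.standard_normal(200)
print(detect(noise, fs=100))   # -> True
```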
Abstract:
The aim of this chapter is to introduce background concepts in nonlinear systems identification and control with artificial neural networks. As this chapter is only an overview, with limited page space, only the basic ideas are explained here. For a more detailed explanation of a specific topic of interest, the reader is encouraged to consult the references given throughout the text. Additionally, as general books in the field of neural networks, the books by Haykin [1] and Principe et al. [2] are suggested. Regarding nonlinear systems identification, covering both classical and neural and neuro-fuzzy methodologies, reference [3] is recommended. References [4] and [5] should be used in the context of B-spline networks.
Abstract:
In modern measurement and control systems, the available time and resources are often not only limited but may also change during the operation of the system. In these cases, so-called anytime algorithms can be used advantageously. While different soft computing methods are widely used in system modeling, their usability in these cases is limited.
Abstract:
Complete supervised training algorithms for B-spline neural networks and fuzzy rule-based systems are discussed. By introducing the relationship between B-spline neural networks and certain types of fuzzy models, training algorithms developed initially for neural networks can be adapted to fuzzy systems.
Abstract:
A gas turbine is made up of three basic components: a compressor, a combustion chamber and a turbine. Air is drawn into the engine by the compressor, which compresses it and delivers it to the combustion chamber. There, the air is mixed with the fuel and the mixture is ignited, producing a rise in temperature and therefore an expansion of the gases. These are expelled through the engine nozzle, but first pass through the turbine, which is designed to extract energy to keep the compressor rotating [1]. The work described here uses data recorded from a Rolls Royce Spey MK 202 turbine, whose simplified diagram can be seen in Fig. 1. Both the compressor and the turbine are split into low-pressure (LP) and high-pressure (HP) stages. The HP turbine drives the HP compressor and the LP turbine drives the LP compressor. They are connected by concentric shafts that rotate at different speeds, denoted NH and NL.
Abstract:
The presence of circulating cerebral emboli represents an increased risk of stroke. The detection of such emboli is possible with the use of a transcranial Doppler ultrasound (TCD) system.