115 results for neural network model
Abstract:
The development of a combined engineering and statistical Artificial Neural Network model of UK domestic appliance load profiles is presented. The model uses diary-style appliance use data and a survey questionnaire collected from 51 suburban households and 46 rural households during the summers of 2010 and 2011, respectively. It also incorporates measured energy data and is sensitive to socioeconomic, physical dwelling and temperature variables. A prototype model is constructed in MATLAB using a two-layer feed-forward network with back-propagation training, which has a 12:10:24 architecture. Model outputs include appliance load profiles which can be applied to the fields of energy planning (microrenewables and smart grids), building simulation tools and energy policy.
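As a rough illustration of the 12:10:24 architecture described above, the sketch below builds a one-hidden-layer network with scikit-learn rather than MATLAB; the synthetic inputs and the interpretation of the 24 outputs as an hourly load profile are assumptions, not taken from the paper.

```python
# Minimal sketch of a 12:10:24 feed-forward network trained by back-propagation.
# Synthetic data stands in for the survey/diary inputs and the measured profiles.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 12))          # 12 input variables (socio-economic, dwelling, temperature, ...)
Y = rng.random((500, 24))          # 24 outputs, e.g. an hourly appliance load profile (assumed)

model = MLPRegressor(hidden_layer_sizes=(10,),   # single hidden layer of 10 units -> 12:10:24
                     activation='logistic',
                     solver='adam',               # gradient-based back-propagation training
                     max_iter=2000,
                     random_state=0)
model.fit(X, Y)
profile = model.predict(X[:1])     # predicted 24-point load profile for one household
print(profile.shape)               # (1, 24)
```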
Abstract:
The general stability theory of nonlinear receding horizon controllers has attracted much attention over the last fifteen years, and many algorithms have been proposed to ensure closed-loop stability. On the other hand, many reports exist regarding the use of artificial neural network models in nonlinear receding horizon control. However, little attention has been given to the stability issue of these specific controllers. This paper addresses this problem and proposes to cast nonlinear receding horizon control based on neural network models within the framework of an existing stabilising algorithm.
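For readers unfamiliar with the control structure, the following is a generic receding-horizon loop wrapped around a neural-network one-step predictor; the `nn_model` function is a placeholder, and none of the stabilising ingredients discussed in the paper (terminal costs or constraints) are included.

```python
# Generic receding-horizon (MPC) loop around a neural-network one-step predictor.
# Illustration of the control structure only; the stabilising algorithm of the
# paper is not reproduced, and nn_model is a toy stand-in for a trained network.
import numpy as np
from scipy.optimize import minimize

def nn_model(x, u):
    """Placeholder one-step-ahead predictor standing in for a trained network."""
    return 0.9 * x + 0.2 * np.tanh(u)

def horizon_cost(u_seq, x0, x_ref, horizon):
    x, cost = x0, 0.0
    for k in range(horizon):
        x = nn_model(x, u_seq[k])
        cost += (x - x_ref) ** 2 + 0.01 * u_seq[k] ** 2
    return cost

x, x_ref, horizon = 1.0, 0.0, 10
for t in range(30):                                   # closed-loop simulation
    res = minimize(horizon_cost, np.zeros(horizon), args=(x, x_ref, horizon))
    u = res.x[0]                                      # apply only the first move
    x = nn_model(x, u)                                # plant == model in this sketch
print(round(x, 3))
```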
Abstract:
A system identification algorithm is introduced for Hammerstein systems that are modelled using a non-uniform rational B-spline (NURB) neural network. The proposed algorithm consists of two successive stages. First, the shaping parameters of the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated using singular value decomposition (SVD). Numerical examples are utilized to demonstrate the efficacy of the proposed approach.
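The second (SVD-based) stage can be sketched as follows; a polynomial basis stands in for the NURB network and the PSO stage that tunes the shaping parameters is omitted, so this is only an illustration of the general technique, not the paper's algorithm.

```python
# Second stage of Hammerstein identification with a polynomial basis standing in
# for the NURB network.  The combined parameters a_i * b_j are estimated by least
# squares and then separated by a rank-one SVD factorisation.
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 400)
x = 0.5 * u + 0.3 * u**2                     # "true" static nonlinearity
y = np.convolve(x, [1.0, 0.5, 0.25])[:400]   # linear dynamic block
y += 0.01 * rng.standard_normal(400)

basis = np.column_stack([u, u**2])           # basis functions of the input
Phi = np.column_stack([np.roll(basis, d, axis=0) for d in range(3)])[3:]
theta, *_ = np.linalg.lstsq(Phi, y[3:], rcond=None)   # SVD-based least squares

C = theta.reshape(3, 2)                      # rows: delays, cols: basis functions
U, s, Vt = np.linalg.svd(C)                  # rank-one structure: C = a b^T
a_hat = U[:, 0] * np.sqrt(s[0])              # linear block (up to scaling/sign)
b_hat = Vt[0] * np.sqrt(s[0])                # nonlinearity coefficients
print(a_hat.round(2), b_hat.round(2))
```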
Abstract:
Spontaneous activity of the brain at rest frequently has been considered a mere backdrop to the salient activity evoked by external stimuli or tasks. However, the resting state of the brain consumes most of its energy budget, which suggests a far more important role. An intriguing hint comes from experimental observations of spontaneous activity patterns, which closely resemble those evoked by visual stimulation with oriented gratings, except that cortex appeared to cycle between different orientation maps. Moreover, patterns similar to those evoked by the behaviorally most relevant horizontal and vertical orientations occurred more often than those corresponding to oblique angles. We hypothesize that this kind of spontaneous activity develops at least to some degree autonomously, providing a dynamical reservoir of cortical states, which are then associated with visual stimuli through learning. To test this hypothesis, we use a biologically inspired neural mass model to simulate a patch of cat visual cortex. Spontaneous transitions between orientation states were induced by modest modifications of the neural connectivity, establishing a stable heteroclinic channel. Significantly, the experimentally observed greater frequency of states representing the behaviorally important horizontal and vertical orientations emerged spontaneously from these simulations. We then applied bar-shaped inputs to the model cortex and used Hebbian learning rules to modify the corresponding synaptic strengths. After unsupervised learning, different bar inputs reliably and exclusively evoked their associated orientation state; whereas in the absence of input, the model cortex resumed its spontaneous cycling. We conclude that the experimentally observed similarities between spontaneous and evoked activity in visual cortex can be explained as the outcome of a learning process that associates external stimuli with a preexisting reservoir of autonomous neural activity states. Our findings hence demonstrate how cortical connectivity can link the maintenance of spontaneous activity in the brain mechanistically to its core cognitive functions.
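The association step alone can be illustrated with a simple Hebbian outer-product rule; the sketch below uses idealised binary "bar" inputs and random vectors standing in for pre-existing cortical states, and does not model the neural mass dynamics or the heteroclinic cycling described in the abstract.

```python
# Minimal illustration of Hebbian association between stimuli and pre-existing
# "orientation states".  The cortical dynamics themselves are not modelled.
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_states = 16, 4
stimuli = np.eye(n_states).repeat(n_inputs // n_states, axis=1)   # 4 idealised bar inputs
states = rng.standard_normal((n_states, 32))                      # pre-existing cortical states

W = np.zeros((32, n_inputs))
eta = 0.5
for s, c in zip(stimuli, states):            # Hebbian rule: dW = eta * post * pre^T
    W += eta * np.outer(c, s)

retrieved = W @ stimuli[1]                   # after learning, input 1 retrieves its state
print(np.argmax([retrieved @ c for c in states]))   # expected: 1
```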
Abstract:
Single-carrier (SC) block transmission with frequency-domain equalisation (FDE) offers a viable transmission technology for combating the adverse effects of long dispersive channels encountered in high-rate broadband wireless communication systems. However, for high bandwidth-efficiency and high power-efficiency systems, the channel can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such nonlinear Hammerstein channels, the standard SC-FDE scheme no longer works. This paper advocates a complex-valued (CV) B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. Specifically, we model the nonlinear HPA, which represents the CV static nonlinearity of the Hammerstein channel, by a CV B-spline neural network, and we develop two efficient alternating least squares schemes for estimating the parameters of the Hammerstein channel, including both the channel impulse response coefficients and the parameters of the CV B-spline model. We also use another CV B-spline neural network to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain together with the inverse B-spline neural network model obtained in the time domain. Extensive simulation results are included to demonstrate the effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels.
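The final equalisation step (one-tap FDE followed by inversion of the static nonlinearity) is easy to sketch in isolation; below, a toy tanh nonlinearity and a short known channel stand in for the estimated Hammerstein model, and the B-spline estimation itself is not shown.

```python
# Sketch of one-tap frequency-domain equalisation followed by inversion of the
# static HPA nonlinearity.  Toy channel and toy nonlinearity; noiseless block.
import numpy as np

rng = np.random.default_rng(3)
N = 64
x = (2 * rng.integers(0, 2, N) - 1) + 1j * (2 * rng.integers(0, 2, N) - 1)  # QPSK block

f = lambda s: np.tanh(np.abs(s)) * np.exp(1j * np.angle(s))   # toy HPA nonlinearity
h = np.array([1.0, 0.4 + 0.2j, 0.1])                          # channel impulse response

v = f(x)
y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(h, N))             # circular convolution (CP assumed)

V_hat = np.fft.fft(y) / np.fft.fft(h, N)                      # one-tap FDE per frequency bin
v_hat = np.fft.ifft(V_hat)
x_hat = np.arctanh(np.clip(np.abs(v_hat), 0, 0.999)) * np.exp(1j * np.angle(v_hat))  # invert HPA

print(np.allclose(np.sign(x_hat.real), np.sign(x.real)))      # symbols recovered
```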
Abstract:
This paper discusses ECG classification after parametrizing the ECG waveforms in the wavelet domain. The aim of the work is to develop an accurate classification algorithm that can be used to diagnose cardiac beat abnormalities detected using a mobile platform such as smart-phones. Continuous-time recurrent neural network classifiers are considered for this task. Records from the European ST-T Database are decomposed in the wavelet domain using discrete wavelet transform (DWT) filter banks, and the resulting DWT coefficients are filtered and used as inputs for training the neural network classifier. Advantages of the proposed methodology are the reduced memory requirement for the signals, which is of relevance to mobile applications, as well as an improvement in the generalization ability of the neural network due to the more parsimonious representation of the signal at its inputs.
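The feature pipeline can be sketched as follows; synthetic signals replace the European ST-T records, a plain feed-forward classifier replaces the continuous-time recurrent network, and the choice of wavelet and retained sub-bands is illustrative only.

```python
# Sketch of the feature pipeline: DWT decomposition of each beat, coefficients
# used as classifier inputs.  Synthetic two-class "beats" stand in for real ECG.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

def features(beat):
    coeffs = pywt.wavedec(beat, 'db4', level=4)        # DWT filter bank
    return np.concatenate(coeffs[:3])                  # keep the coarsest sub-bands

X, y = [], []
for label in (0, 1):                                   # two synthetic beat classes
    for _ in range(100):
        t = np.linspace(0, 1, 256)
        beat = np.sin(2 * np.pi * (5 + 3 * label) * t) + 0.1 * rng.standard_normal(256)
        X.append(features(beat))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(np.array(X), np.array(y))
print(clf.score(np.array(X), np.array(y)))
```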
Abstract:
Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex.
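The basic mechanism (activity coupled to slow synaptic-resource depletion and recovery) can be caricatured with two variables; this is emphatically not the spatial neural field model of the paper, and the parameter values below are arbitrary, chosen only so the toy system alternates between high-rate and quiescent phases.

```python
# Two-variable caricature of burst suppression: a firing-rate variable coupled to
# a slow synaptic-resource variable that depletes with activity and recovers at
# rest.  Illustrative only; parameters are arbitrary.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

dt, T = 0.001, 20.0
n = int(T / dt)
E, s = 0.1, 1.0                      # rate and available synaptic resource
tau_e, tau_s, w, gamma, drive = 0.02, 2.0, 12.0, 2.0, -2.5

trace = np.empty(n)
for k in range(n):
    dE = (-E + sigmoid(w * s * E + drive)) / tau_e   # rate dynamics
    ds = (1.0 - s) / tau_s - gamma * s * E           # depletion and recovery
    E += dt * dE
    s += dt * ds
    trace[k] = E

# fraction of time in high-rate "bursts" and in low-rate "suppression"
print(round((trace > 0.5).mean(), 2), round((trace < 0.2).mean(), 2))
```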
Abstract:
We develop a complex-valued (CV) B-spline neural network approach for efficient identification and inversion of CV Wiener systems. The CV nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks. With the aid of a least squares parameter initialisation, the Gauss-Newton algorithm effectively estimates the model parameters that include the CV linear dynamic model coefficients and B-spline neural network weights. The identification algorithm naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. An accurate inverse of the CV Wiener system is then obtained, in which the inverse of the CV nonlinear static function of the Wiener system is calculated efficiently using the Gauss-Newton algorithm based on the estimated B-spline neural network model, with the aid of the De Boor recursions. The effectiveness of our approach for identification and inversion of CV Wiener systems is demonstrated using the application of digital predistorter design for high power amplifiers with memory.
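As context, a Wiener system is a linear dynamic block followed by a static nonlinearity; the sketch below simulates a real-valued Wiener system whose nonlinearity is a B-spline curve evaluated with SciPy's De Boor-based routines. The complex-valued tensor-product construction and the Gauss-Newton identification of the paper are not reproduced.

```python
# Forward simulation of a real-valued Wiener system with a B-spline static
# nonlinearity (evaluated via scipy's De Boor-based BSpline).
import numpy as np
from scipy.interpolate import BSpline

k = 3                                                        # cubic B-spline on [-2, 2]
knots = np.concatenate(([-2] * (k + 1), np.linspace(-1, 1, 5), [2] * (k + 1)))
coefs = np.linspace(-1, 1, len(knots) - k - 1) ** 3          # arbitrary smooth shape
spline = BSpline(knots, coefs, k)

rng = np.random.default_rng(5)
u = rng.uniform(-1, 1, 200)                                  # system input
h = np.array([1.0, 0.6, 0.2])                                # linear dynamic block
v = np.convolve(u, h)[:200]                                  # intermediate signal
y = spline(np.clip(v, -2, 2))                                # Wiener output

print(y[:5].round(3))
```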
Abstract:
A practical orthogonal frequency-division multiplexing (OFDM) system can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. In this contribution, we advocate a novel nonlinear equalization scheme for OFDM Hammerstein systems. We model the nonlinear HPA, which represents the static nonlinearity of the OFDM Hammerstein channel, by a B-spline neural network, and we develop a highly effective alternating least squares algorithm for estimating the parameters of the OFDM Hammerstein channel, including the channel impulse response coefficients and the parameters of the B-spline model. Moreover, we use another B-spline neural network to model the inversion of the HPA’s nonlinearity, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalization of the OFDM Hammerstein channel can then be accomplished by the usual one-tap linear equalization together with the inverse B-spline neural network model obtained. The effectiveness of our nonlinear equalization scheme for OFDM Hammerstein channels is demonstrated by simulation results.
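The alternating least-squares idea, fixing one parameter set while solving linearly for the other, can be sketched as follows; a polynomial basis stands in for the B-spline network and real-valued toy data replace the OFDM blocks, so this illustrates the technique rather than the paper's algorithm.

```python
# Alternating least-squares sketch for a Hammerstein channel with a polynomial
# basis standing in for the B-spline network.
import numpy as np

rng = np.random.default_rng(6)
u = rng.uniform(-1, 1, 500)
true_b = np.array([1.0, 0.4])                    # nonlinearity: b1*u + b2*u^2
true_h = np.array([1.0, 0.5, 0.2])               # channel impulse response
x = true_b[0] * u + true_b[1] * u**2
y = np.convolve(x, true_h)[:500] + 0.01 * rng.standard_normal(500)

basis = np.column_stack([u, u**2])
b = np.array([1.0, 0.0])                         # initial guess for the nonlinearity
for _ in range(10):
    # (1) fix b, estimate the channel taps h by least squares
    x_hat = basis @ b
    U = np.column_stack([np.roll(x_hat, d) for d in range(3)])[3:]
    h, *_ = np.linalg.lstsq(U, y[3:], rcond=None)
    # (2) fix h, estimate b: regressors are the filtered basis functions
    F = np.column_stack([np.convolve(basis[:, j], h)[:500] for j in range(2)])[3:]
    b, *_ = np.linalg.lstsq(F, y[3:], rcond=None)
    b /= b[0]                                    # fix the scaling ambiguity

print(h.round(2), b.round(2))
```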
Abstract:
In this paper, we show how a set of recently derived theoretical results for recurrent neural networks can be applied to the production of an internal model control system for a nonlinear plant. The results include determination of the relative order of a recurrent neural network and the invertibility of such a network. A closed-loop controller is produced without the need to retrain the neural network plant model. Stability of the closed-loop controller is also demonstrated.
Abstract:
In this paper, a new model-based proportional-integral-derivative (PID) tuning and control approach is introduced for Hammerstein systems that are identified from observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a B-spline neural network. The control signal is composed of a PID controller together with a correction term. Both the parameters of the PID controller and the correction term are optimized by minimizing the multistep-ahead prediction errors. In order to update the control signal, the multistep-ahead predictions of the Hammerstein system based on B-spline neural networks and the associated Jacobian matrix are calculated using the De Boor algorithm, including both the functional and derivative recursions. Numerical examples are utilized to demonstrate the efficacy of the proposed approach.
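The control-signal structure (PID terms plus an additive correction) is easy to illustrate; in the sketch below the plant and the correction term are toy stand-ins, and the model-based optimisation of the gains and of the correction via multistep-ahead predictions is not reproduced.

```python
# Discrete PID loop with an additive correction term, as in the control-signal
# structure described above.  Plant, gains and correction are toy stand-ins.
import numpy as np

def plant(y, u):
    """Toy Hammerstein plant: static nonlinearity followed by first-order dynamics."""
    return 0.8 * y + 0.2 * np.tanh(u)

kp, ki, kd = 2.0, 0.5, 0.1
setpoint, y, integral, prev_err = 1.0, 0.0, 0.0, 0.0
for t in range(100):
    err = setpoint - y
    integral += err
    derivative = err - prev_err
    correction = 0.05 * np.sign(err)          # placeholder for the optimised correction term
    u = kp * err + ki * integral + kd * derivative + correction
    y = plant(y, u)
    prev_err = err

print(round(y, 3))                            # output settles near the setpoint
```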
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection when applied as a post-filter to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post-filters for re-ranking the few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
Abstract:
When people monitor a visual stream of rapidly presented stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset, a phenomenon known as the attentional blink (AB). We provide an overview of recent neuroscientific studies devoted to analyzing the neural processes underlying the AB and their temporal dynamics. The available evidence points to an attentional network involving temporal, right-parietal and frontal cortex, and suggests that the components of this neural network interact by means of synchronization and stimulus-induced desynchronization in the beta frequency range. We set up a neurocognitive scenario describing how the AB might emerge and why it depends on the presence of masks and the other event(s) in which the targets are embedded. The scenario supports the idea that the AB arises from "biased competition", with the top-down bias being generated by parietal-frontal interactions and the competition taking place between stimulus codes in temporal cortex.
Abstract:
The human electroencephalogram (EEG) is globally characterized by a 1/f power spectrum superimposed with certain peaks, whereby the "alpha peak" in a frequency range of 8-14 Hz is the most prominent one for relaxed states of wakefulness. We present simulations of a minimal dynamical network model of leaky integrator neurons attached to the nodes of an evolving directed and weighted random graph (an Erdős-Rényi graph). We derive a model of the dendritic field potential (DFP) for the neurons, leading to a simulated EEG that describes the global activity of the network. Depending on the network size, we find an oscillatory transition of the simulated EEG when the network reaches a critical connectivity. This transition, indicated by a suitably defined order parameter, is reflected by a sudden change of the network's topology when super-cycles are formed from merging isolated loops. After the oscillatory transition, the power spectra of simulated EEG time series exhibit a 1/f continuum superimposed with certain peaks.
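The basic construction, leaky integrator units on a directed Erdős-Rényi random graph with a summed-activity "EEG", can be sketched as follows; the DFP model, the evolving-graph analysis and the order-parameter study of the paper are not reproduced, and the weights and noise level are arbitrary.

```python
# Minimal sketch: leaky integrator units on a directed Erdos-Renyi graph, with a
# crude "EEG" taken as the summed activity and its power spectrum.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n, p = 200, 0.05
G = nx.gnp_random_graph(n, p, seed=7, directed=True)
W = nx.to_numpy_array(G) * rng.normal(0.0, 1.0 / np.sqrt(n * p), (n, n))

dt, T, tau = 0.001, 4.0, 0.02
steps = int(T / dt)
x = 0.1 * rng.standard_normal(n)
eeg = np.empty(steps)
for k in range(steps):
    drive = W @ np.tanh(x) + 0.5 * rng.standard_normal(n)      # recurrent input + noise
    x = x + dt * (-x + drive) / tau                            # leaky integration
    eeg[k] = x.sum()                                           # crude summed "EEG"

freqs = np.fft.rfftfreq(steps, dt)
power = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
print(freqs[np.argmax(power[1:]) + 1])                         # dominant frequency
```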
Abstract:
New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria, respectively. We utilize new cost functions, based on these experimental design criteria, for model selection that simultaneously optimize model approximation and either parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches build on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed through an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in such a way that the system dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show significant improvement based on the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
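The forward-construction idea, growing an RBF model one centre at a time according to a selection criterion, can be sketched in simplified form; the orthogonal decomposition and the A-/D-optimality terms that the paper adds to the cost function are omitted here, so this only illustrates the greedy construction loop on synthetic data.

```python
# Simplified forward selection of Gaussian RBF centres by residual reduction
# (no orthogonalisation, no A-/D-optimality terms).
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(200)

width = 0.8
centres = x[::5]                                              # candidate RBF centres
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width**2))

selected = []
for _ in range(8):                                            # grow the model one centre at a time
    best, best_err = None, np.inf
    for j in range(Phi.shape[1]):
        if j in selected:
            continue
        cols = Phi[:, selected + [j]]
        w, *_ = np.linalg.lstsq(cols, y, rcond=None)
        err = np.sum((y - cols @ w) ** 2)
        if err < best_err:
            best, best_err = j, err
    selected.append(best)

w, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)      # final weights
print(sorted(centres[selected].round(2)))
```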