758 results for neural network model
Abstract:
Presents a technique for incorporating a priori knowledge from a state space system into a neural network training algorithm. The training algorithm considered is that of chemotaxis and the networks being trained are recurrent neural networks. Incorporation of the a priori knowledge ensures that the resultant network has behaviour similar to the system which it is modelling.
Abstract:
A neural network was used to map three PID operating regions for a two-input, two-output steam generator system. The network was used in stand-alone feedforward operation to control the whole operating range of the process, after being trained on the PID controllers corresponding to each control region. The network inputs are the plant error signals, their integral, their derivative and a 4-error delay train.
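The input vector described above (error, integral, derivative, plus a 4-error delay train) can be made concrete with a minimal discrete PID loop. This is an illustrative sketch only: the plant model, gains and setpoint below are made-up values, not the steam generator system of the abstract.

```python
# Illustrative sketch: a discrete PID controller whose internal signals
# (error, integral, derivative, and a 4-error delay train) mirror the
# inputs fed to the neural network described above. Plant, gains and
# setpoint are made-up values.

class PID:
    def __init__(self, kp, ki, kd, dt, n_delays=4):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.delays = [0.0] * n_delays   # the 4-error delay train

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        # these quantities form the network's 7-dimensional input vector
        features = [error, self.integral, derivative] + list(self.delays)
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        self.delays = [error] + self.delays[:-1]
        self.prev_error = error
        return u, features

# drive a simple first-order plant toward setpoint 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
yk = 0.0
for _ in range(400):
    u, feats = pid.step(1.0 - yk)
    yk += 0.05 * (-yk + u)   # Euler step of dy/dt = -y + u
```

In the scheme described by the abstract, the `features` vector gathered at each step would be the training input to the network, with the PID output `u` as the target.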
Abstract:
A dynamic recurrent neural network (DRNN) is used to input/output linearize a control-affine system in the globally linearizing control (GLC) structure. The network is trained as part of a closed loop that involves a PI controller; the goal is to use the network, as a dynamic feedback, to cancel the nonlinear terms of the plant. The stability of the configuration is guaranteed if the network and the plant are asymptotically stable and the linearizing input is bounded.
Abstract:
In this paper the use of neural networks for the control of dynamical systems is considered. Both identification and feedback control aspects are discussed, as well as the types of system for which neural networks can provide a useful technique. Multi-Layer Perceptron and Radial Basis Function neural network types are examined, with an emphasis on the latter. It is shown that basis function centre selection is a critical part of the implementation process and that multivariate clustering algorithms can be an extremely useful tool for finding centres.
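The centre-selection idea can be sketched as follows (not the authors' implementation): RBF centres are chosen by a small k-means clustering routine, then the output weights are fitted by linear least squares. The toy data, basis width and cluster count below are assumptions made for the example.

```python
import numpy as np

# Sketch: k-means clustering for RBF centre selection, followed by a
# least-squares fit of the output weights. Toy data and hyperparameters
# are illustrative assumptions.
rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain k-means: returns k cluster centres of the rows of X."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)           # nearest-centre assignment
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix: one column per centre."""
    d2 = ((X[:, None] - centres[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# toy 1-D regression problem: fit one period of a sine wave
X = np.linspace(0.0, 1.0, 100)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
centres = kmeans(X, k=8)
Phi = rbf_design(X, centres, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
pred = Phi @ w
```

Because the output weights enter linearly, only the centres (and widths) need a nonlinear selection procedure, which is why clustering is such a natural fit here.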
Abstract:
A simple and effective algorithm is introduced for the system identification of a Wiener system based on observational input/output data. A B-spline neural network is used to approximate the nonlinear static function in the Wiener system. The Gauss-Newton algorithm is combined with the De Boor algorithm (evaluating both the curve and the first-order derivatives) for the parameter estimation of the Wiener model, together with a parameter initialization scheme. The efficacy of the proposed approach is demonstrated using an illustrative example.
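A hedged sketch of the core idea (not the paper's Gauss-Newton/De Boor procedure): the static nonlinearity of a simulated Wiener system is approximated by a cubic B-spline expansion whose weights are fitted by linear least squares, with the linear block assumed known. The plant, knot placement and nonlinearity below are illustrative choices.

```python
import numpy as np

# Sketch: approximate the static nonlinearity of a Wiener system with a
# cubic B-spline expansion, fitting the weights by least squares. The
# linear block is assumed known; all system choices are illustrative.

def cox_de_boor(i, p, t, knots):
    """Evaluate the B-spline basis B_{i,p}(t) by the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += (t - knots[i]) / (knots[i + p] - knots[i]) \
               * cox_de_boor(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
               * cox_de_boor(i + 1, p - 1, t, knots)
    return val

# Wiener system: first-order linear filter followed by a static nonlinearity
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 400)
v = np.zeros_like(u)
for k in range(1, len(u)):
    v[k] = 0.8 * v[k - 1] + 0.2 * u[k]   # linear dynamic block (assumed known)
y = np.tanh(2.0 * v)                     # "unknown" static nonlinearity

# cubic B-spline basis on a clamped knot vector covering the range of v
p = 3
lo, hi = v.min(), v.max() + 1e-6
interior = np.linspace(lo, hi, 8)
knots = np.concatenate([[lo] * p, interior, [hi] * p])
n_basis = len(knots) - p - 1
Phi = np.array([[cox_de_boor(i, p, t, knots) for i in range(n_basis)]
                for t in v])

# least-squares estimate of the spline weights, then reconstruction
w_spline, *_ = np.linalg.lstsq(Phi, y, rcond=None)
fit = Phi @ w_spline
```

In the full identification problem the linear block is unknown as well, which is where the paper's Gauss-Newton iteration (using De Boor's derivative evaluations) comes in; the sketch above isolates only the B-spline approximation step.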
Abstract:
World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structural prediction programs are now capable of generating at least low resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represents an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. In order to highlight effective site detection in low resolution structural models, FuncSite was used to screen model proteins generated using mGenTHREADER on a set of newly released structures. We found effective metal site detection even for moderate-quality protein models, illustrating the robustness of the method.
Abstract:
Deep Brain Stimulation has been used in the study and treatment of Parkinson's Disease (PD) tremor symptoms since the 1980s. In the research reported here we carried out a comparative analysis to classify tremor onset based on intraoperative microelectrode recordings of a PD patient's brain Local Field Potential (LFP) signals. In particular, we compared the performance of a Support Vector Machine (SVM) with two well-known artificial neural network classifiers, namely a Multi-Layer Perceptron (MLP) and a Radial Basis Function Network (RBN). The results show that in this study, using PD-specific data, the SVM provided the best overall classification rate, achieving a recognition accuracy of 81%.
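For illustration only, the kind of SVM classifier compared above can be sketched as a linear SVM trained with Pegasos-style stochastic subgradient descent on synthetic two-class data. This stands in for (and is far simpler than) the LFP tremor classification pipeline; no real patient data or the authors' feature extraction is involved.

```python
import numpy as np

# Sketch: linear SVM via Pegasos-style stochastic subgradient descent on
# synthetic two-class data. Everything here is an illustrative stand-in.
rng = np.random.default_rng(2)

# two Gaussian blobs as stand-in feature vectors
X = np.vstack([rng.normal(-1.0, 0.7, size=(200, 2)),
               rng.normal(+1.0, 0.7, size=(200, 2))])
y = np.array([-1] * 200 + [+1] * 200)

lam = 0.01          # regularisation strength
w = np.zeros(2)     # unbiased separating hyperplane through the origin
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)                          # decaying step size
    if y[i] * (X[i] @ w) < 1:                      # hinge-loss violation
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1 - eta * lam) * w

acc = float(np.mean(np.sign(X @ w) == y))
```

A practical comparison like the one in the abstract would use a kernelised SVM and cross-validated accuracy; the sketch only shows the max-margin training principle being compared against the MLP and RBN classifiers.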
Abstract:
In this article a simple and effective algorithm is introduced for the system identification of the Wiener system using observational input/output data. The nonlinear static function in the Wiener system is modelled using a B-spline neural network. The Gauss–Newton algorithm is combined with the De Boor algorithm (evaluating both the curve and the first-order derivatives) for the parameter estimation of the Wiener model, together with a parameter initialisation scheme. Numerical examples are utilised to demonstrate the efficacy of the proposed approach.
Abstract:
Using NCANDS data on US child maltreatment reports for 2009, logistic regression, probit analysis, discriminant analysis and an artificial neural network are used to determine the factors that explain the decision to place a child in out-of-home care. A new model is developed for 2009, and a previous study using 2005 data is replicated. While there are many small differences, the four estimation techniques give broadly the same results, demonstrating their robustness. Similarly, apart from age and sexual abuse, the 2005 and 2009 results are roughly similar. For 2009, child characteristics (particularly child emotional problems) are more important than the nature of the abuse and the situation of the household, while caregiver characteristics are the least important. All these models have low explanatory power.
Abstract:
In this paper, we will address the endeavors of three disciplines, Psychology, Neuroscience, and Artificial Neural Network (ANN) modeling, in explaining how the mind perceives and attends to information. More precisely, we will shed some light on the efforts to understand the allocation of attentional resources to the processing of emotional stimuli. This review aims to inform the three disciplines about converging points of their research and to provide a starting point for discussion.
Abstract:
By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, long-range propagation of activity in MFMs is often approximated with partial differential equations (PDEs). However, PDE approximations in current use correspond to underlying axonal velocity distributions incompatible with experimental measurements. In order to rectify this deficiency, here we introduce novel propagation PDEs that give rise to smooth unimodal distributions of axonal conduction velocities. We also argue that velocities estimated from fibre diameters in slice and from latency measurements, respectively, relate quite differently to such distributions, a significant point for any phenomenological description. Our PDEs are then successfully fit to fibre diameter data from human corpus callosum and rat subcortical white matter. This allows, for the first time, long-range conduction in the mammalian brain to be simulated with realistic, convenient PDEs. Furthermore, the obtained results suggest that the propagation of activity in rat and human differs significantly beyond mere scaling. The dynamical consequences of our new formulation are investigated in the context of a well known neural field model. On the basis of Turing instability analyses, we conclude that pattern formation is more easily initiated using our more realistic propagator. By increasing characteristic conduction velocities, a smooth transition can occur from self-sustaining bulk oscillations to travelling waves of various wavelengths, which may influence axonal growth during development. Our analytic results are also corroborated numerically using simulations on a large spatial grid.
Thus we provide here a comprehensive analysis of empirically constrained activity propagation in the context of MFMs, which will allow more realistic studies of mammalian brain activity in the future.
Abstract:
Anesthetic and analgesic agents act through a diverse range of pharmacological mechanisms. Existing empirical data clearly show that such "microscopic" pharmacological diversity is reflected in their "macroscopic" effects on the human electroencephalogram (EEG). Based on a detailed mesoscopic neural field model, we theoretically posit that anesthetic-induced EEG activity is due to selective parametric changes in synaptic efficacy and dynamics. Specifically, on the basis of physiologically constrained modeling, it is speculated that the selective modification of inhibitory or excitatory synaptic activity may differentially affect the EEG spectrum. Such results emphasize the importance of neural field theories of brain electrical activity for elucidating the principles whereby pharmacological agents affect the EEG. Such insights will contribute to improved methods for monitoring depth of anesthesia using the EEG.
Abstract:
This paper considers variations of a neuron pool selection method known as the Affordable Neural Network (AfNN). A saliency measure, based on the second derivative of the objective function, is proposed to assess the ability of a trained AfNN to provide neuronal redundancy. The discrepancies between the various affordability variants are explained by correlating unique subgroup selections with relevant saliency variations. Overall, this study shows that the way in which neurons are selected from a pool matters more for how salient individual neurons are than how often a particular neuron is used during training. The findings herein are relevant not only in providing an analogy to brain function but also in optimizing the way a neural network using the affordability method is trained.
Abstract:
Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be utilized for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it has been efficiently used for greedy pretraining of deep neural networks. Compared to the Neural Network (NN), the Gaussian Process (GP) has shown superiority in model inference, optimization and performance. The GP has been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms, such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Process Autoencoder Model (GPAM) for dimensionality reduction by extending the classic NN-based autoencoder to a GP-based autoencoder. Interestingly, the novel model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the back-constraint smooth function is represented by a GP. Experiments verify the performance of the newly proposed model.
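The reconstruction-error principle behind the classic NN autoencoder can be sketched with a single linear encoder/decoder pair trained by gradient descent. The GP-based mappings of GPAM are not reproduced here; the data and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Sketch: a minimal linear autoencoder trained by gradient descent on the
# mean squared reconstruction error. GPAM replaces these parametric maps
# with Gaussian processes; this only illustrates the autoencoding objective.
rng = np.random.default_rng(3)

# 300 points in 3-D lying near a 1-D subspace
z_true = rng.normal(size=(300, 1))
A = np.array([[1.0], [2.0], [-1.0]])
X = z_true @ A.T + 0.01 * rng.normal(size=(300, 3))

W_enc = rng.normal(scale=0.1, size=(3, 1))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(1, 3))   # decoder weights
lr = 0.01
for _ in range(2000):
    Z = X @ W_enc                  # encode to the 1-D latent space
    err = Z @ W_dec - X            # reconstruction error
    # gradient steps on the mean squared reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

With nonlinear encoder/decoder maps this becomes the standard autoencoder; GPAM's contribution, per the abstract, is to represent those maps with GPs, recovering a back-constrained GPLVM as a special case.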