998 results for Neural Conduction
Abstract:
Artificial neural network (ANN) learning methods provide a robust, non-linear approach to approximating the target function in many classification, regression and clustering problems, and ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge: the poor comprehensibility of the learned ANN, and its inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explaining the decision process of ANNs in the form of symbolic rules (predicate rules with variables); and (2) providing explanatory capability by mapping the general conceptual knowledge learned by the neural networks into a knowledge base for use in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from trained ANNs. The extracted knowledge is represented as restricted first-order logic rules and subsequently allows user interaction through an interface to a knowledge-based reasoner. The performance of GYAN is demonstrated on a number of real-world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived that describes the overall behaviour of the ANN with high accuracy and fidelity; and (2) a concise explanation is given (in terms of the rules, facts and predicates activated in a reasoning episode) as to why a particular instance is classified into a certain category.
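The abstract above describes GYAN only at a high level; as a loose, hedged illustration of the underlying idea (reading symbolic rules off a trained network), the following Python sketch trains a tiny classifier on an invented Boolean concept and reports every positively classified input as a conjunctive IF-THEN rule. The concept, network size and all names are assumptions for illustration, not GYAN's actual multi-stage method.

# Hypothetical sketch of rule extraction by exhaustive probing: train a
# tiny network on a Boolean concept, then read the input combinations it
# classifies as positive back as conjunctive rules.
# (Illustrative only; GYAN's actual multi-stage method is not reproduced.)
from itertools import product

import numpy as np
from sklearn.neural_network import MLPClassifier

# Invented target concept: (x0 AND x1) OR x2, over 3 Boolean inputs.
X = np.array(list(product([0, 1], repeat=3)))
y = np.array([(a and b) or c for a, b, c in X])

net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
net.fit(X, y)

# Probe every input vector and express positive predictions as rules.
for x in X:
    if net.predict([x])[0] == 1:
        literals = [f"x{i}" if v else f"NOT x{i}" for i, v in enumerate(x)]
        print("IF " + " AND ".join(literals) + " THEN class = 1")

Note that exhaustive probing yields one (unsimplified) rule per positive input; a real extraction method would generalise and compress these.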
Abstract:
Neural networks (NNs) are discussed in connection with their possible use in induction machine drives. The mathematical model of the NN as well as a commonly used learning algorithm is presented. Possible applications of NNs to induction machine control are discussed. A simulation of an NN successfully identifying the nonlinear multivariable model of an induction-machine stator transfer function is presented. Previously published applications are discussed, and some possible future applications are proposed.
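As a minimal sketch of the identification step described above, assuming an invented two-input/two-output nonlinear mapping in place of the actual induction-machine stator model, an MLP can be fitted to input/output samples:

# Hedged sketch of neural-network system identification: fit an MLP to a
# made-up nonlinear two-input/two-output mapping standing in for the
# stator dynamics (the paper's actual machine model is not reproduced).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
U = rng.uniform(-1, 1, size=(2000, 2))          # inputs, e.g. voltages
# Hypothetical nonlinear plant: outputs are nonlinear mixes of inputs.
Y = np.column_stack([np.tanh(2 * U[:, 0]) + 0.3 * U[:, 1] ** 2,
                     U[:, 0] * U[:, 1]])

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000,
                     random_state=0)
model.fit(U, Y)
print("identification MSE:", np.mean((model.predict(U) - Y) ** 2))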
Abstract:
The use of artificial neural networks (ANNs) to identify and control induction machines is proposed. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics, and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Both systems are inherently adaptive as well as self-commissioning. The current controller is a completely general nonlinear controller that can be used together with any drive algorithm. Various advantages of these control schemes over conventional schemes are cited, and the combined speed and current control scheme is compared with the standard vector control scheme.
Abstract:
This paper proposes the use of artificial neural networks (ANNs) to identify and control an induction machine. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics, and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Various advantages of these control schemes over other conventional schemes are cited, and the performance of the combined speed and current control scheme is compared with that of the standard vector control scheme.
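The two abstracts above describe indirect adaptive control: identify the dynamics with an ANN, then control through the learned model. A hedged one-dimensional sketch, with a made-up scalar plant standing in for the machine dynamics and a grid search standing in for a proper controller, might look like this:

# Hedged sketch of indirect adaptive control: identify a (made-up)
# nonlinear plant with an MLP, then choose each control input by
# one-step-ahead search over the learned model. The plant and all
# parameters are assumptions; the paper's machine model is not used.
import numpy as np
from sklearn.neural_network import MLPRegressor

def plant(x, u):                       # hypothetical nonlinear dynamics
    return 0.8 * x + 0.5 * np.tanh(u) - 0.1 * x ** 3

# --- identification phase: excite the plant with random inputs ---
rng = np.random.default_rng(1)
xs, us, xn = [0.0], [], []
for _ in range(3000):
    u = rng.uniform(-2, 2)
    x_next = plant(xs[-1], u)
    us.append(u); xn.append(x_next); xs.append(x_next)
X = np.column_stack([xs[:-1], us])     # features: (state, input)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X, xn)

# --- control phase: pick u minimising one-step-ahead tracking error ---
x, x_ref = 0.0, 0.6
candidates = np.linspace(-2, 2, 201)
for k in range(20):
    preds = model.predict(np.column_stack([np.full_like(candidates, x),
                                           candidates]))
    u = candidates[np.argmin((preds - x_ref) ** 2)]
    x = plant(x, u)
print("final state:", x, "reference:", x_ref)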
Abstract:
A hybrid genetic algorithm/scaled conjugate gradient regularisation method is designed to alleviate ANN 'over-fitting'. Applied to day-ahead load forecasting, the proposed algorithm performs better than early stopping and Bayesian regularisation, showing promising initial results.
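A hedged sketch of the hybrid idea, with synthetic data, scikit-learn's L-BFGS solver standing in for scaled conjugate gradient, and a toy genetic algorithm evolving the weight-decay strength (all parameters assumed for illustration):

# Hedged sketch: a tiny genetic algorithm evolves the regularisation
# strength (weight decay), and each candidate network is trained by a
# gradient-based solver. scikit-learn has no scaled conjugate gradient,
# so L-BFGS stands in; the data are synthetic, not load data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 3))            # stand-in "load" features
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=400)
Xtr, Xva, ytr, yva = X[:300], X[300:], y[:300], y[300:]

def fitness(log_alpha):
    net = MLPRegressor(hidden_layer_sizes=(16,), alpha=10.0 ** log_alpha,
                       solver="lbfgs", max_iter=500, random_state=0)
    net.fit(Xtr, ytr)
    return -np.mean((net.predict(Xva) - yva) ** 2)   # higher is better

pop = rng.uniform(-6, 1, size=8)                     # population of log10(alpha)
for gen in range(5):
    scores = np.array([fitness(a) for a in pop])
    parents = pop[np.argsort(scores)[-4:]]           # keep the best half
    children = parents + rng.normal(0, 0.3, size=4)  # mutate
    pop = np.concatenate([parents, children])
best = pop[np.argmax([fitness(a) for a in pop])]
print("selected weight decay alpha =", 10.0 ** best)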
Abstract:
The indoline dyes D102, D131, D149, and D205 have been characterized when adsorbed on fluorine-doped tin oxide (FTO) and TiO2 electrode surfaces. Adsorption from 50:50 acetonitrile - tert-butanol onto fluorine-doped tin oxide (FTO) allows approximate Langmuirian binding constants of 6.5 × 10⁴, 2.01 × 10³, 2.0 × 10⁴, and 1.5 × 10⁴ mol⁻¹ dm³, respectively, to be determined. Voltammetric data obtained in acetonitrile/0.1 M NBu4PF6 indicate reversible one-electron oxidation at E_mid = 0.94, 0.91, 0.88, and 0.88 V vs Ag/AgCl (3 M KCl), respectively, with dye aggregation (at high coverage) causing additional peak features at more positive potentials. Slow chemical degradation processes and electron transfer catalysis for iodine oxidation were observed for all four oxidized indolinium cations. When adsorbed onto TiO2 nanoparticle films (ca. 9 nm particle diameter and ca. 3 μm film thickness on FTO), reversible voltammetric responses with E_mid = 1.08, 1.156, 0.92, and 0.95 V vs Ag/AgCl (3 M KCl), respectively, suggest exceptionally fast hole hopping diffusion (with D_app > 5 × 10⁻⁹ m² s⁻¹) for adsorbed layers of the four indoline dyes, presumably due to π-π stacking in surface aggregates. Slow dye degradation is shown to affect charge transport via electron hopping. Spectroelectrochemical data for the adsorbed indoline dyes on FTO-TiO2 revealed a red shift of absorption peaks after oxidation and the presence of a strong charge transfer band in the near-IR region. The implications of the indoline dye reactivity and fast hole mobility for solar cell devices are discussed.
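For readers unfamiliar with how such Langmuirian binding constants are obtained, a hedged sketch of fitting a Langmuir isotherm to coverage-versus-concentration data follows; the data points are invented for illustration and are not the paper's measurements.

# Hedged sketch of estimating a Langmuirian binding constant K from
# coverage-vs-concentration data, as in the dye adsorption study above.
# The data points below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, K, gamma_max):
    # Fractional surface coverage: gamma = gamma_max * K*c / (1 + K*c)
    return gamma_max * K * c / (1.0 + K * c)

# Hypothetical dye concentrations (mol dm^-3) and measured coverages.
c = np.array([1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4])
gamma = np.array([0.06, 0.16, 0.39, 0.65, 0.86, 0.95])

(K, gamma_max), _ = curve_fit(langmuir, c, gamma, p0=[1e4, 1.0])
print(f"K = {K:.3g} mol^-1 dm^3, gamma_max = {gamma_max:.2f}")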
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
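Stated compactly in LaTeX (constants and the log A, log m factors suppressed), the bound described above reads:

% Generalization bound from the abstract: for a two-layer sigmoid
% network with per-unit weight magnitudes summing to at most A, input
% dimension n, and m training patterns, where \widehat{\mathrm{err}}
% is the squared-error-related training estimate mentioned above:
\Pr[\text{misclassification}]
  \;\le\; \widehat{\mathrm{err}}
  \;+\; O\!\left( A^{3} \sqrt{\frac{\log n}{m}} \right)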
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.