765 results for Neural network architecture
Abstract:
This paper presents an efficient neural network for solving constrained nonlinear optimization problems. More specifically, a two-stage neural network architecture is developed and its internal parameters are computed using the valid-subspace technique. The main advantage of the developed network is that it treats the optimization and constraint terms in separate stages that do not interfere with each other. Moreover, the proposed approach does not require the specification of penalty or weighting parameters for its initialization.
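The abstract gives no implementation details, but the valid-subspace idea can be illustrated with a minimal sketch: restrict every update to the subspace satisfying linear equality constraints A x = b (an assumption made here for concreteness), so the constraint-handling stage never interferes with the optimization stage. All names and data below are hypothetical.

```python
# Minimal sketch (not the paper's exact two-stage network): keep iterates in the
# "valid subspace" of linear equality constraints A x = b while minimizing f(x).
import numpy as np

def valid_subspace_minimize(f_grad, A, b, steps=500, lr=1e-2):
    """Project gradient steps onto null(A) so the constraints are never violated."""
    AAt_inv = np.linalg.inv(A @ A.T)
    P = np.eye(A.shape[1]) - A.T @ AAt_inv @ A   # projector onto the valid subspace
    x = A.T @ AAt_inv @ b                        # feasible starting point (A x = b)
    for _ in range(steps):
        x = x - lr * (P @ f_grad(x))             # update never leaves the constraint set
    return x

# Example: minimize ||x - c||^2 subject to x1 + x2 + x3 = 1 (hypothetical data).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([0.9, 0.3, 0.1])
x_star = valid_subspace_minimize(lambda x: 2 * (x - c), A, b)
print(x_star, A @ x_star)   # stays on the constraint plane throughout
```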
Abstract:
A new classification of microtidal sand and gravel beaches with very different morphologies is presented. Fourteen variables were used across 557 studied transects; among them, the depth of the Posidonia oceanica is worth emphasizing. The classification was performed for 9 types of beaches: Type 1: Sand and gravel beaches, Type 2: Sand and gravel separated beaches, Type 3: Gravel and sand beaches, Type 4: Gravel and sand separated beaches, Type 5: Pure gravel beaches, Type 6: Open sand beaches, Type 7: Supported sand beaches, Type 8: Bisupported sand beaches and Type 9: Enclosed beaches. Several tools were used for the classification: discriminant analysis, neural networks and Support Vector Machines (SVM), and the results were then compared. As there is no theory for deciding which neural network architecture is most convenient for a particular data set, an experimental study was performed with different numbers of neurons in the hidden layer; an architecture with 30 neurons was finally chosen. Different kernels were employed for SVM (Linear, Polynomial, Radial basis function and Sigmoid). The results obtained with discriminant analysis were not as good as those obtained with the other two methods (ANN and SVM), which showed similar success.
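As a rough illustration of the comparison described above, the sketch below uses scikit-learn stand-ins (MLPClassifier and SVC, not the authors' software) to run the hidden-layer-size search and the four SVM kernels; the data are random placeholders for the 557 transects and 14 variables.

```python
# Hedged sketch of the model comparison; variable names and data are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: 557 transects x 14 variables, y: beach-type labels 1..9 (hypothetical stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(557, 14))
y = rng.integers(1, 10, size=557)

# Trial-and-error search over hidden-layer sizes (the abstract settles on 30 neurons).
for n_hidden in (10, 20, 30, 40):
    ann = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000)
    print("ANN", n_hidden, cross_val_score(ann, X, y, cv=5).mean())

# SVM with the four kernels mentioned in the abstract.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    print("SVM", kernel, cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean())
```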
Abstract:
* Supported by INTAS 00-626 and TIC 2003-09319-c03-03.
Abstract:
Master's dissertation in Systems and Computer Engineering - Control Systems area, Faculdade de Ciências e Tecnologia, Univ. do Algarve, 2001
Abstract:
Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the 'Single Network Adaptive Critic' (SNAC), is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual-network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load, and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical Systems (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.
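A hedged sketch of the SNAC idea, reduced here to a discrete linear-quadratic problem (an assumption for brevity; the paper addresses a wider nonlinear class): a single critic maps the state x_k to the costate lambda_{k+1}, the control follows from the stationarity equation, and no action network is needed. The system matrices below are hypothetical.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # hypothetical discrete dynamics x_{k+1} = A x_k + B u_k
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])       # quadratic state and control weights

X = np.random.uniform(-1.0, 1.0, size=(200, 2))   # states sampled from the training domain
W = np.zeros((2, 2))                              # linear critic: lambda_{k+1} = W x_k
for _ in range(500):                              # successive approximation of the critic
    Lam_next = X @ W.T                                # critic output lambda_{k+1} for each x_k
    U = -Lam_next @ B @ np.linalg.inv(R)              # stationarity: u_k = -R^{-1} B^T lambda_{k+1}
    X_next = X @ A.T + U @ B.T                        # propagate the sampled states one step
    Targets = X_next @ Q.T + (X_next @ W.T) @ A       # costate equation: Q x_{k+1} + A^T lambda_{k+2}
    W = np.linalg.lstsq(X, Targets, rcond=None)[0].T  # refit the critic to the new targets
print(W)  # should settle near the optimal costate map; no separate action network is trained
```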
Abstract:
This paper describes a special-purpose neural computing system for face identification. The system architecture and hardware implementation are introduced in detail. An algorithm based on biomimetic pattern recognition has been embedded. Over a total of 1200 face identification tests, the false rejection rate was 3.7% and the false acceptance rate was 0.7%.
Abstract:
A number of researchers have investigated the impact of network architecture on the performance of artificial neural networks. Particular attention has been paid to the impact of architectural issues on the performance of the multi-layer perceptron, and to the use of various strategies to attain an optimal network structure. However, there are still perceived limitations with the multi-layer perceptron, and networks that employ a different architecture have gained in popularity in recent years, particularly networks that implement a more localised solution, where the solution in one area of the problem space does not impact, or has only a minimal impact on, other areas of the space. In this study, we discuss the major architectural issues affecting the performance of a multi-layer perceptron, before moving on to examine in detail the performance of a new localised network, namely the bumptree. The work presented here examines how employing alternatives to the long-established multi-layer perceptron affects the performance of artificial neural networks. In particular, networks in which each parameter of the final architecture has a localised impact on the problem space being modelled are examined. The alternatives examined are the radial basis function and bumptree neural networks, and the impact of architectural issues on the performance of these networks is examined. Particular attention is paid to the bumptree, with new techniques for both developing the bumptree structure and employing this structure to classify patterns being examined.
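As a point of contrast with the globally acting multi-layer perceptron, the sketch below shows a minimal Gaussian radial basis function network in NumPy; it is not the paper's bumptree, and the centres and widths are chosen naively for brevity.

```python
# Hedged illustration of a "localised" network: a minimal Gaussian RBF network.
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))   # each unit responds only near its centre

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

centers = np.linspace(-3, 3, 12)[:, None]     # evenly spaced centres (naive choice)
Phi = rbf_design(X, centers, width=0.6)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output weights by linear least squares

# Because each basis function is local, changing one weight mainly affects the fit
# near its centre, unlike a multi-layer perceptron's globally acting sigmoid units.
print(np.abs(Phi @ w - y).mean())
```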
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
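Writing the training-set error estimate generically as err̂(m), the bound described above can be restated (with constants and the log A, log m factors suppressed) as:

```latex
% Restatement of the bound sketched in the abstract; constants and log A, log m factors omitted.
\Pr[\text{misclassification}]
  \;\le\; \widehat{\mathrm{err}}(m) \;+\; O\!\left( A^{3}\sqrt{\tfrac{\log n}{m}} \right)
```

Here n is the input dimension, m is the number of training patterns, and A bounds the sum of the magnitudes of the weights into each unit.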
Abstract:
The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine that the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements are able to reliably calibrate the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that ‘grounding’ of modeled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for stereotypical behaviours seen in infant mammals and those animals with certain motor deficits. We conjecture that these calibration principles may extend to the calibration of other neural systems involved in motion tracking and the representation of space, such as grid cells in entorhinal cortex.
Abstract:
An Artificial Neural Network (ANN) is a computational modeling tool which has found extensive acceptance in many disciplines for modeling complex real-world problems. An ANN can model problems through learning by example, rather than by fully understanding the detailed characteristics and physics of the system. In the present study, the accuracy and predictive power of an ANN was evaluated in predicting the kinematic viscosity of biodiesels over the wide range of temperatures typically encountered in diesel engine operation. In this model, temperature and chemical composition of biodiesel were used as input variables. In order to obtain the necessary data for model development, the chemical composition and temperature-dependent fuel properties of ten different types of biodiesel were measured experimentally using laboratory-standard testing equipment following internationally recognized testing procedures. The Neural Networks Toolbox of MatLab R2012a software was used to train, validate and simulate the ANN model on a personal computer. The network architecture was optimised following a trial-and-error method to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the absolute fraction of variance (R2), root mean squared (RMS) and maximum average error percentage (MAEP) between predicted and experimental results. This study found that the ANN is highly accurate in predicting the viscosity of biodiesel and demonstrates the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties at different temperature levels. The model developed in this study can therefore be a useful tool for accurately predicting biodiesel fuel properties instead of undertaking costly and time-consuming experimental tests.
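The abstract names three evaluation metrics; the sketch below computes plausible versions of them in NumPy. The exact formulas the authors used, particularly for MAEP, are assumptions here, and the viscosity values are placeholders.

```python
# Hedged sketch of the evaluation metrics named above; formulas are assumed, not quoted.
import numpy as np

def evaluate(predicted, experimental):
    e = predicted - experimental
    r2 = 1.0 - np.sum(e ** 2) / np.sum((experimental - experimental.mean()) ** 2)  # coefficient of determination
    rms = np.sqrt(np.mean(e ** 2))                                                 # root mean squared error
    maep = np.max(np.abs(e / experimental)) * 100.0   # assumed reading of "maximum ... error percentage"
    return r2, rms, maep

# Placeholder values standing in for measured vs. ANN-predicted kinematic viscosity (mm^2/s).
experimental = np.array([4.12, 4.56, 5.01, 5.48, 6.02])
predicted = np.array([4.10, 4.60, 4.95, 5.50, 6.10])
print(evaluate(predicted, experimental))
```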
Abstract:
This paper presents an off-line (finite time interval) and on-line learning direct adaptive neural controller for an unstable helicopter. The neural controller is designed to track the pitch rate command signal generated using the reference model. A helicopter having a soft in-plane four-bladed hingeless main rotor and a four-bladed tail rotor with conventional mechanical controls is used for the simulation studies. For the simulation study, a linearized helicopter model at different straight and level flight conditions is considered. A neural network with a linear filter architecture, trained using backpropagation through time, is used to approximate the control law. The controller network parameters are adapted using update rules derived from Lyapunov synthesis. The off-line trained (for a finite time interval) network provides the necessary stability and tracking performance. The on-line learning is used to adapt the network under varying flight conditions, and this ability is demonstrated in the presence of parameter uncertainties. The performance of the proposed direct adaptive neural controller (DANC) is compared with a feedback error learning neural controller (FENC).
Abstract:
The recently developed single network adaptive critic (SNAC) design has been used in this study to design a power system stabiliser (PSS) for enhancing the small-signal stability of power systems over a wide range of operating conditions. PSS design is formulated as a discrete non-linear quadratic regulator problem. SNAC is then used to solve the resulting discrete-time optimal control problem. SNAC uses only a single critic neural network instead of the action-critic dual network architecture of typical adaptive critic designs. SNAC eliminates the iterative training loops between the action and critic networks and greatly simplifies the training procedure. The performance of the proposed PSS has been tested on a single machine infinite bus test system for various system and loading conditions. The proposed stabiliser, which is relatively easier to synthesise, consistently outperformed stabilisers based on conventional lead-lag and linear quadratic regulator designs.
Abstract:
Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning. One of the advantages of DCNNs is their representation robustness to object locations, which is useful for object recognition tasks. However, this also discards spatial information, which is useful when dealing with topological information of the image (e.g. scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture to tackle the scene labeling task. The depth is achieved by incorporating predictions from multiple early layers of the DCNN. The width is achieved by combining multiple outputs of the network. We then further refine the parsing task by adopting graphical models (GMs) as a post-processing step to incorporate spatial and contextual information into the network. The new strategy for a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.
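A hedged PyTorch sketch of the "deeper and wider" idea described above: per-pixel score maps are taken from several early stages and fused into a single prediction. The layer sizes, the averaging fusion rule, and the class count are placeholders, and the graphical-model post-processing step is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleLabeler(nn.Module):
    def __init__(self, num_classes=59):          # placeholder; PASCAL-Context is often evaluated on 59 classes
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # one prediction head per stage ("depth" from early layers, "width" from multiple outputs)
        self.head1 = nn.Conv2d(32, num_classes, 1)
        self.head2 = nn.Conv2d(64, num_classes, 1)
        self.head3 = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        # upsample every head to the input resolution and average the score maps
        scores = [F.interpolate(h(f), size=size, mode='bilinear', align_corners=False)
                  for h, f in ((self.head1, f1), (self.head2, f2), (self.head3, f3))]
        return torch.stack(scores).mean(0)

logits = MultiScaleLabeler()(torch.randn(1, 3, 128, 128))
print(logits.shape)   # (1, 59, 128, 128): one score map per class at input resolution
```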
Abstract:
This paper proposes a Single Network Adaptive Critic (SNAC) based Power System Stabilizer (PSS) for enhancing the small-signal stability of power systems over a wide range of operating conditions. SNAC uses only a single critic neural network instead of the action-critic dual network architecture of typical adaptive critic designs. SNAC eliminates the iterative training loops between the action and critic networks and greatly simplifies the training procedure. The performance of the proposed PSS has been tested on a Single Machine Infinite Bus test system for various system and loading conditions. The proposed stabilizer, which is relatively easier to synthesize, consistently outperformed stabilizers based on conventional lead-lag and linear quadratic regulator designs.
Abstract:
This paper elucidates the methodology of applying an artificial neural network model (ANNM) to predict the percent swell of calcitic soil in sulphuric acid solutions, a complex phenomenon involving many parameters. The swell data required for modelling are obtained experimentally using conventional oedometer tests under nominal surcharge. The phases in ANN modelling include the optimal design of the architecture and the operation and training of the architecture. The designed optimal neural model (3-5-1) is a fully connected three-layer feed-forward network with a symmetric sigmoid activation function, trained by the back-propagation algorithm to minimize a quadratic error criterion. The model requires parameters such as duration of interaction, calcite mineral content and acid concentration for the prediction of swell. The observed strong correlation coefficient (R2 = 0.9979) between the values determined by experiment and those predicted using the developed model demonstrates that the network can provide answers to complex problems in geotechnical engineering.
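A minimal sketch of a 3-5-1 feed-forward network like the one described above, built with scikit-learn rather than the authors' tooling; "symmetric sigmoid" is read here as tanh, and the swell data are hypothetical placeholders, not the paper's oedometer measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Inputs: interaction duration (days), calcite content (%), acid concentration (placeholder units).
X = np.array([[7, 20, 0.5], [14, 20, 0.5], [28, 40, 1.0], [56, 40, 1.0],
              [7, 60, 2.0], [14, 60, 2.0], [28, 80, 4.0], [56, 80, 4.0]], dtype=float)
y = np.array([1.2, 1.8, 3.5, 4.9, 2.6, 3.4, 6.1, 8.0])   # percent swell (hypothetical)

# 3 inputs -> 5 tanh ("symmetric sigmoid") hidden units -> 1 output, fit to squared error.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,), activation='tanh', max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[28, 60, 2.0]]))   # predicted percent swell for an unseen combination
```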