163 results for Neural Differentiation


Relevance: 20.00%

Publisher:

Abstract:

A novel image segmentation method based on a constraint satisfaction neural network (CSNN) is presented. The new method uses CSNN-based relaxation but with a modified scanning scheme of the image. In the first level of the algorithm, pixels are visited at wider intervals and with wider neighborhoods; the intervals between pixels and their neighborhoods are reduced in the subsequent stages. This scheme contributes to the rapid and consistent formation of more regular segments. A cluster validity index for determining the number of segments is also added, making the proposed method a fully automatic unsupervised segmentation scheme. Segmentation quality is evaluated quantitatively by means of a novel evaluation criterion, and the results are promising.
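The coarse-to-fine scanning idea (wide intervals and neighborhoods first, then progressively finer ones) can be sketched as follows. The interval schedule, the gray-level cluster centers, and the block-propagation step are illustrative assumptions; the actual CSNN relaxation is not reproduced here.

```python
import numpy as np

def coarse_to_fine_labels(image, n_segments=3, levels=(4, 2, 1)):
    """Assign each pixel to the nearest of n_segments gray-level
    cluster centers, visiting pixels at progressively finer
    intervals and averaging over progressively smaller windows."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    # Illustrative cluster centers: evenly spaced gray levels.
    centers = np.linspace(image.min(), image.max(), n_segments)
    for step in levels:              # wide intervals first, then finer
        half = step                  # neighborhood shrinks with the interval
        for i in range(0, h, step):
            for j in range(0, w, step):
                window = image[max(0, i - half):i + half + 1,
                               max(0, j - half):j + half + 1]
                mean = window.mean()  # wider context at coarse levels
                lab = int(np.argmin(np.abs(centers - mean)))
                labels[i:i + step, j:j + step] = lab  # fill the whole block
    return labels
```

On a simple two-region image, the coarse pass forms large regular blocks that the finer passes then refine pixel by pixel.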

Relevance: 20.00%

Publisher:

Abstract:

This article discusses the identification of nonlinear dynamic systems using multi-layer perceptrons (MLPs). It addresses both structure uncertainty and parameter uncertainty, which have been widely explored in the literature on nonlinear system identification. The main contribution is an integrated analytic framework for automated neural network structure selection, parameter identification, and hysteresis network switching with guaranteed neural identification performance. First, an automated network structure selection procedure is proposed that operates within a fixed time interval for a given network construction criterion. Then, a network parameter updating algorithm is proposed with guaranteed bounded identification error. To cope with structure uncertainty, a hysteresis strategy is proposed that enables neural identifier switching with guaranteed network performance throughout the switching process. Both theoretical analysis and a simulation example show the efficacy of the proposed method.
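A minimal sketch of growing a network until a construction criterion is met, in the spirit of the automated structure selection described above. The single-hidden-layer network with random tanh features and a least-squares output layer, the error tolerance, and the toy nonlinear map are all illustrative assumptions, not the article's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_network(x, y, n_hidden):
    """One-hidden-layer network: random tanh features with a
    least-squares output layer (an illustrative simplification
    of MLP parameter identification)."""
    W = rng.normal(size=(x.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(x @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def select_structure(x, y, tol=1e-2, max_hidden=50):
    """Grow the hidden layer until a training-error criterion is
    met: a stand-in for the automated structure selection step."""
    for n in range(1, max_hidden + 1):
        W, b, beta = fit_network(x, y, n)
        err = np.mean((np.tanh(x @ W + b) @ beta - y) ** 2)
        if err < tol:
            return n, err
    return max_hidden, err

# Identify a simple (hypothetical) nonlinear map y[k] = f(u[k], u[k-1]).
u = rng.uniform(-1.0, 1.0, size=200)
x = np.column_stack([u[1:], u[:-1]])
y = 0.5 * np.sin(np.pi * u[1:]) + 0.3 * u[:-1] ** 2
n_selected, train_err = select_structure(x, y)
```

The article's framework additionally bounds the identification error and handles switching between candidate identifiers; this sketch only illustrates the growth-until-criterion loop.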

Relevance: 20.00%

Publisher:

Abstract:

In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points at which a synapse is modified. Moreover, the learning rule affects not only the synapse between the pre- and postsynaptic neurons, which is called homosynaptic plasticity, but also more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful for training neural networks to learn parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
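The mechanics of such a rule, with stochastic modification times, a reinforcement-signed Hebbian change, and a smaller heterosynaptic change to the remaining synapses of the same neurons, can be sketched as below. The learning rates, the probability p, and the choice of which remote synapses are affected are illustrative assumptions, not the rule from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_hebb_step(W, pre, post, reward, p=0.5, eta=0.1, hetero=0.02):
    """One update of an illustrative stochastic Hebb-like rule.

    W[i, j] is the synapse from presynaptic unit i to postsynaptic
    unit j.  With probability p the rule fires at this time step
    (the stochastic selection of modification times); the selected
    synapse receives a Hebbian change (pre * post) whose sign is set
    by a reinforcement signal, while the other synapses of the same
    pre- and postsynaptic units receive a smaller opposite change
    (heterosynaptic plasticity).  p, eta and hetero are assumptions.
    """
    if rng.random() >= p:          # no modification at this time step
        return W
    i = int(np.argmax(pre))        # most active pre/post units
    j = int(np.argmax(post))
    dw = eta * reward * pre[i] * post[j]
    remote = np.zeros_like(W, dtype=bool)
    remote[i, :] = True            # other synapses of the pre neuron
    remote[:, j] = True            # other synapses of the post neuron
    remote[i, j] = False           # ...excluding the selected synapse
    W[remote] -= hetero * dw       # heterosynaptic change
    W[i, j] += dw                  # homosynaptic change
    return W
```

Applied repeatedly inside a training loop, a positive reward strengthens the active synapse while slightly depressing its neighbors, giving the combined homo- and heterosynaptic behavior the abstract describes.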