2 results for Neural networks training

in DRUM (Digital Repository at the University of Maryland)


Relevance:

100.00%

Publisher:

Abstract:

(Deep) neural networks are increasingly being used for various computer vision and pattern recognition tasks due to their strong ability to learn highly discriminative features. However, quantitative analysis of their classification ability and their design philosophies remain nebulous. In this work, we use information theory to analyze concatenated restricted Boltzmann machines (RBMs) and propose a mutual information-based RBM neural network (MI-RBM). We develop a novel pretraining algorithm to maximize the mutual information between RBMs. Extensive experimental results on various classification tasks show the effectiveness of the proposed approach.
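
The abstract names a pretraining algorithm that maximizes mutual information between stacked RBMs but does not describe it. The sketch below is a minimal, illustrative stand-in only: two Bernoulli RBMs stacked and trained with standard CD-1, plus a histogram estimate of the mutual information between their hidden codes, i.e. the kind of quantity such an objective could push upward. The paper's actual objective, gradient, and architecture are not given here, and all names (RBM, cd1_step, mutual_information) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one step of contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def cd1_step(self, v0):
        # Positive phase: sample hidden units given data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer.
        v1 = sigmoid(h0_sample @ self.W.T + self.b)
        h1 = self.hidden_probs(v1)
        # Standard CD-1 parameter updates.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X; Y) in nats for two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy binary data; a real setup would use image features, per the abstract.
data = (rng.random((256, 32)) < 0.3).astype(float)

rbm1 = RBM(32, 16)
for _ in range(50):
    rbm1.cd1_step(data)
h1 = rbm1.hidden_probs(data)      # hidden code of the first RBM

rbm2 = RBM(16, 8)
for _ in range(50):
    rbm2.cd1_step(h1)
h2 = rbm2.hidden_probs(h1)        # hidden code of the second RBM

# A crude per-sample MI proxy between the two stacked codes -- the kind of
# quantity an MI-maximizing pretraining objective would increase.
print("I(h1; h2) ~", mutual_information(h1.mean(axis=1), h2.mean(axis=1)))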

Relevance:

100.00%

Publisher:

Abstract:

Recent efforts to develop large-scale neural architectures have paid relatively little attention to the use of self-organizing maps (SOMs). Part of the reason is that most conventional SOMs use a static encoding representation: Each input is typically represented by the fixed activation of a single node in the map layer. This not only carries information in an inefficient and unreliable way that impedes building robust multi-SOM neural architectures, but it is also inconsistent with rhythmic oscillations in biological neural networks. Here I develop and study an alternative encoding scheme that instead uses limit cycle attractors of multi-focal activity patterns to represent input patterns/sequences. Such a fundamental change in representation raises several questions: Can this be done effectively and reliably? If so, will map formation still occur? What properties would limit cycle SOMs exhibit? Could multiple such SOMs interact effectively? Could robust architectures based on such SOMs be built for practical applications? The principal results of examining these questions are as follows. First, conditions are established for limit cycle attractors to emerge in a SOM through self-organization when encoding both static and temporal sequence inputs. It is found that under appropriate conditions the learned limit cycles are stable, unique, and preserve input relationships. In spite of the continually changing activity in a limit cycle SOM, map formation continues to occur reliably. Next, associations between limit cycles in different SOMs are learned. It is shown that limit cycles in one SOM can be successfully retrieved by another SOM’s limit cycle activity. Control timings can be set quite arbitrarily during both training and activation. Importantly, the learned associations generalize to new inputs that have never been seen during training. Finally, a complete neural architecture based on multiple limit cycle SOMs is presented for robotic arm control. This architecture combines open-loop and closed-loop methods to achieve high accuracy and fast movements through smooth trajectories. The architecture is robust in that disrupting or damaging the system in a variety of ways does not completely destroy it. I conclude that limit cycle SOMs have great potential for use in constructing robust neural architectures.
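
The dissertation's oscillatory dynamics and learning rules are not given in this abstract. As a very rough orientation to the contrast it draws, the sketch below trains a standard Kohonen SOM and then encodes an input not as one static winner node but as a short repeating cycle over its k best-matching nodes -- a toy stand-in for limit cycle attractors of multi-focal activity, not the work's actual mechanism. All names (train_som, limit_cycle_code) are illustrative only.

import numpy as np

rng = np.random.default_rng(1)

GRID = 6          # 6x6 map layer
DIM = 3           # input dimensionality
weights = rng.random((GRID * GRID, DIM))

def train_som(data, epochs=30, lr0=0.5, sigma0=2.0):
    """Standard Kohonen learning with shrinking rate and neighborhood."""
    coords = np.array([(i // GRID, i % GRID) for i in range(GRID * GRID)], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))   # neighborhood function
            weights[:] += lr * h[:, None] * (x - weights)

def limit_cycle_code(x, k=4, steps=12):
    """Encode x as a periodic cycle over its k best-matching nodes,
    instead of the fixed activation of a single winner node."""
    order = np.argsort(((weights - x) ** 2).sum(axis=1))[:k]
    return [int(order[t % k]) for t in range(steps)]

data = rng.random((200, DIM))
train_som(data)
print(limit_cycle_code(data[0]))   # e.g. [7, 19, 8, 13, 7, 19, 8, 13, ...]

The point of the toy encoding is only to make the abstract's contrast concrete: the cyclic code carries a pattern of activity over several nodes through time, whereas the conventional static code would reduce each input to a single map location.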