20 results for Associative Memory
Abstract:
Working memory neural networks are characterized that encode the invariant temporal order of sequential events, even when those events are presented at widely differing speeds, durations, and interstimulus intervals. This temporal order code is designed to enable all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures that self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described, based on the model of Seibert and Waxman [1].
Abstract:
Working memory neural networks are characterized that encode the invariant temporal order of sequential events. Inputs to the networks, called Sustained Temporal Order REcurrent (STORE) models, may be presented at widely differing speeds, durations, and interstimulus intervals. The STORE temporal order code is designed to enable all emergent groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures that self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network.
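To make the ARTSTORE cascade concrete, the sketch below wires up the five named stages with toy stand-ins. Only the wiring (Invariant Preprocessor --> ART 2 --> STORE --> ART 2 --> Outstar readout) follows the abstract; every function body, parameter value, and name here is an illustrative assumption, not the published dynamics.

```python
import numpy as np

def invariant_preprocessor(view):
    """Stand-in: normalize a 2-D view so trivial scale differences vanish."""
    v = np.asarray(view, dtype=float).ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def art2_categorize(x, prototypes, vigilance=0.9):
    """Toy ART-2-like choice: reuse a prototype whose match exceeds
    vigilance, otherwise commit a new category node."""
    for j, w in enumerate(prototypes):
        if float(np.dot(x, w)) >= vigilance:
            return j
    prototypes.append(x)
    return len(prototypes) - 1

def store_push(memory, category, gamma=0.8):
    """STORE-like push: rescaling every stored activation by the same
    factor keeps the relative activations of earlier items invariant."""
    return [(c, a * gamma) for c, a in memory] + [(category, 1.0)]

# One pass over a sequence of (fake) 2-D aspect views of an object.
rng = np.random.default_rng(0)
view_prototypes, object_prototypes = [], []
memory = []
for view in rng.random((5, 16)):
    j = art2_categorize(invariant_preprocessor(view), view_prototypes)
    memory = store_push(memory, j)

# The second ART 2 stage categorizes the whole STORE pattern as a 3-D
# object code; an outstar network would then learn a readout from it.
pattern = np.zeros(len(view_prototypes))
for c, a in memory:
    pattern[c] = max(pattern[c], a)
n = np.linalg.norm(pattern)
object_code = art2_categorize(pattern / n if n > 0 else pattern,
                              object_prototypes, vigilance=0.8)
print("object category:", object_code)
```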
Abstract:
This paper introduces a new class of predictive ART architectures, called the Adaptive Resonance Associative Map (ARAM), which performs rapid yet stable heteroassociative learning in a real-time environment. ARAM can be visualized as two ART modules sharing a single recognition code layer. The unit for recruiting a recognition code is a pattern pair. Code stabilization is ensured by restricting coding to states where resonance is reached in both modules. Simulation results have shown that, using fast learning in a real-time environment, ARAM is capable of self-stabilizing association of arbitrary pattern pairs of arbitrary complexity appearing in an arbitrary sequence. Owing to the symmetrical network structure, associative recall can be performed in both directions.
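The shared-code architecture is easy to sketch. In the toy associator below, one list of category nodes carries a weight vector for each of the two ART modules; learning commits a code only when both sides pass their vigilance tests, and recall runs in either direction. The fuzzy-min match rule is standard in the fuzzy-ART family, but the class, its parameter values, and the omission of complement coding are simplifying assumptions.

```python
import numpy as np

def match(x, w):
    """Fuzzy match |x ^ w| / |x|, with ^ the component-wise minimum."""
    return np.minimum(x, w).sum() / x.sum()

class ToyARAM:
    """Two ART modules sharing one recognition layer: one (wa, wb)
    weight pair per shared category node."""
    def __init__(self, rho_a=0.8, rho_b=0.8):
        self.rho_a, self.rho_b = rho_a, rho_b
        self.wa, self.wb = [], []

    def learn(self, a, b):
        for j in range(len(self.wa)):
            # Code stabilization: commit only if BOTH modules resonate.
            if (match(a, self.wa[j]) >= self.rho_a and
                    match(b, self.wb[j]) >= self.rho_b):
                self.wa[j] = np.minimum(a, self.wa[j])  # fast learning
                self.wb[j] = np.minimum(b, self.wb[j])
                return j
        self.wa.append(a.copy()); self.wb.append(b.copy())
        return len(self.wa) - 1

    def recall_b(self, a):
        """Forward recall: best-matching code on side A reads out side B."""
        j = max(range(len(self.wa)), key=lambda k: match(a, self.wa[k]))
        return self.wb[j]

    def recall_a(self, b):
        """Backward recall, possible because the structure is symmetric."""
        j = max(range(len(self.wb)), key=lambda k: match(b, self.wb[k]))
        return self.wa[j]

net = ToyARAM()
net.learn(np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0]))
print(net.recall_b(np.array([1.0, 0.0, 1.0])))  # -> [0. 1.]
```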
Abstract:
The processes by which humans and other primates learn to recognize objects have been the subject of many models. Processes such as learning, categorization, attention, memory search, expectation, and novelty detection work together at different stages to realize object recognition. In this article, Gail Carpenter and Stephen Grossberg describe one such model class (Adaptive Resonance Theory, ART) and discuss how its structure and function might relate to known neurological learning and memory processes, such as how inferotemporal cortex can recognize both specialized and abstract information, and how medial temporal amnesia may be caused by lesions in the hippocampal formation. The model also suggests how hippocampal and inferotemporal processing may be linked during recognition learning.
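As a rough illustration of how memory search, top-down expectation, and novelty detection interlock in ART, the sketch below runs the classic search cycle: categories are tried in order of bottom-up activation, each top-down expectation is tested against a vigilance criterion, a mismatch resets the search, and total mismatch recruits a new category. This is a schematic of the generic ART search loop only; the numeric rules are assumptions, not the specific models the article reviews.

```python
import numpy as np

def art_search(x, weights, vigilance=0.85):
    """One ART-style search cycle over stored category weights; the
    input x is assumed non-negative with a positive sum."""
    order = sorted(range(len(weights)),
                   key=lambda j: float(np.dot(x, weights[j])),
                   reverse=True)                      # bottom-up activation
    for j in order:
        expectation = weights[j]                      # top-down expectation
        if np.minimum(x, expectation).sum() / x.sum() >= vigilance:
            return j, "resonance"                     # attention locks in
        # Mismatch: reset suppresses this category; the search continues.
    weights.append(x.copy())                          # novelty detected
    return len(weights) - 1, "new category"

codes = []
print(art_search(np.array([1.0, 0.0, 1.0]), codes))  # -> (0, 'new category')
print(art_search(np.array([1.0, 0.0, 1.0]), codes))  # -> (0, 'resonance')
```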
Abstract:
Neural network models of working memory, called Sustained Temporal Order REcurrent (STORE) models, are described. They encode the invariant temporal order of sequential events in short-term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items is invariant in the sense that relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of the primacy, recency, or bowed temporal order gradients that will be stored.
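A deliberately simplified, discrete-time toy can show two of the properties this abstract emphasizes: invariance of relative activations and parameter-controlled order gradients. In the sketch below a single factor gamma stands in for the network parameters; gamma < 1 yields a recency gradient and gamma > 1 a primacy gradient. The real STORE models are continuous-time shunting networks, and their bowed gradients and two-level dynamics are beyond this toy.

```python
def present(memory, gamma):
    """Store one new event.  All earlier activations are rescaled by the
    same factor, so their ratios -- the invariant temporal order code --
    never change as new events perturb the system."""
    return [a * gamma for a in memory] + [1.0]

for gamma, label in [(0.7, "recency"), (1.3, "primacy")]:
    memory = []
    for _ in range(5):                     # five sequential events
        memory = present(memory, gamma)
    total = sum(memory)                    # normalize for display
    print(label, [round(a / total, 3) for a in memory])

# recency: activations increase toward the most recent item
# primacy: activations decrease from the first item onward
```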