983 results for Declarative anterograde memory
Abstract:
Communication and synchronization stand as the dual bottlenecks in the performance of parallel systems, and especially those that attempt to alleviate the programming burden by incurring overhead in these two domains. We formulate the notions of communicable memory and lazy barriers to help achieve efficient communication and synchronization. These concepts are developed in the context of BSPk, a toolkit library for programming networks of workstations (and other distributed-memory architectures in general) based on the Bulk Synchronous Parallel (BSP) model. BSPk emphasizes efficiency in communication by minimizing local memory-to-memory copying, and in barrier synchronization by not forcing a process to wait unless it needs remote data. Both the message passing (MP) and distributed shared memory (DSM) programming styles are supported in BSPk. MP helps processes efficiently exchange short-lived unnamed data values, when the identity of either the sender or receiver is known to the other party. By contrast, DSM supports communication between processes that may be mutually anonymous, so long as they can agree on variable names in which to store shared temporary or long-lived data.
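As a rough illustration of these ideas, the following single-process Python sketch mimics a lazy barrier and DSM-style named variables. The names and structure are assumptions made for the example; they are not the BSPk C API.

```python
# A minimal, single-process sketch of the lazy-barrier and DSM ideas only.
# sync() ends a superstep without blocking; the wait is deferred to the
# first read of remotely written data.

class Superstep:
    def __init__(self):
        self.inbox = {}         # DSM-style named variables written this superstep
        self.delivered = False  # becomes True once "remote" writes are visible

    def put(self, name, value):
        # Communicable memory idea: hand the buffer over instead of copying it.
        self.inbox[name] = value

    def sync(self):
        # End of superstep: do not block, just note that delivery is pending.
        self.delivered = False

    def get(self, name):
        if not self.delivered:
            self._wait_for_remote_writes()   # the only blocking point
        return self.inbox[name]

    def _wait_for_remote_writes(self):
        # In a real system this would wait on the network; here it is a stub.
        self.delivered = True


step = Superstep()
step.put("partial_sum", 42)     # DSM-style write under an agreed variable name
step.sync()                     # lazy barrier: returns immediately
print(step.get("partial_sum"))  # blocks (if needed) only when the data is read
```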
Abstract:
Transport protocols are an integral part of the inter-process communication (IPC) service used by application processes to communicate over the network infrastructure. With almost 30 years of research on transport, one would have hoped that we would have a good handle on the problem by now. Unfortunately, that is not true. As the Internet continues to grow, new network technologies and new applications continue to emerge, keeping transport protocols in a never-ending flux as they are continuously adapted to these new environments. In this work, we propose a clean-slate transport architecture that renders all possible transport solutions as simply combinations of policies instantiated on a single common structure. We identify a minimal set of mechanisms that, once instantiated with the appropriate policies, allows any transport solution to be realized. Given our proposed architecture, we contend that there are no more transport protocols to design, only policies to specify. We implement our transport architecture in a declarative language, Network Datalog (NDlog), making the specification of different transport policies easy, compact, reusable, dynamically configurable, and potentially verifiable. In NDlog, transport state is represented as database relations, state is updated/queried using database operations, and transport policies are specified using declarative rules. We identify limitations of NDlog that could potentially threaten the correctness of our specification. We propose several language extensions to NDlog that would significantly improve the programmability of transport policies.
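A toy Python sketch of the state-as-relations idea follows. The relation layout, field names, and the retransmission rule are illustrative assumptions, and the rule is written in Python rather than NDlog syntax.

```python
# Toy illustration: transport state kept in "relations" (tuples), with a
# retransmission policy expressed as a rule over those relations.

import time

# Relation of sent packets: (seq, send_time, acked)
sent_packets = [
    (1, time.time() - 2.0, True),
    (2, time.time() - 1.5, False),
    (3, time.time() - 0.1, False),
]
rto = 1.0  # retransmission-timeout policy parameter

# Declarative-style rule, roughly:
#   retransmit(seq) :- sent(seq, t, acked=False), now - t > rto
def retransmit_rule(now):
    return [seq for (seq, t, acked) in sent_packets
            if not acked and now - t > rto]

print(retransmit_rule(time.time()))   # -> [2]
```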
Abstract:
This paper focuses on an efficient user-level method for the deployment of application-specific extensions, using commodity operating systems and hardware. A sandboxing technique is described that supports multiple extensions within a shared virtual address space. Applications can register sandboxed code with the system, so that it may be executed in the context of any process. Such code may be used to implement generic routines and handlers for a class of applications, or system service extensions that complement the functionality of the core kernel. Using our approach, application-specific extensions can be written like conventional user-level code, utilizing libraries and system calls, with the advantage that they may be executed without the traditional costs of scheduling and context-switching between process-level protection domains. No special hardware support such as segmentation or tagged translation look-aside buffers (TLBs) is required. Instead, our "user-level sandboxing" mechanism requires only page-based virtual memory support, given that sandboxed extensions are either written by a trusted source or are guaranteed to be memory-safe (e.g., using type-safe languages). Using a fast method of upcalls, we show how our mechanism provides significant performance improvements over traditional methods of invoking user-level services. As an application of our approach, we have implemented a user-level network subsystem that avoids data copying via the kernel and, in many cases, yields far greater network throughput than kernel-level approaches.
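The following Python fragment is only a conceptual sketch of the registration and upcall flow; the real mechanism operates on shared virtual-memory pages in C, and the function names here are invented for the example.

```python
# Conceptual sketch: a shared registry of sandboxed extensions, dispatched via
# an "upcall" in the context of whichever process triggered the event, rather
# than by scheduling a separate service process.

extensions = {}   # event name -> handler shared by all processes (stand-in for
                  # code placed in a shared virtual-memory region)

def register_extension(event, handler):
    extensions[event] = handler

def upcall(event, *args):
    # Runs in the current process's context: no context switch to another
    # process-level protection domain is needed.
    handler = extensions.get(event)
    if handler is not None:
        return handler(*args)

# Example: a packet-filter extension for a user-level network subsystem.
register_extension("packet_arrived", lambda pkt: pkt.startswith(b"\x45"))
print(upcall("packet_arrived", b"\x45\x00..."))   # -> True
```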
Abstract:
This paper is centered around the design of a thread- and memory-safe language, primarily for the compilation of application-specific services for extensible operating systems. We describe various issues that have influenced the design of our language, called Cuckoo, that guarantees safety of programs with potentially asynchronous flows of control. Comparisons are drawn between Cuckoo and related software safety techniques, including Cyclone and software-based fault isolation (SFI), and performance results suggest our prototype compiler is capable of generating safe code that executes with low runtime overheads, even without potential code optimizations. Compared to Cyclone, Cuckoo is able to safely guard accesses to memory when programs are multithreaded. Similarly, Cuckoo is capable of enforcing memory safety in situations that are potentially troublesome for techniques such as SFI.
Abstract:
Neural models have proposed how short-term memory (STM) storage in working memory and long-term memory (LTM) storage and recall are linked and interact, but are realized by different mechanisms that obey different laws. The authors' data can be understood in the light of these models, which suggest that the authors may have gone too far in obscuring the differences between these processes.
Abstract:
Most associative memory models perform a one-level mapping between predefined sets of input and output patterns [1] and are unable to represent hierarchical knowledge. Complex AI systems allow hierarchical representation of concepts, but generally do not have learning capabilities. In this paper, a memory model is proposed which forms a concept hierarchy by learning sample relations between concepts. All concepts are represented in a concept layer. Relations between a concept and its defining lower-level concepts are chunked as cognitive codes represented in a coding layer. By updating memory contents in the concept layer through code firing in the coding layer, the system is able to perform an important class of commonsense reasoning, namely recognition and inheritance.
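A small Python sketch of the chunking idea follows. The example concepts, the chunk table, and the propagation loops are illustrative assumptions, not the paper's network equations.

```python
# Toy concept/coding-layer sketch: each chunk links a set of defining concepts
# to a higher-level concept, supporting recognition and inheritance.

# Concept layer: all concepts known to the system.
concepts = {"bird", "can_fly", "has_wings", "canary", "is_yellow"}

# Coding layer: chunks mapping defining lower-level concepts to a concept.
chunks = [
    ({"can_fly", "has_wings"}, "bird"),
    ({"bird", "is_yellow"}, "canary"),
]

def recognize(active):
    """Fire any chunk whose defining concepts are all active (recognition)."""
    active = set(active)
    changed = True
    while changed:
        changed = False
        for parts, whole in chunks:
            if parts <= active and whole not in active:
                active.add(whole)
                changed = True
    return active

def inherit(concept):
    """Collect a concept's defining concepts transitively (inheritance)."""
    out, frontier = set(), [concept]
    while frontier:
        c = frontier.pop()
        for parts, whole in chunks:
            if whole == c:
                out |= parts
                frontier.extend(parts)
    return out

print(recognize({"can_fly", "has_wings", "is_yellow"}))  # infers bird, then canary
print(inherit("canary"))                                  # bird, is_yellow, can_fly, has_wings
```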
Abstract:
A model which extends the adaptive resonance theory (ART) model to sequential memory is presented. This new model learns sequences of events and recalls a sequence when presented with parts of the sequence. A sequence can have repeated events, and different sequences can share events. The ART model is modified by creating interconnected sublayers within ART's F2 layer. Nodes within F2 learn temporal patterns by forming recency gradients within LTM. Versions of the ART model such as ART 1, ART 2, and fuzzy ART can be used.
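The following sketch illustrates only the recency-gradient idea in Python; the decay factor and the read-out rule are assumptions made for the example, not the model's LTM equations.

```python
# Toy recency gradient: stored activities encode temporal order, with the most
# recent event strongest, so the gradient can later be read out as a sequence.

def store_sequence(events, decay=0.7):
    activity = {}
    for e in events:
        for k in activity:         # older items decay multiplicatively
            activity[k] *= decay
        activity[e] = 1.0          # newest event gets the largest activity
    return activity

def recall(activity):
    # Read out events from weakest (oldest) to strongest (newest).
    return [e for e, a in sorted(activity.items(), key=lambda kv: kv[1])]

gradient = store_sequence(["A", "B", "C"])
print(gradient)          # {'A': 0.49, 'B': 0.7, 'C': 1.0}
print(recall(gradient))  # ['A', 'B', 'C']
```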
Abstract:
We can recognize and memorize objects while continuously receiving large amounts of temporal information that includes redundancy and noise. This paper proposes a neural network model which extracts pre-recognized patterns from temporally sequential patterns that include redundancy, and memorizes those patterns temporarily. The model consists of an adaptive resonance system and a recurrent time-delay network. The extraction is carried out by the matching mechanism of the adaptive resonance system, and the temporal information is processed and stored by the recurrent network. Simple simulations are presented to illustrate the extraction property.
Abstract:
Advanced Research Projects Agency (ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309)
Abstract:
The concepts of declarative memory and procedural memory have been used to distinguish two basic types of learning. A neural network model suggests how such memory processes work together as recognition learning, reinforcement learning, and sensory-motor learning take place during adaptive behaviors. To coordinate these processes, the hippocampal formation and cerebellum each contain circuits that learn to adaptively time their outputs. Within the model, hippocampal timing helps to maintain attention on motivationally salient goal objects during variable task-related delays, and cerebellar timing controls the release of conditioned responses. This property is part of the model's description of how cognitive-emotional interactions focus attention on motivationally valued cues, and how this process breaks down due to hippocampal ablation. The model suggests that the hippocampal mechanisms that help to rapidly draw attention to salient cues could prematurely release motor commands were the release of these commands not adaptively timed by the cerebellum. The model hippocampal system modulates cortical recognition learning without actually encoding the representational information that the cortex encodes. These properties avoid the difficulties faced by several models that propose a direct hippocampal role in recognition learning. Learning within the model hippocampal system controls adaptive timing and spatial orientation. Model properties hereby clarify how hippocampal ablations cause amnesic symptoms and difficulties with tasks that combine task delays, novelty detection, and attention toward goal objects amid distractions. When these model recognition, reinforcement, sensory-motor, and timing processes work together, they suggest how the brain can accomplish conditioning of multiple sensory events to delayed rewards, as during serial compound conditioning.
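A highly simplified Python sketch of the adaptive-timing idea appears below; the Gaussian delay tuning, the choice of delay spectrum, and the learning rule are assumptions made for illustration, not the model's spectral-timing equations.

```python
# Toy adaptive timing: a spectrum of cells peaks at different delays; learning
# strengthens cells active at the reward time, so the population output comes
# to peak at the learned delay.

import math

peaks = [0.1 * i for i in range(1, 31)]   # cells tuned to delays of 0.1 s ... 3.0 s
weights = [0.0] * len(peaks)

def cell_response(i, t, width=0.15):
    return math.exp(-((t - peaks[i]) ** 2) / (2 * width ** 2))

def learn(reward_time, rate=1.0):
    # Strengthen the cells that are active when the reward arrives.
    for i in range(len(peaks)):
        weights[i] += rate * cell_response(i, reward_time)

def population_output(t):
    return sum(w * cell_response(i, t) for i, w in enumerate(weights))

learn(reward_time=1.2)                    # e.g., a 1.2 s delay between cue and reward
times = [0.4, 0.8, 1.2, 1.6]
print({t: round(population_output(t), 3) for t in times})   # output peaks near t = 1.2
```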
Abstract:
How do the layered circuits of prefrontal and motor cortex carry out working memory storage, sequence learning, and voluntary sequential item selection and performance? A neural model called LIST PARSE is presented to explain and quantitatively simulate cognitive data about both immediate serial recall and free recall, including bowing of the serial position performance curves, error-type distributions, temporal limitations upon recall, and list length effects. The model also qualitatively explains cognitive effects related to attentional modulation, temporal grouping, variable presentation rates, phonemic similarity, presentation of non-words, word frequency/item familiarity and list strength, distracters and modality effects. In addition, the model quantitatively simulates neurophysiological data from the macaque prefrontal cortex obtained during sequential sensory-motor imitation and planned performance. The article further develops a theory concerning how the cerebral cortex works by showing how variations of the laminar circuits that have previously clarified how the visual cortex sees can also support cognitive processing of sequentially organized behaviors.
Abstract:
The hippocampus participates in multiple functions, including spatial navigation, adaptive timing, and declarative (notably, episodic) memory. How does it carry out these particular functions? The present article proposes that hippocampal spatial and temporal processing are carried out by parallel circuits within entorhinal cortex, dentate gyrus, and CA3 that are variations of the same circuit design. In particular, interactions between these brain regions transform fine spatial and temporal scales into population codes that are capable of representing the much larger spatial and temporal scales that are needed to control adaptive behaviors. Previous models of adaptively timed learning propose how a spectrum of cells tuned to brief but different delays are combined and modulated by learning to create a population code for controlling goal-oriented behaviors that span hundreds of milliseconds or even seconds. Here it is proposed how projections from entorhinal grid cells can undergo a similar learning process to create hippocampal place cells that can cover the spaces of many meters that are needed to control navigational behaviors. The suggested homology between spatial and temporal processing may clarify how spatial and temporal information may be integrated into an episodic memory.
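As a 1-D toy illustration of how several grid-cell scales might combine into a place-cell-like field, the Python sketch below uses cosine tuning, a handful of grid spacings, and weighting by a single learned location; all of these are assumptions for the example, not the article's circuit model.

```python
# Toy 1-D grid-to-place combination: summing grid-cell inputs with different
# spatial periods, aligned to one learned location, yields a response that is
# largest at that location even over a track many meters long.

import math

periods = [0.3, 0.5, 0.7, 1.1]                     # grid spacings in meters

def grid_response(period, x, phase=0.0):
    # Periodic firing along a 1-D track.
    return 0.5 * (1 + math.cos(2 * math.pi * (x - phase) / period))

def place_response(x, learned_x=2.0):
    # Each grid input is phase-aligned so it is maximal at the learned location.
    return sum(grid_response(p, x, phase=learned_x % p) for p in periods) / len(periods)

for x in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print(x, round(place_response(x), 2))          # largest response at x = 2.0
```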
Abstract:
Working memory neural networks are characterized which encode the invariant temporal order of sequential events that may be presented at widely differing speeds, durations, and interstimulus intervals. This temporal order code is designed to enable all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described that is based on the model of Seibert and Waxman [1].
Abstract:
Working memory neural networks are characterized which encode the invariant temporal order of sequential events. Inputs to the networks, called Sustained Temporal Order REcurrent (STORE) models, may be presented at widely differing speeds, durations, and interstimulus intervals. The STORE temporal order code is designed to enable all emergent groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network.
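A structural Python sketch of that cascade follows. Every stage is a stand-in stub (the ART 2 vigilance test, the STORE equations, and the final Outstar readout are not reproduced), so the sketch only shows how the modules hand data to one another.

```python
# Stand-in stubs: illustrates the data flow of the ARTSTORE cascade, not the
# actual ART 2 / STORE dynamics (the Outstar readout stage is omitted).

def invariant_preprocessor(view):
    return tuple(sorted(view))        # crude stand-in for an invariant signature

def art2_categorize(signature, categories):
    # Stub categorizer: reuse an existing category for a known signature,
    # otherwise create a new one (no vigilance test).
    if signature not in categories:
        categories[signature] = len(categories)
    return categories[signature]

def store_model(category_sequence, gamma=0.7):
    # Working-memory gradient over view categories (most recent strongest).
    return tuple((c, round(gamma ** i, 3))
                 for i, c in enumerate(reversed(category_sequence)))

view_categories, object_categories = {}, {}
views = [("edgeA", "edgeB"), ("edgeB", "edgeC"), ("edgeC", "edgeA")]

sequence = [art2_categorize(invariant_preprocessor(v), view_categories) for v in views]
object_id = art2_categorize(store_model(sequence), object_categories)
print(sequence, object_id)   # sequence of 2-D view categories and the learned object label
```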
Abstract:
The processes by which humans and other primates learn to recognize objects have been the subject of many models. Processes such as learning, categorization, attention, memory search, expectation, and novelty detection work together at different stages to realize object recognition. In this article, Gail Carpenter and Stephen Grossberg describe one such model class (Adaptive Resonance Theory, ART) and discuss how its structure and function might relate to known neurological learning and memory processes, such as how inferotemporal cortex can recognize both specialized and abstract information, and how medial temporal amnesia may be caused by lesions in the hippocampal formation. The model also suggests how hippocampal and inferotemporal processing may be linked during recognition learning.