Abstract:
CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
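A minimal sketch of the inward-completion idea on a binary pixel grid (hypothetical code, not the released CONFIGR implementation; it omits figure/ground balancing, lobe computations, and the self-scaled completion limits described above):

```python
import numpy as np

def complete_rows_and_columns(figure):
    """Fill gaps between collinear figure pixels along each row and column.

    Completion is inward only: a gap is filled when figure pixels flank it
    on both sides, never extrapolated from a single endpoint.
    """
    out = figure.copy()
    for img in (out, out.T):                   # rows, then columns (T is a view)
        for line in img:
            idx = np.flatnonzero(line)
            if idx.size >= 2:
                line[idx[0]:idx[-1] + 1] = 1   # fill between outermost inducers
    return out

sparse = np.zeros((5, 9), dtype=int)
sparse[2, [1, 4, 7]] = 1                       # a dashed horizontal line
print(complete_rows_and_columns(sparse)[2])    # -> the dashes are connected
```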
Abstract:
How do humans rapidly recognize a scene? How can neural models capture this biological competence to achieve state-of-the-art scene classification? The ARTSCENE neural system classifies natural scene photographs by using multiple spatial scales to efficiently accumulate evidence for gist and texture. ARTSCENE embodies a coarse-to-fine Texture Size Ranking Principle whereby spatial attention processes multiple scales of scenic information, ranging from global gist to local properties of textures. The model can incrementally learn and predict scene identity by gist information alone and can improve performance through selective attention to scenic textures of progressively smaller size. ARTSCENE discriminates four landscape scene categories (coast, forest, mountain, and countryside) with up to 91.58% correct on a test set, outperforms alternative models in the literature that use biologically implausible computations, and outperforms component systems that use either gist or texture information alone. Model simulations also show that adjacent textures form higher-order features that are also informative for scene recognition.
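A hypothetical illustration of coarse-to-fine evidence accumulation (a plain nearest-prototype score, not ARTSCENE's learned ART categories):

```python
import numpy as np

def classify_coarse_to_fine(scale_features, prototypes, weights):
    """Accumulate class evidence across scales, coarsest (gist) first.

    scale_features: one feature vector per scale.
    prototypes: per scale, a dict mapping class label -> prototype vector.
    weights: per-scale evidence weights.
    """
    evidence = {}
    for feat, protos, w in zip(scale_features, prototypes, weights):
        for label, p in protos.items():
            sim = 1.0 / (1.0 + np.linalg.norm(feat - p))  # similarity score
            evidence[label] = evidence.get(label, 0.0) + w * sim
    return max(evidence, key=evidence.get)

gist = [np.array([0.2, 0.8])]
protos = [{"coast": np.array([0.2, 0.7]), "forest": np.array([0.9, 0.1])}]
print(classify_coarse_to_fine(gist, protos, weights=[1.0]))  # -> "coast"
```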
Abstract:
National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
Abstract:
Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. Such a transformation enables speech to be understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.
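The flavor of speaker normalization can be conveyed by a crude, hypothetical calculation that expresses lower formants relative to a higher one to factor out vocal-tract scale (the model itself uses multiple strip representations with asymmetric competition, not explicit ratios):

```python
import numpy as np

def normalize_vowel(formants):
    """Express F1 and F2 relative to F3, removing overall spectral scale."""
    f1, f2, f3 = formants
    return np.array([f1 / f3, f2 / f3])

# The "same" vowel from two speakers maps to nearby normalized patterns
# (formant values below are illustrative, not from the Peterson-Barney data):
print(normalize_vowel([730, 1090, 2440]))   # lower-pitched speaker
print(normalize_vowel([850, 1220, 2810]))   # higher-pitched speaker
```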
Abstract:
Making use of very detailed neurophysiological, anatomical, and behavioral data to build biologically realistic computational models of animal behavior is often a difficult task. Many software packages have tried to resolve this mismatch in granularity, each with a different approach. This paper presents KInNeSS, the KDE Integrated NeuroSimulation Software environment, as an alternative solution to bridge the gap between data and model behavior. This open-source neural simulation software package provides an expandable framework incorporating features such as ease of use, scalability, an XML-based schema, and multiple levels of granularity within a modern object-oriented programming design. KInNeSS is best suited to simulate networks of hundreds to thousands of branched multi-compartmental neurons with biophysical properties such as membrane potential, voltage-gated and ligand-gated channels, the presence of gap junctions or ionic diffusion, neuromodulation channel gating, mechanisms for habituative or depressive synapses, axonal delays, and synaptic plasticity. KInNeSS outputs include compartment membrane voltage, spikes, local field potentials, and current source densities, as well as visualization of the behavior of a simulated agent. An explanation of the modeling philosophy and plug-in development is also presented. Further development of KInNeSS is ongoing, with the ultimate goal of creating a modular framework that will help researchers across different disciplines to collaborate effectively using a modern neural simulation platform.
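A toy version of the kind of biophysical update such a simulator performs, reduced to a single passive compartment (hypothetical code; real KInNeSS runs add branching, voltage- and ligand-gated channels, and axonal delays):

```python
import numpy as np

def simulate_compartment(i_syn, dt=0.1, c_m=1.0, g_leak=0.1, e_leak=-65.0):
    """Euler-integrate one compartment's membrane potential.

    i_syn: injected/synaptic current at each time step.
    """
    v = np.empty(len(i_syn) + 1)
    v[0] = e_leak
    for t, i in enumerate(i_syn):
        dv = (-g_leak * (v[t] - e_leak) + i) / c_m  # leak plus input current
        v[t + 1] = v[t] + dt * dv
    return v

trace = simulate_compartment(np.full(1000, 0.8))    # constant drive
print(round(trace[-1], 2))                          # settles near -65 + 0.8/0.1
```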
Abstract:
Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. The model also clarifies exemplar-based accounts of data concerning amnesia.
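The matching step at the heart of ART can be stated compactly. In fuzzy ART, a category's learned top-down expectation w is compared with the bottom-up input I, and resonance (and hence learning) occurs only if the match ratio meets the vigilance parameter rho:

```python
import numpy as np

def art_match(input_pattern, weight, rho):
    """Fuzzy ART match test: resonate if |I ^ w| / |I| >= rho, else reset.

    ^ denotes the fuzzy AND (component-wise minimum).
    """
    i = np.asarray(input_pattern, dtype=float)
    w = np.asarray(weight, dtype=float)
    match = np.minimum(i, w).sum() / i.sum()
    return match >= rho, match

print(art_match([1.0, 0.8, 0.2], [1.0, 0.9, 0.0], rho=0.85))  # (True, 0.9)
```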
Abstract:
Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/.
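A hypothetical sketch of the featural-biasing step (illustrative form only, not the published bARTMAP equations): after a predictive error, features that the erroneous category attended are suppressed on the next search pass.

```python
import numpy as np

def bias_after_error(input_pattern, attended, bias, step=0.5):
    """Accumulate bias against previously attended features, then reapply it."""
    bias = bias + step * attended                  # grow the featural bias
    biased_input = np.maximum(input_pattern - bias, 0.0)
    return biased_input, bias

x = np.array([1.0, 0.9, 0.3])
biased, b = bias_after_error(x, attended=np.array([1.0, 0.0, 0.0]),
                             bias=np.zeros(3))
print(biased)   # the previously attended first feature is suppressed
```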
Abstract:
Grouping of collinear boundary contours is a fundamental process during visual perception. Illusory contour completion vividly illustrates how stable perceptual boundaries interpolate between pairs of contour inducers, but do not extrapolate from a single inducer. Neural models have simulated how perceptual grouping occurs in laminar visual cortical circuits. These models predicted the existence of grouping cells that obey a bipole property whereby grouping can occur inwardly between pairs or greater numbers of similarly oriented and co-axial inducers, but not outwardly from individual inducers. These models have not, however, incorporated spiking dynamics. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity to inducer configurations occur despite irregularities in spike timing across all the interacting cells. Other models have demonstrated spiking dynamics in laminar neocortical circuits, but not how perceptual grouping occurs. The current model begins to unify these two modeling streams by implementing a laminar cortical network of spiking cells whose intracellular temporal dynamics interact with recurrent intercellular spiking interactions to quantitatively simulate data from neurophysiological experiments about perceptual grouping, the structure of non-classical visual receptive fields, and gamma oscillations.
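The bipole property itself reduces to a simple gate, sketched here in rate-based form (the paper's contribution is realizing this with spiking laminar dynamics, which the toy below does not attempt):

```python
import numpy as np

def bipole_output(left_inducers, right_inducers, threshold=0.5):
    """Complete inwardly only when BOTH collinear flanks carry support."""
    left, right = np.sum(left_inducers), np.sum(right_inducers)
    if left > threshold and right > threshold:
        return min(left, right)      # analog sensitivity to the weaker flank
    return 0.0

print(bipole_output([0.8], [0.7]))   # two inducers -> interpolation (0.7)
print(bipole_output([0.8], [0.0]))   # one inducer  -> no extrapolation (0.0)
```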
Abstract:
Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.bu.edu/SSART/.
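The core idea, predicting from known features and letting confidence gate slow learning on novel ones, can be sketched as follows (hypothetical form, not the published self-supervised ARTMAP equations):

```python
import numpy as np

def self_supervised_update(x, known, weights, lr=0.05):
    """Select a category using only 'known' features; learn novel ones
    in proportion to prediction confidence."""
    scores = np.array([np.minimum(x[known], w[known]).sum() for w in weights])
    j = int(np.argmax(scores))                     # category selection
    confidence = scores[j] / (scores.sum() + 1e-12)
    novel = ~known
    weights[j][novel] += lr * confidence * (x[novel] - weights[j][novel])
    return j, confidence

x = np.array([0.9, 0.1, 0.7])                      # last feature is novel
known = np.array([True, True, False])
W = [np.array([0.8, 0.2, 0.5]), np.array([0.1, 0.9, 0.5])]
print(self_supervised_update(x, known, W))         # -> (0, ~0.82)
```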
Abstract:
Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, the conclusion that perceptual learning results in low-level sensory plasticity remains highly controversial, largely because such learning can often be attributed to plasticity in later stages of sensory processing or in the decision processes. To address this controversy, we developed a novel random dot motion (RDM) stimulus to target motion cells selective to contrast polarity, by ensuring that the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in conjunction with the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning is achieved in response to a stimulus by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show learning for the exposed contrast polarity and that this learning does not transfer to the unexposed contrast polarity. These results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells.
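A schematic generator for such a stimulus (illustrative code, not the authors' stimulus software): every dot is extinguished and re-plotted each frame, so all direction information is carried by dot onsets, and all dots share one contrast polarity to target polarity-selective cells.

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm_onset_frames(n_dots=50, n_frames=20, coherence=0.5, step=1.0,
                     field=100.0, direction=0.0):
    """Return per-frame dot positions; signal dots onset displaced along
    the motion direction, noise dots onset at random locations."""
    pos = rng.uniform(0, field, size=(n_dots, 2))
    frames = [pos.copy()]
    d = np.array([np.cos(direction), np.sin(direction)])
    for _ in range(n_frames - 1):
        signal = rng.random(n_dots) < coherence
        pos[signal] = (pos[signal] + step * d) % field
        pos[~signal] = rng.uniform(0, field, size=((~signal).sum(), 2))
        frames.append(pos.copy())
    return frames

frames = rdm_onset_frames()
print(len(frames), frames[0].shape)   # 20 frames of 50 dot positions each
```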
Abstract:
SyNAPSE program of the Defense Advanced Projects Research Agency (Hewlett-Packard Company, subcontract under DARPA prime contract HR0011-09-3-0001, and HRL Laboratories LLC, subcontract #801881-BS under DARPA prime contract HR0011-09-C-0001); CELEST, an NSF Science of Learning Center (SBE-0354378)
Abstract:
Grid cells in the dorsal segment of the medial entorhinal cortex (dMEC) show remarkable hexagonal activity patterns, at multiple spatial scales, during spatial navigation. How these hexagonal patterns arise has excited intense interest. It has previously been shown how a self-organizing map can convert firing patterns across entorhinal grid cells into hippocampal place cells that are capable of representing much larger spatial scales. Can grid cell firing fields also arise during navigation through learning within a self-organizing map? A neural model is proposed that converts path integration signals into hexagonal grid cell patterns of multiple scales. This GRID model creates only grid cell patterns with the observed hexagonal structure, predicts how these hexagonal patterns can be learned from experience, and can process biologically plausible neural input and output signals during navigation. These results support a unified computational framework for explaining how entorhinal-hippocampal interactions support spatial navigation.
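The self-organizing-map learning step the GRID model builds on can be sketched generically (a standard SOM update, not the published GRID equations):

```python
import numpy as np

def som_step(x, w, lr=0.1):
    """One winner-take-all step: path-integration input x activates map
    cells, and the winner's weights move toward x and are renormalized."""
    j = int(np.argmax(w @ x))                # competition selects a winner
    w[j] += lr * (x - w[j])                  # Hebbian move toward the input
    w[j] /= np.linalg.norm(w[j]) + 1e-12     # keep the weight vector bounded
    return j

w = np.random.default_rng(1).random((10, 4))
print(som_step(np.array([1.0, 0.0, 0.0, 0.0]), w))   # index of winning cell
```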
Abstract:
Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regressive models have been developed in the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis.
Abstract:
SyNAPSE program of the Defense Advanced Projects Research Agency (HRL Laboratories LLC, subcontract #801881-BS under DARPA prime contract HR0011-09-C-0001); CELEST, a National Science Foundation Science of Learning Center (SBE-0354378)
Abstract:
An active, attentionally-modulated recognition architecture is proposed for object recognition and scene analysis. The proposed architecture forms part of navigation and trajectory planning modules for mobile robots. Key characteristics of the system include movement planning and execution based on environmental factors and internal goal definitions. Real-time implementation of the system is based on space-variant representation of the visual field, as well as an optimal visual processing scheme utilizing separate and parallel channels for the extraction of boundaries and stimulus qualities. A spatial and temporal grouping module (VWM) allows for scene scanning, multi-object segmentation, and featural/object priming. VWM is used to modulate a trajectory formation module capable of redirecting the focus of spatial attention. Finally, an object recognition module based on adaptive resonance theory is interfaced through VWM to the visual processing module. The system is capable of using information from different modalities to disambiguate sensory input.
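A schematic sketch of the attention-scanning loop such an architecture implies (hypothetical code, not the paper's implementation): pick the most salient unvisited location, optionally boosted by a priming map from VWM.

```python
import numpy as np

def next_fixation(saliency, visited, priming=None):
    """Return the next focus of attention as a (row, col) index."""
    score = saliency.copy()
    if priming is not None:
        score *= priming                 # VWM priming biases the scan
    score[visited] = -np.inf             # inhibition of return
    return np.unravel_index(np.argmax(score), score.shape)

sal = np.array([[0.2, 0.9], [0.5, 0.1]])
seen = np.zeros_like(sal, dtype=bool)
print(next_fixation(sal, seen))          # -> (0, 1), the most salient spot
```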