Abstract:
A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
Abstract:
The concept of attention has been used in many senses, often without clarifying how or why attention works as it does. Attention, like consciousness, is often described in a disembodied way. The present article summarizes neural models and supportive data that clarify how attention is linked to processes of learning, expectation, competition, and consciousness. A key theme is that attention modulates cortical self-organization and stability. Perceptual and cognitive neocortex is organized into six main cell layers, with characteristic sub-laminae. Attention is part of a unified design of bottom-up, horizontal, and top-down interactions among identified cells in laminar cortical circuits. Neural models clarify how attention may be allocated during processes of visual perception, learning, and search; auditory streaming and speech perception; movement target selection during sensory-motor control; mental imagery and fantasy; and hallucination during mental disorders, among other processes.
Abstract:
The default ARTMAP algorithm and its parameter values specified here define a ready-to-use general-purpose neural network system for supervised learning and recognition.
Abstract:
What brain mechanisms underlie autism, and how do they give rise to autistic behavioral symptoms? This article describes a neural model, called the iSTART model, which proposes how cognitive, emotional, timing, and motor processes may interact to create and perpetuate autistic symptoms. These model processes were originally developed to explain data concerning how the brain controls normal behaviors. The iSTART model shows how autistic behavioral symptoms may arise from prescribed breakdowns in these brain processes.
Abstract:
Neural models have proposed how short-term memory (STM) storage in working memory and long-term memory (LTM) storage and recall are linked and interact, but are realized by different mechanisms that obey different laws. The authors' data can be understood in the light of these models, which suggest that the authors may have gone too far in obscuring the differences between these processes.
Abstract:
ACT is compared with a particular type of connectionist model that cannot handle symbols and that uses non-biological operations which cannot learn in real time. This focus continues an unfortunate trend of straw-man "debates" in cognitive science. Adaptive Resonance Theory, or ART, neural models of cognition can handle both symbols and sub-symbolic representations, and they meet the Newell criteria at least as well as these connectionist models.
Abstract:
The combinatorial Dirichlet problem is formulated, and an algorithm for solving it is presented. This provides an effective method for interpolating missing data on weighted graphs of arbitrary connectivity. Image processing examples are shown, and the relation to anisotropic diffusion is discussed.
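The abstract does not reproduce the algorithm itself; the following is a minimal sketch of how missing data can be interpolated on a weighted graph by solving a combinatorial Dirichlet (discrete Laplace) problem. The function name, the dense NumPy solve, and the toy path graph are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def interpolate_on_graph(W, boundary, boundary_values):
    """Interpolate missing node values on a weighted graph by solving
    the combinatorial Dirichlet problem (a discrete Laplace equation).

    W               : (n, n) symmetric non-negative weight matrix
    boundary        : indices of nodes whose values are known
    boundary_values : known values at those nodes
    Returns an (n,) array of values that are harmonic at interior nodes.
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # combinatorial graph Laplacian
    interior = np.setdiff1d(np.arange(n), boundary)

    # Partition L and solve L_II x_I = -L_IB x_B for the interior values.
    L_II = L[np.ix_(interior, interior)]
    L_IB = L[np.ix_(interior, boundary)]
    x = np.empty(n)
    x[boundary] = boundary_values
    x[interior] = np.linalg.solve(L_II, -L_IB @ np.asarray(boundary_values))
    return x

# Toy example: a 4-node path graph with the two endpoints fixed.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(interpolate_on_graph(W, boundary=[0, 3], boundary_values=[0.0, 1.0]))
# -> [0.0, 0.333..., 0.666..., 1.0]: linear interpolation along the path
```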
Abstract:
Spectral methods of graph partitioning have been shown to provide a powerful approach to the image segmentation problem. In this paper, we adopt a different approach, based on estimating the isoperimetric constant of an image graph. Our algorithm produces the high-quality segmentations and data clustering of spectral methods, but with improved speed and stability.
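As a rough illustration of partitioning by an isoperimetric criterion (not necessarily the authors' exact algorithm), the sketch below grounds one node, solves a single linear system against the degree vector, and thresholds the solution to pick the cut with the smallest boundary-to-volume ratio. The choice of grounded node, the dense solver, and the threshold sweep are assumptions.

```python
import numpy as np

def isoperimetric_bipartition(W):
    """Sketch of isoperimetric graph partitioning: ground one node,
    solve L0 x0 = d0, then threshold x to split the graph in two."""
    n = W.shape[0]
    d = W.sum(axis=1)
    L = np.diag(d) - W
    g = int(np.argmax(d))                  # ground the highest-degree node
    keep = [i for i in range(n) if i != g]

    x = np.zeros(n)
    x[keep] = np.linalg.solve(L[np.ix_(keep, keep)], d[keep])

    # Sweep thresholds of x and keep the cut with the smallest
    # isoperimetric ratio: boundary weight / volume of the smaller side.
    best_ratio, best_mask = np.inf, None
    for t in np.unique(x):
        mask = x <= t
        if mask.all() or not mask.any():
            continue
        boundary = W[np.ix_(mask, ~mask)].sum()
        volume = min(d[mask].sum(), d[~mask].sum())
        ratio = boundary / volume
        if ratio < best_ratio:
            best_ratio, best_mask = ratio, mask
    return best_mask, best_ratio
```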
Abstract:
This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data, such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can hereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than the diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism, called the Blurred-Highest-Luminance-As-White (BHLAW) rule, helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural images under variable lighting conditions.
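The BHLAW rule is only named in the abstract. Below is a minimal sketch of one plausible reading: blur the luminance image at a chosen spatial scale and anchor the maximum of the blurred image to white, so that an isolated bright speck smaller than the blur scale does not become the anchor. The Gaussian blur, its scale sigma, and the white level are assumptions, not the model's published equations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bhlaw_anchor(luminance, sigma=4.0, white=1.0):
    """Sketch of a Blurred-Highest-Luminance-As-White (BHLAW) anchor:
    the anchor is the maximum of a blurred copy of the luminance image,
    so lightness anchoring depends on the spatial scale of bright regions."""
    blurred = gaussian_filter(luminance, sigma=sigma)
    anchor = blurred.max()                 # blurred highest luminance
    return np.clip(white * luminance / anchor, 0.0, white)
```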
Abstract:
Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the AIRSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between the frequency-specific spectral representation of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations. The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
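The toy sketch below is not the AIRSTREAM model; it only illustrates the harmonic-grouping step described above: score candidate pitches by how many spectral components land near their harmonics, let the winning pitch capture the matching components, and leave the rest for another stream, in the spirit of the old-plus-new heuristic. The candidate pitches, tolerance, and scoring rule are assumptions.

```python
import numpy as np

def group_by_pitch(component_freqs, candidate_pitches, tol=0.01):
    """Toy harmonic grouping: pick the candidate pitch whose harmonic
    series best matches the spectral components, then split components
    into a matched stream and a residual stream for other sources.
    tol is the allowed relative mismatch from an exact harmonic."""
    freqs = np.asarray(component_freqs, dtype=float)

    def matches(pitch):
        harmonic_number = np.round(freqs / pitch)
        valid = harmonic_number >= 1
        mismatch = np.abs(freqs - harmonic_number * pitch) / freqs
        return valid & (mismatch <= tol)

    scores = [matches(p).sum() for p in candidate_pitches]
    best_pitch = candidate_pitches[int(np.argmax(scores))]
    matched = matches(best_pitch)
    return best_pitch, freqs[matched], freqs[~matched]

# Components of a 200 Hz source mixed with two tones from a 330 Hz source.
pitch, stream, residual = group_by_pitch(
    [200, 400, 601, 800, 330, 660], candidate_pitches=[200, 330, 500])
print(pitch, stream, residual)
# -> 200, [200. 400. 601. 800.] (601 counts within tolerance), [330. 660.]
```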
Abstract:
Much sensory-motor behavior develops through imitation, as during the learning of handwriting by children. Such complex sequential acts are broken down into distinct motor control synergies, or muscle groups, whose activities overlap in time to generate continuous, curved movements that obey an inverse relation between curvature and speed. The Adaptive Vector Integration to Endpoint (AVITEWRITE) model of Grossberg and Paine (2000) proposed how such complex movements may be learned through attentive imitation. The model suggests how frontal, parietal, and motor cortical mechanisms, such as difference vector encoding, under volitional control from the basal ganglia, interact with adaptively timed, predictive cerebellar learning during movement imitation and predictive performance. Key psychophysical and neural data about learning to make curved movements were simulated, including a decrease in writing time as learning progresses; generation of unimodal, bell-shaped velocity profiles for each movement synergy; size scaling with isochrony, and speed scaling with preservation of the letter shape and the shapes of the velocity profiles; an inverse relation between curvature and tangential velocity; and a Two-Thirds Power Law relation between angular velocity and curvature. However, the model learned from letter trajectories of only one subject, and only qualitative kinematic comparisons were made with previously published human data. The present work describes a quantitative test of AVITEWRITE through direct comparison of a corpus of human handwriting data with the model's performance when it learns by tracing human trajectories. The results show that model performance was variable across subjects, with an average correlation between the model and human data of 89+/-10%. The present data from simulations using the AVITEWRITE model highlight some of its strengths while focusing attention on areas, such as novel shape learning in children, where all models of handwriting and learning of other complex sensory-motor skills would benefit from further research.
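The Two-Thirds Power Law mentioned above relates angular velocity A to curvature C as A = k C^(2/3), which is equivalent to tangential velocity being proportional to C^(-1/3). The sketch below shows how the exponent can be checked on a sampled trajectory; the ellipse test path and the finite-difference estimates are illustrative assumptions, not part of AVITEWRITE.

```python
import numpy as np

# Trace an ellipse at constant parameter rate; such a trajectory obeys the
# Two-Thirds Power Law exactly, so the fitted exponent should be ~2/3.
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
x, y = 3.0 * np.cos(t), 1.0 * np.sin(t)

# Finite-difference estimates of speed, curvature, and angular velocity.
dt = t[1] - t[0]
dx, dy = np.gradient(x, dt), np.gradient(y, dt)
ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
speed = np.hypot(dx, dy)                        # tangential velocity
curvature = np.abs(dx * ddy - dy * ddx) / speed**3
angular_velocity = speed * curvature

# Fit log A = log k + beta * log C; the power law predicts beta ~ 2/3.
beta, log_k = np.polyfit(np.log(curvature), np.log(angular_velocity), 1)
print(f"fitted exponent beta = {beta:.3f} (Two-Thirds Power Law: 0.667)")
```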
Abstract:
Office of Naval Research (N00014-01-1-0624)
Abstract:
Mapping novel terrain from sparse, complex data often requires the resolution of conflicting information from sensors working at different times, locations, and scales, and from experts with different goals and situations. Information fusion methods help resolve inconsistencies in order to distinguish correct from incorrect answers, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods developed here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, or man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples.
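The sketch below is not the ARTMAP fusion procedure; it only illustrates the kind of multi-level relationship such a system can recover from one-to-many labels: if every sample labeled "car" also carries the labels "vehicle" and "man-made", but not conversely, a car-vehicle-man-made hierarchy can be inferred without any supervised labeling of the relationships. The sample data and the subset test are assumptions.

```python
def infer_hierarchy(labelled_samples):
    """Toy inference of class relationships from one-to-many labels:
    treat class A as a sub-class of B if every sample carrying label A
    also carries label B."""
    classes = sorted({c for labels in labelled_samples for c in labels})
    subset_of = {}
    for a in classes:
        with_a = [labels for labels in labelled_samples if a in labels]
        for b in classes:
            if a != b and all(b in labels for labels in with_a):
                subset_of.setdefault(a, set()).add(b)
    return subset_of

# Each source is reliable but differs in specificity: one says "car",
# another says "vehicle", another says "man-made".
samples = [
    {"car", "vehicle", "man-made"},
    {"car", "vehicle", "man-made"},
    {"truck", "vehicle", "man-made"},
    {"boat", "man-made"},
]
print(infer_hierarchy(samples))
# -> car and truck under vehicle and man-made; vehicle under man-made
```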
Abstract:
A growing wave of behavioral studies, using a wide variety of paradigms that were introduced or greatly refined in recent years, has generated a new wealth of parametric observations about serial order behavior. What was a mere trickle of neurophysiological studies has grown to a more steady stream of probes of neural sites and mechanisms underlying sequential behavior. Moreover, simulation models of serial behavior generation have begun to open a channel to link cellular dynamics with cognitive and behavioral dynamics. Here we summarize the major results from prominent sequence learning and performance tasks, namely immediate serial recall, typing, 2XN, discrete sequence production, and serial reaction time. These populate a continuum from higher to lower degrees of internal control of sequential organization. The main movement classes covered are speech and keypressing, both involving small-amplitude movements that are very amenable to parametric study. A brief synopsis of classes of serial order models, vis-à-vis the detailing of major effects found in the behavioral data, leads to a focus on competitive queuing (CQ) models. Recently, the many behavioral predictive successes of CQ models have been joined by successful prediction of distinctively patterned electrophysiological recordings in prefrontal cortex, wherein the parallel activation dynamics of multiple neural ensembles strikingly match the parallel dynamics predicted by CQ theory. An extended CQ simulation model, the N-STREAMS neural network model, is then examined to highlight issues in ongoing attempts to accommodate a broader range of behavioral and neurophysiological data within a CQ-consistent theory. Important contemporary issues, such as the nature of working memory representations for sequential behavior and the development and role of chunks in hierarchical control, are prominent throughout.
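A minimal sketch of the competitive queuing (CQ) idea summarized above: all planned items are active in parallel with a primacy gradient of activation, a choice stage repeatedly selects the most active item, and each selected item is suppressed so the next-strongest item can win. The gradient values and labels are illustrative assumptions.

```python
import numpy as np

def competitive_queuing(plan_activations, labels):
    """Minimal competitive-queuing (CQ) sketch: items are planned in
    parallel with graded activation; on each step the most active item
    wins the competition, is produced, and is suppressed from the plan."""
    plan = np.array(plan_activations, dtype=float)
    produced = []
    while np.any(plan > 0):
        winner = int(np.argmax(plan))      # choice layer picks the max
        produced.append(labels[winner])
        plan[winner] = 0.0                 # response suppression / deletion
    return produced

# A primacy gradient over the items of a sequence yields serial recall order.
print(competitive_queuing([0.9, 0.7, 0.5, 0.3], ["A", "B", "C", "D"]))
# -> ['A', 'B', 'C', 'D']
```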
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships.