991 results for motor perception
Abstract:
This paper describes a model of speech production called DIVA that highlights issues of self-organization and motor equivalent production of phonological units. The model uses a circular reaction strategy to learn two mappings between three levels of representation. Data on the plasticity of phonemic perceptual boundaries motivates a learned mapping between phoneme representations and vocal tract variables. A second mapping between vocal tract variables and articulator movements is also learned. To achieve the flexible control made possible by the redundancy of this mapping, desired directions in vocal tract configuration space are mapped into articulator velocity commands. Because each vocal tract direction cell learns to activate several articulator velocities during babbling, the model provides a natural account of the formation of coordinative structures. Model simulations show automatic compensation for unexpected constraints despite no previous experience or learning under these constraints.
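As a rough numerical illustration of the direction-to-velocity idea described above, the sketch below stands in for the babbling-trained mapping with a pseudoinverse of an assumed linearized forward map; the 3-by-7 dimensionality, the function names, and the clamping interface are hypothetical choices, not part of the DIVA model itself.

```python
import numpy as np

# Hypothetical toy setup: 3 vocal tract variables controlled by 7 articulators.
# DIVA learns its direction-to-velocity mapping during babbling; here a
# pseudoinverse of an assumed local Jacobian stands in for that learned
# mapping, purely to illustrate redundancy-based compensation.
rng = np.random.default_rng(0)
J = rng.normal(size=(3, 7))          # assumed local Jacobian: d(tract vars)/d(articulators)

def articulator_velocities(J, desired_tract_direction, blocked=()):
    """Map a desired direction in vocal tract space to articulator velocities,
    optionally with some articulators clamped (their columns zeroed)."""
    J_eff = J.copy()
    for i in blocked:
        J_eff[:, i] = 0.0            # a clamped articulator contributes no motion
    return np.linalg.pinv(J_eff) @ desired_tract_direction

dx = np.array([1.0, -0.5, 0.2])      # desired direction in vocal tract space
dq_free = articulator_velocities(J, dx)
dq_clamped = articulator_velocities(J, dx, blocked=(0, 1))

# Both commands realize (approximately) the same vocal tract direction,
# even though the clamped case uses a different articulator combination.
print(J @ dq_free, J @ dq_clamped)
```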
Abstract:
A neural network model is presented to account for the three dimensional perception of visual space by way of an analog Gestalt-like perceptual mechanism.
Abstract:
A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation--otherwise known as a parcellated distributed representation--of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
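The sum/difference coding and the difference-vector error signal can be illustrated arithmetically as follows. This is a hedged sketch, not the opponent-processing circuit described in the abstract; the sign convention, angle units, and function names are assumptions made only for the illustration.

```python
# Minimal arithmetic illustration of how sums and differences of the two eyes'
# horizontal rotation signals yield a cyclopean-angle / vergence code, and of
# the difference vector used as a learning error signal. Conventions assumed.
import numpy as np

def vergence_spherical(theta_left, theta_right):
    """Return (cyclopean azimuth, vergence) from the two eyes' horizontal angles.
    The sum (average) of the opponent signals codes target direction; the
    difference codes vergence, i.e. depth of a foveated target."""
    cyclopean = 0.5 * (theta_left + theta_right)   # "sum" channel -> spherical angle
    vergence = theta_left - theta_right            # "difference" channel -> depth
    return cyclopean, vergence

# During a gaze-maintaining head movement, the current estimate is compared
# with the stored one; their difference acts as the error signal for learning.
stored = np.array(vergence_spherical(np.deg2rad(12.0), np.deg2rad(4.0)))
current = np.array(vergence_spherical(np.deg2rad(9.0), np.deg2rad(1.5)))
difference_vector = current - stored
print(difference_vector)
```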
Abstract:
This article describes a neural network model that addresses the acquisition of speaking skills by infants and subsequent motor equivalent production of speech sounds. The model learns two mappings during a babbling phase. A phonetic-to-orosensory mapping specifies a vocal tract target for each speech sound; these targets take the form of convex regions in orosensory coordinates defining the shape of the vocal tract. The babbling process wherein these convex region targets are formed explains how an infant can learn phoneme-specific and language-specific limits on acceptable variability of articulator movements. The model also learns an orosensory-to-articulatory mapping wherein cells coding desired movement directions in orosensory space learn articulator movements that achieve these orosensory movement directions. The resulting mapping provides a natural explanation for the formation of coordinative structures. This mapping also makes efficient use of redundancy in the articulator system, thereby providing the model with motor equivalent capabilities. Simulations verify the model's ability to compensate for constraints or perturbations applied to the articulators automatically and without new learning and to explain contextual variability seen in human speech production.
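One simple way to picture the convex-region targets is sketched below, assuming axis-aligned ranges in orosensory coordinates as the convex regions; the model's regions are more general, and the class and method names here are hypothetical.

```python
import numpy as np

# Illustrative sketch only: speech-sound targets represented as axis-aligned
# ranges (one simple kind of convex region) in orosensory coordinates. The
# ranges widen during babbling to cover the variability observed for a sound,
# and a controller need only correct dimensions that fall outside the region.
class ConvexRegionTarget:
    def __init__(self, dim):
        self.lo = np.full(dim, np.inf)
        self.hi = np.full(dim, -np.inf)

    def learn(self, orosensory_sample):
        """Expand the region to include a configuration produced for this sound."""
        self.lo = np.minimum(self.lo, orosensory_sample)
        self.hi = np.maximum(self.hi, orosensory_sample)

    def correction(self, current):
        """Zero where the current configuration is already inside the region,
        otherwise the direction toward the nearest face of the region."""
        return np.clip(current, self.lo, self.hi) - current

target = ConvexRegionTarget(dim=3)
for sample in np.random.default_rng(1).normal(0.0, 0.3, size=(50, 3)):
    target.learn(sample)

print(target.correction(np.array([0.0, 2.0, -1.5])))  # only out-of-range dims move
```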
Abstract:
How do the layered circuits of prefrontal and motor cortex carry out working memory storage, sequence learning, and voluntary sequential item selection and performance? A neural model called LIST PARSE is presented to explain and quantitatively simulate cognitive data about both immediate serial recall and free recall, including bowing of the serial position performance curves, error-type distributions, temporal limitations upon recall, and list length effects. The model also qualitatively explains cognitive effects related to attentional modulation, temporal grouping, variable presentation rates, phonemic similarity, presentation of non-words, word frequency/item familiarity and list strength, distracters and modality effects. In addition, the model quantitatively simulates neurophysiological data from the macaque prefrontal cortex obtained during sequential sensory-motor imitation and planned performance. The article further develops a theory concerning how the cerebral cortex works by showing how variations of the laminar circuits that have previously clarified how the visual cortex sees can also support cognitive processing of sequentially organized behaviors.
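A generic primacy-gradient ("competitive queuing") storage-and-readout scheme of the kind this model family builds on can be sketched in a few lines; this is not the laminar LIST PARSE circuit, and the decay and noise parameters are illustrative assumptions.

```python
import numpy as np

# Generic primacy-gradient / competitive-queuing sketch: items are stored with
# decreasing activation strengths, and recall repeatedly selects the most
# active item and then suppresses it. Noise at recall produces the
# transposition errors whose statistics such models are fit to.
rng = np.random.default_rng(2)

def store(n_items, decay=0.85):
    """Primacy gradient: earlier items get larger working-memory activities."""
    return decay ** np.arange(n_items)

def recall(activities, noise=0.05):
    acts = activities + rng.normal(0.0, noise, size=activities.shape)
    order = []
    for _ in range(len(acts)):
        winner = int(np.argmax(acts))
        order.append(winner)
        acts[winner] = -np.inf        # self-inhibition after an item is recalled
    return order

print(recall(store(6)))               # usually 0..5 in order, occasionally transposed
```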
Abstract:
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain's form and motion systems that address such situations. Because the model's stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse "feature tracking signals" from, e.g., line ends, are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, endstopping, cross-orientation inhibition, and long-range cooperation is described. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
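The amplification of sparse feature-tracking signals over ambiguous interior signals can be illustrated with a toy weighted pooling computation; this is not the 3D FORMOTION circuit, and the weights and geometry below are illustrative assumptions.

```python
import numpy as np

# Toy illustration: a straight contour's interior only signals the velocity
# component normal to the contour (the aperture problem), while a line end
# signals the full, unambiguous velocity. Giving the sparse feature-tracking
# signal a larger weight lets it dominate the pooled motion estimate, as the
# model's motion-grouping stage does. Weights are illustrative assumptions.
true_velocity = np.array([1.0, 0.0])            # the object moves rightward
contour_normal = np.array([np.cos(np.deg2rad(60)), np.sin(np.deg2rad(60))])

# Ambiguous interior measurements: only the normal component is available.
ambiguous = (true_velocity @ contour_normal) * contour_normal
interior_signals = np.tile(ambiguous, (50, 1))  # numerically superior but misleading

# One unambiguous feature-tracking signal from the contour's terminator.
terminator_signal = true_velocity[None, :]

w_ambiguous, w_feature = 1.0, 500.0             # feature signals are amplified
pooled = (w_ambiguous * interior_signals.sum(0) + w_feature * terminator_signal.sum(0)) \
         / (w_ambiguous * len(interior_signals) + w_feature * len(terminator_signal))
print(pooled)                                    # pulled close to the true rightward motion
```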
Abstract:
This article describes further evidence for a new neural network theory of biological motion perception that is called a Motion Boundary Contour System. This theory clarifies why parallel streams V1 -> V2 and V1 -> MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The Motion Boundary Contour System consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a Motion Oriented Contrast Filter, or MOC Filter, for preprocessing moving images; and a Cooperative-Competitive Feedback Loop, or CC Loop, for generating emergent boundary segmentations of the filtered signals. The present article uses the MOC Filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 -> V2 -> MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the Motion Boundary Contour System design.
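One way filter models of this kind account for long-range apparent motion and its speed-up with flash separation is via a travelling activity peak formed by the decaying, spatially blurred response to the first flash and the growing, blurred response to the second. The one-dimensional sketch below illustrates only that travelling-peak idea; the exponential time courses and the Gaussian blur width are illustrative assumptions, not the MOC Filter equations.

```python
import numpy as np

# A 1-D sketch of a travelling-wave account of long-range apparent motion:
# the decaying, Gaussian-blurred trace of flash 1 plus the growing, blurred
# response to flash 2 sum to a single activity peak that moves continuously
# between the two flash locations over time.
x = np.linspace(-5.0, 5.0, 1001)
x1, x2, sigma = -2.0, 2.0, 2.5        # flash locations and (assumed) blur width

def gaussian(center):
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

for t in np.linspace(0.0, 1.0, 6):
    decay_1 = np.exp(-3.0 * t)         # flash 1 response decays after its offset
    growth_2 = 1.0 - np.exp(-3.0 * t)  # flash 2 response grows after its onset
    activity = decay_1 * gaussian(x1) + growth_2 * gaussian(x2)
    print(f"t={t:.1f}  peak at x={x[np.argmax(activity)]:+.2f}")
```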
Abstract:
Working memory neural networks are characterized which encode the invariant temporal order of sequential events that may be presented at widely differing speeds, durations, and interstimulus intervals. This temporal order code is designed to enable all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described that is based on the model of Seibert and Waxman [1].
Synchronized Oscillations During Cooperative Feature Linking in a Cortical Model of Visual Perception
Abstract:
A neural network model of synchronized oscillations in visual cortex is presented to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these experiments, synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also occurred between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained for single bar stimuli and also across two disconnected, but colinear, bars moving in the same direction. Our model and computer simulations obtain these synchrony results across both single and double bar stimuli using different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as nearest-neighbor and randomly coupled models.
Abstract:
Working memory neural networks are characterized which encode the invariant temporal order of sequential events. Inputs to the networks, called Sustained Temporal Order REcurrent (STORE) models, may be presented at widely differing speeds, durations, and interstimulus intervals. The STORE temporal order code is designed to enable all emergent groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network.
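The invariance idea behind such temporal-order working memories can be sketched very simply: each arriving item is given a fixed activity while all previously stored activities are rescaled by a common factor, so the ratios among earlier items, and hence their stored order, are preserved no matter how many later events arrive. The sketch below illustrates only that idea; it is not the published STORE equations, and the parameter values are assumptions.

```python
# Minimal sketch of ratio-preserving temporal-order storage (an assumption-laden
# stand-in for a STORE-style working memory, not the STORE model itself).
def present_sequence(items, new_item_strength=1.0, shrink=0.8):
    memory = {}
    for item in items:
        for k in memory:                       # common multiplicative rescaling
            memory[k] *= shrink                # preserves ratios among old items
        memory[item] = new_item_strength
    return memory

wm = present_sequence(["A", "B", "C", "D"])
recalled = sorted(wm, key=wm.get, reverse=True)  # read out largest-first
print(wm)        # these parameters give a recency gradient; the full model can
print(recalled)  # also produce primacy or bowed gradients
```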
Abstract:
This article describes further evidence for a new neural network theory of biological motion perception. The theory clarifies why parallel streams V1 --> V2, V1 --> MT, and V1 --> V2 --> MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The theory suggests that the static form system (Static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast and insensitive to direction-of-motion, whereas the motion form system (Motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast but sensitive to direction-of-motion. The theory is used to explain classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include beta motion; split motion; gamma motion and reverse-contrast gamma motion; delta motion; visual inertia; the transition from group motion to element motion in response to a Ternus display as the interstimulus interval (ISI) decreases; group motion in response to a reverse-contrast Ternus display even at short ISIs; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, ISI, and motion threshold known as Korte's Laws; dependence of motion strength on stimulus orientation and spatial frequency; short-range and long-range form-color interactions; and binocular interactions of flashes to different eyes.
Abstract:
A neural network model of synchronized oscillator activity in visual cortex is presented in order to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these recent experiments, it was reported that synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also occurred between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained not only for single bar stimuli but also across two disconnected, but colinear, bars moving in the same direction. Our model and computer simulations obtain these synchrony results across both single and double bar stimuli. For the double bar case, synchronous oscillations are induced in the region between the bars, but no oscillations are induced in the regions beyond the stimuli. These results were achieved with cellular units that exhibit limit cycle oscillations for a robust range of input values, but which approach an equilibrium state when undriven. Single and double bar synchronization of these oscillators was achieved by different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as nearest-neighbor and randomly coupled models. In preattentive visual segmentation, synchronous oscillations may reflect the binding of local feature detectors into a globally coherent grouping. In object recognition, synchronous oscillations may occur during an attentive resonant state that triggers new learning. These modelling results support earlier theoretical predictions of synchronous visual cortical oscillations and demonstrate the robustness of the mechanisms capable of generating synchrony.
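A phase-oscillator caricature of the synchronization result is sketched below: units responding to the two colinear bars are mutually coupled because they share receptive-field preferences, while the remaining units receive no coupling and simply drift. This is not the boundary-segmentation or object-recognition circuits used in the paper, and the coupling strength, frequencies, and integration settings are illustrative assumptions.

```python
import numpy as np

# Kuramoto-style sketch: coupling among the stimulus-driven units pulls their
# phases together even though their intrinsic frequencies differ slightly.
rng = np.random.default_rng(3)
n, coupled = 6, [0, 1, 2, 3]                             # units 0-3 respond to the bars
omega = 2 * np.pi * (1.0 + 0.05 * rng.normal(size=n))    # intrinsic frequencies
theta = rng.uniform(0, 2 * np.pi, size=n)
K, dt = 4.0, 0.001

for _ in range(20000):
    dtheta = omega.copy()
    for i in coupled:
        dtheta[i] += K * np.mean(np.sin(theta[coupled] - theta[i]))
    theta += dt * dtheta

def coherence(idx):
    return abs(np.mean(np.exp(1j * theta[idx])))         # 1 = perfectly in phase

print(coherence(coupled))   # approaches 1: the driven units oscillate in near synchrony
```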
Abstract:
This paper attempts a rational, step-by-step reconstruction of many aspects of the mammalian neural circuitry known to be involved in the spinal cord's regulation of opposing muscles acting on skeletal segments. Mathematical analyses and local circuit simulations based on neural membrane equations are used to clarify the behavioral function of five fundamental cell types, their complex connectivities, and their physiological actions. These cell types are: α-MNs, γ-MNs, IaINs, IbINs, and Renshaw cells. It is shown that many of the complexities of spinal circuitry are necessary to ensure near invariant realization of motor intentions when descending signals of two basic types independently vary over large ranges of magnitude and rate of change. Because these two types of signal afford independent control, or Factorization, of muscle LEngth and muscle TEnsion, our construction was named the FLETE model (Bullock and Grossberg, 1988b, 1989). The present paper significantly extends the range of experimental data encompassed by this evolving model.
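The length/tension factorization itself can be pictured with a deliberately trivial linear opponent-muscle model, sketched below; this is not the FLETE spinal circuit, and the idealized symmetric "muscles" and parameter values are assumptions made only to show the independence of the two descending signal types.

```python
# One descending signal pair sets the *difference* of the opponent muscle
# commands (desired joint angle), while an independent co-contraction signal
# adds equally to both (net tension / joint stiffness). With an idealized
# symmetric linear muscle model, the equilibrium angle depends only on the
# difference, so position stays invariant as co-contraction varies.
def equilibrium_angle(agonist_cmd, antagonist_cmd, k=1.0):
    """Equilibrium of two opposing linear 'muscles' pulling on one joint."""
    return (agonist_cmd - antagonist_cmd) / (2.0 * k)

angle_cmd = 0.6
for cocontraction in (0.5, 1.0, 2.0):
    a = angle_cmd / 2 + cocontraction          # agonist descending command
    b = -angle_cmd / 2 + cocontraction         # antagonist descending command
    total_tension = a + b                      # rises with co-contraction
    print(equilibrium_angle(a, b), total_tension)   # angle fixed, tension grows
```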
Abstract:
How do human observers perceive a coherent pattern of motion from a disparate set of local motion measures? Our research has examined how ambiguous motion signals along straight contours are spatially integrated to obtain a globally coherent perception of motion. Observers viewed displays containing a large number of apertures, with each aperture containing one or more contours whose orientations and velocities could be independently specified. The total pattern of the contour trajectories across the individual apertures was manipulated to produce globally coherent motions, such as rotations, expansions, or translations. For displays containing only straight contours extending to the circumferences of the apertures, observers' reports of global motion direction were biased whenever the sampling of contour orientations was asymmetric relative to the direction of motion. Performance was improved by the presence of identifiable features, such as line ends or crossings, whose trajectories could be tracked over time. The reports of our observers were consistent with a pooling process involving a vector average of measures of the component of velocity normal to contour orientation, rather than with the predictions of the intersection-of-constraints analysis in velocity space.
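The two pooling rules compared in this study can be contrasted numerically as follows; the particular asymmetric sample of contour orientations and the variable names are illustrative assumptions.

```python
import numpy as np

# Each straight contour seen through an aperture only constrains the velocity
# component along its unit normal. The intersection-of-constraints (IOC)
# solution finds the one velocity consistent with all constraints; the vector
# average simply averages the measured normal components. With an asymmetric
# sample of contour orientations, the vector average is biased toward the
# over-represented normals, consistent with the reported direction biases.
true_v = np.array([1.0, 0.0])                       # rightward global motion
angles = np.deg2rad([10, 20, 30, 40, 50, 60])       # asymmetric sample of normal directions
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)

components = (normals @ true_v)[:, None] * normals  # what each aperture measures

vector_average = components.mean(axis=0)
ioc, *_ = np.linalg.lstsq(normals, normals @ true_v, rcond=None)

print(np.rad2deg(np.arctan2(*vector_average[::-1])))  # biased direction estimate
print(np.rad2deg(np.arctan2(*ioc[::-1])))             # ~0 deg: recovers the true motion
```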
Abstract:
A neural network is introduced which provides a solution of the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space. To do this, the network self-organizes a mapping from motion directions in 3-D space to velocity commands in joint space. Computer simulations demonstrate that, without any additional learning, the network can generate accurate movement commands that compensate for variable tool lengths, clamping of joints, distortions of visual input by a prism, and unexpected limb perturbations. Blind reaches have also been simulated.
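A kinematic sketch of the motor-equivalence demonstration is given below, with the self-organized direction-to-joint-velocity map replaced by an analytically computed Jacobian pseudoinverse purely to keep the example short; the planar 3-joint arm, link lengths, gains, and function names are assumptions, not the network described in the abstract.

```python
import numpy as np

# Redundant planar 3-joint arm reaching a 2-D target under visual feedback.
# Clamping a joint simply removes its column from the Jacobian; the remaining
# joints compensate without any change to the rest of the control loop.
L = np.array([1.0, 1.0, 0.8])                      # link lengths (assumed)

def endpoint(q):
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):                             # standard planar-arm Jacobian
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

def reach(target, q, clamped=(), gain=0.2, steps=200):
    q = q.astype(float)
    for _ in range(steps):
        J = jacobian(q)
        for i in clamped:
            J[:, i] = 0.0                          # a clamped joint cannot move
        q += gain * (np.linalg.pinv(J) @ (target - endpoint(q)))
    return endpoint(q)

target = np.array([1.2, 1.5])
q0 = np.array([0.3, 0.4, 0.2])
print(reach(target, q0))                 # the free arm reaches the target
print(reach(target, q0, clamped=(1,)))   # still reaches it with joint 1 clamped
```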