3 results for Modalities of representation
at National Center for Biotechnology Information - NCBI
Abstract:
We examined the effects of eye position on saccades evoked by electrical stimulation of the intraparietal sulcus (IPS) of rhesus monkeys. Microstimulation evoked saccades from sites on the posterior bank, floor, and the medial bank of the IPS. The size and direction of the eye movements varied as a function of initial eye position before microstimulation. At many stimulation sites, eye position affected primarily the amplitude and not the direction of the evoked saccades. These "modified vector saccades" were characteristic of most stimulation-sensitive zones in the IPS, with the exception of a narrow strip located mainly on the floor of the sulcus. Stimulation in this "intercalated zone" evoked saccades that moved the eyes into a particular region in head-centered space, independent of the starting position of the eyes. This latter response is compatible with the stimulation site representing a goal zone in head-centered coordinates. On the other hand, the modified vector saccades observed outside the intercalated zone are indicative of a more distributed representation of head-centered space. A convergent projection from many modified vector sites onto each intercalated site may be a basis for a transition from a distributed to a more explicit representation of space in head-centered coordinates.
Abstract:
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1,000 to 2,000 words and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses the potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses the information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.
Abstract:
In both humans and animals, the hippocampus is critical to memory across modalities of information (e.g., spatial and nonspatial memory) and plays a critical role in the organization and flexible expression of memories. Recent studies have advanced our understanding of the cellular basis of hippocampal function, showing that N-methyl-d-aspartate (NMDA) receptors in area CA1 are required in both the spatial and nonspatial domains of learning. Here we examined whether CA1 NMDA receptors are specifically required for the acquisition and flexible expression of nonspatial memory. Mice lacking CA1 NMDA receptors were impaired in solving a transverse patterning problem that required the simultaneous acquisition of three overlapping odor discriminations, and their impairment was related to an abnormal strategy by which they failed to adequately sample and compare the critical odor stimuli. By contrast, they performed normally, and used normal stimulus sampling strategies, in the concurrent learning of three nonoverlapping odor discriminations. These results suggest that CA1 NMDA receptors play a crucial role in the encoding and flexible expression of stimulus relations in nonspatial memory.