11 results for Visual Evoked Potentials
at Duke University
Abstract:
Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recording of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
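The core steady-state analysis described above, measuring the EEG response locked to a periodic flicker, can be sketched as follows. This is a minimal illustration, not any specific study's pipeline; the function name, sampling rate, and the synthetic 12 Hz signal are all hypothetical:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, stim_freq):
    """Single-sided amplitude of the steady-state response at the flicker frequency.

    eeg: 1-D array of samples from one EEG channel
    fs: sampling rate in Hz
    stim_freq: visual stimulation (flicker) frequency in Hz
    """
    n = len(eeg)
    # Remove the DC offset, then take the single-sided amplitude spectrum.
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) / n * 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Read out the amplitude at the frequency bin nearest the flicker rate.
    return spectrum[np.argmin(np.abs(freqs - stim_freq))]

# Synthetic example: a 12 Hz response of amplitude 2 buried in noise.
fs, stim_freq, dur = 250, 12.0, 4.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * stim_freq * t) + rng.normal(0, 1.0, t.size)
amp = ssvep_amplitude(eeg, fs, stim_freq)  # close to the true amplitude of 2
```

A 4 s window at 250 Hz gives 0.25 Hz frequency resolution, so the 12 Hz flicker falls exactly on a bin; in practice one chooses the epoch length so that the stimulation frequency does the same.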
Abstract:
OBJECTIVE: To review the experience at a single institution with motor evoked potential (MEP) monitoring during intracranial aneurysm surgery to determine the incidence of unacceptable movement. METHODS: Neurophysiology event logs and anesthetic records from 220 craniotomies for aneurysm clipping were reviewed for unacceptable patient movement or reason for cessation of MEPs. Muscle relaxants were not given after intubation. Transcranial MEPs were recorded from bilateral abductor hallucis and abductor pollicis muscles. MEP stimulus intensity was increased up to 500 V until evoked potential responses were detectable. RESULTS: Out of 220 patients, 7 (3.2%) exhibited unacceptable movement with MEP stimulation: 2 had nociception-induced movement and 5 had excessive field movement. In all but one case, MEP monitoring could be resumed, yielding a 99.5% monitoring rate. CONCLUSIONS: With this anesthetic and monitoring regimen, the authors were able to record MEPs of the upper and lower extremities in all patients and found that only 3.2% demonstrated unacceptable movement. With a suitable anesthetic technique, MEP monitoring in the upper and lower extremities appears to be feasible in most patients and should not be withheld because of concern for movement during neurovascular surgery.
Abstract:
The spiking activity of nearby cortical neurons is correlated on both short and long time scales. Understanding this shared variability in firing patterns is critical for appreciating the representation of sensory stimuli in ensembles of neurons, the coincident influences of neurons on common targets, and the functional implications of microcircuitry. Our knowledge about neuronal correlations, however, derives largely from experiments that used different recording methods, analysis techniques, and cortical regions. Here we studied the structure of neuronal correlation in area V4 of alert macaques using recording and analysis procedures designed to match those used previously in primary visual cortex (V1), the major input to V4. We found that the spatial and temporal properties of correlations in V4 were remarkably similar to those of V1, with two notable differences: correlated variability in V4 was approximately one-third the magnitude of that in V1 and synchrony in V4 was less temporally precise than in V1. In both areas, spontaneous activity (measured during fixation while viewing a blank screen) was approximately twice as correlated as visual-evoked activity. The results provide a foundation for understanding how the structure of neuronal correlation differs among brain regions and stages in cortical processing and suggest that it is likely governed by features of neuronal circuits that are shared across the visual cortex.
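The correlated-variability measurements discussed above rest on a simple quantity: the Pearson correlation of trial-by-trial spike counts between pairs of simultaneously recorded neurons (often written r_sc). A minimal sketch, with a synthetic "shared drive" model standing in for real V1/V4 data:

```python
import numpy as np

def spike_count_correlation(counts_a, counts_b):
    """Pearson correlation of trial-by-trial spike counts for a neuron pair."""
    return np.corrcoef(counts_a, counts_b)[0, 1]

# Synthetic pair: each neuron's count is a private Poisson process plus a
# shared component, which induces correlated variability across trials.
rng = np.random.default_rng(1)
n_trials = 2000
shared = rng.poisson(5, n_trials)            # common input to both neurons
counts_a = shared + rng.poisson(10, n_trials)
counts_b = shared + rng.poisson(10, n_trials)
r_sc = spike_count_correlation(counts_a, counts_b)
# Expected r_sc = var(shared) / (var(shared) + var(private)) = 5/15, about 0.33
```

In this toy model the shared Poisson component contributes variance 5 and each private component variance 10, so r_sc converges to 5/15; shrinking the shared component relative to the private one reproduces the kind of reduction in correlated variability the abstract reports between V1 and V4.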
Abstract:
Successful interaction with the world depends on accurate perception of the timing of external events. Neurons at early stages of the primate visual system represent time-varying stimuli with high precision. However, it is unknown whether this temporal fidelity is maintained in the prefrontal cortex, where changes in neuronal activity generally correlate with changes in perception. One reason to suspect that it is not maintained is that humans experience surprisingly large fluctuations in the perception of time. To investigate the neuronal correlates of time perception, we recorded from neurons in the prefrontal cortex and midbrain of monkeys performing a temporal-discrimination task. Visual time intervals were presented at a timescale relevant to natural behavior (<500 ms). At this brief timescale, neuronal adaptation (time-dependent changes in the size of successive responses) occurs. We found that visual activity fluctuated with timing judgments in the prefrontal cortex but not in comparable midbrain areas. Surprisingly, only response strength, not timing, predicted task performance. Intervals perceived as longer were associated with larger visual responses and shorter intervals with smaller responses, matching the dynamics of adaptation. These results suggest that the magnitude of prefrontal activity may be read out to provide temporal information that contributes to judging the passage of time.
Abstract:
Our percept of visual stability across saccadic eye movements may be mediated by presaccadic remapping. Just before a saccade, neurons that remap become visually responsive at a future field (FF), which anticipates the saccade vector. Hence, the neurons use corollary discharge of saccades. Many of the neurons also decrease their response at the receptive field (RF). Presaccadic remapping occurs in several brain areas including the frontal eye field (FEF), which receives corollary discharge of saccades in its layer IV from a collicular-thalamic pathway. We studied, at two levels, the microcircuitry of remapping in the FEF. At the laminar level, we compared remapping between layers IV and V. At the cellular level, we compared remapping between different neuron types of layer IV. In the FEF in four monkeys (Macaca mulatta), we identified 27 layer IV neurons with orthodromic stimulation and 57 layer V neurons with antidromic stimulation from the superior colliculus. With the use of established criteria, we classified the layer IV neurons as putative excitatory (n = 11), putative inhibitory (n = 12), or ambiguous (n = 4). We found that just before a saccade, putative excitatory neurons increased their visual response at the RF, putative inhibitory neurons showed no change, and ambiguous neurons increased their visual response at the FF. None of the neurons showed presaccadic visual changes at both RF and FF. In contrast, neurons in layer V showed full remapping (at both the RF and FF). Our data suggest that elemental signals for remapping are distributed across neuron types in early cortical processing and combined in later stages of cortical microcircuitry.
Abstract:
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.
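The ERP-component measurements underlying the study above reduce to averaging stimulus-locked epochs and reading out the mean amplitude in a latency window (e.g., around 150 ms for the N1). A minimal sketch with simulated data; the function name, sampling rate, and the simulated component are hypothetical, not the study's actual parameters:

```python
import numpy as np

def erp_component_amplitude(epochs, fs, t_start, window):
    """Mean ERP amplitude within a latency window.

    epochs: (n_trials, n_samples) array of stimulus-locked epochs
    fs: sampling rate in Hz
    t_start: time of the first sample relative to stimulus onset (s)
    window: (lo, hi) latency window in seconds, e.g. (0.13, 0.17) for N1
    """
    erp = epochs.mean(axis=0)                 # average across trials
    t = t_start + np.arange(erp.size) / fs    # latency of each sample
    mask = (t >= window[0]) & (t <= window[1])
    return erp[mask].mean()

# Simulated data: a negative deflection peaking at 150 ms in 200 noisy trials.
fs, t_start = 500, -0.1
t = t_start + np.arange(300) / fs
rng = np.random.default_rng(2)
n1 = -3.0 * np.exp(-0.5 * ((t - 0.15) / 0.02) ** 2)    # simulated N1 waveform
epochs = n1 + rng.normal(0, 5.0, (200, t.size))
amp = erp_component_amplitude(epochs, fs, t_start, (0.13, 0.17))  # negative
```

With single-trial noise far larger than the component itself, averaging 200 trials still recovers a clearly negative window mean, which is why component amplitudes are measured on the trial average rather than single epochs.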
Abstract:
Recently, a number of investigators have examined the neural loci of psychological processes enabling the control of visual spatial attention using cued-attention paradigms in combination with event-related functional magnetic resonance imaging. Findings from these studies have provided strong evidence for the involvement of a fronto-parietal network in attentional control. In the present study, we build upon this previous work to further investigate these attentional control systems. In particular, we employed additional controls for nonattentional sensory and interpretative aspects of cue processing to determine whether distinct regions in the fronto-parietal network are involved in different aspects of cue processing, such as cue-symbol interpretation and attentional orienting. In addition, we used shorter cue-target intervals that were closer to those used in the behavioral and event-related potential cueing literatures. Twenty participants performed a cued spatial attention task while brain activity was recorded with functional magnetic resonance imaging. We found functional specialization for different aspects of cue processing in the lateral and medial subregions of the frontal and parietal cortex. In particular, the medial subregions were more specific to the orienting of visual spatial attention, while the lateral subregions were associated with more general aspects of cue processing, such as cue-symbol interpretation. Additional cue-related effects included differential activations in midline frontal regions and pretarget enhancements in the thalamus and early visual cortical areas.
Abstract:
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding, that when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source with a location dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sound as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.
Abstract:
Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
Abstract:
The image on the retina may move because the eyes move, or because something in the visual scene moves. The brain is not fooled by this ambiguity. Even as we make saccades, we are able to detect whether visual objects remain stable or move. Here we test whether this ability to assess visual stability across saccades is present at the single-neuron level in the frontal eye field (FEF), an area that receives both visual input and information about imminent saccades. Our hypothesis was that neurons in the FEF report whether a visual stimulus remains stable or moves as a saccade is made. Monkeys made saccades in the presence of a visual stimulus outside of the receptive field. In some trials, the stimulus remained stable, but in other trials, it moved during the saccade. In every trial, the stimulus occupied the center of the receptive field after the saccade, thus evoking a reafferent visual response. We found that many FEF neurons signaled, in the strength and timing of their reafferent response, whether the stimulus had remained stable or moved. Reafferent responses were tuned for the amount of stimulus translation, and, in accordance with human psychophysics, tuning was better (more prevalent, stronger, and quicker) for stimuli that moved perpendicular, rather than parallel, to the saccade. Tuning was sometimes present as well for nonspatial transaccadic changes (in color, size, or both). Our results indicate that FEF neurons evaluate visual stability during saccades and may be general purpose detectors of transaccadic visual change.
Abstract:
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals present in the FEF, facilitating access to a common motor output pathway.