21 results for Visual recognition
at Duke University
Abstract:
Emotional and attentional functions are known to be distributed along ventral and dorsal networks in the brain, respectively. However, the interactions between these systems remain to be specified. The present study used event-related functional magnetic resonance imaging (fMRI) to investigate how attentional focus can modulate the neural activity elicited by scenes that vary in emotional content. In a visual oddball task, aversive and neutral scenes were presented intermittently among circles and squares. The squares were frequent standard events, whereas the other novel stimulus categories occurred rarely. One experimental group (n = 10) was instructed to count the circles, whereas another group (n = 12) counted the emotional scenes. A main effect of emotion was found in the amygdala (AMG) and ventral frontotemporal cortices. In these regions, activation was significantly greater for emotional than neutral stimuli but was invariant to attentional focus. A main effect of attentional focus was found in dorsal frontoparietal cortices, whose activity signaled task-relevant target events irrespective of emotional content. The only brain region that was sensitive to both emotion and attentional focus was the anterior cingulate gyrus (ACG). When circles were task-relevant, the ACG responded equally to circle targets and distracting emotional scenes. The ACG response to emotional scenes increased when they were task-relevant, and the response to circles concomitantly decreased. These findings support and extend prominent network theories of emotion-attention interactions that highlight the integrative role played by the anterior cingulate.
Abstract:
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
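The read-out idea lends itself to a toy sketch. The Python snippet below is an illustration under assumed tuning shapes and a simple activity-weighted decoder, not the authors' published algorithm: a single read-out that weights each neuron's map position by its firing level can handle both a place-coded visual hill and a monotonic auditory rate code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50
preferred_az = np.linspace(-40, 40, n_neurons)   # map axis (degrees azimuth)

def visual_response(stim_az, sigma=10.0):
    """Circumscribed receptive fields: a localized hill of activity."""
    return np.exp(-0.5 * ((stim_az - preferred_az) / sigma) ** 2)

def auditory_response(stim_az, k=0.1):
    """Open-ended monotonic tuning: activity rises with azimuth past each
    neuron's map position, with no circumscribed peak (assumed shape)."""
    return 1.0 / (1.0 + np.exp(-k * (stim_az - preferred_az)))

def read_out(rates):
    """Site AND level both contribute: map positions weighted by firing."""
    return np.sum(rates * preferred_az) / np.sum(rates)

for az in (-20, 0, 20):
    v = read_out(visual_response(az) + rng.normal(0, 0.01, n_neurons))
    a = read_out(auditory_response(az) + rng.normal(0, 0.01, n_neurons))
    print(f"stimulus {az:+d} deg -> visual {v:+.1f}, auditory {a:+.1f}")
```

In this toy version the visual estimate is accurate while the auditory estimate co-varies with location but carries systematic error, echoing the subtle accuracy differences between auditory and visual saccades described above.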
Abstract:
This dissertation centers on the relationship between art and politics in postwar Central America as materialized in the specific issues of racial and gendered violence that derive from the region's geopolitical location and history. It argues that the decade of the 1990s marks a moment of change in the region's cultural infrastructure, both institutionally and conceptually, in which artists seek a new visual language of experimental art practices to articulate and conceptualize a critical understanding of place, experience, and knowledge. It posits that visual and conceptual manifestations of violence in Central American performance, conceptual art, and installation extend beyond a critique of the state, and beyond the scope of political parties in perpetuating violent circumstances in these countries. It argues that artists instead use experimental practices in art to locate manifestations of racial violence in a historical system of domination and as a legacy of colonialism still witnessed, lived, and learned by multiple subjectivities in the region. In this postwar period artists move beyond the cold-war rhetoric of the previous decades and instead root the current social and political injustices in what Aníbal Quijano calls the "coloniality of power." Through an engagement with decolonial methodologies, this dissertation challenges the label "political art" in Central America and offers what I call "visual disobedience" as a response to the coloniality of seeing. I posit that visual colonization is yet another aspect of the coloniality of power and indispensable to projects of decolonization. The dissertation offers an analysis of various works to show how visual disobedience responds specifically to racial and gender violence and the equally violent colonization of visuality in Mesoamerica. Such geopolitical critiques through art unmask themes specific to life and identity in contemporary Central America, from indigenous genocide, femicide, and transnational gangs to mass imprisonments and a new wave of social cleansing. I propose that Central American artists, beyond an anti-colonial stance, are engaging in visual disobedience so as to construct decolonial epistemologies in art, through art, and as art: decolonial gestures for healing.
Abstract:
Remembering past events, or episodic retrieval, consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies that compared imagery and retrieval have used modality-nonspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed these issues by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality, allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute to both imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex, and angular gyrus, showed a pattern common to imagery and retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate, and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks. Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex, and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.
Abstract:
When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives relating to the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated.
Abstract:
The number of studies examining visual perspective during retrieval has recently grown. However, the way in which perspective has been conceptualized differs across studies. Some studies have suggested perspective is experienced as either a first-person or a third-person perspective, whereas others have suggested both perspectives can be experienced during a single retrieval attempt. This aspect of perspective was examined across three studies, each employing a different measurement technique commonly used in studies of perspective. Results suggest that individuals can experience more than one perspective when recalling events. Furthermore, the experience of the two perspectives correlated differentially with ratings of vividness, suggesting that the two perspectives should not be considered in opposition to one another. We also found evidence of a gender effect in the experience of perspective, with females experiencing third-person perspectives more often than males. Future studies should allow for the experience of more than one perspective during retrieval.
Abstract:
Amnesia typically results from trauma to the medial temporal regions that coordinate activation among the disparate areas of cortex that represent the information that makes up autobiographical memories. We proposed that amnesia should also result from damage to these cortical regions themselves, particularly those that subserve long-term visual memory [Rubin, D. C., & Greenberg, D. L. (1998). Visual memory-deficit amnesia: A distinct amnesic presentation and etiology. Proceedings of the National Academy of Sciences of the USA, 95, 5413-5416]. We previously found 11 such cases in the literature, and all 11 had amnesia. We now present a detailed investigation of one of these patients. M.S. suffers from long-term visual memory loss along with some semantic deficits; he also manifests a severe retrograde amnesia and moderate anterograde amnesia. The presentation of his amnesia differs from that of the typical medial-temporal or lateral-temporal amnesic; we suggest that his visual deficits may be contributing to his autobiographical amnesia.
Abstract:
We describe a form of amnesia, which we have called visual memory-deficit amnesia, that is caused by damage to areas of the visual system that store visual information. Because it is caused by a deficit in access to stored visual material and not by an impaired ability to encode or retrieve new material, it has the otherwise infrequent properties of a more severe retrograde than anterograde amnesia with no temporal gradient in the retrograde amnesia. Of the 11 cases of long-term visual memory loss found in the literature, all had amnesia extending beyond a loss of visual memory, often including a near total loss of pretraumatic episodic memory. Of the 6 cases in which both the severity of retrograde and anterograde amnesia and the temporal gradient of the retrograde amnesia were noted, 4 had a more severe retrograde amnesia with no temporal gradient and 2 had a less severe retrograde amnesia with a temporal gradient.
Abstract:
The spiking activity of nearby cortical neurons is correlated on both short and long time scales. Understanding this shared variability in firing patterns is critical for appreciating the representation of sensory stimuli in ensembles of neurons, the coincident influences of neurons on common targets, and the functional implications of microcircuitry. Our knowledge about neuronal correlations, however, derives largely from experiments that used different recording methods, analysis techniques, and cortical regions. Here we studied the structure of neuronal correlation in area V4 of alert macaques using recording and analysis procedures designed to match those used previously in primary visual cortex (V1), the major input to V4. We found that the spatial and temporal properties of correlations in V4 were remarkably similar to those of V1, with two notable differences: correlated variability in V4 was approximately one-third the magnitude of that in V1 and synchrony in V4 was less temporally precise than in V1. In both areas, spontaneous activity (measured during fixation while viewing a blank screen) was approximately twice as correlated as visual-evoked activity. The results provide a foundation for understanding how the structure of neuronal correlation differs among brain regions and stages in cortical processing and suggest that it is likely governed by features of neuronal circuits that are shared across the visual cortex.
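As a concrete reference point, the quantity typically compared across areas in this literature is the trial-to-trial Pearson correlation of spike counts between pairs of simultaneously recorded neurons (often called r_sc). Below is a generic sketch of that computation, not the authors' exact pipeline; the simulated counts and the shared-noise parameters are placeholders.

```python
import numpy as np

def spike_count_correlations(counts):
    """counts: (n_trials, n_neurons) spike-count matrix for one condition.
    Returns Pearson correlations for all unique neuron pairs."""
    z = (counts - counts.mean(0)) / counts.std(0)   # z-score each neuron
    corr = (z.T @ z) / len(counts)                  # pairwise correlation matrix
    return corr[np.triu_indices_from(corr, k=1)]    # unique pairs only

rng = np.random.default_rng(1)
shared = rng.normal(size=(400, 1))                  # common trial-to-trial noise
lam = np.clip(10 + 2 * shared, 0.1, None)           # trial-wise firing rates
counts = rng.poisson(lam, size=(400, 30)).astype(float)  # 30 neurons, 400 trials

print("mean r_sc:", spike_count_correlations(counts).mean().round(3))
```

Running the same function on counts from blank-screen fixation epochs versus stimulus epochs gives the kind of spontaneous-versus-evoked comparison described above.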
Abstract:
Successful interaction with the world depends on accurate perception of the timing of external events. Neurons at early stages of the primate visual system represent time-varying stimuli with high precision. However, it is unknown whether this temporal fidelity is maintained in the prefrontal cortex, where changes in neuronal activity generally correlate with changes in perception. One reason to suspect that it is not maintained is that humans experience surprisingly large fluctuations in the perception of time. To investigate the neuronal correlates of time perception, we recorded from neurons in the prefrontal cortex and midbrain of monkeys performing a temporal-discrimination task. Visual time intervals were presented at a timescale relevant to natural behavior (<500 ms). At this brief timescale, neuronal adaptation (time-dependent changes in the size of successive responses) occurs. We found that visual activity fluctuated with timing judgments in the prefrontal cortex but not in comparable midbrain areas. Surprisingly, only response strength, not timing, predicted task performance. Intervals perceived as longer were associated with larger visual responses, and intervals perceived as shorter with smaller responses, matching the dynamics of adaptation. These results suggest that the magnitude of prefrontal activity may be read out to provide temporal information that contributes to judging the passage of time.
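The link between response magnitude and perceived duration is easiest to see against a simple adaptation model. The toy sketch below assumes exponential recovery with a 150 ms time constant; both the functional form and the constant are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

tau = 150.0  # recovery time constant in ms (illustrative value)

def second_response(interval_ms, r_max=1.0):
    """Exponential recovery from adaptation: responses after short
    intervals are suppressed, after long intervals nearly restored."""
    return r_max * (1.0 - np.exp(-interval_ms / tau))

for dt in (100, 200, 400):
    print(f"{dt} ms interval -> response {second_response(dt):.2f}")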
Abstract:
Our percept of visual stability across saccadic eye movements may be mediated by presaccadic remapping. Just before a saccade, neurons that remap become visually responsive at a future field (FF), which anticipates the saccade vector. Hence, the neurons use corollary discharge of saccades. Many of the neurons also decrease their response at the receptive field (RF). Presaccadic remapping occurs in several brain areas including the frontal eye field (FEF), which receives corollary discharge of saccades in its layer IV from a collicular-thalamic pathway. We studied the microcircuitry of remapping in the FEF at two levels. At the laminar level, we compared remapping between layers IV and V. At the cellular level, we compared remapping between different neuron types of layer IV. In the FEF in four monkeys (Macaca mulatta), we identified 27 layer IV neurons with orthodromic stimulation and 57 layer V neurons with antidromic stimulation from the superior colliculus. Using established criteria, we classified the layer IV neurons as putative excitatory (n = 11), putative inhibitory (n = 12), or ambiguous (n = 4). We found that just before a saccade, putative excitatory neurons increased their visual response at the RF, putative inhibitory neurons showed no change, and ambiguous neurons increased their visual response at the FF. None of the neurons showed presaccadic visual changes at both RF and FF. In contrast, neurons in layer V showed full remapping (at both the RF and FF). Our data suggest that elemental signals for remapping are distributed across neuron types in early cortical processing and combined in later stages of cortical microcircuitry.
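The "established criteria" are not spelled out in the abstract; one widely used convention, assumed here purely for illustration, separates putative inhibitory from putative excitatory neurons by extracellular spike waveform width, leaving an ambiguous middle range.

```python
def classify_by_spike_width(trough_to_peak_ms):
    """Toy three-way classification on spike trough-to-peak duration (ms).
    The 0.25/0.35 ms cutoffs are illustrative, not the study's values."""
    if trough_to_peak_ms < 0.25:
        return "putative inhibitory"   # narrow-spiking cells
    if trough_to_peak_ms > 0.35:
        return "putative excitatory"   # broad-spiking cells
    return "ambiguous"                 # middle range left unclassified

for width in (0.20, 0.30, 0.45):
    print(f"{width:.2f} ms -> {classify_by_spike_width(width)}")
```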
Abstract:
The image on the retina may move because the eyes move, or because something in the visual scene moves. The brain is not fooled by this ambiguity. Even as we make saccades, we are able to detect whether visual objects remain stable or move. Here we test whether this ability to assess visual stability across saccades is present at the single-neuron level in the frontal eye field (FEF), an area that receives both visual input and information about imminent saccades. Our hypothesis was that neurons in the FEF report whether a visual stimulus remains stable or moves as a saccade is made. Monkeys made saccades in the presence of a visual stimulus outside of the receptive field. In some trials, the stimulus remained stable, but in other trials, it moved during the saccade. In every trial, the stimulus occupied the center of the receptive field after the saccade, thus evoking a reafferent visual response. We found that many FEF neurons signaled, in the strength and timing of their reafferent response, whether the stimulus had remained stable or moved. Reafferent responses were tuned for the amount of stimulus translation, and, in accordance with human psychophysics, tuning was better (more prevalent, stronger, and quicker) for stimuli that moved perpendicular, rather than parallel, to the saccade. Tuning was sometimes present as well for nonspatial transaccadic changes (in color, size, or both). Our results indicate that FEF neurons evaluate visual stability during saccades and may be general purpose detectors of transaccadic visual change.
Abstract:
Neuronal receptive fields (RFs) provide the foundation for understanding systems-level sensory processing. In early visual areas, investigators have mapped RFs in detail using stochastic stimuli and sophisticated analytical approaches. Much less is known about RFs in prefrontal cortex. Visual stimuli used for mapping RFs in prefrontal cortex tend to cover a small range of spatial and temporal parameters, making it difficult to understand their role in visual processing. To address these shortcomings, we implemented a generalized linear model to measure the RFs of neurons in the macaque frontal eye field (FEF) in response to sparse, full-field stimuli. Our high-resolution, probabilistic approach tracked the evolution of RFs during passive fixation, and we validated our results against conventional measures. We found that FEF neurons exhibited a surprising level of sensitivity to stimuli presented as briefly as 10 ms or to multiple dots presented simultaneously, suggesting that FEF visual responses are more precise than previously appreciated. FEF RF spatial structures were largely maintained over time and between stimulus conditions. Our results demonstrate that the application of probabilistic RF mapping to FEF and similar association areas is an important tool for clarifying the neuronal mechanisms of cognition.
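The core of such probabilistic RF mapping is a generalized linear model that regresses spike counts onto the recent stimulus. Here is a minimal sketch on simulated sparse-dot data; the grid size, regularization, scikit-learn estimator, and ground-truth RF are illustrative choices, not the study's implementation.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor  # requires scikit-learn

rng = np.random.default_rng(2)
n_bins, grid = 5000, 16                       # time bins, 16x16 stimulus grid
stim = (rng.random((n_bins, grid * grid)) < 0.01).astype(float)  # sparse dots

true_rf = np.zeros(grid * grid)
true_rf[grid * 8 + 6 : grid * 8 + 9] = 1.5    # small hot spot as ground truth
rate = np.exp(-2.0 + stim @ true_rf)          # log-linear Poisson firing rate
spikes = rng.poisson(rate)                    # simulated spike counts per bin

glm = PoissonRegressor(alpha=1e-3, max_iter=300)
glm.fit(stim, spikes)                         # regress counts on the stimulus
rf_map = glm.coef_.reshape(grid, grid)        # estimated spatial RF
print("peak of estimated RF at:", np.unravel_index(rf_map.argmax(), rf_map.shape))
```

Extending the design matrix with time-lagged copies of the stimulus would likewise recover the RF's temporal evolution, which is the sense in which such models can track responses to stimuli as brief as 10 ms.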
Abstract:
Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recording of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
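The fundamental logic of the steady-state approach is that a stimulus modulated at frequency f drives a response concentrated at f and its harmonics, which can be read directly off the EEG amplitude spectrum. A minimal sketch on simulated data follows; all frequencies, amplitudes, and noise levels are illustrative assumptions.

```python
import numpy as np

fs, f_stim, dur = 500.0, 7.5, 20.0            # sample rate (Hz), tag frequency, seconds
t = np.arange(0, dur, 1 / fs)
eeg = (0.8 * np.sin(2 * np.pi * f_stim * t)          # response at the fundamental
       + 0.3 * np.sin(2 * np.pi * 2 * f_stim * t)    # second harmonic
       + np.random.default_rng(3).normal(0, 1.0, t.size))  # background EEG noise

spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (f_stim, 2 * f_stim):
    print(f"{f:.1f} Hz amplitude: {spectrum[np.argmin(np.abs(freqs - f))]:.2f}")
```

Choosing a recording duration that makes the tag frequency fall on an exact FFT bin, as here (20 s at 7.5 Hz), keeps the steady-state response from leaking into neighboring frequencies.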
Abstract:
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.
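For readers unfamiliar with lateralized components such as the N2pc, the measurement is a contralateral-minus-ipsilateral difference wave averaged over a fixed time window at posterior electrodes. The sketch below runs on simulated single-trial data; the trial counts, noise levels, and simulated waveform are placeholders, while the 170-250 ms window follows the abstract.

```python
import numpy as np

times = np.arange(-100, 500)                 # ms relative to search-array onset

def n2pc(contra, ipsi, window=(170, 250)):
    """contra/ipsi: (n_trials, n_timepoints) voltages from posterior sites
    contralateral vs. ipsilateral to the target. Returns the mean
    contralateral-minus-ipsilateral amplitude in the window (ms)."""
    diff = contra.mean(0) - ipsi.mean(0)      # grand-average difference wave
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()

rng = np.random.default_rng(4)
bump = -2.0 * np.exp(-0.5 * ((times - 210) / 25.0) ** 2)  # simulated N2pc deflection
contra = bump + rng.normal(0, 5, (200, times.size))
ipsi = rng.normal(0, 5, (200, times.size))
print(f"N2pc amplitude: {n2pc(contra, ipsi):.2f} µV")
```

Comparing this amplitude (and the latency at which the difference wave first deviates from zero) between day 1 and day 5 of practice is the kind of contrast behind the "earlier and larger" N2pc finding reported above.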