8 results for Visual attention
at Duke University
Abstract:
All of us are tasked with juggling our inner mental lives against immediate external task demands. For many years, the temporary maintenance of internal information was considered to be handled by a dedicated working memory (WM) system. It has recently become increasingly clear, however, that such short-term internal activation interacts with attention focused on external stimuli. Exactly why these two interact, at what level of processing, and to what degree remains unclear. Because internal maintenance and external attention processes co-occur, the manner of their interaction has vast implications for functioning in daily life. The work described here has employed original experimental paradigms combining WM and attention task elements, functional magnetic resonance imaging (fMRI) to illuminate the associated neural processes, and transcranial magnetic stimulation (TMS) to clarify the causal substrates of attentional brain function. These studies have examined a mechanism that might explain why (and when) the content of WM can involuntarily capture visual attention. They have, furthermore, tested whether fundamental attentional selection processes operate within WM, and whether they are reciprocal with attention. Finally, they have illuminated the neural consequences of competing attentional demands. The findings indicate that WM shares representations, operating principles, and cognitive resources with externally-oriented attention.
Memory-Based Attentional Guidance: A Window to the Relationship between Working Memory and Attention
Abstract:
Attention, the cognitive means by which we prioritize the processing of a subset of information, is necessary for operating efficiently and effectively in the world. Thus, a critical theoretical question is how information is selected. In the visual domain, working memory (WM)—which refers to the short-term maintenance and manipulation of information that is no longer accessible to the senses—has been highlighted as an important determinant of what is selected by visual attention. Furthermore, although WM and attention have traditionally been conceived as separate cognitive constructs, an abundance of behavioral and neural evidence indicates that these two domains are in fact intertwined and overlapping. The aim of this dissertation is to better understand the nature of WM and attention, primarily through the phenomenon of memory-based attentional guidance, whereby the active maintenance of items in visual WM reliably biases the deployment of attention to memory-matching items in the visual environment. The research presented here employs a combination of behavioral, functional imaging, and computational modeling techniques that address: (1) WM guidance effects with respect to the traditional dichotomy of top-down versus bottom-up attentional control; (2) under what circumstances the contents of WM impact visual attention; and (3) the broader hypothesis of a predictive and competitive interaction between WM and attention. Collectively, these empirical findings reveal the importance of WM as a distinct factor in attentional control and support current models of multiple-state WM, which may have broader implications for how we select and maintain information.
Abstract:
The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated which promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral signs can be observed late in the first year of life. Many of these studies involve extensive frame-by-frame video observation and analysis of a child's natural behavior. Although nonintrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are burdensome for clinical and large population research purposes. This work is a first milestone in a long-term project on non-invasive early observation of children in order to aid in risk detection and research of neurodevelopmental disorders. We focus on providing low-cost computer vision tools to measure and identify ASD behavioral signs based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure responses to general ASD risk assessment tasks and activities outlined by the AOSI which assess visual attention by tracking facial features. We show results, including comparisons with expert and nonexpert clinicians, which demonstrate that the proposed computer vision tools can capture critical behavioral observations and potentially augment the clinician's behavioral observations obtained from real in-clinic assessments.
Abstract:
The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical and large population research purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims at helping clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms involves assessing head motion by tracking facial features, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.
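The facial-feature-based head-motion measurement described above could be sketched roughly as follows. This is a minimal illustration, assuming an upstream landmark tracker has already produced per-frame, normalized horizontal positions for the nose tip and the inter-ocular midpoint; the function name and threshold are hypothetical and are not taken from the AOSI tools.

```python
import numpy as np

def head_turn_events(nose_x, eye_mid_x, threshold=0.15):
    """Flag frames where the head appears turned left or right.

    nose_x, eye_mid_x: per-frame horizontal positions, normalized to
    [0, 1] image width, as produced by some upstream landmark tracker.
    A sustained horizontal offset of the nose tip from the inter-ocular
    midpoint serves as a crude yaw proxy; its sign gives the direction.
    """
    offset = np.asarray(nose_x, dtype=float) - np.asarray(eye_mid_x, dtype=float)
    events = []
    for frame, off in enumerate(offset):
        if off > threshold:
            events.append((frame, "right"))
        elif off < -threshold:
            events.append((frame, "left"))
    return events
```

In the actual system, events like these would be aggregated over an AOSI activity (e.g., how quickly the child orients to a stimulus) rather than reported frame by frame.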
Abstract:
Emotional and attentional functions are known to be distributed along ventral and dorsal networks in the brain, respectively. However, the interactions between these systems remain to be specified. The present study used event-related functional magnetic resonance imaging (fMRI) to investigate how attentional focus can modulate the neural activity elicited by scenes that vary in emotional content. In a visual oddball task, aversive and neutral scenes were presented intermittently among circles and squares. The squares were frequent standard events, whereas the other novel stimulus categories occurred rarely. One experimental group [N=10] was instructed to count the circles, whereas another group [N=12] counted the emotional scenes. A main effect of emotion was found in the amygdala (AMG) and ventral frontotemporal cortices. In these regions, activation was significantly greater for emotional than neutral stimuli but was invariant to attentional focus. A main effect of attentional focus was found in dorsal frontoparietal cortices, whose activity signaled task-relevant target events irrespective of emotional content. The only brain region that was sensitive to both emotion and attentional focus was the anterior cingulate gyrus (ACG). When circles were task-relevant, the ACG responded equally to circle targets and distracting emotional scenes. The ACG response to emotional scenes increased when they were task-relevant, and the response to circles concomitantly decreased. These findings support and extend prominent network theories of emotion-attention interactions that highlight the integrative role played by the anterior cingulate.
Abstract:
Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recording of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
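The core steady-state analysis the article describes — driving the visual system at a fixed rate and measuring the evoked response at that exact frequency — can be sketched with synthetic data. The 12 Hz flicker rate, sampling rate, and amplitudes below are illustrative assumptions, not values from the article.

```python
import numpy as np

fs = 250.0                     # EEG sampling rate in Hz (assumed)
f_stim = 12.0                  # flicker frequency in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)   # 10 s of recording

# Synthetic "EEG": a small oscillation at the stimulation frequency
# buried in broadband noise.
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)

# Amplitude spectrum; the steady-state response appears as a sharp
# peak at the stimulation frequency (and, in real data, its harmonics).
amps = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_freq = freqs[np.argmax(amps[1:]) + 1]   # skip the DC bin
```

The narrowband nature of this peak is what makes steady-state methods robust: signal power concentrates in one frequency bin while noise spreads across the whole spectrum.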
Abstract:
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.
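As an illustration of how a lateralized component such as the N2pc is conventionally quantified — the mean contralateral-minus-ipsilateral voltage within its time window — here is a minimal sketch. The function name and window handling are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def n2pc_amplitude(contra, ipsi, times, window=(0.170, 0.250)):
    """Mean contralateral-minus-ipsilateral voltage in the N2pc window.

    contra, ipsi: averaged waveforms (microvolts) from posterior
    electrodes contralateral vs. ipsilateral to the popout target;
    times: sample times in seconds, aligned to stimulus onset.
    """
    contra = np.asarray(contra, dtype=float)
    ipsi = np.asarray(ipsi, dtype=float)
    times = np.asarray(times, dtype=float)
    mask = (times >= window[0]) & (times <= window[1])
    return float((contra[mask] - ipsi[mask]).mean())
```

A more negative value indicates a stronger N2pc; the practice effect reported above would show up as a larger (and earlier) difference after training.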
Abstract:
For over 50 years, the Satisfaction of Search effect, more recently known as the Subsequent Search Miss (SSM) effect, has plagued the field of radiology. Defined as a decrease in additional target accuracy after detecting a prior target in a visual search, SSM errors are known to underlie both real-world search errors (e.g., a radiologist is more likely to miss a tumor if a different tumor was previously detected) and more simplified, lab-based search errors (e.g., an observer is more likely to miss a target ‘T’ if a different target ‘T’ was previously detected). Unfortunately, little was known about this phenomenon’s cognitive underpinnings, and SSM errors have proven difficult to eliminate. More recently, however, experimental research has provided evidence for three different theories of SSM errors: the Satisfaction account, the Perceptual Set account, and the Resource Depletion account. A series of studies examined performance in a multiple-target visual search and aimed to provide support for the Resource Depletion account—a first target consumes cognitive resources, leaving less available to process additional targets.
To assess a potential mechanism underlying SSM errors, eye movements were recorded in a multiple-target visual search and were used to explore whether a first target may cause an immediate decrease in second-target accuracy, a phenomenon known as an attentional blink. To determine whether other known attentional distractions amplified the effect that finding a first target has on second-target detection, distractors within the immediate vicinity of the targets (i.e., clutter) were measured and compared to accuracy for a second target. To better understand which characteristics of attention were impacted by detecting a first target, individual differences within four characteristics of attention were compared to second-target misses in a multiple-target visual search.
The results demonstrated that an attentional blink underlies SSM errors, with a decrease in second-target accuracy from 135 ms to 405 ms after detecting or re-fixating a first target. The effects of clutter were exacerbated after finding a first target, causing a greater decrease in second-target accuracy as clutter increased around a second target. The attentional characteristics of modulation and vigilance were correlated with second-target misses, suggesting that worse attentional modulation and vigilance are predictive of more second-target misses. Taken together, these results are used as the foundation to support a new theory of SSM errors, the Flux Capacitor theory. The Flux Capacitor theory predicts that once a target is found, it is maintained as an attentional template in working memory, which consumes attentional resources that could otherwise be used to detect additional targets. This theory not only proposes why attentional resources are consumed by a first target, but encompasses the research in support of all three SSM theories in an effort to establish a grand, unified theory of SSM errors.
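The blink-window analysis described above — binning second-target accuracy by the lag since first-target detection — could be sketched like this. The bin edges follow the reported 135-405 ms window; the function name, outer edges, and bin layout are illustrative assumptions.

```python
import numpy as np

def accuracy_by_lag(lags_ms, detected, edges=(0, 135, 405, 1000)):
    """Second-target accuracy binned by lag since first-target detection.

    lags_ms:  time (ms) from first-target detection to the moment the
              second target is fixated, one value per trial.
    detected: 1 if the second target was reported on that trial, else 0.
    With these edges, the middle bin (135-405 ms) is where an
    attentional-blink account predicts depressed accuracy.
    """
    lags = np.asarray(lags_ms, dtype=float)
    hits = np.asarray(detected, dtype=float)
    acc = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (lags >= lo) & (lags < hi)
        acc.append(float(hits[in_bin].mean()) if in_bin.any() else float("nan"))
    return acc
```

A blink-like pattern is a dip in the middle bin relative to the early and late bins; comparing that dip across clutter levels would correspond to the clutter analysis reported above.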