969 results for visual search
Abstract:
We tested the predictions of Attentional Control Theory (ACT) by examining how anxiety affects visual search strategies, performance efficiency, and performance effectiveness using a dynamic, temporally constrained anticipation task. Higher and lower skilled players viewed soccer situations under two task constraints (near vs. far situations) and were tested under high (HA) and low (LA) anxiety conditions. Response accuracy (effectiveness) as well as response time, perceived mental effort, and eye movements (all efficiency) were recorded. A significant increase in anxiety was evidenced by higher state anxiety ratings on the MRF-L scale. Increased anxiety led to decreased performance efficiency, because response times and mental effort increased for both skill groups whereas response accuracy did not differ between conditions. Anxiety also influenced search strategies: relative to lower skilled players, higher skilled players showed a decrease in the number of fixation locations for far situations under the HA compared with the LA condition. The findings support ACT, with anxiety impairing processing efficiency and, potentially, top-down attentional control across different task constraints.
Abstract:
Visual neglect is considerably exacerbated by increases in visual attentional load. These detrimental effects of attentional load are hypothesised to be dependent on an interplay between dysfunctional inter-hemispheric inhibitory dynamics and load-related modulation of activity in cortical areas such as the posterior parietal cortex (PPC). Continuous Theta Burst Stimulation (cTBS) over the contralesional PPC reduces neglect severity. It is unknown, however, whether such positive effects also operate in the presence of the detrimental effects of heightened attentional load. Here, we examined the effects of cTBS on neglect severity in overt visual search (i.e., with eye movements), as a function of high and low visual attentional load conditions. Performance was assessed on the basis of target detection rates and eye movements, in a computerised visual search task and in two paper-pencil tasks. cTBS significantly ameliorated target detection performance, independently of attentional load. These ameliorative effects were significantly larger in the high than the low load condition, thereby equating target detection across both conditions. Eye movement analyses revealed that the improvements were mediated by a redeployment of visual fixations to the contralesional visual field. These findings represent a substantive advance, because cTBS led to an unprecedented amelioration of overt search efficiency that was independent of visual attentional load.
Abstract:
Previous research in visual search indicates that animal fear-relevant deviants (snakes/spiders) are found faster among non-fear-relevant backgrounds (flowers/mushrooms) than vice versa. Moreover, deviant absence was indicated faster among snakes/spiders, and detection time for flower/mushroom deviants, but not for snake/spider deviants, increased in larger arrays. The current research indicates that the latter two results do not reflect fear-relevance but are found only with flower/mushroom controls. These findings may instead reflect factors such as background homogeneity, deviant homogeneity, or background-deviant similarity. The current research resolves contradictions between previous studies that used animal and social fear-relevant stimuli and indicates that apparent search advantages for fear-relevant deviants are likely to reflect delayed attentional disengagement from fear-relevant stimuli on control trials.
Abstract:
The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1, vertical canonical targets were detected faster when they were tilted (incongruent) than when they were vertical (congruent). This search asymmetry was reversed for tilted canonical targets. The effect of canonical orientation was partially preserved when objects were high-pass filtered, but it was eliminated when they were low-pass filtered, rendering them as unfamiliar shapes (Experiment 2). The effect of canonical orientation was also eliminated by inverting the objects (Experiment 3) and in a patient with visual agnosia (Experiment 4). These results indicate that orientation search with familiar objects can be modulated by canonical orientation, demonstrating a top-down influence on orientation processing.
Abstract:
Difficulties in visual attention are increasingly being linked to dyslexia. To date, the majority of studies have inferred the functionality of attention from response times to stimuli presented for an indefinite duration. However, in paradigms that use reaction times to investigate the ability to orient attention, a delayed reaction time could also indicate difficulties in signal enhancement or noise exclusion once attention has been oriented. Thus, to investigate attention modulation and visual crowding effects in dyslexia, this study measured stimulus discrimination accuracy for rapidly presented displays. Adults with dyslexia (AwD) and controls discriminated the orientation of a target in arrays that varied in the number and spacing of vertically oriented distractors. Results showed that AwD (i) were disproportionately impacted by close spacing and (ii) by increased numbers of stimuli, (iii) did use pre-cues to modulate attention, but (iv) used cues less successfully to counter the effects of increasing numbers of distractors. A greater dependence on pre-cues, larger crowding effects, and the impact of increased numbers of distractors all correlated significantly with measures of literacy. These findings extend previous studies of visual crowding of letters in dyslexia to non-complex stimuli. Overall, AwD do not use cues less, but they do use cues less successfully. We conclude that visual attention is an important factor to consider in the aetiology of dyslexia. The results challenge existing theoretical accounts of visual attention deficits, which alone are unable to comprehensively explain the pattern of findings demonstrated here.
Abstract:
The present thesis investigates pattern glare susceptibility following stroke and the immediate and prolonged impact of prescribing optimal spectral filters on reading speed, accuracy, and visual search performance. Principal observations: A case report has shown that visual stress can occur following stroke. The use of spectral filters and precision tinted lenses proved to be a successful intervention in this case, although the parameters required modification following a further stroke episode. Stroke subjects demonstrate elevated levels of pattern glare compared with normative data values and a control group. Initial use of an optimal spectral filter in a stroke cohort increased reading speed by ~6% and almost halved error scores, findings not replicated in a control group. With migraine subjects removed, reading speed increased by ~8% with an optimal filter and error scores almost halved. Prolonged use of an optimal spectral filter by stroke subjects increased reading speed by >9% and more than halved error scores. When the same subjects switched to prolonged use of a grey filter, reading speed reduced by ~4% and error scores increased marginally. When a second group of stroke subjects used a grey filter first, reading speed decreased by ~3% but increased by ~3% with prolonged use of an optimal filter, with error scores almost halving; these findings persisted with migraine subjects excluded. Initial use of an optimal spectral filter improved visual search response time but not error scores in a stroke cohort with migraine subjects excluded. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores in the stroke group; these findings persisted with the exclusion of migraine subjects.
Abstract:
Visual search impairment can occur following stroke. The utility of optimal spectral filters for visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and a control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) after the subjects had been randomly assigned to two groups, with group 1 using an optimal filter for two weeks and group 2 using a grey filter; and (iii) after the groups had been crossed over, with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.
Abstract:
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
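The guidance scheme described in this abstract, a first-glance scene-gist hypothesis about the target location that is then refined by object evidence as the scene is scanned, can be caricatured in a few lines of code. The sketch below is a hypothetical illustration only; the names, numbers, and update rule are assumptions, not the ARTSCENE Search model itself.

```python
import numpy as np

# Hypothetical toy sketch (not the ARTSCENE Search implementation): a spatial
# prior derived from scene gist proposes likely target locations, and this
# hypothesis is refined across fixations by evidence from the objects viewed,
# with inhibition of return to rejected locations.

rng = np.random.default_rng(1)

n_locations = 8
gist_prior = rng.dirichlet(np.ones(n_locations) * 2)  # "Where": learned scene-gist prior over locations
target_match = rng.random(n_locations)                # "What": similarity of each object to the target
target_match[3] = 0.95                                # assume the target sits at location 3

priority = gist_prior.copy()                          # first-glance hypothesis
for fixation in range(n_locations):
    loc = int(np.argmax(priority))                    # saccade to the most promising location
    print(f"fixation {fixation}: inspect location {loc} (match = {target_match[loc]:.2f})")
    if target_match[loc] > 0.9:                       # enough object evidence: target found
        print("target found")
        break
    priority[loc] = 0.0                               # inhibition of return to the rejected location
    priority *= 0.5 + target_match                    # enhance remaining target-like objects
    priority /= priority.sum()                        # renormalise the priority map
```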
Abstract:
The appropriateness of applying drink driving legislation to motorcycle riding has been questioned, as there may be fundamental differences in the effects of alcohol on driving and motorcycling. It has been suggested that alcohol may redirect riders’ focus from higher-order cognitive skills such as cornering, judgement and hazard perception, to more physical skills such as maintaining balance. To test this hypothesis, the effects of low doses of alcohol on balance ability were investigated in a laboratory setting. The static balance of twenty experienced and twenty novice riders was measured while they performed either no secondary task, a visual (search) task, or a cognitive (arithmetic) task following the administration of alcohol (0%, 0.02%, and 0.05% BAC). Subjective ratings of intoxication and balance impairment increased in a dose-dependent manner in both novice and experienced motorcycle riders, while a BAC of 0.05%, but not 0.02%, was associated with impairments in static balance ability. This balance impairment was exacerbated when riders performed a cognitive, but not a visual, secondary task. Likewise, 0.05% BAC was associated with impairments in novice and experienced riders’ performance of a cognitive, but not a visual, secondary task, suggesting that interactive processes underlie balance and cognitive task performance. There were no observed differences between novice and experienced riders in static balance or secondary task performance, either alone or in combination. Implications for road safety and future ‘drink riding’ policy considerations are discussed.
Abstract:
Purpose: Optical blur and ageing are known to affect driving performance, but their effects on drivers' eye movements are poorly understood. This study examined the effects of optical blur and age on eye movement patterns and performance on the DriveSafe slide recognition test, which is purported to predict fitness to drive.
Methods: Twenty young (27.1 ± 4.6 years) and twenty older (73.3 ± 5.7 years) visually normal drivers performed the DriveSafe under two visual conditions: best-corrected vision and with +2.00 DS blur. The DriveSafe is a visual recognition slide test that consists of brief presentations of static, real-world driving scenes containing different road users (pedestrians, bicycles and vehicles). Participants reported the types, relative positions and direction of travel of the road users in each image; the score was the number of correctly reported items (maximum score of 128). Eye movements were recorded while participants performed the DriveSafe test using a Tobii TX300 eye tracking system.
Results: There was a significant main effect of blur on DriveSafe scores (best-corrected: 114.9 vs blur: 93.2; p < 0.001). There was also a significant age and blur interaction on the DriveSafe scores (p < 0.001), such that the young drivers were more negatively affected by blur than the older drivers (reductions of 22% and 13% respectively; p < 0.001): with best-corrected vision, the young drivers performed better than the older drivers (DriveSafe scores: 118.4 vs 111.5; p = 0.001), while with blur, the young drivers performed worse than the older drivers (88.6 vs 95.9; p = 0.009). For the eye movement patterns, blur significantly reduced the number of fixations on road users (best-corrected: 5.1 vs blur: 4.5; p < 0.001), fixation duration on road users (2.0 s vs 1.8 s; p < 0.001) and saccade amplitudes (7.4° vs 6.7°; p < 0.001). A main effect of age on eye movements was also found, with older drivers making smaller saccades than the young drivers (6.7° vs 7.4°; p < 0.001).
Conclusions: Blur reduced DriveSafe scores for both age groups, and this effect was greater for the young drivers. The decrease in the number of fixations and fixation duration on road users, as well as the reduction in saccade amplitudes under the blurred condition, highlights the difficulty experienced in performing the task in the presence of optical blur, suggesting that uncorrected refractive errors may have a detrimental impact on aspects of driving performance.
Abstract:
My thesis studies how people pay attention to other people and the environment. How does the brain figure out what is important and what are the neural mechanisms underlying attention? What is special about salient social cues compared to salient non-social cues? In Chapter I, I review social cues that attract attention, with an emphasis on the neurobiology of these social cues. I also review neurological and psychiatric links: the relationship between saliency, the amygdala and autism. The first empirical chapter then begins by noting that people constantly move in the environment. In Chapter II, I study the spatial cues that attract attention during locomotion using a cued speeded discrimination task. I found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The more ecologically valid the motion features became (e.g., temporal expansion of each object, spatial depth structure implied by distribution of the size of the objects), the stronger the attentional effects. However, compared to inanimate objects and cues, people preferentially attend to animals and faces, a process in which the amygdala is thought to play an important role. To directly compare social cues and non-social cues in the same experiment and investigate the neural structures processing social cues, in Chapter III, I employ a change detection task and test four rare patients with bilateral amygdala lesions. All four amygdala patients showed a normal pattern of reliably faster and more accurate detection of animate stimuli, suggesting that advantageous processing of social cues can be preserved even without the amygdala, a key structure of the “social brain”. People not only attend to faces, but also pay attention to others’ facial emotions and analyze faces in great detail. Humans have a dedicated system for processing faces and the amygdala has long been associated with a key role in recognizing facial emotions. In Chapter IV, I study the neural mechanisms of emotion perception and find that single neurons in the human amygdala are selective for subjective judgment of others’ emotions. Lastly, people typically pay special attention to faces and people, but people with autism spectrum disorders (ASD) might not. To further study social attention and explore possible deficits of social attention in autism, in Chapter V, I employ a visual search task and show that people with ASD have reduced attention, especially social attention, to target-congruent objects in the search array. This deficit cannot be explained by low-level visual properties of the stimuli and is independent of the amygdala, but it is dependent on task demands. Overall, through visual psychophysics with concurrent eye-tracking, my thesis found and analyzed socially salient cues and compared social vs. non-social cues and healthy vs. clinical populations. Neural mechanisms underlying social saliency were elucidated through electrophysiology and lesion studies. I finally propose further research questions based on the findings in my thesis and introduce my follow-up studies and preliminary results beyond the scope of this thesis in the very last section, Future Directions.
Abstract:
A common approach to visualise multidimensional data sets is to map every data dimension to a separate visual feature. It is generally assumed that such visual features can be judged independently of each other. However, we have recently shown that interactions between features do exist [Hannus et al. 2004; van den Berg et al. 2005]. In those studies, we first determined the individual colour and size contrast, or colour and orientation contrast, necessary to achieve a fixed level of discrimination performance in single-feature search tasks. These contrasts were then used in a conjunction search task in which the target was defined by a combination of a colour and a size or a colour and an orientation. We found that in conjunction search, despite the matched feature discriminability, subjects significantly more often chose an item with the correct colour than one with the correct size or orientation. This finding may have consequences for visualisation: the saliency of information coded by objects' size or orientation may change when there is a need to simultaneously search for a colour that codes another aspect of the information. In the present experiment, we studied whether a colour bias can also be found in a more complex and continuous task. Subjects had to search for a target in a node-link diagram consisting of 50 nodes, while their eye movements were being tracked. Each node was assigned a random colour and size (from a range of 10 possible values with fixed perceptual distances). We found that when we base the distances on the mean threshold contrasts that were determined in our previous experiments, the fixated nodes tend to resemble the target colour more than the target size (Figure 1a). This indicates that despite the perceptual matching, colour is judged with greater precision than size during conjunction search. We also found that when we double the size contrast (i.e., the distances between the 10 possible node sizes), this effect disappears (Figure 1b). Our findings confirm that the previously found decrease in salience of other features during colour conjunction search is also present in more complex (more 'visualisation-realistic') visual search tasks. The asymmetry in visual search behaviour can be compensated for by manipulating step sizes (perceptual distances) within feature dimensions. Our results therefore also imply that feature hierarchies are not completely fixed and may be adapted to the requirements of a particular visualisation.
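The compensation suggested at the end of this abstract, widening the perceptual step sizes of a non-colour dimension so that it survives colour conjunction search, could be exposed as a simple scaling parameter in a visualisation. The Python sketch below is a hypothetical illustration; the base size, step, and boost factor are assumed values, not those used in the study.

```python
# Hypothetical sketch: map 10 data levels to node sizes, with an optional boost
# of the perceptual step size (e.g., doubled, as in the experiment above) to
# offset the bias favouring colour during conjunction search.

def size_scale(n_levels: int, base_size: float, step: float, boost: float = 1.0) -> list[float]:
    """Return n_levels node sizes separated by `step * boost`."""
    return [base_size + i * step * boost for i in range(n_levels)]

matched_sizes = size_scale(n_levels=10, base_size=4.0, step=0.8)                  # perceptually matched to colour
compensated_sizes = size_scale(n_levels=10, base_size=4.0, step=0.8, boost=2.0)   # size contrast doubled

print(matched_sizes)
print(compensated_sizes)
```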
Abstract:
A neural network theory of 3-D vision, called FACADE Theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a Boundary Contour System (BCS) and a Feature Contour System (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded objects are completed and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, DaVinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analysed. The BCS and FCS sub-systems model aspects of how the two parvocellular cortical processing streams that join the Lateral Geniculate Nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-Depth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive Resonance Theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal cortex (IT) for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular Motion BCS signals interact with the model Where stream. Reciprocal interactions between these visual, What, and Where mechanisms are used to discuss data about visual search and saccadic eye movements, including fast search of conjunctive targets, search of 3-D surfaces, selective search of like-colored targets, attentive tracking of multi-element groupings, and recursive search of simultaneously presented targets.