50 results for visual-spatial attention
Abstract:
Based on neurophysiological findings and a grid to score binocular visual field function, two hypotheses concerning the spatial distribution of fixations during visual search were tested and confirmed in healthy participants and patients with homonymous visual field defects. Both groups showed significant biases of fixations and viewing time towards the centre of the screen and the upper screen half. Patients displayed a third bias towards the side of their field defect, which represents oculomotor compensation. Moreover, significant correlations between the extent of these three biases and search performance were found. Our findings suggest a new, more dynamic view of how functional specialisation of the visual field influences behaviour.
Abstract:
The aim of this study was to investigate how oculomotor behaviour depends on the availability of colour information in pictorial stimuli. Forty study participants viewed complex images in colour or grey-scale, while their eye movements were recorded. We found two major effects of colour. First, although colour increases the complexity of an image, fixations on colour images were shorter than on their grey-scale versions. This suggests that colour enhances discriminability and thus affects low-level perceptual processing. Second, colour decreases the similarity of spatial fixation patterns between participants. The role of colour on visual attention seems to be more important than previously assumed, in theoretical as well as methodological terms.
Abstract:
Cognitive-motivational theories of phobias propose that patients' behavior is characterized by a hypervigilance-avoidance pattern. This implies that phobics initially direct their attention towards fear-relevant stimuli, followed by avoidance that is thought to prevent objective evaluation and habituation. However, previous experiments with highly anxious individuals confirmed initial hypervigilance and yet failed to show subsequent avoidance. In the present study, we administered a visual task in spider phobics and controls, requiring participants to search for spiders. Analyzing eye movements during visual exploration allowed the examination of spatial as well as temporal aspects of phobic behavior. Confirming the hypervigilance-avoidance hypothesis as a whole, our results showed that, relative to controls, phobics detected spiders faster, fixated closer to spiders during the initial search phase and fixated further from spiders subsequently.
Abstract:
The present study shows that different neural activity during mental imagery and abstract mentation can be assigned to well-defined steps of the brain's information-processing. During randomized visual presentation of single, imagery-type and abstract-type words, 27 channel event-related potential (ERP) field maps were obtained from 25 subjects (sequence-divided into a first and second group for statistics). The brain field map series showed a sequence of typical map configurations that were quasi-stable for brief time periods (microstates). The microstates were concatenated by rapid map changes. As different map configurations must result from different spatial patterns of neural activity, each microstate represents different active neural networks. Accordingly, microstates are assumed to correspond to discrete steps of information-processing. Comparing microstate topographies (using centroids) between imagery- and abstract-type words, significantly different microstates were found in both subject groups at 286–354 ms where imagery-type words were more right-lateralized than abstract-type words, and at 550–606 ms and 606–666 ms where anterior-posterior differences occurred. We conclude that language-processing consists of several, well-defined steps and that the brain-states incorporating those steps are altered by the stimuli's capacities to generate mental imagery or abstract mentation in a state-dependent manner.
Abstract:
Prompted reports of recall of spontaneous, conscious experiences were collected in a no-input, no-task, no-response paradigm (30 random prompts to each of 13 healthy volunteers). The mentation reports were classified into visual imagery and abstract thought. Spontaneous 19-channel brain electric activity (EEG) was continuously recorded, viewed as series of momentary spatial distributions (maps) of the brain electric field and segmented into microstates, i.e. into time segments characterized by quasi-stable landscapes of potential distribution maps which showed varying durations in the sub-second range. Microstate segmentation used a data-driven strategy. Different microstates, i.e. different brain electric landscapes must have been generated by activity of different neural assemblies and therefore are hypothesized to constitute different functions. The two types of reported experiences were associated with significantly different microstates (mean duration 121 ms) immediately preceding the prompts; these microstates showed, across subjects, for abstract thought (compared to visual imagery) a shift of the electric gravity center to the left and a clockwise rotation of the field axis. Contrariwise, the microstates 2 s before the prompt did not differ between the two types of experiences. The results support the hypothesis that different microstates of the brain as recognized in its electric field implement different conscious, reportable mind states, i.e. different classes (types) of thoughts (mentations); thus, the microstates might be candidates for the 'atoms of thought'.
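The data-driven microstate segmentation described in this abstract can be sketched as polarity-invariant clustering of momentary scalp topographies. The sketch below is a minimal illustration under assumed details: the synthetic 19-channel data, the clustering at global-field-power peaks, and the two-state modified k-means are common choices in the microstate literature, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 19-channel "EEG" built from two alternating template maps plus
# noise -- purely illustrative data, not real recordings.
n_ch, n_t = 19, 1000
templates = rng.standard_normal((2, n_ch))
labels_true = (np.arange(n_t) // 100) % 2        # template switches every 100 samples
eeg = templates[labels_true].T + 0.3 * rng.standard_normal((n_ch, n_t))
eeg -= eeg.mean(axis=0)                          # average reference

# Global field power (GFP): spatial standard deviation of each momentary map.
gfp = eeg.std(axis=0)
peaks = np.flatnonzero((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:])) + 1

def kmeans_maps(maps, n_states, n_iter=50):
    """Polarity-invariant k-means clustering of unit-norm topographies."""
    protos = maps[:, rng.choice(maps.shape[1], n_states, replace=False)].copy()
    for _ in range(n_iter):
        assign = np.abs(protos.T @ maps).argmax(axis=0)   # abs() ignores polarity
        for k in range(n_states):
            sel = maps[:, assign == k]
            if sel.shape[1]:
                u, _, _ = np.linalg.svd(sel, full_matrices=False)
                protos[:, k] = u[:, 0]           # first PC = polarity-free mean map
    return protos

maps = eeg / np.linalg.norm(eeg, axis=0)         # normalise each momentary map
protos = kmeans_maps(maps[:, peaks], n_states=2)  # cluster only at GFP peaks
assign = np.abs(protos.T @ maps).argmax(axis=0)  # then label every sample

# Microstates = runs of identical labels; run lengths are their durations.
changes = np.flatnonzero(np.diff(assign)) + 1
durations = np.diff(np.r_[0, changes, n_t])
```

The polarity-invariant assignment (taking the absolute correlation) reflects the convention that a topography and its sign-flipped counterpart are treated as the same microstate; segment durations would then be compared across conditions, as in the abstract's 121-ms mean.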
Abstract:
Identifying a human body stimulus involves mentally rotating an embodied spatial representation of one's body (motoric embodiment) and projecting it onto the stimulus (spatial embodiment). Interactions between these two processes (spatial and motoric embodiment) may thus reveal cues about the underlying reference frames. The allocentric visual reference frame, and hence the perceived orientation of the body relative to gravity, was modulated using the York Tumbling Room, a fully furnished cubic room with strong directional cues that can be rotated around a participant's roll axis. Sixteen participants were seated upright (relative to gravity) in the Tumbling Room and made judgments about body and hand stimuli that were presented in the frontal plane at orientations of 0°, 90°, 180° (upside down), or 270° relative to them. Body stimuli have an intrinsic visual polarity relative to the environment whereas hands do not. Simultaneously the room was oriented 0°, 90°, 180° (upside down), or 270° relative to gravity resulting in sixteen combinations of orientations. Body stimuli were more accurately identified when room and body stimuli were aligned. However, such congruency did not facilitate identifying hand stimuli. We conclude that static allocentric visual cues can affect embodiment and hence performance in an egocentric mental transformation task. Reaction times to identify either hands or bodies showed no dependence on room orientation.
Abstract:
Transcranial magnetic stimulation (TMS) was used to study visuospatial attention processing in ten healthy volunteers. In a forced choice recognition task the subjects were confronted with two symbols presented simultaneously for 120 ms at random positions, one in the left and the other in the right visual field. The subject had to identify the presented pattern out of four possible combinations and to press the corresponding response key within 2 s. Double-pulse TMS (dTMS) with a 100-ms interstimulus interval (ISI) and an intensity of 80% of the stimulator output (corresponding to 110-120% of the motor threshold) was applied by a non-focal coil over the right or left posterior parietal cortex (PPC, corresponding to P3/P4 of the international 10-20 system) at different time intervals after onset of the visual stimulus (starting at 120 ms, 270 ms and 520 ms). Double-pulse TMS over the right PPC starting at 270 ms led to a significant increase in percentage of errors in the contralateral, left visual field (median: 23% with TMS vs 13% without TMS, P=0.0025). TMS applied earlier or later showed no effect. Furthermore, no significant increase in contra- or ipsilateral percentage of errors was found when the left parietal cortex was stimulated with the same timing. These data indicate that: (1) parietal influence on visuospatial attention is mainly controlled by the right parietal lobe, since the same stimulation over the left parietal cortex had no significant effect, and (2) there is a vulnerable time window for disturbing this cortical process, since dTMS had a significant effect on the percentage of errors in the contralateral visual hemifield only when applied 270 ms after visual stimulus presentation.
Abstract:
BACKGROUND: Many patients with Posttraumatic Stress Disorder (PTSD) feel overwhelmed in situations with high levels of sensory input, as in crowded situations with complex sensory characteristics. These difficulties might be related to subtle sensory processing deficits similar to those that have been found for sounds in electrophysiological studies. METHOD: Visual processing was investigated with functional magnetic resonance imaging in trauma-exposed participants with (N = 18) and without PTSD (N = 21) employing a picture-viewing task. RESULTS: Activity observed in response to visual scenes was lower in PTSD participants 1) in the ventral stream of the visual system, including striate and extrastriate, inferior temporal, and entorhinal cortices, and 2) in dorsal and ventral attention systems (P < 0.05, FWE-corrected). These effects could not be explained by the emotional salience of the pictures. CONCLUSIONS: Visual processing was substantially altered in PTSD in the ventral visual stream, a component of the visual system thought to be responsible for object property processing. Together with previous reports of subtle auditory deficits in PTSD, these findings provide strong support for potentially important sensory processing deficits, whose origins may be related to dysfunctional attention processes.
Abstract:
According to Attentional Control Theory (ACT; Eysenck et al., 2007), processing efficiency decreases in high-anxiety situations because worrying thoughts compete for attentional resources. A repeated-measures design (high/low state anxiety and high/low perceptual task demands) was used to test ACT explanations. Complex football situations were displayed to expert and non-expert football players in a decision-making task in a controlled laboratory setting. Ratings of state anxiety and pupil diameter measures were used to check the anxiety manipulation. Dependent variables were verbal response time and accuracy, mental effort ratings and visual search behavior (e.g., visual search rate). Results confirmed that an anxiety increase, indicated by higher state-anxiety ratings and larger pupil diameters, reduced processing efficiency for both groups (higher response times and mental effort ratings). Moreover, high task demands reduced the ability to shift attention between different locations for the expert group in the high anxiety condition only. Because it was precisely the experts, who were expected to use more top-down strategies to guide visual attention under high perceptual task demands, who showed fewer attentional shifts in the high compared to the low anxiety condition, as predicted by ACT, anxiety appears to impair the shifting function by disrupting the balance between top-down and bottom-up processes.
Abstract:
In order to bridge interdisciplinary differences in Presence research and to establish connections between Presence and “older” concepts of psychology and communication, a theoretical model of the formation of Spatial Presence is proposed. It is applicable to the exposure to different media and intended to unify the existing efforts to develop a theory of Presence. The model includes assumptions about attention allocation, mental models, and involvement, and considers the role of media factors and user characteristics as well, thus incorporating much previous work. It is argued that a commonly accepted model of Spatial Presence is the only solution to secure further progress within the international, interdisciplinary and multiple-paradigm community of Presence research.
Territorial Cohesion through Spatial Policies: An Analysis with Cultural Theory and Clumsy Solutions
Abstract:
The European Territorial Cohesion Policy has been the subject of numerous debates in recent years. Most contributions focus on understanding the term itself and figuring out what is behind it, or arguing for or against a stronger formal competence of the European Union in this field. This article will leave out these aspects and pay attention to (undefined and legally non-binding) conceptual elements of territorial cohesion, focusing on the challenge of linking it within spatial policies and organising the relations. Therefore, the theoretical approach of Cultural Theory and its concept of clumsy solution are applied to overcome the dilemma of typical dichotomies by adding a third and a fourth (but not a fifth) perspective. In doing so, normative contradictions between different rational approaches can be revealed, explained and approached with the concept of ‘clumsy solutions’. This contribution aims at discussing how this theoretical approach helps us explain and frame a coalition between the Territorial Cohesion Policy and spatial policies. This approach contributes to finding the best way of linking and organising policies, although the solution might be clumsy according to the different rationalities involved.
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they support the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
Abstract:
Visually impaired people show superior abilities in various perception tasks such as auditory attention, auditory temporal resolution, auditory spatial tuning, and odor discrimination. However, with the use of psychophysical methods, auditory and olfactory detection thresholds typically do not differ between visually impaired and sighted participants. Using a motion platform we investigated thresholds of passive whole-body motion discrimination in nine visually impaired participants and nine age-matched sighted controls. Participants were rotated in yaw, tilted in roll, and translated along the y-axis at two different frequencies (0.3 Hz and 2 Hz). An adaptive 3-down 1-up staircase procedure was used along with a two-alternative direction (leftward vs. rightward) discrimination task. Superior performance of visually impaired participants was found in the 0.3 Hz roll tilt condition. No differences between the visually impaired and controls were observed in all other types of motion. The superior performance in the 0.3 Hz roll tilt condition could reflect differences in the integration of extra-vestibular cues and increased sensitivity towards changes in the direction of the gravito-inertial force. In the absence of visual information, roll tilts entail a more pronounced risk of falling, and this could eventually account for the group difference. It is argued that differences in experimental procedures (i.e. detection vs. discrimination of stimuli) explain the discrepant findings across perceptual tasks comparing blind and sighted participants.
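The adaptive 3-down 1-up staircase with a two-alternative discrimination task described in this abstract can be illustrated with a minimal simulation. The simulated observer, starting intensity, step size, and reversal count below are hypothetical choices for demonstration, not the study's parameters.

```python
import random

random.seed(1)

def simulated_observer(intensity, threshold=1.0):
    """Hypothetical observer: probability of a correct response rises with
    stimulus intensity, with a 50% guessing floor (two-alternative task)."""
    p = 0.5 + 0.5 * min(intensity / (2 * threshold), 1.0)
    return random.random() < p

def staircase_3down_1up(start=4.0, step=0.25, n_reversals=8):
    """Adaptive 3-down 1-up staircase: intensity drops after 3 consecutive
    correct responses and rises after each error. This rule converges on the
    ~79.4%-correct point of the psychometric function; the threshold is
    estimated as the mean intensity at the direction reversals."""
    intensity, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(intensity):
            correct_run += 1
            if correct_run == 3:                 # 3 correct in a row -> harder
                correct_run = 0
                if direction == +1:              # descending after ascending
                    reversals.append(intensity)
                direction = -1
                intensity = max(intensity - step, 0.0)
        else:                                    # any error -> easier
            correct_run = 0
            if direction == -1:                  # ascending after descending
                reversals.append(intensity)
            direction = +1
            intensity += step
    return sum(reversals) / len(reversals)       # threshold estimate

estimate = staircase_3down_1up()
```

In the study itself the discriminated quantity was motion direction (leftward vs. rightward) rather than an abstract "intensity", but the staircase logic of tracking consecutive correct responses and averaging reversal points is the same.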
Abstract:
Dating past mass wasting with growth disturbances in trees is widely used in geochronology as the approach may yield dates of past process activity with up to subannual precision. Past work commonly focused on the extraction of increment cores, wedges, or stem cross sections. However, sampling has been shown to be constrained by sampling permissions, and the analysis of tree-ring samples is time-consuming. To compensate for these shortcomings, we explore the potential of visual inspection of wound appearance for dating purposes. Based on a data set of 217 wood-penetrating wounds of known age inflicted to European larch (Larix decidua Mill.) by rockfall activity, we develop guidelines for the visual, noninvasive dating of wounds including (i) the counting of bark rings, (ii) a visual assessment of exposed wood and wound bark characteristics (such as the color and weathering status of wounds), and (iii) the relationship between wound age and tree diameter. A characterization of wounds based on photographs, randomly selected from the data set, reveals that young wounds typically can be dated with high precision, whereas dating errors gradually increase with increasing wound age. While visual dating does not reach the precision of dendrochronological dating, we clearly demonstrate that spatial patterns of and differences in rockfall activity can be reconstructed with both approaches. The introduction of visual dating approaches will facilitate fieldwork, especially in applied research, assist the conventional interpretation of tree-ring signals, and allow the reconstruction of geomorphic processes with considerably less time and expense.
Abstract:
Phobic individuals display an attention bias to phobia-related information and biased expectancies regarding the likelihood of being faced with such stimuli. Notably, although attention and expectancy biases are core features in phobia and anxiety disorders, these biases have mostly been investigated separately and their causal impact has not been examined. We hypothesized that these biases might be causally related. Spider phobic and low spider fearful control participants performed a visual search task in which they specified whether the deviant animal in a search array was a spider or a bird. Shorter reaction times (RTs) for spiders than for birds in this task reflect an attention bias toward spiders. Participants' expectancies regarding the likelihood of these animals being the deviant in the search array were manipulated by presenting verbal cues. Phobics were characterized by a pronounced and persistent attention bias toward spiders; controls displayed slower RTs for birds than for spiders only when spider cues had been presented. More important, we found RTs for spider detections to be virtually unaffected by the expectancy cues in both groups, whereas RTs for bird detections showed a clear influence of the cues. Our results speak to the possibility that evolution has formed attentional systems that are specific to the detection of phylogenetically salient stimuli such as threatening animals; these systems may not be as penetrable to variations in (experimentally induced) expectancies as those systems that are used for the detection of non-threatening stimuli. In sum, our findings highlight the relation between expectancies and attention engagement in general. However, expectancies may play a greater role in attention engagement in safe environments than in threatening environments.