11 results for Attention Perception
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The present study was designed to investigate the influences of type of psychophysical task (two-alternative forced-choice [2AFC] and reminder tasks), type of interval (filled vs. empty), sensory modality (auditory vs. visual), and base duration (ranging from 100 through 1,000 ms) on performance on duration discrimination. All of these factors were systematically varied in an experiment comprising 192 participants. This approach allowed for obtaining information not only on the general (main) effect of each factor alone, but also on the functional interplay and mutual interactions of some or all of these factors combined. Temporal sensitivity was markedly higher for auditory than for visual intervals, as well as for the reminder relative to the 2AFC task. With regard to base duration, discrimination performance deteriorated with decreasing base durations for intervals below 400 ms, whereas longer intervals were not affected. No indication emerged that overall performance on duration discrimination was influenced by the type of interval, and only two significant interactions were apparent: Base Duration × Type of Interval and Base Duration × Sensory Modality. With filled intervals, the deteriorating effect of base duration was limited to very brief base durations, not exceeding 100 ms, whereas with empty intervals, temporal discriminability was also affected for the 200-ms base duration. Similarly, the performance decrement observed with visual relative to auditory intervals increased with decreasing base durations. These findings suggest that type of task, sensory modality, and base duration represent largely independent sources of variance for performance on duration discrimination that can be accounted for by distinct nontemporal mechanisms.
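As a rough illustration of how temporal sensitivity is typically quantified in designs like this one, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical proportions of "comparison judged longer" responses and derives a difference limen (DL) and a Weber fraction (DL divided by the base duration). The data, function choice, and parameters are illustrative assumptions, not taken from the study.

```python
# Illustrative sketch only: estimate a difference limen and Weber fraction
# from hypothetical duration-discrimination data (not data from the study).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

base_duration = 400.0                                            # ms
comparisons = np.array([340, 370, 400, 430, 460], dtype=float)   # ms
p_longer = np.array([0.08, 0.27, 0.52, 0.78, 0.94])              # proportion "longer"

# Fit the point of subjective equality (mu) and the spread (sigma).
(mu, sigma), _ = curve_fit(psychometric, comparisons, p_longer,
                           p0=[base_duration, 30.0])

dl = 0.6745 * sigma            # half the 25%-75% distance of a Gaussian
weber_fraction = dl / base_duration
print(f"PSE = {mu:.1f} ms, DL = {dl:.1f} ms, Weber fraction = {weber_fraction:.3f}")
```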
Abstract:
The current study investigated whether peripheral vision can be used to monitor multiple moving objects and to detect single-target changes. For this purpose, Experiment 1 first validated a modified MOT setup with a large projection and a constant-position centroid phase. Classical findings regarding the use of a virtual centroid to track multiple objects and the dependency of tracking accuracy on target speed were successfully replicated. Thereafter, the main experimental manipulations of the to-be-detected target changes were introduced in Experiment 2. In addition to a button press used for the detection task, gaze behavior was assessed with an integrated eye-tracking system. The analysis of saccadic reaction times in relation to the motor response shows that peripheral vision is naturally used to detect motion and form changes in MOT, because the saccade to the target occurred after target-change offset. Furthermore, for changes of comparable task difficulty, motion changes are detected better by peripheral vision than form changes. Findings indicate that capabilities of the visual system (e.g., visual acuity) affect change detection rates and that covert-attention processes may be affected by vision-related aspects such as spatial uncertainty. Moreover, it is argued that a centroid-MOT strategy might reduce saccade-related costs and that eye tracking seems to be generally valuable for testing predictions derived from theories of MOT. Finally, implications for testing covert attention in applied settings are proposed.
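The peripheral-vision argument in this abstract rests on a simple temporal criterion: if the first saccade towards the changed target starts only after the change has already ended, the change cannot have been detected foveally. The sketch below spells out that logic with invented trial timestamps; it is not the experiment's analysis code.

```python
# Hypothetical timestamps (ms from trial onset); not data from the experiment.
trials = [
    # (change_offset, saccade_onset, button_press)
    (500, 620, 730),
    (500, 480, 700),
    (500, 555, 640),
]

# A saccade that starts after the change has ended implies the change
# was picked up by peripheral vision rather than by foveating the target.
peripheral = sum(1 for offset, saccade, _ in trials if saccade > offset)
print(f"{peripheral}/{len(trials)} trials consistent with peripheral detection")
```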
Abstract:
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
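For readers unfamiliar with sensation weighting, the sketch below uses a deliberately simplified comparison rule that is only loosely inspired by Hellström's model (it is not the published formulation): each interval's sensation magnitude is mixed with an internal reference level according to a position-dependent weight, and unequal weights for the first and second interval shift the judged difference even for physically equal pairs, which is one way order-dependent effects such as the time-order error can arise.

```python
# Simplified, assumption-laden sketch; not Hellström's published equations.
def subjective_difference(psi_first, psi_second,
                          w_first=0.55, w_second=0.45, reference=500.0):
    """Weighted comparison of the first vs. second interval (arbitrary units)."""
    weighted_first = w_first * psi_first + (1.0 - w_first) * reference
    weighted_second = w_second * psi_second + (1.0 - w_second) * reference
    return weighted_first - weighted_second

standard = 400.0
# With unequal position weights, a physically equal pair is not judged equal,
# so the comparison that nulls the difference depends on presentation order.
print(subjective_difference(standard, standard))   # nonzero -> order effect
```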
Abstract:
OBJECTIVE This study aimed to test the prediction from the Perception and Attention Deficit model of complex visual hallucinations (CVH) that impairments in visual attention and perception are key risk factors for complex hallucinations in eye disease and dementia. METHODS Two studies ran concurrently to investigate the relationship between CVH and impairments in perception (picture naming using the Graded Naming Test) and attention (Stroop task plus a novel Imagery task). The studies were conducted in two populations: older patients with dementia (n = 28) and older people with eye disease (n = 50), with a shared control group (n = 37). The same methodology was used in both studies, and the North East Visual Hallucinations Inventory was used to identify CVH. RESULTS A reliable relationship between impaired perceptual and attentional performance and CVH was found for older patients with dementia, but not in the population of people with eye disease. CONCLUSIONS The results add to previous research indicating that object perception and attentional deficits are associated with CVH in dementia, whereas risk factors for CVH in eye disease are inconsistent, suggesting that dynamic rather than static impairments in attentional processes may be key in this population.
Abstract:
The aim of this study was to investigate how oculomotor behaviour depends on the availability of colour information in pictorial stimuli. Forty participants viewed complex images in colour or grey-scale while their eye movements were recorded. We found two major effects of colour. First, although colour increases the complexity of an image, fixations on colour images were shorter than on their grey-scale versions. This suggests that colour enhances discriminability and thus affects low-level perceptual processing. Second, colour decreases the similarity of spatial fixation patterns between participants. The role of colour in visual attention thus seems to be more important than previously assumed, in theoretical as well as methodological terms.
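One common way to quantify between-participant similarity of spatial fixation patterns is to smooth each participant's fixations into a density map and correlate the maps pairwise. The sketch below illustrates that approach with invented coordinates and parameters; it should not be read as the analysis actually used in this study.

```python
# Illustrative sketch: fixation-map correlation between two participants.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, width=800, height=600, sigma=25.0):
    """Gaussian-smoothed fixation density map from (x, y) coordinates in pixels."""
    grid = np.zeros((height, width))
    for x, y in fixations:
        grid[int(y), int(x)] += 1.0
    smoothed = gaussian_filter(grid, sigma=sigma)
    return smoothed / smoothed.sum()

def map_correlation(map_a, map_b):
    """Pearson correlation between two flattened fixation maps."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

subj_a = fixation_map([(100, 120), (400, 300), (650, 480)])   # made-up data
subj_b = fixation_map([(110, 130), (420, 310), (200, 500)])
print(f"fixation-map similarity r = {map_correlation(subj_a, subj_b):.3f}")
```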
Abstract:
When we actively explore the visual environment, our gaze preferentially selects regions characterized by high contrast and high density of edges, suggesting that the guidance of eye movements during visual exploration is driven to a significant degree by perceptual characteristics of a scene. Converging findings suggest that the selection of the visual target for the upcoming saccade critically depends on a covert shift of spatial attention. However, it is unclear whether attention selects the location of the next fixation uniquely on the basis of global scene structure or additionally on local perceptual information. To investigate the role of spatial attention in scene processing, we examined eye fixation patterns of patients with spatial neglect during unconstrained exploration of natural images and compared these to healthy and brain-injured control participants. We computed luminance, colour, contrast, and edge information contained in image patches surrounding each fixation and evaluated whether they differed from randomly selected image patches. At the global level, neglect patients showed the characteristic ipsilesional shift of the distribution of their fixations. At the local level, patients with neglect and control participants fixated image regions in ipsilesional space that were closely similar in their local feature content. In contrast, when directing their gaze to contralesional (impaired) space, neglect patients fixated regions of significantly higher local luminance and lower edge content than controls. These results suggest that intact spatial attention is necessary for the active sampling of local feature content during scene perception.
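The patch-based comparison described above can be approximated as follows: extract small patches around each fixation, compute simple feature statistics (here only mean luminance and a gradient-based edge measure), and compare them with patches sampled at random locations. The image, coordinates, and feature definitions below are illustrative assumptions, not the study's pipeline, and the random stand-in image will of course show no real difference; only the shape of the analysis matters.

```python
# Illustrative sketch: local feature content at fixated vs. random patches.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((600, 800))          # stand-in grey-scale image in [0, 1]

def patch_features(img, x, y, half=16):
    """Mean luminance and a simple gradient-based edge measure of a patch."""
    patch = img[max(y - half, 0):y + half, max(x - half, 0):x + half]
    gy, gx = np.gradient(patch)
    return patch.mean(), np.mean(np.hypot(gx, gy))

fixations = [(120, 90), (400, 310), (660, 500)]          # hypothetical (x, y)
random_points = list(zip(rng.integers(16, 784, 3), rng.integers(16, 584, 3)))

fix_feats = np.array([patch_features(image, x, y) for x, y in fixations])
rand_feats = np.array([patch_features(image, x, y) for x, y in random_points])
print("fixated (luminance, edges):", fix_feats.mean(axis=0))
print("random  (luminance, edges):", rand_feats.mean(axis=0))
```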
Abstract:
We investigated the role of horizontal body motion in the processing of numbers. We hypothesized that leftward self-motion leads to shifts in spatial attention and therefore facilitates the processing of small numbers and, conversely, that rightward self-motion facilitates the processing of large numbers. Participants were displaced by means of a motion platform during a parity judgment task. We found a systematic influence of self-motion direction on number processing, suggesting that the processing of numbers is intertwined with the perception of self-motion. The results differed from known spatial-numerical compatibility effects in that self-motion exerted a differential influence on inner and outer numbers of the given interval. The results highlight the involvement of sensory body-motion information in higher-order spatial cognition.
Abstract:
Clinical observations suggest that abnormal gaze perception is an important indicator of social anxiety disorder (SAD). Experimental research, however, has so far paid relatively little attention to gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards the speaker or the listener) and on the fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present vs. absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
Abstract:
Background: Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards the speaker or the listener) and on the fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Methods: Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present vs. absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Results: Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Conclusion: Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
Keywords: Gestures, visual exploration, dialogue, aphasia, apraxia, eye movements
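As a minimal illustration of the dependent measures named in this abstract, the sketch below aggregates cumulative and mean fixation durations per region of interest (ROI) from a hypothetical fixation list; ROI labels and durations are invented for the example.

```python
# Illustrative sketch: cumulative and mean fixation duration per ROI.
from collections import defaultdict

# Each fixation: (roi, duration in ms); hypothetical values.
fixations = [("face", 320), ("face", 210), ("hands", 150),
             ("body", 90), ("face", 400), ("hands", 130)]

cumulative = defaultdict(float)
counts = defaultdict(int)
for roi, duration in fixations:
    cumulative[roi] += duration
    counts[roi] += 1

for roi in cumulative:
    print(f"{roi}: cumulative = {cumulative[roi]:.0f} ms, "
          f"mean = {cumulative[roi] / counts[roi]:.1f} ms")
```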
Abstract:
Background: Nicotine use has been reported to ameliorate symptoms of Attention-Deficit/Hyperactivity Disorder (ADHD). Furthermore, adults with ADHD have a relatively high prevalence of cigarette smoking and greater difficulty abstaining from smoking. Overall, though, there is scant literature investigating the beliefs, perceptions and experiences of smokers with ADHD regarding smoking cessation and withdrawal. Methods: Our participants (n = 20), who fulfilled criteria for ADHD and past or current nicotine dependence, were recruited from the in- and outpatient clinic of the Zurich University Psychiatric Hospital and the Psychiatric Services Aargau (Switzerland). Using a purposeful sampling plan, we conducted in-depth interviews to explore their motivations to quit and their past experiences with and expectations about quitting. The sample was selected to provide diversity in level of nicotine dependence, participation in a smoking-cessation program, gender, age, marital status and social class. Mayring's qualitative content analysis approach was used to evaluate the findings. Results: Adult smokers with ADHD had made several attempts to quit, experienced intense withdrawal symptoms, and relapsed early and often. They also often perceived a worsening of ADHD symptoms during nicotine abstinence. We identified three motives to quit smoking: 1) health concerns, 2) the feeling of being addicted, and 3) social factors. Most participants favored a smoking-cessation program specifically designed for individuals with ADHD because they thought ADHD complicated their nicotine withdrawal and that an ADHD-specific smoking-cessation program should address the specific symptoms of this disorder. Conclusions: Since treatment initiation and adherence are closely associated with perception, we hope these findings will lead to better cessation interventions for the vulnerable subgroup of smokers with ADHD.
Keywords: ADHD, Nicotine, Withdrawal, Subjective, Qualitative, Narrative