144 results for Iris (Eye)
Abstract:
PURPOSE: We aimed to further elucidate whether aphasic patients' difficulties in understanding non-canonical sentence structures, such as Passive or Object-Verb-Subject sentences, can be attributed to impaired recognition of morphosyntactic cues and to problems in integrating competing interpretations. METHODS: A sentence-picture matching task with canonical and non-canonical spoken sentences was performed with concurrent eye tracking. Accuracy, reaction time, and eye-tracking data (fixations) of 50 healthy subjects and 12 aphasic patients were analysed. RESULTS: Patients showed increased error rates and reaction times, as well as delayed fixation preferences for target pictures in non-canonical sentences. Patients' fixation patterns differed from those of healthy controls and revealed deficits in recognizing and immediately integrating morphosyntactic cues. CONCLUSION: Our study corroborates the notion that difficulties in understanding syntactically complex sentences are attributable to a processing deficit encompassing delayed and therefore impaired recognition and integration of cues, as well as increased competition between interpretations.
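The fixation result summarized above (delayed target preference in patients) is commonly quantified as the proportion of fixations on the target picture over time. A minimal sketch in Python, assuming tidy fixation data with hypothetical column names ("group", "time_ms", "aoi"); this is an illustration, not the authors' analysis pipeline:
```python
# Illustrative sketch (not the authors' pipeline): proportion of fixations on the
# target picture per 100 ms bin, compared between groups.
import pandas as pd

def target_fixation_proportion(fixations: pd.DataFrame, bin_ms: int = 100) -> pd.DataFrame:
    """Proportion of fixation samples on the target AOI per time bin and group."""
    df = fixations.copy()
    df["bin"] = (df["time_ms"] // bin_ms) * bin_ms          # left edge of the time bin
    df["on_target"] = (df["aoi"] == "target").astype(int)   # 1 if the fixation is on the target picture
    return (df.groupby(["group", "bin"])["on_target"]
              .mean()
              .rename("p_target")
              .reset_index())

# A delayed target preference in patients would appear as the patient curve
# rising above chance later than the control curve for non-canonical sentences.
```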
Abstract:
CONTEXT Chemical eye injuries are ophthalmological emergencies with a high risk of secondary complications and severe visual loss. Only limited epidemiological data on such injuries are available for many countries. PATIENTS AND METHODS We performed two independent studies. The causes of chemical eye injuries were assessed with a prospective questionnaire study. Questionnaires were sent to all ophthalmologists in Switzerland, and a total of 163 patients (205 eyes) were included between December 2012 and October 2014. Independently of the questionnaire study, the incidence of chemical eye injuries was assessed with a retrospective cohort study design using the database of the mandatory accident insurance. RESULTS The ophthalmological questionnaires revealed that plaster/cement (20.5%), alkaline (12.2%) and acid (10.2%) solutions caused the highest number of chemical injuries. Only 2% of all injuries were classified as grade III and none as grade IV (Roper-Hall classification). The official toxicological information telephone hotline was contacted in 4.3% of cases. Using data from the accident insurance, an incidence of chemical eye injuries of about 50/100 000/year was found in the working population. CONCLUSION Here, we present data on the agents involved in chemical eye injuries in Switzerland, as well as on the incidence of such injuries in the working population. This may help to assess the need for further education programs and to improve and direct preventive measures.
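The reported rate of about 50 per 100,000 per year follows the usual crude incidence arithmetic. A minimal sketch; the case and person-year figures in the example are placeholders, only the rate itself comes from the abstract:
```python
# Minimal sketch of the incidence arithmetic, not the insurer's actual method.
def incidence_per_100k(cases: int, person_years: float) -> float:
    """Crude incidence rate per 100,000 person-years."""
    return cases / person_years * 100_000

# Example with hypothetical figures: 2,000 insured cases over 4,000,000 person-years
print(incidence_per_100k(2_000, 4_000_000))  # -> 50.0
```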
Abstract:
Objective: Visual hallucinations (VH) most commonly occur in eye disease (ED), Parkinson’s disease (PD), and Lewy body dementia (LBD). The phenomenology of VH is likely to carry important information about the brain areas within the visual system generating them. Methods: Data from five controlled cross-sectional VH studies (164 controls, 135 ED, 156 PD, 79 LBD (48 PDD + 31 DLB)) were combined and analysed. The prevalence, phenomenology, frequency, duration, and contents of VH were compared across diseases and gender. Results: Simple VH were most common in ED patients (ED 65% vs. LBD 22% vs. PD 9%, Chi-square [χ2] test: χ2=31.43, df=2, p<0.001), whilst complex VH were most common in LBD (LBD 76% vs. ED 38% vs. PD 28%, Chi-square test: χ2=96.80, df=2, p<0.001). The phenomenology of complex VH differed across diseases and gender. ED patients more often reported “flowers” (ED 21% vs. LBD 6% vs. PD 0%, Chi-square test: χ2=10.04, df=2, p=0.005) and “body parts” (ED 40% vs. LBD 17% vs. PD 13%, Chi-square test: χ2=11.14, df=2, p=0.004); in contrast, LBD patients more often reported “people” (LBD 85% vs. ED 67% vs. PD 63%, Chi-square test: χ2=6.20, df=2, p=0.045) and “animals/insects” (LBD 50% vs. PD 42% vs. ED 21%, Chi-square test: χ2=9.76, df=2, p=0.008). Males more often reported “machines” (13% vs. 2%, Chi-square test: χ2=6.94, df=1, p=0.008), whilst females more often reported “family members/children” (48% vs. 29%, Chi-square test: χ2=5.10, df=1, p=0.024). Conclusions: The phenomenology of VH is likely related to disease-specific dysfunctions within the visual system and to past personal experiences.
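The group comparisons above are chi-square tests on contingency tables. A hedged sketch of one such test, with counts reconstructed approximately from the reported percentages and group sizes (not the original trial-level data):
```python
# Sketch of a presence/absence-by-disease chi-square test, analogous to the
# reported "complex VH" comparison. Counts are approximations derived from the
# abstract's percentages, for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# rows: complex VH present / absent; columns: LBD (n=79), ED (n=135), PD (n=156)
table = np.array([
    [60, 51, 44],   # complex VH reported
    [19, 84, 112],  # complex VH not reported
])
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2({df}) = {chi2:.2f}, p = {p:.3f}")
```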
Abstract:
BACKGROUND: Crossing a street can be a very difficult task for older pedestrians. With increasing age and potential cognitive decline, older people base their decision to cross a street primarily on vehicles' distance rather than on their speed. Furthermore, older pedestrians tend to overestimate their own walking speed and often fail to adapt it to the traffic conditions. Pedestrians' behavior is often tested using virtual reality, which has the advantage of being safe and cost-effective and of allowing standardized test conditions. METHODS: This paper describes an observational study with older and younger adults. Street crossing behavior was investigated in 18 healthy younger and 18 healthy older subjects using a virtual reality setting. The aim of the study was to measure behavioral data (such as eye and head movements) and to assess how the two age groups differ in terms of the number of safe street crossings, virtual crashes, and missed street crossing opportunities. Street crossing behavior and eye and head movements of older and younger subjects were compared with non-parametric tests. RESULTS: The results showed that younger pedestrians behaved more safely when crossing a street than older people. The analysis of eye and head movements revealed that older people looked more at the ground and less at the other side of the street they were about to cross. CONCLUSIONS: The less safe street crossing behavior found in older pedestrians could be explained by their reduced cognitive and visual abilities, which in turn result in difficulties in the decision-making process, especially under time pressure. For both groups, decisions to cross a street were based on the distance of the oncoming cars rather than on their speed. Older pedestrians look more at their feet, probably because they need more time to plan precise stepping movements, and consequently pay less attention to the traffic. These findings might help to establish guidelines for improving senior pedestrians' safety in terms of speed limits, road design, and combined physical-cognitive training.
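The non-parametric group comparisons mentioned above could, for example, be Mann-Whitney U tests on per-participant outcome counts. A short sketch with hypothetical data, not the study's actual numbers:
```python
# Illustrative non-parametric comparison of younger vs. older pedestrians on the
# number of safe crossings per participant (all values hypothetical).
from scipy.stats import mannwhitneyu

safe_crossings_young = [9, 8, 10, 9, 7, 10, 9, 8, 9, 10, 8, 9, 10, 9, 8, 10, 9, 9]
safe_crossings_old   = [6, 7, 5, 8, 6, 7, 5, 6, 7, 8, 6, 5, 7, 6, 8, 7, 6, 5]

u, p = mannwhitneyu(safe_crossings_young, safe_crossings_old, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```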
Abstract:
BACKGROUND: Co-speech gestures are omnipresent in human interaction and facilitate language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using analysis of visual exploration behavior, the present study investigated the influence of the congruence between speech and co-speech gestures on comprehension, measured as accuracy in a decision task. METHOD: Twenty aphasic patients and 30 healthy controls watched videos in which speech was combined with either meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye tracking allowed analysis of visual exploration. RESULTS: In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands than controls. CONCLUSION: Co-speech gestures play an important role for aphasic patients because they modulate comprehension. Incongruent gestures cause significant interference and impair patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.
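The visual exploration result (less fixation on the face, more on the gesturing hands in patients) corresponds to an AOI-based dwell-time analysis. A minimal sketch, assuming a tidy fixation table with hypothetical column and AOI names, not the study's actual code:
```python
# Sketch of relative fixation duration per area of interest (AOI), per group.
# Expected columns: "group", "aoi" (e.g. "face", "hands"), "duration_ms".
import pandas as pd

def dwell_time_share(fix: pd.DataFrame) -> pd.DataFrame:
    """Relative fixation duration per AOI, per group."""
    total = fix.groupby("group")["duration_ms"].transform("sum")
    fix = fix.assign(share=fix["duration_ms"] / total)
    return (fix.groupby(["group", "aoi"])["share"]
               .sum()
               .unstack("aoi"))

# Expected pattern from the abstract: patients show a lower "face" share and a
# tendentially higher "hands" share than controls.
```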
Abstract:
A large body of research has demonstrated that participants preferentially look back to the encoding location when retrieving visual information from memory. However, the role of this 'looking back to nothing' is still debated. The goal of the present study was to extend this line of research by examining whether an important area in the cortical representation of the oculomotor system, the frontal eye field (FEF), is involved in memory retrieval. To interfere with the activity of the FEF, we used inhibitory continuous theta burst stimulation (cTBS). Participants encoded a complex scene before stimulation was applied and then performed a short-term (immediately after encoding) or long-term (after 24 h) recall task just after cTBS over the right FEF or sham stimulation. cTBS did not affect overall performance, but stimulation and statement type (object vs. location) interacted: cTBS over the right FEF tended to impair object recall sensitivity, whereas there was no effect on location recall sensitivity. These findings suggest that the FEF is involved in retrieving object information from scene memory, supporting the hypothesis that the oculomotor system contributes to memory recall.
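The abstract does not specify how recall sensitivity was computed; one common choice for statement-verification tasks is signal-detection sensitivity (d'). A purely illustrative sketch under that assumption, with made-up counts:
```python
# Hypothetical sensitivity measure for yes/no statement verification; this is an
# assumption about the index, not the authors' documented analysis.
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection sensitivity with a simple correction for extreme rates."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)        # log-linear correction avoids rates of 0 or 1
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=18, misses=6, false_alarms=5, correct_rejections=19))
```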
Abstract:
The sleep electroencephalogram (EEG) spectrum is unique to an individual and stable across multiple baseline recordings. The aim of this study was to examine whether the sleep EEG spectrum exhibits the same stable characteristics after acute total sleep deprivation. Polysomnography (PSG) was recorded in 20 healthy adults across consecutive sleep periods. Three nights of baseline sleep [12 h time in bed (TIB)] following 12 h of wakefulness were interleaved with three nights of recovery sleep (12 h TIB) following 36 h of sustained wakefulness. Spectral analysis of the non-rapid eye movement (NREM) sleep EEG (C3LM derivation) was used to calculate power in 0.25 Hz frequency bins between 0.75 and 16.0 Hz. Intraclass correlation coefficients (ICCs) were calculated to assess stable individual differences for baseline and recovery night spectra separately and combined. ICCs were high across all frequencies for baseline and recovery and for baseline and recovery combined. These results show that the spectrum of the NREM sleep EEG is substantially different among individuals, highly stable within individuals and robust to an experimental challenge (i.e. sleep deprivation) known to have considerable impact on the NREM sleep EEG. These findings indicate that the NREM sleep EEG represents a trait.
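The spectral step described above (0.25 Hz bins between 0.75 and 16.0 Hz) can be approximated with Welch's method using 4 s segments. A sketch under assumed sampling parameters; the ICC step across nights would follow per frequency bin and is not shown:
```python
# Sketch of the spectral step only. Sampling rate and the epoch array layout are
# assumptions, not the study's recording parameters.
import numpy as np
from scipy.signal import welch

FS = 256                  # assumed sampling rate (Hz)
NPERSEG = 4 * FS          # 4 s segments -> 0.25 Hz frequency resolution

def nrem_power_spectrum(epochs: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Mean Welch power spectrum over NREM epochs, restricted to 0.75-16.0 Hz."""
    freqs, psd = welch(epochs, fs=FS, nperseg=NPERSEG, axis=-1)
    mask = (freqs >= 0.75) & (freqs <= 16.0)
    return freqs[mask], psd[..., mask].mean(axis=0)   # average across epochs
```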
Abstract:
Converging evidence from eye movement experiments indicates that linguistic contexts influence reading strategies. However, the question of whether different linguistic contexts modulate eye movements during reading in the same bilingual individuals remains unresolved. We examined reading strategies in a transparent (German) and an opaque (French) language of early, highly proficient French–German bilinguals: participants read aloud isolated French and German words and pseudo-words while the First Fixation Location (FFL), its duration and latency were measured. Since transparent linguistic contexts and pseudo-words would favour a direct grapheme/phoneme conversion, the reading strategy should be more local for German than for French words (FFL closer to the word beginning), and no difference in pseudo-words’ FFL is expected between contexts. Our results confirm these hypotheses, providing the first evidence that the same individuals engage different reading strategies depending on language opacity and suggesting that a given brain process can be modulated by a given context.
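The First Fixation Location measure can be expressed as the landing position of the first fixation relative to the word's horizontal extent. A minimal sketch with hypothetical pixel values, shown only to make the measure concrete:
```python
# Illustrative FFL computation; input format and values are assumptions.
def first_fixation_location(fix_x: float, word_left_x: float, word_width: float) -> float:
    """FFL as a fraction of word length: 0 = word beginning, 1 = word end."""
    return (fix_x - word_left_x) / word_width

# A more "local" strategy (e.g. German words, pseudo-words) would show FFL values
# closer to 0, i.e. first fixations landing nearer the beginning of the word.
print(first_fixation_location(fix_x=412.0, word_left_x=400.0, word_width=80.0))  # -> 0.15
```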
Abstract:
Introduction: In team sports, the ability to use peripheral vision is essential for tracking a number of players and the ball. Using eye-tracking devices, it was found that players either use fixations and saccades to process information on the pitch or use smooth pursuit eye movements (SPEM) to keep track of single objects (Schütz, Braun, & Gegenfurtner, 2011). However, it is assumed that peripheral vision can be used best when the gaze is stable, while it is unknown whether motion changes can be detected equally well when SPEM are used, especially because contrast sensitivity is reduced during SPEM (Schütz, Delipetkose, Braun, Kerzel, & Gegenfurtner, 2007). Therefore, peripheral motion change detection was examined by contrasting a fixation condition with a SPEM condition. Methods: 13 participants (7 male, 6 female) were presented with a visual display consisting of 15 white squares and 1 red square. Participants were instructed to follow the red square with their eyes and to press a button as soon as a white square began to move. White square movements occurred either while the red square was still (fixation condition) or while it was moving in a circular manner at 6°/s (pursuit condition). The to-be-detected white square movements varied in eccentricity (4°, 8°, 16°) and speed (1°/s, 2°/s, 4°/s), while the movement time of the white squares was constant at 500 ms. In total, 180 events had to be detected. A Vicon-integrated eye-tracking system and a button press (1000 Hz) were used to control for eye movements and to measure detection rates and response times. Response times (ms) and missed detections (%) were measured as dependent variables and analysed with a 2 (manipulation) x 3 (eccentricity) x 3 (speed) ANOVA with repeated measures on all factors. Results: Significant response time effects were found for manipulation, F(1,12) = 224.31, p < .01, ηp2 = .95, eccentricity, F(2,24) = 56.43, p < .01, ηp2 = .83, and the interaction between the two factors, F(2,24) = 64.43, p < .01, ηp2 = .84. Response times increased as a function of eccentricity for SPEM only and were overall higher than in the fixation condition. Results further showed missed-event effects for manipulation, F(1,12) = 37.14, p < .01, ηp2 = .76, eccentricity, F(2,24) = 44.90, p < .01, ηp2 = .79, the interaction between the two factors, F(2,24) = 39.52, p < .01, ηp2 = .77, and the three-way interaction manipulation x eccentricity x speed, F(2,24) = 3.01, p = .03, ηp2 = .20. While less than 2% of events were missed on average in the fixation condition as well as at 4° and 8° eccentricity in the SPEM condition, missed events increased for SPEM at 16° eccentricity, with significantly more missed events in the 4°/s speed condition (1°/s: M = 34.69, SD = 20.52; 2°/s: M = 33.34, SD = 19.40; 4°/s: M = 39.67, SD = 19.40). Discussion: It could be shown that using SPEM impairs the ability to detect peripheral motion changes in the far periphery and that fixations help not only to detect these motion changes but also to respond to them faster. Due to the high temporal constraints in team sports like soccer or basketball in particular, fast reactions are necessary for successful anticipation and decision making. Thus, it is advisable to anchor gaze at a specific location if peripheral changes (e.g. movements of other players) that require a motor response have to be detected. In contrast, SPEM should only be used if tracking a single object, like the ball in cricket or baseball, is necessary for a successful motor response. References: Schütz, A. C., Braun, D. I., & Gegenfurtner, K. R. (2011). Eye movements and perception: A selective review. Journal of Vision, 11, 1-30. Schütz, A. C., Delipetkose, E., Braun, D. I., Kerzel, D., & Gegenfurtner, K. R. (2007). Temporal contrast sensitivity during smooth pursuit eye movements. Journal of Vision, 7, 1-15.
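The 2 x 3 x 3 repeated-measures ANOVA reported above can be reproduced in outline with statsmodels' AnovaRM; the long-format column names below are assumptions, not the authors' variable names:
```python
# Sketch of a fully within-subject ANOVA on per-condition mean response times.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# rt_long: one row per participant x manipulation x eccentricity x speed cell mean,
# with columns "subject", "manipulation", "eccentricity", "speed", "rt_ms".
def run_rm_anova(rt_long: pd.DataFrame):
    model = AnovaRM(
        data=rt_long,
        depvar="rt_ms",
        subject="subject",
        within=["manipulation", "eccentricity", "speed"],
    )
    return model.fit()   # .summary() lists F, df and p per effect and interaction

# AnovaRM requires exactly one observation per subject and cell, so trial-level
# data must be aggregated to cell means first.
```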
Abstract:
People often make use of a spatial "mental time line" to represent events in time. We investigated whether the eyes follow such a mental time line during online language comprehension of sentences that refer to the past, present, and future. Participants' eye movements were measured on a blank screen while they listened to these sentences. Saccade direction revealed that the future is mapped higher up in space than the past. Moreover, fewer saccades were made when two events were simultaneously taking place in the present than when two events were happening at different points in time. This is the first evidence that oculomotor correlates reflect mental looking along an abstract, invisible time line during online language comprehension about time. Our results support the idea that observing eye movements can "detect" invisible spatial scaffoldings involved in cognitively processing abstract meaning, even when that meaning lacks an explicit spatial correlate. Theoretical implications of these findings are discussed.
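The saccade-direction result (future mapped higher than past) presupposes classifying each saccade as upward or downward. A minimal sketch, assuming screen coordinates with y increasing downward and a hypothetical amplitude threshold:
```python
# Illustrative vertical-direction classification of a saccade on a blank screen.
from typing import Optional

def vertical_direction(y_start: float, y_end: float, min_amplitude: float = 10.0) -> Optional[str]:
    """'up', 'down', or None for saccades below the amplitude threshold (pixels)."""
    dy = y_end - y_start
    if abs(dy) < min_amplitude:
        return None
    return "up" if dy < 0 else "down"   # smaller y = higher on screen

# Pattern reported in the abstract: future-reference sentences bias saccades
# upward, past-reference sentences downward.
```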
Abstract:
Spatial-numerical associations (small numbers with left/lower space, large numbers with right/upper space) are regularly found in simple number categorization tasks. These associations have been taken as evidence for a spatially oriented mental number line. However, the role of spatial-numerical associations during more complex number processing, such as counting or mental arithmetic, is less clear. Here, we investigated whether counting is associated with a movement along the mental number line. Participants counted aloud upward or downward in steps of 3 for 45 s while looking at a blank screen. Gaze position during upward counting shifted rightward and upward, while the pattern for downward counting was less clear. Our results therefore confirm the hypothesis of a movement along the mental number line for addition. We conclude that space is not only used to represent number magnitudes but also to actively operate on numbers in more complex tasks such as counting, and that the eyes reflect this spatial mental operation.
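The rightward/upward gaze shift during upward counting could be quantified, for instance, as the slope of gaze position over time. A small sketch with a hypothetical data layout, not the authors' documented analysis:
```python
# Illustrative drift estimate: linear slopes of horizontal and vertical gaze
# position over the 45 s counting trial.
import numpy as np

def gaze_drift(t_s: np.ndarray, x_px: np.ndarray, y_px: np.ndarray) -> tuple[float, float]:
    """Slopes (pixels per second) of gaze x and y position over time."""
    slope_x = np.polyfit(t_s, x_px, deg=1)[0]
    slope_y = np.polyfit(t_s, y_px, deg=1)[0]
    return slope_x, slope_y

# The reported pattern corresponds to a positive horizontal slope (rightward)
# and an upward vertical drift during upward counting.
```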