952 results for Eye-Tracking
Abstract:
In the present multi-modal study we aimed to investigate the role of visual exploration in relation to neuronal activity and performance during visuospatial processing. To this end, event-related functional magnetic resonance imaging (er-fMRI) was combined with simultaneous eye-tracking recording and transcranial magnetic stimulation (TMS). Two groups of twenty healthy subjects each performed an angle discrimination task with different levels of difficulty during er-fMRI. The number of fixations, as a measure of visual exploration effort, was chosen to predict blood oxygen level-dependent (BOLD) signal changes using the general linear model (GLM). Without TMS, a positive linear relationship between visual exploration effort and the BOLD signal was found in a bilateral fronto-parietal cortical network, indicating that these regions reflect the increased number of fixations and the higher brain activity associated with higher task demands. Furthermore, the relationship found between the number of fixations and performance demonstrates the relevance of visual exploration for visuospatial task solving. In the TMS group, offline theta burst stimulation (TBS) was applied over the right posterior parietal cortex (PPC) before the fMRI experiment started. Compared to controls, TBS led to a reduced correlation between visual exploration and BOLD signal change in regions of the fronto-parietal network of the right hemisphere, indicating a disruption of the network. In contrast, an increased correlation was found in regions of the left hemisphere, suggesting an attempt to compensate for the function of the disturbed areas. TBS led to fewer fixations and faster response times while keeping accuracy at the same level, indicating that subjects had explored more than was actually needed.
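A minimal sketch of the kind of fixation-count regressor one could enter into a GLM of the BOLD signal, as described above (the variable names, simulated data, and use of statsmodels are assumptions for illustration, not the authors' pipeline):

```python
# Illustrative sketch: regress a per-trial BOLD amplitude on the number of
# fixations made in that trial. All data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_trials = 80
n_fixations = rng.poisson(lam=8, size=n_trials)                      # visual exploration effort
bold_amplitude = 0.05 * n_fixations + rng.normal(0, 0.2, n_trials)   # simulated BOLD change

X = sm.add_constant(n_fixations)        # design matrix: intercept + fixation regressor
model = sm.OLS(bold_amplitude, X).fit()
print(model.params)                     # a positive slope mirrors the reported linear relationship
print(model.pvalues)
```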
Abstract:
We previously reported that nuclear grade assignment of prostate carcinomas is subject to a cognitive bias induced by the tumor architecture. Here, we asked whether this bias is mediated by the non-conscious selection of nuclei that "match the expectation" induced by the inadvertent glance at the tumor architecture. Twenty pathologists were asked to grade nuclei in high-power fields of 20 prostate carcinomas displayed on a computer screen. Unknown to the pathologists, each carcinoma was shown twice, once against the background of a low-grade, tubule-rich carcinoma and once against the background of a high-grade, solid carcinoma. Eye tracking made it possible to identify which nuclei the pathologists fixated during the 8-second projection period. For all 20 pathologists, nuclear grade assignment was significantly biased by the tumor architecture. Pathologists tended to fixate on bigger, darker, and more irregular nuclei when those were projected against high-grade, solid carcinomas than against low-grade, tubule-rich carcinomas (and vice versa). However, the morphometric differences of the selected nuclei accounted for only 11% of the architecture-induced bias, suggesting that it can be explained only to a small extent by the unconscious fixation on nuclei that "match the expectation". In conclusion, the selection of "matching nuclei" represents an unconscious effort to vindicate the gravitation of nuclear grades towards the tumor architecture.
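A minimal sketch of how the morphometric features of fixated nuclei could be compared between the two background conditions (the feature, group sizes, and data are hypothetical placeholders, not the study's pipeline):

```python
# Illustrative sketch: paired comparison of the mean area of fixated nuclei
# between the two background conditions. All data are simulated placeholders.
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
records = []
for pathologist in range(20):
    for background in ("low_grade", "high_grade"):
        # mean area (µm²) of the nuclei fixated by this pathologist in this condition
        shift = 5.0 if background == "high_grade" else 0.0
        records.append({"pathologist": pathologist,
                        "background": background,
                        "mean_fixated_area": 60 + shift + rng.normal(0, 3)})

df = pd.DataFrame(records)
wide = df.pivot(index="pathologist", columns="background", values="mean_fixated_area")
t, pval = ttest_rel(wide["high_grade"], wide["low_grade"])   # paired within pathologist
print(wide.mean(), t, pval)
```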
Abstract:
Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, or the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects were found to spend the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor was performance correlated with a behavioral measure of autistic-like traits (the Social Responsiveness Scale, SRS). However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered the eyes or the mouth. Both videos elicited learning of word boundaries that was equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those of Experiment 1, suggest that neither the eyes nor the mouth are critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.
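A minimal sketch of the kind of per-feature gaze-duration and correlation analysis described above (the AOI labels, sample size, and data are hypothetical, not the study's materials):

```python
# Illustrative sketch: total dwell time per facial area of interest (AOI) and its
# correlation with segmentation accuracy. All data are simulated placeholders.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects = 30
dwell = pd.DataFrame({
    "eyes_s":  rng.uniform(10, 40, n_subjects),    # seconds spent looking at the eyes
    "mouth_s": rng.uniform(10, 40, n_subjects),    # seconds spent looking at the mouth
})
segmentation_accuracy = rng.uniform(0.4, 0.9, n_subjects)   # proportion correct on the test

for aoi in dwell.columns:
    r, p = pearsonr(dwell[aoi], segmentation_accuracy)
    print(f"{aoi}: r = {r:.2f}, p = {p:.3f}")   # null correlations would echo the report
```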
Abstract:
This study focuses on the relations between 7- and 9-year-old children's and adults' metacognitive monitoring and control processes. In addition to explicit confidence judgments (CJs), data on participants' control behavior during learning and recall, as well as implicit CJs, were collected with an eye-tracking device (Tobii 1750). Results revealed developmental progression in the accuracy of both implicit and explicit monitoring across age groups. In addition, the efficiency of learning and recall strategies increased with age, as older participants allocated more fixation time to critical information and less time to peripheral or potentially interfering information. Correlational analyses of recall performance, metacognitive monitoring, and control indicate significant interrelations among all of these measures, with varying patterns of correlations within age groups. Results are discussed in regard to the intricate relationship between monitoring and recall and their relation to performance.
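A minimal sketch of how fixation-time allocation to critical versus peripheral information could be summarised per age group (the group labels, AOI split, and numbers are hypothetical placeholders):

```python
# Illustrative sketch: share of total fixation time spent on the critical AOI,
# summarised by age group. All data are simulated placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
groups = ["7-year-olds", "9-year-olds", "adults"]
rows = []
for g_idx, group in enumerate(groups):
    for subject in range(20):
        critical = rng.normal(8 + 2 * g_idx, 1.5)            # seconds on task-relevant AOI
        peripheral = max(0.5, rng.normal(6 - g_idx, 1.5))     # seconds on interfering AOI
        rows.append({"group": group,
                     "critical_share": critical / (critical + peripheral)})

df = pd.DataFrame(rows)
print(df.groupby("group")["critical_share"].mean())   # expected to rise with age
```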
Abstract:
PURPOSE: We aimed to further elucidate whether aphasic patients' difficulties in understanding non-canonical sentence structures, such as Passive or Object-Verb-Subject sentences, can be attributed to impaired morphosyntactic cue recognition and to problems in integrating competing interpretations. METHODS: A sentence-picture matching task with canonical and non-canonical spoken sentences was performed using concurrent eye tracking. Accuracy, reaction time, and eye-tracking data (fixations) of 50 healthy subjects and 12 aphasic patients were analysed. RESULTS: Patients showed increased error rates and reaction times, as well as delayed fixation preferences for target pictures in non-canonical sentences. Patients' fixation patterns differed from those of healthy controls and revealed deficits in recognizing and immediately integrating morphosyntactic cues. CONCLUSION: Our study corroborates the notion that difficulties in understanding syntactically complex sentences are attributable to a processing deficit encompassing delayed and therefore impaired recognition and integration of cues, as well as increased competition between interpretations.
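A minimal sketch of the kind of fixation-proportion time course that underlies such "delayed fixation preference" analyses (bin size, condition labels, and data are hypothetical, not the study's results):

```python
# Illustrative sketch: proportion of fixations on the target picture in successive
# time bins after sentence onset, for canonical vs. non-canonical sentences.
# All data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_bins = 10                               # e.g. 10 x 200 ms bins
time_bins = np.arange(n_bins) * 200       # ms from sentence onset

def target_fixation_curve(onset_bin):
    """Simulate a rising preference for the target picture starting at onset_bin."""
    curve = 0.5 + 0.4 / (1 + np.exp(-(np.arange(n_bins) - onset_bin)))
    return np.clip(curve + rng.normal(0, 0.02, n_bins), 0, 1)

canonical = target_fixation_curve(onset_bin=3)       # earlier target preference
non_canonical = target_fixation_curve(onset_bin=6)   # delayed target preference
for t, c, n in zip(time_bins, canonical, non_canonical):
    print(f"{t:4d} ms  canonical {c:.2f}  non-canonical {n:.2f}")
```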
Abstract:
The integration of new technologies into the rehabilitation process enables the generation of personalized, ubiquitous, evidence-based therapies. Technologies such as interactive video are well suited to the development of virtual environments in which the patient is immersed in activities of daily living and has to achieve an ecological goal in a safe, controlled context adapted to his or her dysfunctional profile. Within this rehabilitation framework, the visual interaction between patient and virtual environment is understood as the main communication mechanism, with visual attention also reflecting the patient's cognitive state. The work presented in this article enables the integration of an eye-tracking system with a neurorehabilitation environment based on interactive video. The ultimate goal of the system is the real-time monitoring of the user's visual attention during the neurorehabilitation process. This monitoring makes it possible not only to replay the execution of the activity together with the gaze focus, but also to detect lapses of attention on the part of the user, allowing the interactive video to react and adapt the presentation of stimuli to help refocus the user's attention and thus complete the goal of the activity.
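A minimal sketch of the kind of real-time attention-lapse check such a system could run on streamed gaze samples (the gaze source, AOI geometry, threshold, and adaptive cue are hypothetical, not the system described above):

```python
# Illustrative sketch: flag an attention lapse when gaze stays outside the active
# area of interest (AOI) longer than a threshold, then trigger an adaptive cue.
# Gaze samples, AOI coordinates, and the cue action are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AOI:
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

LAPSE_THRESHOLD_S = 1.5        # assumed maximum time allowed off the active AOI

def monitor(gaze_samples, aoi, sample_rate_hz=60):
    """gaze_samples: iterable of (x, y) screen coordinates sampled at sample_rate_hz."""
    off_aoi_time = 0.0
    dt = 1.0 / sample_rate_hz
    for x, y in gaze_samples:
        off_aoi_time = 0.0 if aoi.contains(x, y) else off_aoi_time + dt
        if off_aoi_time >= LAPSE_THRESHOLD_S:
            print("Attention lapse detected: highlight the target stimulus")  # adaptive reaction
            off_aoi_time = 0.0

# Example: the user drifts away from the active AOI for about 2 seconds.
aoi = AOI(400, 300, 880, 660)
samples = [(600, 450)] * 60 + [(100, 100)] * 120 + [(600, 450)] * 30
monitor(samples, aoi)
```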
Abstract:
Acquired Brain Injury (ABI) has become one of the main causes of neurological disability in developed societies. The impairment of cognitive functions as a consequence of ABI limits not only the patient's quality of life but also that of the people around them. Although neurorehabilitation makes it possible to recover some of the impaired functions by exploiting the plastic nature of the nervous system, its practice following traditional procedures often cannot be adjusted to the needs of each individual nor, in general, cover all the aspects needed to turn the rehabilitation process into a truly effective treatment. Incorporating new technologies into the rehabilitation process has made it possible to increase the intensity of treatment, personalizing it and extending it over time in a sustainable way. Virtual environments (VEs) built on this trend make it possible to reproduce controlled Activities of Daily Living (ADL), which increases the ecological value of the therapies. This final-year degree project (TFG) addresses the pioneering use of Interactive Video (IV) technology for the development of such environments in the field of cognitive rehabilitation. Specifically, the objective of the TFG is the evaluation of a rehabilitation VE developed with IV technology and integrated with an eye-tracking system capable of capturing and analysing information about the patient's visual behaviour. To this end, an experimental study recording the behaviour of different subjects in two ADL modalities is designed, implemented, and evaluated.
Abstract:
Although advertising is pervasive in our daily lives, it proves not to be effective at all times, due to poor reception conditions or contexts. Indeed, the communication process might be jeopardized at its very last stage because of advertising exposure quality. However critical it may be, ad exposure quality is not much examined by researchers or practitioners. In this paper, we investigate how tiredness combined with ad complexity might influence the way consumers extract and process ad elements. Investigating tiredness is useful because it is a common daily state experienced by everyone at various moments of the day, and although it might drastically alter ad reception, it has not yet been studied in advertising. In this regard, we observe the eye movement patterns of consumers viewing simple or complex advertisements while tired or not. We find, surprisingly, that tired subjects viewing complex ads do not adopt an effort-reducing visual strategy; rather, they use a resource-demanding one. We assume that the Sustained Attention strategy observed is a kind of adaptive strategy allowing them to cope with an anticipated lack of resources.
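A minimal sketch of how eye-movement effort metrics could be compared across the tiredness-by-complexity conditions (the conditions, metrics, and numbers are hypothetical placeholders, not the study's data):

```python
# Illustrative sketch: mean fixation duration and fixation count per condition,
# as rough proxies for visual processing effort. All data are simulated placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
rows = []
for tired in (False, True):
    for complex_ad in (False, True):
        for subject in range(25):
            effort = 1.0 + 0.3 * complex_ad + 0.2 * (tired and complex_ad)
            rows.append({"tired": tired,
                         "complex_ad": complex_ad,
                         "n_fixations": rng.poisson(30 * effort),
                         "mean_fix_dur_ms": rng.normal(220 * effort, 15)})

df = pd.DataFrame(rows)
print(df.groupby(["tired", "complex_ad"])[["n_fixations", "mean_fix_dur_ms"]].mean())
```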
Abstract:
Background - Not only is compulsive checking the most common symptom in Obsessive Compulsive Disorder (OCD), with an estimated prevalence of 50–80% in patients, but approximately 15% of the general population reveal subclinical checking tendencies that impact negatively on their performance in daily activities. Therefore, it is critical to understand how checking affects attention and memory in clinical as well as subclinical checkers. Eye fixations are commonly used as indicators of the distribution of attention, but research in OCD has revealed mixed results at best. Methodology/Principal Findings - Here we report atypical eye movement patterns in subclinical checkers during an ecologically valid working memory (WM) manipulation. Our key manipulation was to present an intermediate probe during the delay period of the memory task, explicitly asking for the location of a letter which, however, had not been part of the encoding set (i.e., misleading participants). Using eye movement measures, we now provide evidence that high checkers' inhibitory impairments for misleading information result in their checking the contents of WM in an atypical manner. Checkers fixate more often and for longer than non-checkers when misleading information is presented. Specifically, checkers spend more time checking stimulus locations as well as locations that had actually been empty during encoding. Conclusions/Significance - We conclude that these atypical eye movement patterns directly reflect internal checking of memory contents, and we discuss the implications of our findings for the interpretation of behavioural and neuropsychological data. In addition, our results highlight the importance of ecologically valid methodology for revealing the impact of detrimental attention and memory checking on eye movement patterns.
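A minimal sketch of the kind of group comparison of fixation measures on misleading-probe trials implied above (group labels, sample sizes, and data are hypothetical placeholders):

```python
# Illustrative sketch: compare number and total duration of fixations during
# misleading probes between high and low checkers. Data are simulated placeholders.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)
n_per_group = 24
df = pd.DataFrame({
    "group": ["high_checker"] * n_per_group + ["low_checker"] * n_per_group,
    "n_fixations": np.concatenate([rng.poisson(14, n_per_group),
                                   rng.poisson(10, n_per_group)]),
    "total_dwell_ms": np.concatenate([rng.normal(3200, 400, n_per_group),
                                      rng.normal(2500, 400, n_per_group)]),
})

for measure in ("n_fixations", "total_dwell_ms"):
    high = df.loc[df.group == "high_checker", measure]
    low = df.loc[df.group == "low_checker", measure]
    t, p = ttest_ind(high, low)
    print(f"{measure}: t = {t:.2f}, p = {p:.3f}")   # more/longer fixations in high checkers
```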
Abstract:
This study aimed to: i) determine if the attention bias towards angry faces reported in eating disorders generalises to a non-clinical sample varying in eating disorder-related symptoms; ii) examine if the bias occurs during initial orientation or later strategic processing; and iii) confirm previous findings of impaired facial emotion recognition in non-clinical disordered eating. Fifty-two females viewed a series of face-pairs (happy or angry paired with neutral) whilst their attentional deployment was continuously monitored using an eye-tracker. They subsequently identified the emotion portrayed in a separate series of faces. The highest (n=18) and lowest scorers (n=17) on the Eating Disorders Inventory (EDI) were compared on the attention and facial emotion recognition tasks. Those with relatively high scores exhibited impaired facial emotion recognition, confirming previous findings in similar non-clinical samples. They also displayed biased attention away from emotional faces during later strategic processing, which is consistent with previously observed impairments in clinical samples. These differences were related to drive-for-thinness. Although we found no evidence of a bias towards angry faces, it is plausible that the observed impairments in emotion recognition and avoidance of emotional faces could disrupt social functioning and act as a risk factor for the development of eating disorders.
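A minimal sketch of the two bias indices implied above, separating initial orientation from later strategic processing (the trial structure, variable names, and data are hypothetical placeholders):

```python
# Illustrative sketch: per-subject attention bias indices for emotional vs. neutral
# faces. Initial orientation = share of trials where the first fixation landed on
# the emotional face; strategic processing = dwell-time bias over the whole trial.
# All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_trials = 40

first_fix_on_emotional = rng.random(n_trials) < 0.45      # True if first fixation hit the emotional face
dwell_emotional_ms = rng.normal(1200, 200, n_trials)      # dwell time on the emotional face
dwell_neutral_ms = rng.normal(1500, 200, n_trials)        # dwell time on the neutral face

initial_orientation_bias = first_fix_on_emotional.mean()                   # > 0.5 = bias towards emotion
strategic_bias = np.mean(dwell_emotional_ms /
                         (dwell_emotional_ms + dwell_neutral_ms))          # < 0.5 = avoidance of emotion
print(f"initial orientation bias: {initial_orientation_bias:.2f}")
print(f"strategic dwell-time bias: {strategic_bias:.2f}")
```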