952 results for Eye tracking


Relevance: 60.00%

Publisher:

Abstract:

Over the last decade, multi-touch devices (MTD) have spread across a range of contexts. In the learning context, the accessibility of MTD leads more and more teachers to use them in the classroom, on the assumption that they will improve learning activities. Despite this growing interest, only a few studies have focused on the impact of MTD use on performance and suitability in a learning context. However, even if using touch-sensitive screens rather than a mouse and keyboard seems the easiest and fastest way to carry out common learning tasks (for instance, web browsing), we note that MTD use may lead to less favorable outcomes. More precisely, tasks that require users to generate complex and/or less common gestures may increase extraneous cognitive load and impair performance, especially for intrinsically complex tasks. It is hypothesized that task and gesture complexity will tax users' cognitive resources and decrease task efficacy and efficiency. Because MTD are supposed to be more appealing, it is assumed that their use will also affect cognitive absorption. The present study also takes into account users' prior knowledge of MTD use and gestures, using experience with MTD as a moderator. Sixty university students were asked to perform information search tasks on an online encyclopedia. Tasks were set up so that users had to generate the most commonly used mouse actions (e.g., left/right click, scrolling, zooming, text entry). Two conditions were created, MTD use and laptop use (with mouse and keyboard), in order to compare the two devices. An eye tracking device was used to measure users' attention and cognitive load. Our study sheds light on some important aspects of MTD use and its added value compared to a laptop in a student learning context.

Relevance: 60.00%

Publisher:

Abstract:

While the number of traditional laptops and computers sold has dipped slightly year over year, manufacturers have developed new hybrid laptops with touch screens to build on the tactile trend. This market is moving quickly to make touch the rule rather than the exception, and sales of these devices have tripled since the launch of Windows 8 in 2012, reaching more than sixty million units sold in 2015. Unlike tablets, which benefit from easy-to-use applications specially designed for tactile interaction, hybrid laptops are intended to be used with regular user interfaces. Hence, one may ask whether tactile interaction is suited to every task and activity performed with such interfaces. Since hybrid laptops are increasingly used in educational settings, this study focuses on information search tasks, which are commonly performed for learning purposes. It is hypothesized that tasks requiring complex and/or less common gestures will increase users' cognitive load and impair task performance in terms of efficacy and efficiency. A study was carried out in a usability laboratory with 30 participants whose prior experience with tactile devices was controlled. They were asked to perform information search tasks on an online encyclopaedia using only the touch screen of a hybrid laptop. Tasks were selected with respect to their level of cognitive demand (the amount of information that had to be maintained in working memory) and the complexity of the gestures needed (left and/or right clicks, zoom, text selection and/or input), and were grouped into 4 sets accordingly. Task performance was measured by the number of tasks completed successfully (efficacy) and the time spent on each task (efficiency). Perceived cognitive load was assessed with a questionnaire given after each set of tasks. An eye tracking device was used to monitor users' attention allocation and to provide objective cognitive load measures based on pupil dilation and the Index of Cognitive Activity.
Each experimental run took approximately one hour. The results of this within-subjects design indicate that tasks involving complex gestures led to lower efficacy, especially when the tasks were cognitively demanding. Regarding efficiency, there were no significant differences between sets of tasks except for tasks with low cognitive demand and complex gestures, which required more time to complete. Surprisingly, users who reported the most experience with tactile devices spent more time than less frequent users. Cognitive load measures indicate that participants reported devoting more mental effort to the interaction when they had to use complex gestures.

Relevance: 60.00%

Publisher:

Abstract:

Contrary to popular belief, a recent empirical study using eye tracking showed that a non-clinical sample of socially anxious adults did not avoid the eyes during face scanning. Using eye-tracking measures, we sought to extend these findings by examining the relation between stable shyness and face scanning patterns in a non-clinical sample of 11-year-old children. We found that shyness was associated with longer dwell time on the eye region than on the mouth, suggesting that some shy children were not avoiding the eyes. Shyness was also correlated with fewer first fixations to the nose, which is thought to reflect the typical global strategy of face processing. The present results replicate and extend recent work on social anxiety and face scanning in adults to shyness in children. These preliminary findings also support the notion that some shy children may be hypersensitive to detecting social cues and intentions conveyed by others' eyes. Theoretical and practical implications for understanding the social cognitive correlates and treatment of shyness are discussed. (C) 2009 Elsevier Ltd. All rights reserved.
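Dwell-time measures of the kind reported above are typically computed by assigning each fixation to an area of interest (AOI), such as the eyes or the mouth, and summing fixation durations per region. A minimal sketch in Python; the AOI boxes and fixation data below are hypothetical, for illustration only:

```python
# Sketch: dwell time per area of interest (AOI) from eye-tracking fixations.
# AOI boxes and fixations are hypothetical illustration data.

def dwell_times(fixations, aois):
    """fixations: list of (x, y, duration_ms); aois: dict name -> (x0, y0, x1, y1)."""
    totals = {name: 0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break  # assign each fixation to at most one AOI
    return totals

aois = {"eyes": (100, 80, 300, 140), "mouth": (150, 220, 250, 270)}
fixations = [(150, 100, 250), (200, 120, 300), (200, 240, 180), (400, 400, 90)]
print(dwell_times(fixations, aois))  # {'eyes': 550, 'mouth': 180}
```

A real analysis would use AOIs drawn per stimulus and fixations exported from the eye tracker, and overlapping AOIs would need an explicit priority order rather than dictionary insertion order.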

Relevance: 60.00%

Publisher:

Abstract:

Previous eye tracking research on the allocation of attention to social information by individuals with autism spectrum disorders is equivocal, which may be in part a consequence of variation in the stimuli used between studies. The current study explored attention allocation to faces, and within faces, by individuals with Asperger syndrome using a range of static stimuli in which faces were viewed either in isolation or in the context of a social scene. Results showed that faces were viewed typically by the individuals with Asperger syndrome when presented in isolation, but attention to the eyes was significantly diminished in comparison to age- and IQ-matched typical viewers when faces were viewed as part of social scenes. We show that with static stimuli there is evidence of atypicality for individuals with Asperger syndrome depending on the extent of the social context. Our findings bear on previous explanations of gaze behaviour that have emphasised the role of movement in atypicalities of social attention in autism spectrum disorders, and highlight the importance of considering the realistic portrayal of social information in future studies.

Relevance: 60.00%

Publisher:

Abstract:

Autism is a neuro-developmental disorder defined by atypical social behaviour, of which atypical social attention behaviours are among the earliest clinical markers (Volkmar et al., 1997). Eye tracking studies using still images and movie clips have provided a method for the precise quantification of atypical social attention in ASD, generally characterised by diminished viewing of the most socially pertinent regions (the eyes) and increased viewing of less socially informative regions (body, background, objects) (Klin et al., 2002; Riby & Hancock, 2008, 2009). Ecological validity has become an increasingly important issue within eye tracking studies. As yet, however, little is known about the precise nature of the atypicalities of social attention in ASD in real life. Objectives: To capture and quantify gaze patterns of children with an ASD in a real-life setting, compared to two typically developing (TD) comparison groups. Methods: Nine children with an ASD were compared to two age-matched TD groups: a verbal (N=9) and a non-verbal (N=9) comparison group. A real-life scenario was created involving an experimenter posing as a magician, and consisted of 3 segments: a conversation segment, a magic trick segment, and a puppet segment. The conversation segment explored children's attentional preferences during a real-life conversation; the magic trick segment explored children's use of the eyes as a communicative cue; and the puppet segment explored attention capture. Finally, part of the puppet segment explored children's use of facial information in response to an unexpected event. Results: The most striking difference between the groups was the diminished viewing of the eyes by the ASD group in comparison to both control groups. This was found particularly during the conversation segment, but also during the magic trick and puppet segments.
When in conversation, participants with ASD were found to spend a greater proportion of time looking off-screen than TD participants. There was also a tendency for the ASD group to spend a greater proportion of time looking at the experimenter's mouth. During the magic trick segment, despite the fact that the eyes were not predictive of the correct location, both TD comparison groups continued to use the eyes as a communicative cue, whereas the ASD group did not. In the puppet segment, all three groups spent a similar amount of time looking between the puppet and regions of the experimenter's face. However, in response to an unexpected event, the ASD group were significantly slower to fixate back on the experimenter's face. Conclusions: The results demonstrate the reduced salience of socially pertinent information for children with ASD in real life, and they support findings from previous eye tracking studies involving scene viewing. However, the results also highlight a pattern of looking off-screen in both the TD and ASD groups. This eye movement behaviour is likely to be associated specifically with real-life interaction, as it has functional relevance (Doherty-Sneddon et al., 2002). The fact that it is significantly increased in the ASD group, however, has implications for their understanding of real-life social interactions.

Relevance: 60.00%

Publisher:

Abstract:

Background: From a young age the typical development of social functioning relies upon the allocation of attention to socially relevant information, which in turn allows experience at processing such information and thus enhances social cognition. As such, research has attempted to identify the developmental processes that are derailed in some neuro-developmental disorders that impact upon social functioning. Williams syndrome (WS) and Autism are disorders of development that are characterized by atypical yet divergent social phenotypes and atypicalities of attention to people.

Methods: We used eye tracking to explore how individuals with WS and Autism attended to, and subsequently interpreted, an actor’s eye gaze cue within a social scene. Images were presented for three seconds, initially with an instruction simply to look at the picture. The images were then shown again, with the participant asked to identify the object being looked at. Allocation of eye-gaze in each condition was analyzed by ANOVA and accuracy of identification was compared with t-tests.

Results: Participants with WS allocated more gaze time to the face and eyes than their matched controls, both with and without being asked to identify the item being looked at, while participants with Autism spent less time on the face and eyes in both conditions. When cued to follow gaze, participants with WS increased gaze to the correct targets, while those with Autism looked more at the face and eyes but did not increase gaze to the correct targets, and continued to look far more than their controls at implausible targets. Both groups identified fewer objects than their controls.

Conclusions: The atypicalities found are likely to be entwined with the deficits shown in interpreting social cognitive cues from the images. WS and Autism are characterised by atypicalities of social attention that impact upon socio-cognitive expertise but importantly the type of atypicality is syndrome-specific.

Relevance: 60.00%

Publisher:

Abstract:

Eye-tracking studies have shown that people with autism spend significantly less time looking at socially relevant information on-screen compared to those developing typically. This has been suggested to impact the development of socio-cognitive skills in autism. We present novel evidence of how attention atypicalities in children with autism extend to real-life interaction, in comparison to typically developing (TD) children and children with specific language impairment (SLI). We explored the allocation of attention during social interaction with an interlocutor, and how aspects of attention (awareness checking) related to traditional measures of social cognition (false belief attribution). We found divergent attention allocation patterns across the groups in relation to social cognition ability. Even though children with autism and SLI performed similarly on the socio-cognitive tasks, there were syndrome-specific atypicalities in their attention patterns. Children with SLI were most similar to TD children in terms of prioritising attention to socially pertinent information (eyes, face, awareness checking). Children with autism showed reduced attention to the eyes and face, and slower awareness checking. This study provides unique and timely insight into real-world social gaze (a)typicality in autism, SLI and typical development, its relationship to socio-cognitive ability, and raises important issues for intervention.

Relevance: 60.00%

Publisher:

Abstract:

Background: Identifying new and more robust assessments of proficiency/expertise (finding new "biomarkers of expertise") in histopathology is desirable for many reasons. Advances in digital pathology permit new and innovative tests such as flash viewing tests and eye tracking and slide navigation analyses that would not be possible with a traditional microscope. The main purpose of this study was to examine the usefulness of time-restricted testing of expertise in histopathology using digital images.
Methods: 19 novices (undergraduate medical students), 18 intermediates (trainees), and 19 experts (consultants) were invited to give their opinion on 20 general histopathology cases after 1 s and 10 s viewing times. Differences in performance between groups were measured and the internal reliability of the test was calculated.
Results: There were highly significant differences in performance between the groups using the Fisher's least significant difference method for multiple comparisons. Differences between groups were consistently greater in the 10-s than the 1-s test. The Kuder-Richardson 20 internal reliability coefficients were very high for both tests: 0.905 for the 1-s test and 0.926 for the 10-s test. Consultants had levels of diagnostic accuracy of 72% at 1 s and 83% at 10 s.
Conclusions: Time-restricted tests using digital images have the potential to be extremely reliable tests of diagnostic proficiency in histopathology. A 10-s viewing test may be more reliable than a 1-s test. Over-reliance on "at a glance" diagnoses in histopathology is a potential source of medical error due to over-confidence bias and premature closure.
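The Kuder-Richardson 20 coefficient reported above has a closed form for dichotomous (correct/incorrect) item scores: KR-20 = k/(k-1) x (1 - sum(p_i * q_i) / var), where k is the number of items, p_i is the proportion answering item i correctly, q_i = 1 - p_i, and var is the variance of participants' total scores. A minimal sketch, using a small hypothetical score matrix rather than the study's actual data:

```python
# Sketch: Kuder-Richardson 20 (KR-20) internal reliability for dichotomous items.
# The score matrix below is hypothetical; rows are participants, columns are items.

def kr20(scores):
    """scores: list of rows of 0/1 item scores, one row per participant."""
    k = len(scores[0])                 # number of items
    n = len(scores)                    # number of participants
    p = [sum(row[i] for row in scores) / n for i in range(k)]  # item difficulties
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance of totals
    return (k / (k - 1)) * (1 - sum(pi * (1 - pi) for pi in p) / var)

scores = [
    [1, 1, 1, 0],  # participant 1: 1 = correct, 0 = incorrect
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
]
print(round(kr20(scores), 3))  # 0.667
```

Note that the formula is undefined when all total scores are equal (zero variance); with a 20-case test and the three proficiency groups described, that is unlikely to arise.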

Relevance: 60.00%

Publisher:

Abstract:

Among the many discussions and studies related to video games, one of the most recurrent, widely debated and important concerns the experience of playing video games. The gameplay experience, as understood in this study, is the result of the interplay between two essential elements: a video game and a player. Existing studies have explored the experience of video game play from the perspective of either the video game or the player, but none appear to balance both elements equally. The study presented here contributes to the ongoing debate with a gameplay experience model. The proposed model, which seeks to balance the video game and player elements equally, considers the gameplay experience to be both an interactive experience (related to the process of playing the video game) and an emotional experience (related to the outcome of playing the video game). The mutual influence of these two experiences during play ultimately defines the gameplay experience. Several dimensions, related to both the video game and the player, contribute to this experience: the video game contributes mechanics, interface and narrative dimensions; the player contributes motivations, expectations and background dimensions. The gameplay experience is also initially defined by a gameplay situation, conditioned by the environment in which gameplay takes place and the platform on which the video game is played. In order to provide initial validation of the proposed model and to show relationships among its dimensions, a multi-case study was carried out using two different video games and player samples. In one study, results show significant correlations between multiple model dimensions, and evidence that changes to the video game influence player motivations as well as players' visual behavior.
In player-specific analyses, results show that while players may differ in background and expectations regarding the game, their motivations to play are not necessarily different, even if their in-game performance is weak. While further validation is necessary, this model not only contributes to the gameplay experience debate but also demonstrates, in a given context, how player and video game dimensions evolve during video game play.

Relevance: 60.00%

Publisher:

Abstract:

The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye tracking to examine whether individual differences in visual attention during AV speech processing in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9-month-old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at 14–16 months using two measures: the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that infants who were less efficient in auditory speech processing at 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to our understanding of individual differences in the neural signatures of AV processing and the associated looking behavior in infants.

Relevance: 60.00%

Publisher:

Abstract:

Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months, with an increase in the time spent looking at articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431–1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1–14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in the neural signatures of audiovisual speech processing in infants, suggesting that these are not strictly related to chronological age but are instead associated with the maturation of looking behaviour, developing at individual rates in the second half of the first year of life.

Relevance: 60.00%

Publisher:

Abstract:

In this thesis, three main questions were addressed using event-related potentials (ERPs): (1) the timing of lexical semantic access, (2) the influence of "top-down" processes on visual word processing, and (3) the influence of "bottom-up" factors on visual word processing. The timing of lexical semantic access was investigated in two studies using different designs. In Study 1, 14 participants completed two tasks: a standard lexical decision (LD) task, which required a word/nonword decision for each target stimulus, and a semantically primed version (LS) of it using the same category of words (e.g., animal) within each block, after which participants made a category judgment. In Study 2, another 12 participants performed a standard semantic priming task, in which target stimulus words (e.g., nurse) could be either semantically related or unrelated to their primes (e.g., doctor, tree), with the order of presentation randomized. We found evidence in both ERP studies that lexical semantic access may occur early, within the first 200 ms (at about 170 ms in Study 1 and about 160 ms in Study 2). Our results are consistent with more recent ERP and eye-tracking studies and contrast with the traditional research focus on the N400 component. "Top-down" processes, such as a person's expectations and strategic decisions, were possible in Study 1 because of the blocked design, but not in Study 2 with its randomized design. Comparing the results from the two studies, we found that visual word processing can be affected by a person's expectations and that this effect occurs early, at a sensory/perceptual stage: a semantic task effect on the P1 component at about 100 ms in the ERP was found in Study 1, but not in Study 2. Furthermore, we found that such "top-down" influence on visual word processing may be mediated through separate mechanisms depending on whether the stimulus is a word or a nonword.
"Bottom-up" factors involve inherent characteristics of particular words, such as bigram frequency (the total frequency of two-letter combinations in a word), word frequency (the frequency of the written form of a word), and neighborhood density (the number of words that can be generated by changing one letter of an original word or nonword). A bigram frequency effect was found when comparing the results of Studies 1 and 2, and it was examined more closely in Study 3. Fourteen participants performed a similar standard lexical decision task, but the words and nonwords were selected systematically to provide a greater range in the aforementioned factors. As a result, a total of 18 word conditions were created, with 18 nonword conditions matched on neighborhood density and neighborhood frequency. Using multiple regression analyses, we found that P1 amplitude was significantly related to bigram frequency for both words and nonwords, consistent with the results of Studies 1 and 2. In addition, word frequency and neighborhood frequency also influenced P1 amplitude, separately for words and for nonwords, and there appeared to be a spatial dissociation between the two effects: for words, the word frequency effect on P1 was found at the left electrode site; for nonwords, the neighborhood frequency effect on P1 was found at the right electrode site. The implications of our findings are discussed.

Relevance: 60.00%

Publisher:

Abstract:

This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers’ locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level.

Relevance: 60.00%

Publisher:

Abstract:

Question: This thesis comprises two articles on the study of emotional facial expressions. The first article covers the development of a new bank of emotional stimuli, while the second uses this bank to study the effect of trait anxiety on the recognition of static expressions. Methods: A total of 1088 emotional clips (34 actors X 8 emotions X 4 exemplars) were spatially and temporally aligned so that each actor's eyes and nose occupy the same location in every video. All videos last 500 ms and contain the apex of the expression. The bank of static expressions was created from the last frame of the clips. The stimuli underwent a rigorous validation process. In the second study, the static expressions were used together with the Bubbles method to study emotion recognition in anxious participants. Results: In the first study, the best stimuli were selected [2 (static & dynamic) X 8 (expressions) X 10 (actors)], forming the STOIC expression bank. In the second study, we show that individuals with trait anxiety preferentially use the low spatial frequencies of the mouth region of the face and recognize fear expressions better. Discussion: The STOIC facial expression bank has unique characteristics that set it apart from others. It can be downloaded free of charge, it contains natural videos, and all stimuli have been aligned, making it a tool of choice for the scientific community and clinicians. The static STOIC stimuli were used to take a first step in research on emotion perception in individuals with trait anxiety.
We believe that the use of low spatial frequencies underlies the better performance of these individuals, and that this type of visual information disambiguates expressions of fear and surprise. We also think that it is neuroticism (the overlap between anxiety and depression), and not anxiety itself, that is associated with better recognition of fearful facial expressions. The use of instruments measuring this construct should be considered in future studies.

Relevance: 60.00%

Publisher:

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.