911 results for swd: Eye Tracking Movement
Abstract:
The research project takes place within the technology acceptability framework, which tries to understand how new technologies are used, and concentrates more specifically on the factors that influence the acceptance of multi-touch devices (MTD) and the intention to use them. Why be interested in MTD? Nowadays, this technology is used in all kinds of human activities, e.g. leisure, study or work activities (Rogowski and Saeed, 2012). However, handling and data entry by means of gestures on a multi-touch-sensitive screen impose a number of constraints and consequences which remain mostly unknown (Park and Han, 2013). Currently, few studies in ergonomic psychology address the implications of these new human-computer interactions for task fulfillment. This research project aims to investigate the cognitive, sensorimotor and motivational processes taking place during the use of these devices. The project will analyze the influence of the use of gestures and of the type of gesture used, simple or complex (Lao, Heng, Zhang, Ling, and Wang, 2009), as well as of the feeling of personal self-efficacy in the use of MTD, on task engagement, attention mechanisms and perceived disorientation (Chen, Linen, Yen, and Linn, 2011) when confronted with MTD. For that purpose, the various above-mentioned concepts will be measured within a usability laboratory (U-Lab) with self-reported methods (questionnaires) and objective indicators (physiological indicators, eye tracking). Globally, the whole research aims to understand the processes at stake, as well as the advantages and drawbacks of this new technology, in order to favor a better compatibility and adequacy between gestures, executed tasks and MTD. The conclusions will allow some recommendations for the use of MTD in specific contexts (e.g. learning contexts).
Abstract:
Nowadays multi-touch devices (MTD) can be found in all kinds of contexts. In the learning context, MTD availability leads many teachers to use them in their classroom, to support the use of the devices by students, or to assume that they will enhance the learning process. Despite the rising interest in MTD, few studies have examined their impact in terms of performance or the suitability of the technology for the learning context. However, even if the use of touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance web surfing), we notice that the use of MTD may lead to a less favourable outcome. The complexity of generating accurate finger gestures and the split attention this requires (multi-tasking effect) make the use of gestures to interact with a touch-sensitive screen more difficult compared to traditional laptop use. More precisely, it is hypothesized that efficacy and efficiency decrease, as well as the available cognitive resources, making users' task engagement more difficult. Furthermore, the presented study takes into account the moderating effect of previous experience with MTD. Two key factors of technology adoption theories were included in the study: familiarity and self-efficacy with the technology. Sixty university students, invited to a usability lab, were asked to perform information search tasks on an online encyclopaedia. The tasks were designed to elicit the most commonly used mouse actions (e.g. right click, left click, scrolling, zooming, keyword entry…). Two conditions were created: (1) MTD use and (2) laptop use (with keyboard and mouse). The cognitive load, self-efficacy, familiarity and task engagement scales were adapted to the MTD context.
Furthermore, the eye-tracking measurements offer additional information about user behaviour and cognitive load. Our study aims to clarify some important aspects of the usage of MTD and their added value compared to a laptop in a student learning context. More precisely, the outcomes will clarify the suitability of MTD for the processes at stake and the role of previous knowledge in the adoption process, and will provide some interesting insights into the user experience with such devices.
Abstract:
Over the last decade, multi-touch devices (MTD) have spread to a range of contexts. In the learning context, MTD accessibility leads more and more teachers to use them in their classroom, assuming that they will improve learning activities. Despite a growing interest, only a few studies have focused on the impact of MTD use in terms of performance and suitability in a learning context. However, even if the use of touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance web surfing), we notice that the use of MTD may lead to a less favorable outcome. More precisely, tasks that require users to generate complex and/or less common gestures may increase extraneous cognitive load and impair performance, especially for intrinsically complex tasks. It is hypothesized that task and gesture complexity will affect users' cognitive resources and decrease task efficacy and efficiency. Because MTD are supposed to be more appealing, it is assumed that they will also affect cognitive absorption. The present study also takes into account users' prior knowledge concerning MTD use and gestures, by using experience with MTD as a moderator. Sixty university students were asked to perform information search tasks on an online encyclopedia. Tasks were set up so that users had to generate the most commonly used mouse actions (e.g. left/right click, scrolling, zooming, text encoding…). Two conditions were created, MTD use and laptop use (with mouse and keyboard), in order to compare the two devices. An eye-tracking device was used to measure users' attention and cognitive load. Our study sheds light on some important aspects of the use of MTD and their added value compared to a laptop in a student learning context.
Abstract:
While the number of traditional laptops and computers sold has dipped slightly year over year, manufacturers have developed new hybrid laptops with touch screens to build on the tactile trend. This market is moving quickly to make touch the rule rather than the exception, and sales of these devices have tripled since the launch of Windows 8 in 2012, reaching more than sixty million units sold in 2015. Unlike tablets, which benefit from easy-to-use applications specially designed for tactile interaction, hybrid laptops are intended to be used with regular user interfaces. Hence, one could ask whether tactile interactions are suited to every task and activity performed with such interfaces. Since hybrid laptops are increasingly used in educational settings, this study focuses on information search tasks, which are commonly performed for learning purposes. It is hypothesized that tasks that require complex and/or less common gestures will increase users' cognitive load and impair task performance in terms of efficacy and efficiency. A study was carried out in a usability laboratory with 30 participants whose prior experience with tactile devices was controlled. They were asked to perform information search tasks on an online encyclopaedia using only the touch screen of a hybrid laptop. Tasks were selected with respect to their level of cognitive demand (the amount of information that had to be maintained in working memory) and the complexity of the gestures needed (left and/or right clicks, zoom, text selection and/or input), and grouped into 4 sets accordingly. Task performance was measured by the number of tasks succeeded (efficacy) and the time spent on each task (efficiency). Perceived cognitive load was assessed with a questionnaire given after each set of tasks. An eye-tracking device was used to monitor users' attention allocation and to provide objective cognitive load measures based on pupil dilation and the Index of Cognitive Activity.
Each experimental run took approximately one hour. The results of this within-subjects design indicate that tasks involving complex gestures led to lower efficacy, especially when the tasks were cognitively demanding. Regarding efficiency, there were no significant differences between sets of tasks, except for tasks with low cognitive demand and complex gestures, which required more time to complete. Surprisingly, users who reported the most experience with tactile devices spent more time than less frequent users. Cognitive load measures indicate that participants reported having devoted more mental effort to the interaction when they had to use complex gestures.
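The pupil-based measures mentioned in this abstract can be illustrated with a minimal sketch. The Index of Cognitive Activity itself is a proprietary wavelet-based analysis of rapid pupil oscillations and is not reproduced here; the function below computes only a simple baseline-corrected mean pupil diameter, a common task-evoked proxy for cognitive load. All sample values are hypothetical.

```python
def pupil_load_index(task_samples, baseline_samples):
    """Baseline-corrected mean pupil diameter (mm) as a simple
    cognitive-load proxy. None entries mark blinks / lost samples
    and are dropped before averaging."""
    task = [s for s in task_samples if s is not None]
    base = [s for s in baseline_samples if s is not None]
    return sum(task) / len(task) - sum(base) / len(base)

# Hypothetical diameters recorded during a task vs. a pre-task baseline:
delta = pupil_load_index([3.6, 3.7, None, 3.8], [3.5, 3.5])
```

A positive delta indicates pupil dilation relative to baseline, which is commonly interpreted as increased mental effort; a per-task comparison of such deltas is one way the gesture-complexity effect above could be quantified.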
Abstract:
Contrary to popular belief, a recent empirical study using eye tracking has shown that a non-clinical sample of socially anxious adults did not avoid the eyes during face scanning. Using eye-tracking measures, we sought to extend these findings by examining the relation between stable shyness and face-scanning patterns in a non-clinical sample of 11-year-old children. We found that shyness was associated with longer dwell time on the eye region than on the mouth, suggesting that some shy children were not avoiding the eyes. Shyness was also correlated with fewer first fixations to the nose, which is thought to reflect the typical global strategy of face processing. The present results replicate and extend recent work on social anxiety and face scanning in adults to shyness in children. These preliminary findings also provide support for the notion that some shy children may be hypersensitive to detecting the social cues and intentions conveyed by others' eyes. Theoretical and practical implications for understanding the social cognitive correlates and treatment of shyness are discussed. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Background: From a young age the typical development of social functioning relies upon the allocation of attention to socially relevant information, which in turn allows experience at processing such information and thus enhances social cognition. As such, research has attempted to identify the developmental processes that are derailed in some neuro-developmental disorders that impact upon social functioning. Williams syndrome (WS) and Autism are disorders of development that are characterized by atypical yet divergent social phenotypes and atypicalities of attention to people.
Methods: We used eye tracking to explore how individuals with WS and Autism attended to, and subsequently interpreted, an actor’s eye gaze cue within a social scene. Images were presented for three seconds, initially with an instruction simply to look at the picture. The images were then shown again, with the participant asked to identify the object being looked at. Allocation of eye-gaze in each condition was analyzed by ANOVA and accuracy of identification was compared with t-tests.
Results: Participants with WS allocated more gaze time to face and eyes than their matched controls both with and without being asked to identify the item being looked at; while participants with Autism spent less time on face and eyes in both conditions. When cued to follow gaze, participants with WS increased gaze to the correct targets, while those with Autism looked more at the face and eyes but did not increase gaze to the correct targets, while continuing to look much more than their controls at implausible targets. Both groups identified fewer objects than their controls.
Conclusions: The atypicalities found are likely to be entwined with the deficits shown in interpreting social cognitive cues from the images. WS and Autism are characterized by atypicalities of social attention that impact upon socio-cognitive expertise, but importantly the type of atypicality is syndrome-specific.
Abstract:
Eye-tracking studies have shown how people with autism spend significantly less time looking at socially relevant information on-screen compared to those developing typically. This has been suggested to impact on the development of socio-cognitive skills in autism. We present novel evidence of how attention atypicalities in children with autism extend to real-life interaction, in comparison to typically developing (TD) children and children with specific language impairment (SLI). We explored the allocation of attention during social interaction with an interlocutor, and how aspects of attention (awareness checking) related to traditional measures of social cognition (false belief attribution). We found divergent attention allocation patterns across the groups in relation to social cognition ability. Even though children with autism and SLI performed similarly on the socio-cognitive tasks, there were syndrome-specific atypicalities of their attention patterns. Children with SLI were most similar to TD children in terms of prioritising attention to socially pertinent information (eyes, face, awareness checking). Children with autism showed reduced attention to the eyes and face, and slower awareness checking. This study provides unique and timely insight into real-world social gaze (a)typicality in autism, SLI and typical development, its relationship to socio-cognitive ability, and raises important issues for intervention.
Abstract:
Background: Identifying new and more robust assessments of proficiency/expertise (finding new "biomarkers of expertise") in histopathology is desirable for many reasons. Advances in digital pathology permit new and innovative tests such as flash viewing tests and eye tracking and slide navigation analyses that would not be possible with a traditional microscope. The main purpose of this study was to examine the usefulness of time-restricted testing of expertise in histopathology using digital images.
Methods: 19 novices (undergraduate medical students), 18 intermediates (trainees), and 19 experts (consultants) were invited to give their opinion on 20 general histopathology cases after 1 s and 10 s viewing times. Differences in performance between groups were measured and the internal reliability of the test was calculated.
Results: There were highly significant differences in performance between the groups using the Fisher's least significant difference method for multiple comparisons. Differences between groups were consistently greater in the 10-s than the 1-s test. The Kuder-Richardson 20 internal reliability coefficients were very high for both tests: 0.905 for the 1-s test and 0.926 for the 10-s test. Consultants had levels of diagnostic accuracy of 72% at 1 s and 83% at 10 s.
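The Kuder-Richardson 20 coefficient reported here can be computed directly from dichotomous (0/1) item scores; a minimal sketch with illustrative response data, using the population variance of total scores:

```python
def kr20(responses):
    """Kuder-Richardson 20 internal reliability for dichotomous items.

    responses: one list of 0/1 item scores per participant.
    KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total scores)),
    where p_i is the proportion answering item i correctly, q_i = 1 - p_i.
    """
    k = len(responses[0])   # number of items
    n = len(responses)      # number of participants
    pq = 0.0
    for i in range(k):
        p = sum(r[i] for r in responses) / n
        pq += p * (1 - p)
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - pq / var)

# Perfectly consistent responders yield KR-20 = 1.0:
consistent = [[1, 1, 1], [1, 1, 1], [0, 0, 0], [0, 0, 0]]
```

Values near the 0.9 range reported above indicate that the items rank participants very consistently; inconsistent response patterns drive the coefficient down (it can even go negative).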
Conclusions: Time-restricted tests using digital images have the potential to be extremely reliable tests of diagnostic proficiency in histopathology. A 10-s viewing test may be more reliable than a 1-s test. Over-reliance on "at a glance" diagnoses in histopathology is a potential source of medical error due to over-confidence bias and premature closure.
Abstract:
Among the many discussions and studies related to video games, one of the most recurrent, widely debated and important concerns the experience of playing video games. The gameplay experience, as appropriated in this study, is the result of the interplay between two essential elements: a video game and a player. Existing studies have explored the experience of video game playing from the perspective of either the video game or the player, but none appear to balance both of these elements equally. The study presented here contributes to the ongoing debate with a gameplay experience model. The proposed model, which seeks to balance the video game and player elements equally, considers the gameplay experience to be both an interactive experience (related to the process of playing the video game) and an emotional experience (related to the outcome of playing the video game). The mutual influence of these two experiences during video game play ultimately defines the gameplay experience. Several dimensions, related to both the video game and the player, contribute to this gameplay experience: the video game includes mechanics, interface and narrative dimensions; the player includes motivations, expectations and background dimensions. The gameplay experience is also initially defined by a gameplay situation, conditioned by the environment in which gameplay takes place and the platform on which the video game is played. In order to initially validate the proposed model and attempt to show a relationship among its multiple dimensions, a multi-case study was carried out using two different video games and player samples. In one study, results show significant correlations between multiple model dimensions, and evidence that video game related changes influence player motivations as well as player visual behavior.
In the player-specific analysis, results show that while players may differ in terms of background and expectations regarding the game, their motivation to play is not necessarily different, even if their performance in the game is weak. While further validation is necessary, this model not only contributes to the gameplay experience debate, but also demonstrates, in a given context, how player and video game dimensions evolve during video game play.
Abstract:
The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye tracking to examine whether individual differences in visual attention during AV speech processing in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at the age of 14–16 months, using two measures of language development: the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that infants who were less efficient in auditory speech processing at 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants.
Abstract:
Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months, with an increase in the time spent looking at articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012), Proc. Natl Acad. Sci. USA, 109, 1431–1436; Tomalski et al. (2012), Eur. J. Dev. Psychol., 1–14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life.
Abstract:
In this thesis, three main questions were addressed using event-related potentials (ERPs): (1) the timing of lexical semantic access, (2) the influence of "top-down" processes on visual word processing, and (3) the influence of "bottom-up" factors on visual word processing. The timing of lexical semantic access was investigated in two studies using different designs. In Study 1, 14 participants completed two tasks: a standard lexical decision (LD) task, which required a word/nonword decision for each target stimulus, and a semantically primed version (LS) of it using the same category of words (e.g., animal) within each block, following which participants made a category judgment. In Study 2, another 12 participants performed a standard semantic priming task, where target stimulus words (e.g., nurse) could be either semantically related or unrelated to their primes (e.g., doctor, tree), but the order of presentation was randomized. We found evidence in both ERP studies that lexical semantic access might occur early, within the first 200 ms (at about 170 ms for Study 1 and about 160 ms for Study 2). Our results are consistent with more recent ERP and eye-tracking studies and contrast with the traditional research focus on the N400 component. "Top-down" processes, such as a person's expectations and strategic decisions, were possible in Study 1 because of the blocked design, but not in Study 2 with its randomized design. Comparing the results of the two studies, we found that visual word processing could be affected by a person's expectations, and that this effect occurred early, at a sensory/perceptual stage: a semantic task effect on the P1 component at about 100 ms in the ERP was found in Study 1, but not in Study 2. Furthermore, we found that such "top-down" influence on visual word processing might be mediated through separate mechanisms depending on whether the stimulus was a word or a nonword.
"Bottom-up" factors involve inherent characteristics of particular words, such as bigram frequency (the total frequency of the two-letter combinations of a word), word frequency (the frequency of the written form of a word), and neighborhood density (the number of words that can be generated by changing one letter of an original word or nonword). A bigram frequency effect was found when comparing the results of Studies 1 and 2, and it was examined more closely in Study 3. Fourteen participants performed a similar standard lexical decision task, but the words and nonwords were selected systematically to provide a greater range in the aforementioned factors. As a result, a total of 18 word conditions were created, with 18 nonword conditions matched on neighborhood density and neighborhood frequency. Using multiple regression analyses, we found that the P1 amplitude was significantly related to bigram frequency for both words and nonwords, consistent with the results of Studies 1 and 2. In addition, word frequency and neighborhood frequency also influenced the P1 amplitude, separately for words and for nonwords, and there appeared to be a spatial dissociation between the two effects: for words, the word frequency effect on P1 was found at the left electrode site; for nonwords, the neighborhood frequency effect on P1 was found at the right electrode site. The implications of our findings are discussed.
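The lexical variables defined above are straightforward to operationalize; a sketch, assuming a small illustrative lexicon and a hypothetical corpus bigram count table:

```python
from string import ascii_lowercase

def neighborhood(word, lexicon):
    """Coltheart's N: the set of lexicon words formed by substituting
    exactly one letter of `word` (same length, same positions)."""
    neighbors = set()
    for i, ch in enumerate(word):
        for letter in ascii_lowercase:
            if letter != ch:
                candidate = word[:i] + letter + word[i + 1:]
                if candidate in lexicon:
                    neighbors.add(candidate)
    return neighbors

def bigram_frequency(word, bigram_counts):
    """Summed corpus frequency of the word's adjacent letter pairs;
    unseen bigrams count as zero."""
    return sum(bigram_counts.get(word[i:i + 2], 0)
               for i in range(len(word) - 1))

# Illustrative lexicon: "cat" has three neighbors (bat, cot, cap).
lexicon = {"cat", "bat", "cot", "cap", "dog"}
```

Neighborhood density is then `len(neighborhood(word, lexicon))`, and neighborhood frequency would sum the word frequencies of those neighbors; a real analysis would use a full corpus-derived lexicon and count table rather than these toy values.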
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
The data were analyzed with software designed by François Courtemanche and Féthi Guerdelli. The game sessions took place at the Laboratoire de recherche en communication multimédia of the Université de Montréal.
Abstract:
A growing body of research on Human-Computer Interaction (HCI) attempts to perform fine-grained analyses of interaction in order to identify what influences user behaviour. In the evaluation of both user performance and user experience, particular attention is now paid to emotional and cognitive reactions during interaction. Standard qualitative approaches are limited, as they rely on observation and post-interaction interviews, which restricts the precision of the diagnosis. Since user experience and emotional reactions are highly dynamic and contextualized in nature, evaluation approaches must be so as well in order to allow a precise diagnosis of the interaction. This thesis presents a quantitative and dynamic evaluation approach that contextualizes users' reactions in order to identify their antecedents in the interaction with a system. To this end, the work is organized around three axes: (1) automatic recognition of the user's goals and task structure, using eye-tracking measures and activity in the environment, through machine learning; (2) inference of psychological constructs (arousal, emotional valence and cognitive load) through the analysis of physiological signals; (3) diagnosis of the interaction based on the dynamic coupling of the two preceding operations. The ideas behind our approach and its development are illustrated through their application in two experimental contexts: e-commerce and simulation-based learning. We also present the complete software tool that was implemented to allow evaluation professionals (e.g. ergonomists, game designers, trainers) to use the proposed approach for HCI evaluation. The tool is designed to facilitate the triangulation of the measurement devices involved in this work and to integrate with classical interaction evaluation methods (e.g. questionnaires and coding of observations).
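The dynamic coupling of task recognition and physiological inference described in this abstract amounts, at its simplest, to a timestamp join between the two streams; a minimal sketch in which all names and data are hypothetical:

```python
def couple_streams(inferences, task_events):
    """Attach each physiological inference (timestamp, construct value)
    to the recognized task episode active at that time, so that a
    reaction can be traced back to its antecedent in the interaction.

    inferences:  list of (timestamp, value) pairs, e.g. cognitive load.
    task_events: list of (start, end, label) episodes, non-overlapping.
    """
    coupled = []
    for t, value in inferences:
        for start, end, label in task_events:
            if start <= t < end:
                coupled.append((label, t, value))
                break  # episodes do not overlap
    return coupled

# Hypothetical episodes from task recognition and load inferences:
episodes = [(0, 5, "product_search"), (5, 10, "checkout")]
coupled = couple_streams([(1, 0.2), (6, 0.8)], episodes)
```

In practice the two streams come from different devices, so clock synchronization between the eye tracker and the physiological sensors has to be established before any such join is meaningful, which is one reason the triangulation support mentioned above matters.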