966 results for Eye tracking


Relevance: 60.00%

Abstract:

One of the great puzzles in the psychology of visual perception is that the visual world appears to be a coherent whole despite our viewing it through a temporally discontinuous series of eye fixations. The investigators attempted to explain this puzzle from the perspective of sequential visual information integration. In recent years, investigators have hypothesized that information maintained in visual short-term memory (VSTM) gradually becomes a visual mental image in the visual buffer during a delay and can be integrated with currently perceived information. Some preliminary studies have investigated the integration of VSTM with visual percepts, but further research is required to address several questions about the spatio-temporal characteristics, information representation, and mechanism of integrating sequential visual information. Based on the theorized similarity between visual mental imagery and visual perception, this research (comprising three studies) employed the temporal integration paradigm and an empty cell localization task to further explore the spatio-temporal characteristics, information representation, and mechanism of integrating sequential visual information (sequential arrays). Study 1 further explored the temporal characteristics of sequential visual information integration by examining the effects of the encoding time of sequential stimuli on integration. Study 2 further explored the spatial characteristics of integration by investigating the effects of changes in spatial characteristics on the integration of sequential visual information. Study 3 explored the representation of information maintained in VSTM and the integration mechanism by combining behavioral experiments with eye tracking. The results indicated that: (1) Sequential arrays could be integrated without strategic instruction. Increasing the duration of the first array improved performance, whereas increasing the duration of the second array did not. The temporal correlation model could not explain sequential array integration under long-ISI conditions. (2) Stimulus complexity influenced not only overall performance on sequential arrays but also the ISI values at which performance reached asymptote. Sequential arrays could still be integrated when their spatial characteristics changed. During the ISI, constructing and manipulating the visual mental image of array 1 were two separate processing phases. (3) While integrating sequential arrays, people represented the pattern formed by the object images maintained in VSTM, and the topological characteristics of those images influenced fixation location. The image-perception integration hypothesis was supported when the number of dots in array 1 was smaller than the number of empty cells, and the convert-and-compare hypothesis was supported when the number of dots in array 1 was equal to or greater than the number of empty cells. These findings not only advance our understanding of sequential visual information integration but also have practical applications in the design of visual interfaces.
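To make the empty cell localization task concrete, here is a minimal Python sketch of its logic, assuming (purely for illustration, not taken from the authors' materials) a rectangular grid in which the two sequential dot arrays jointly fill every cell except one:

    # Empty cell localization: the two sequential arrays together occupy every
    # grid cell except one, and the observer must report the remaining cell.
    def empty_cell(grid_size, array1, array2):
        """grid_size: (rows, cols); array1/array2: sets of (row, col) dot positions."""
        rows, cols = grid_size
        all_cells = {(r, c) for r in range(rows) for c in range(cols)}
        remaining = all_cells - set(array1) - set(array2)
        if len(remaining) != 1:
            raise ValueError("Ill-formed stimulus: exactly one cell should remain empty.")
        return remaining.pop()

    # Example with a 3 x 3 grid and four dots per array.
    a1 = {(0, 0), (0, 1), (0, 2), (1, 0)}
    a2 = {(1, 2), (2, 0), (2, 1), (2, 2)}
    print(empty_cell((3, 3), a1, a2))  # -> (1, 1)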

Relevance: 60.00%

Abstract:

As Levelt and Meyer (2000) noted, studies of lexical access during the production of multiword utterances such as phrases and sentences raise two novel questions that studies of single-word production do not. First, does access to the different words in a sentence occur in a parallel or a serial fashion? Second, does access to the different words in a sentence occur in an interactive or a discrete fashion? The latter question concerns the horizontal flow of information (Smith & Wheeldon, 2004), a very important aspect of continuous speech production. A variant of the picture–word interference paradigm, combined with eye tracking and a dual-task paradigm, was used in seven experiments to investigate the horizontal flow of semantic and phonological information between nouns in spoken Mandarin Chinese sentences. The results suggested that: 1. Before speech onset, the semantic information of different words across the whole sentence has been activated, whereas phonological activation is limited to the first phrase of the sentence. 2. Before speech onset, speakers look ahead and check the semantic information of later words while the first noun is being processed; such look-ahead for phonological information occurs only within the first phrase of the sentence. 3. After speech onset, speakers concentrate on the content words beyond the first one and check the semantic information of the other words within the same sentence. 4. The results suggest that lexical access to multiple words during spoken sentence production proceeds in a partly serial and partly parallel manner, supporting the unit-by-unit, incremental view proposed by Levelt (2000). 5. The horizontal flow of information during spoken sentence production is not an automatic process and is constrained by cognitive resources.

Relevance: 60.00%

Abstract:

Mobile devices offer a common platform for both leisure and work-related tasks, but this has resulted in a blurred boundary between home and work. In this paper we explore the security implications of this blurred boundary, both for the worker and the employer. Mobile workers may not always make optimal security-related choices when ‘on the go’, and more impulsive individuals may be particularly affected, as they are considered more vulnerable to distraction. In this study we used a task scenario in which 104 users were asked to choose a wireless network when responding to work demands while out of the office. Eye-tracking data were obtained from a subsample of 40 of these participants in order to explore the effects of impulsivity on attention. Our results suggest that impulsive people are more frequent users of public devices and networks in their day-to-day interactions and are more likely to access their social networks on a regular basis. However, they are also likely to make risky decisions when working on the go, processing fewer features before making those decisions. These results suggest that those with high impulsivity may make more use of mobile Internet options for both work and private purposes, but they also show attentional behavior patterns suggesting that they make less considered security-sensitive decisions. The findings are discussed in terms of designs that might support enhanced deliberation, both in the moment and in relation to longer-term behaviors that would contribute to a better work-life balance.
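One way to operationalise "features processed before a decision" from eye-tracking data is to count the distinct interface regions fixated before the choice is made. The Python sketch below illustrates this; the fixation records, region labels and decision timestamp are hypothetical, not the authors' pipeline:

    def features_inspected(fixations, decision_time_s):
        """fixations: list of (timestamp_s, aoi_label or None); count distinct AOIs seen before the decision."""
        return len({aoi for t, aoi in fixations if t < decision_time_s and aoi is not None})

    fixations = [(0.4, "network_name"), (1.1, "signal_strength"),
                 (1.9, "padlock_icon"), (2.6, "network_name"), (3.2, None)]
    print(features_inspected(fixations, decision_time_s=3.0))  # -> 3 distinct features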

Relevance: 60.00%

Abstract:

The advent of modern wireless technologies has seen a shift in focus towards the design and development of educational systems for deployment through mobile devices. The use of mobile phones, tablets and Personal Digital Assistants (PDAs) is steadily growing across the educational sector as a whole. Mobile learning (mLearning) systems developed for deployment on such devices hold great significance for the future of education. However, mLearning systems must be built around the particular learner’s needs, based on both their motivation to learn and their subsequent learning outcomes. This thesis investigates how biometric technologies, in particular accelerometer and eye-tracking technologies, could effectively be employed in the development of mobile learning systems to meet the needs of individual learners. The creation of personalised learning environments must enable improved learning outcomes for users, particularly at an individual level. Therefore, consideration is given to individual learning-style differences within the electronic learning (eLearning) space. The overall area of eLearning is considered, and areas such as biometric technology and educational psychology are explored for the development of personalised educational systems. This thesis explains the basis of the author’s hypotheses and presents the results of several studies carried out throughout the PhD research period. These results show that both accelerometer and eye-tracking technologies can be employed as a Human-Computer Interaction (HCI) method in the detection of student learning styles to facilitate the provision of automatically adapted eLearning spaces. Finally, the author provides recommendations for developers on the creation of adaptive mobile learning systems through the employment of biometric technology as a user-interaction tool within mLearning applications. Further research paths are identified and a roadmap for future research in this area is defined.
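As an illustration of how accelerometer and eye-tracking streams might feed a learning-style detector, the Python sketch below summarises hypothetical raw data into per-session features; the feature names, the threshold and the style labels are assumptions made for illustration and are not taken from the thesis:

    import statistics

    def session_features(accel_magnitudes_g, fixation_durations_ms):
        """Summarise raw device streams into two simple per-session features."""
        return {
            "movement_variability": statistics.pstdev(accel_magnitudes_g),
            "mean_fixation_ms": statistics.fmean(fixation_durations_ms),
        }

    features = session_features(
        accel_magnitudes_g=[1.00, 1.02, 1.10, 0.98, 1.25, 1.05],
        fixation_durations_ms=[180, 220, 240, 400, 310],
    )
    # A real system would pass such features to a trained classifier; this
    # hypothetical rule merely illustrates the mapping to a style label.
    style = "active" if features["movement_variability"] > 0.05 else "reflective"
    print(features, style)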

Relevance: 60.00%

Abstract:

Existing work in Computer Science and Electronic Engineering demonstrates that Digital Signal Processing techniques can effectively identify the presence of stress in the speech signal. These techniques use datasets containing real or actual stress samples, i.e. real-life stress such as 911 calls. Studies that use simulated or laboratory-induced stress have been less successful and inconsistent. Pervasive, ubiquitous computing is increasingly moving towards voice-activated and voice-controlled systems and devices. Speech recognition and speaker identification algorithms will have to improve and take emotional speech into account. Modelling the influence of stress on speech and voice is of interest to researchers from many different disciplines, including security, telecommunications, psychology, speech science, forensics and Human-Computer Interaction (HCI). The aim of this work is to assess the impact of moderate stress on the speech signal. In order to do this, a dataset of laboratory-induced stress is required. While attempting to build this dataset it became apparent that reliably inducing measurable stress in a controlled environment, when speech is a requirement, is a challenging task. This work focuses on the use of a variety of stressors to elicit a stress response during tasks that involve speech content. Biosignal analysis (commercial Brain-Computer Interfaces, eye tracking and skin resistance) is used to verify and quantify the stress response, if any. This thesis explains the basis of the author’s hypotheses on the elicitation of affectively-toned speech and presents the results of several studies carried out throughout the PhD research period. These results show that the elicitation of stress, particularly the induction of affectively-toned speech, is not a simple matter and that many modulating factors influence the stress response process. A model is proposed to reflect the author’s hypothesis on the emotional response pathways involved in eliciting stress with a required speech content. Finally, the author provides guidelines and recommendations for future research on speech under stress. Further research paths are identified and a roadmap for future research in this area is defined.
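As a simple illustration of the kind of biosignal check described here, the Python sketch below compares mean skin-conductance level during a speech task with a resting baseline; the sample values are invented and the calculation is a simplified stand-in, not the thesis procedure:

    import statistics

    def scl_change_uS(baseline_samples, task_samples):
        """Task-minus-baseline change in mean skin conductance level (microsiemens)."""
        return statistics.fmean(task_samples) - statistics.fmean(baseline_samples)

    baseline = [2.1, 2.0, 2.2, 2.1, 2.0]      # resting samples
    speech_task = [2.6, 2.9, 3.1, 2.8, 3.0]   # samples recorded while speaking under a stressor
    print(f"SCL increase: {scl_change_uS(baseline, speech_task):.2f} uS")
    # A sustained positive shift is consistent with (but does not prove) a stress response.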

Relevance: 60.00%

Abstract:

The research project takes place within the technology acceptability framework, which seeks to understand how new technologies are used, and concentrates more specifically on the factors that influence the acceptance of multi-touch devices (MTD) and the intention to use them. Why be interested in MTD? Nowadays, this technology is used in all kinds of human activities, e.g. leisure, study or work activities (Rogowski and Saeed, 2012). However, handling and data entry by means of gestures on a multi-touch screen impose a number of constraints and consequences that remain mostly unknown (Park and Han, 2013). Currently, little research in ergonomic psychology addresses the implications of these new human-computer interactions for task fulfillment. This research project aims to investigate the cognitive, sensorimotor and motivational processes taking place during the use of these devices. The project will analyze the influence of the use of gestures and of the type of gesture used, simple or complex (Lao, Heng, Zhang, Ling, and Wang, 2009), as well as of the feeling of personal self-efficacy in the use of MTD, on task engagement, attention mechanisms and perceived disorientation (Chen, Linen, Yen, and Linn, 2011) when confronted with MTD use. For that purpose, the various above-mentioned concepts will be measured in a usability laboratory (U-Lab) with self-reported methods (questionnaires) and objective indicators (physiological indicators, eye tracking). Overall, the research aims to understand the processes at stake, as well as the advantages and drawbacks of this new technology, in order to favor better compatibility and adequacy between gestures, executed tasks and MTD. The conclusions will allow some recommendations for the use of MTD in specific contexts (e.g. learning contexts).

Relevance: 60.00%

Abstract:

Nowadays, multi-touch devices (MTD) can be found in all kinds of contexts. In the learning context, the availability of MTD leads many teachers to use them in their classroom, to support the use of the devices by students, or to assume that they will enhance learning processes. Despite the rising interest in MTD, few studies have examined their impact in terms of performance or the suitability of the technology for the learning context. However, even if using touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web browsing), the use of MTD may lead to a less favourable outcome. The complexity of producing accurate finger gestures and the split attention this requires (a multi-tasking effect) make using gestures to interact with a touch-sensitive screen more difficult than traditional laptop use. More precisely, it is hypothesized that efficacy and efficiency decrease, as well as the available cognitive resources, making users’ task engagement more difficult. Furthermore, the present study takes into account the moderating effect of previous experience with MTD. Two key factors from technology adoption theories were included in the study: familiarity and self-efficacy with the technology. Sixty university students, invited to a usability lab, were asked to perform information search tasks on an online encyclopaedia. The tasks were created so as to require the most commonly used mouse actions (e.g. right click, left click, scrolling, zooming, keyword entry…). Two conditions were created: (1) MTD use and (2) laptop use (with keyboard and mouse). The cognitive load, self-efficacy, familiarity and task engagement scales were adapted to the MTD context. Furthermore, eye-tracking measurements offer additional information about user behaviours and their cognitive load. Our study aims to clarify some important aspects of MTD usage and the added value compared to a laptop in a student learning context. More precisely, the outcomes will shed light on the suitability of MTD for the processes at stake and the role of previous knowledge in the adoption process, as well as provide some interesting insights into the user experience with such devices.

Relevance: 60.00%

Abstract:

Over the last decade, multi-touch devices (MTD) have spread across a range of contexts. In the learning context, MTD accessibility leads more and more teachers to use them in their classroom, assuming that this will improve learning activities. Despite a growing interest, only a few studies have focused on the impact of MTD use in terms of performance and suitability in a learning context. However, even if using touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web browsing), the use of MTD may lead to a less favorable outcome. More precisely, tasks that require users to produce complex and/or less common gestures may increase extraneous cognitive load and impair performance, especially for intrinsically complex tasks. It is hypothesized that task and gesture complexity will tax users’ cognitive resources and decrease task efficacy and efficiency. Because MTD are supposed to be more appealing, it is also assumed that they will affect cognitive absorption. The present study also takes into account users’ prior knowledge of MTD use and gestures by treating experience with MTD as a moderator. Sixty university students were asked to perform information search tasks on an online encyclopedia. Tasks were set up so that users had to produce the most commonly used mouse actions (e.g. left/right click, scrolling, zooming, text entry…). Two conditions were created, MTD use and laptop use (with mouse and keyboard), in order to compare the two devices. An eye-tracking device was used to measure users’ attention and cognitive load. Our study sheds light on some important aspects of MTD use and its added value compared to a laptop in a student learning context.

Relevance: 60.00%

Abstract:

While the number of traditional laptops and computers sold has dipped slightly year over year, manufacturers have developed new hybrid laptops with touch screens to build on the tactile trend. This market is moving quickly to make touch the rule rather than the exception, and sales of these devices have tripled since the launch of Windows 8 in 2012, to reach more than sixty million units sold in 2015. Unlike tablets, which benefit from easy-to-use applications specially designed for tactile interaction, hybrid laptops are intended to be used with regular user interfaces. Hence, one could ask whether tactile interactions are suited to every task and activity performed with such interfaces. Since hybrid laptops are increasingly used in educational settings, this study focuses on information search tasks, which are commonly performed for learning purposes. It is hypothesized that tasks requiring complex and/or less common gestures will increase users' cognitive load and impair task performance in terms of efficacy and efficiency. A study was carried out in a usability laboratory with 30 participants whose prior experience with tactile devices was controlled. They were asked to perform information search tasks on an online encyclopaedia using only the touch screen of a hybrid laptop. Tasks were selected with respect to their level of cognitive demand (the amount of information that had to be maintained in working memory) and the complexity of the gestures needed (left and/or right clicks, zoom, text selection and/or input), and grouped into four sets accordingly. Task performance was measured by the number of tasks completed successfully (efficacy) and the time spent on each task (efficiency). Perceived cognitive load was assessed with a questionnaire given after each set of tasks. An eye-tracking device was used to monitor users' attention allocation and to provide objective cognitive load measures based on pupil dilation and the Index of Cognitive Activity. Each experimental run took approximately one hour. The results of this within-subjects design indicate that tasks involving complex gestures led to lower efficacy, especially when the tasks were cognitively demanding. Regarding efficiency, there were no significant differences between sets of tasks, except that tasks with low cognitive demand and complex gestures required more time to complete. Surprisingly, users who reported the most experience with tactile devices spent more time than less frequent users. Cognitive load measures indicate that participants reported devoting more mental effort to the interaction when they had to use complex gestures.
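As a concrete illustration of the three indicators named above, the following Python sketch computes efficacy, efficiency and a simple baseline-corrected pupil-dilation index for one set of tasks; the numbers are invented, and the pupil measure is a simplified stand-in rather than the proprietary Index of Cognitive Activity:

    import statistics

    def task_set_metrics(successes, times_s, pupil_mm, baseline_pupil_mm):
        """Efficacy, efficiency and a baseline-corrected pupil index for one task set."""
        return {
            "efficacy": sum(successes) / len(successes),      # proportion of tasks succeeded
            "efficiency_s": statistics.fmean(times_s),        # mean time per task, in seconds
            "pupil_dilation_mm": statistics.fmean(pupil_mm) - baseline_pupil_mm,
        }

    metrics = task_set_metrics(
        successes=[True, True, False, True],    # task outcomes in this set
        times_s=[42.0, 57.5, 90.2, 61.3],       # seconds spent on each task
        pupil_mm=[3.9, 4.1, 4.3, 4.0],          # mean pupil diameter during each task
        baseline_pupil_mm=3.6,                  # resting baseline diameter
    )
    print(metrics)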

Relevance: 60.00%

Abstract:

Contrary to popular belief, a recent empirical study using eye tracking has shown that a non-clinical sample of socially anxious adults did not avoid the eyes during face scanning. Using eye-tracking measures, we sought to extend these findings by examining the relation between stable shyness and face-scanning patterns in a non-clinical sample of 11-year-old children. We found that shyness was associated with longer dwell time on the eye region than on the mouth, suggesting that some shy children were not avoiding the eyes. Shyness was also correlated with fewer first fixations to the nose, which is thought to reflect the typical global strategy of face processing. The present results replicate and extend recent work on social anxiety and face scanning in adults to shyness in children. These preliminary findings also provide support for the notion that some shy children may be hypersensitive to detecting social cues and intentions in others conveyed by the eyes. Theoretical and practical implications for understanding the social cognitive correlates and treatment of shyness are discussed. (C) 2009 Elsevier Ltd. All rights reserved.
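The two gaze measures reported here, dwell time per face region and the region receiving the first fixation, can be illustrated with a minimal Python sketch; the fixation records and region labels below are hypothetical and are not the authors' data or code:

    from collections import defaultdict

    def dwell_and_first_fixation(fixations):
        """fixations: time-ordered list of (region, duration_ms) pairs."""
        dwell = defaultdict(int)
        for region, duration in fixations:
            dwell[region] += duration
        first_region = fixations[0][0] if fixations else None
        return dict(dwell), first_region

    fixations = [("nose", 150), ("eyes", 420), ("eyes", 380), ("mouth", 200)]
    dwell, first = dwell_and_first_fixation(fixations)
    print(dwell)   # total dwell time per face region, in ms
    print(first)   # 'nose': a first fixation consistent with a global scanning strategy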

Relevance: 60.00%

Abstract:

Previous eye-tracking research on the allocation of attention to social information by individuals with autism spectrum disorders is equivocal, and this may be in part a consequence of variation in the stimuli used between studies. The current study explored attention allocation to faces, and within faces, by individuals with Asperger syndrome using a range of static stimuli in which faces were either viewed in isolation or viewed in the context of a social scene. Results showed that faces were viewed typically by the individuals with Asperger syndrome when presented in isolation, but attention to the eyes was significantly diminished in comparison to age- and IQ-matched typical viewers when faces were viewed as part of social scenes. We show that when using static stimuli, there is evidence of atypicality for individuals with Asperger syndrome depending on the extent of social context. Our findings shed light on previous explanations of gaze behaviour that have emphasised the role of movement in atypicalities of social attention in autism spectrum disorders, and highlight the importance of the realistic portrayal of social information for future studies.

Relevance: 60.00%

Abstract:

Autism is a neuro-developmental disorder defined by atypical social behaviour, of which atypical social attention behaviours are among the earliest clinical markers (Volkmar et al., 1997). Eye-tracking studies using still images and movie clips have provided a method for the precise quantification of atypical social attention in ASD. This is generally characterised by diminished viewing of the most socially pertinent regions (eyes) and increased viewing of less socially informative regions (body, background, objects) (Klin et al., 2002; Riby & Hancock, 2008, 2009). Ecological validity has become an increasingly important issue within eye-tracking studies. As yet, however, little is known about the precise nature of the atypicalities of social attention in ASD in real life. Objectives: To capture and quantify gaze patterns of children with an ASD within a real-life setting, compared to two typically developing (TD) comparison groups. Methods: Nine children with an ASD were compared to two age-matched TD groups, a verbal (N=9) and a non-verbal (N=9) comparison group. A real-life scenario was created involving an experimenter posing as a magician, and consisted of three segments: a conversation segment, a magic trick segment, and a puppet segment. The conversation segment explored children’s attentional preferences during a real-life conversation; the magic trick segment explored children’s use of the eyes as a communicative cue; and the puppet segment explored attention capture. Finally, part of the puppet segment explored children’s use of facial information in response to an unexpected event. Results: The most striking difference between the groups was the diminished viewing of the eyes by the ASD group in comparison to both control groups. This was found particularly during the conversation segment, but also during the magic trick and puppet segments. When in conversation, participants with ASD were found to spend a greater proportion of time looking off-screen than TD participants. There was also a tendency for the ASD group to spend a greater proportion of time looking at the experimenter’s mouth. During the magic trick segment, despite the fact that the eyes were not predictive of the correct location, both TD comparison groups continued to use the eyes as a communicative cue, whereas the ASD group did not. In the puppet segment, all three groups spent a similar amount of time looking between the puppet and regions of the experimenter’s face. However, in response to an unexpected event, the ASD group was significantly slower to fixate back on the experimenter’s face. Conclusions: The results demonstrate the reduced salience of socially pertinent information for children with ASD in real life, and they provide support for the findings of previous eye-tracking studies involving scene viewing. However, the results also highlight a pattern of looking off-screen in both the TD and ASD groups. This eye-movement behaviour is likely to be associated specifically with real-life interaction, as it has functional relevance (Doherty-Sneddon et al., 2002). However, the fact that it is significantly increased in the ASD group has implications for their understanding of real-life social interactions.

Relevance: 60.00%

Abstract:

Background: From a young age the typical development of social functioning relies upon the allocation of attention to socially relevant information, which in turn allows experience at processing such information and thus enhances social cognition. As such, research has attempted to identify the developmental processes that are derailed in some neuro-developmental disorders that impact upon social functioning. Williams syndrome (WS) and Autism are disorders of development that are characterized by atypical yet divergent social phenotypes and atypicalities of attention to people.

Methods: We used eye tracking to explore how individuals with WS and Autism attended to, and subsequently interpreted, an actor’s eye gaze cue within a social scene. Images were presented for three seconds, initially with an instruction simply to look at the picture. The images were then shown again, with the participant asked to identify the object being looked at. Allocation of eye-gaze in each condition was analyzed by ANOVA and accuracy of identification was compared with t-tests.

Results: Participants with WS allocated more gaze time to the face and eyes than their matched controls, both with and without being asked to identify the item being looked at, whereas participants with Autism spent less time on the face and eyes in both conditions. When cued to follow gaze, participants with WS increased gaze to the correct targets, whereas those with Autism looked more at the face and eyes but did not increase gaze to the correct targets, and continued to look much more than their controls at implausible targets. Both groups identified fewer objects than their controls.

Conclusions: The atypicalities found are likely to be entwined with the deficits shown in interpreting social cognitive cues from the images. WS and Autism are characterised by atypicalities of social attention that impact upon socio-cognitive expertise, but, importantly, the type of atypicality is syndrome-specific.
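To illustrate the kind of accuracy comparison mentioned in the Methods (identification accuracy compared with t-tests), here is a minimal Python sketch using scipy; the scores are invented for illustration and do not come from the study:

    from scipy import stats

    ws_scores      = [4, 5, 3, 6, 4, 5, 4, 3]   # objects correctly identified, WS group
    control_scores = [7, 6, 8, 7, 6, 7, 8, 6]   # matched control group

    t_stat, p_value = stats.ttest_ind(ws_scores, control_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")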

Relevance: 60.00%

Abstract:

Eye-tracking studies have shown that people with autism spend significantly less time looking at socially relevant information on-screen compared to those developing typically. This has been suggested to impact on the development of socio-cognitive skills in autism. We present novel evidence of how attention atypicalities in children with autism extend to real-life interaction, in comparison to typically developing (TD) children and children with specific language impairment (SLI). We explored the allocation of attention during social interaction with an interlocutor, and how aspects of attention (awareness checking) related to traditional measures of social cognition (false belief attribution). We found divergent attention allocation patterns across the groups in relation to social cognition ability. Even though children with autism and SLI performed similarly on the socio-cognitive tasks, there were syndrome-specific atypicalities in their attention patterns. Children with SLI were most similar to TD children in terms of prioritising attention to socially pertinent information (eyes, face, awareness checking). Children with autism showed reduced attention to the eyes and face, and slower awareness checking. This study provides unique and timely insight into real-world social gaze (a)typicality in autism, SLI and typical development and its relationship to socio-cognitive ability, and raises important issues for intervention.
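As an illustration of one such measure, the Python sketch below computes an awareness-checking latency, the delay between an event in the interaction and the child's first subsequent fixation on the interlocutor's face; the timestamps, region labels and event definition are hypothetical, not taken from the study:

    def awareness_check_latency(event_time_s, fixations):
        """fixations: list of (timestamp_s, region); returns latency in seconds, or None."""
        face_times = [t for t, region in fixations
                      if t >= event_time_s and region in ("eyes", "face")]
        return (min(face_times) - event_time_s) if face_times else None

    fixations = [(11.8, "object"), (12.6, "background"), (13.5, "face"), (14.0, "eyes")]
    print(awareness_check_latency(event_time_s=12.0, fixations=fixations))  # -> 1.5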

Relevance: 60.00%

Abstract:

Background: Identifying new and more robust assessments of proficiency/expertise (finding new "biomarkers of expertise") in histopathology is desirable for many reasons. Advances in digital pathology permit new and innovative tests such as flash viewing tests and eye tracking and slide navigation analyses that would not be possible with a traditional microscope. The main purpose of this study was to examine the usefulness of time-restricted testing of expertise in histopathology using digital images.
Methods: 19 novices (undergraduate medical students), 18 intermediates (trainees), and 19 experts (consultants) were invited to give their opinion on 20 general histopathology cases after 1 s and 10 s viewing times. Differences in performance between groups were measured and the internal reliability of the test was calculated.
Results: There were highly significant differences in performance between the groups using Fisher's least significant difference method for multiple comparisons. Differences between groups were consistently greater in the 10-s test than in the 1-s test. The Kuder-Richardson 20 internal reliability coefficients were very high for both tests: 0.905 for the 1-s test and 0.926 for the 10-s test. Consultants achieved diagnostic accuracy of 72% at 1 s and 83% at 10 s.
Conclusions: Time-restricted tests using digital images have the potential to be extremely reliable tests of diagnostic proficiency in histopathology. A 10-s viewing test may be more reliable than a 1-s test. Over-reliance on "at a glance" diagnoses in histopathology is a potential source of medical error due to over-confidence bias and premature closure.
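For reference, the Kuder-Richardson 20 coefficient reported above can be computed from a matrix of dichotomous (right/wrong) case scores, as in the following Python sketch; the score matrix is invented solely to demonstrate the calculation:

    import statistics

    def kr20(score_matrix):
        """score_matrix: one list of 0/1 item scores per participant."""
        k = len(score_matrix[0])                        # number of test items (cases)
        n = len(score_matrix)                           # number of participants
        totals = [sum(row) for row in score_matrix]
        total_variance = statistics.pvariance(totals)   # variance of total scores
        pq_sum = 0.0
        for item in range(k):
            p = sum(row[item] for row in score_matrix) / n   # proportion answering the item correctly
            pq_sum += p * (1 - p)
        return (k / (k - 1)) * (1 - pq_sum / total_variance)

    scores = [
        [1, 1, 0, 1, 1], [1, 0, 0, 1, 0], [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0], [1, 1, 1, 1, 0], [0, 1, 0, 0, 0],
    ]
    print(round(kr20(scores), 3))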