15 results for Digital marketing, Eye tracking, Web usability, User Interface

at Universidade do Minho


Relevance: 100.00%

Abstract:

Master's dissertation in Interactive Media

Relevance: 100.00%

Abstract:

Eye tracking as an interface to operate a computer has been under research for some time, and new systems are still being developed that offer encouragement to people whose illnesses prevent them from using any other form of interaction with a computer. Although these systems use computer vision processing and a camera, they are usually based on head-mounted technology and are therefore considered contact-type systems. This paper describes the implementation of a human-computer interface based on a fully non-contact eye tracking vision system, designed to allow people with tetraplegia to interact with a computer. As an assistive technology, a graphical user interface with special features was developed, including a virtual keyboard for user communication, fast access to pre-stored phrases and multimedia, and even internet browsing. The system was developed with a focus on low cost, user-friendly operation, and user independence and autonomy.
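The paper does not detail how gaze fixations on the virtual keyboard are turned into key presses; the sketch below illustrates dwell-time selection, a mechanism commonly used in gaze-controlled keyboards, assuming a hypothetical stream of (timestamp, x, y) gaze coordinates in screen pixels.

```python
# Illustrative sketch (not from the paper): dwell-time key selection for a
# gaze-controlled virtual keyboard. Gaze sample format and the 0.8 s dwell
# threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Key:
    label: str
    x: float   # top-left corner, pixels
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

DWELL_TIME = 0.8  # seconds the gaze must rest on a key to select it (assumed)

def dwell_select(gaze_samples, keys):
    """Yield key labels selected by dwelling on them.

    gaze_samples: iterable of (t, x, y) tuples (seconds, screen pixels).
    keys: list of Key objects describing the virtual keyboard layout.
    """
    current, dwell_start = None, None
    for t, gx, gy in gaze_samples:
        hit = next((k for k in keys if k.contains(gx, gy)), None)
        if hit is not current:                    # gaze moved to another key (or off the keyboard)
            current, dwell_start = hit, t
        elif hit is not None and t - dwell_start >= DWELL_TIME:
            yield hit.label                       # dwell threshold reached: emit the key press
            dwell_start = t                       # reset so the key does not auto-repeat immediately
```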

Relevance: 100.00%

Abstract:

The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto

Relevance: 100.00%

Abstract:

Integrated master's dissertation in Industrial Electronics and Computers Engineering

Relevance: 100.00%

Abstract:

Integrated master's dissertation in Industrial Electronics and Computers Engineering

Relevance: 100.00%

Abstract:

Master's internship report in Communication Sciences (specialization in Advertising and Public Relations)

Relevance: 100.00%

Abstract:

"Lecture Notes in Computational Vision and Biomechanics series, ISSN 2212-9391, vol. 19"

Relevance: 100.00%

Abstract:

Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. As a natural way of human interaction, it is an area many researchers are working on, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications; one of them is sign language recognition, the communication method of deaf people. Sign languages are not standardized or universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it can easily be extended to the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
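The abstract does not name the feature sets or the classifier used; the sketch below illustrates the recognition step under the assumption of pre-extracted hand-shape feature vectors and a scikit-learn SVM, with hypothetical file names and labels.

```python
# Hypothetical sketch: classify pre-extracted hand-shape feature vectors into
# the five Portuguese Sign Language vowels. Hand detection and feature
# extraction are assumed to happen upstream; file names and labels are invented.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("hand_features.npy")   # one feature vector per frame (hypothetical file)
y = np.load("vowel_labels.npy")    # labels "A", "E", "I", "O", "U"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print(f"Vowel recognition accuracy: {clf.score(X_test, y_test):.3f}")
```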

Relevance: 100.00%

Abstract:

An increasing number of m-Health applications are being developed, benefiting health service delivery. In this paper, a new methodology based on the principle of calm computing, applied to diagnostic and therapeutic procedure reporting, is proposed. A mobile application was designed for the physicians of one of the major Portuguese hospitals, which takes advantage of a multi-agent interoperability platform, the Agency for the Integration, Diffusion and Archive (AIDA). The application allows the visualization of inpatient and outpatient medical reports in a quicker and safer manner, in addition to offering remote access to information. The project shows the advantages of using mobile software in a medical environment, but the first step is always to build or use a flexible, adaptable, and pervasive interoperability platform. The platform offers a comprehensive set of services that restricts the development of mobile software almost exclusively to the design of the mobile user interface. The technology was tested and assessed in a real context by intensivists.
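The abstract does not describe the service interface that AIDA exposes to the mobile application; the sketch below only illustrates the general pattern of a thin mobile client that delegates everything except the user interface to the platform, using a hypothetical REST endpoint and JSON schema that are not AIDA's actual API.

```python
# Hypothetical sketch: a thin client that retrieves a patient's report list from
# an interoperability platform and keeps only presentation logic on the device.
# The endpoint URL, authentication scheme and JSON fields are assumptions.

import requests

BASE_URL = "https://hospital.example/aida/api"   # hypothetical base URL

def fetch_reports(patient_id: str, token: str) -> list:
    """Return the list of diagnostic/therapeutic reports for a patient."""
    resp = requests.get(
        f"{BASE_URL}/patients/{patient_id}/reports",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def render_report_list(reports: list) -> None:
    # A mobile UI layer would render these; here we just print one summary line each.
    for r in reports:
        print(f"{r['date']}  {r['type']}: {r['summary']}")

if __name__ == "__main__":
    render_report_list(fetch_reports("12345", token="<access-token>"))
```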

Relevance: 100.00%

Abstract:

The present study investigated whether oculomotor behavior is influenced by attachment styles. The Relationship Scales Questionnaire was used to assess the attachment styles of forty-eight volunteer university students and to classify them into attachment groups (secure, preoccupied, fearful, and dismissing). Eye movements were recorded while participants engaged in a 3-second free visual exploration of stimuli presenting either a positive or a negative picture together with a neutral picture, all depicting social interactions. The task consisted of identifying whether the two pictures depicted the same emotion. Results showed that the processing of negative pictures was impermeable to attachment style, while the processing of positive pictures was significantly influenced by individual differences in insecure attachment. The highly avoidant attachment groups (dismissing and fearful) showed reduced accuracy, suggesting a higher threshold for recognizing positive emotions compared to the secure group. The groups with higher attachment anxiety (preoccupied and fearful) showed differences in the automatic capture of attention, in particular an increased delay preceding the first fixation on the picture of positive emotional valence. Despite the lenient statistical thresholds induced by the limited sample size of some groups (p < 0.05, uncorrected for multiple comparisons), the current findings suggest that the processing of positive emotions is affected by attachment styles. These results are discussed within a broader evolutionary framework.
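One of the measures reported above is the delay preceding the first fixation on the positive picture; the sketch below shows how such a time-to-first-fixation could be computed from fixation events, assuming a hypothetical fixation format and a rectangular area of interest (AOI), neither of which comes from the paper.

```python
# Hypothetical sketch: time to first fixation on an area of interest (AOI).
# The fixation format, screen layout and AOI coordinates are assumptions.

def time_to_first_fixation(fixations, aoi):
    """Return the onset time (s) of the first fixation inside the AOI, or None.

    fixations: time-ordered list of dicts {"onset": seconds, "x": px, "y": px}.
    aoi: (x_min, y_min, x_max, y_max) in pixels.
    """
    x0, y0, x1, y1 = aoi
    for f in fixations:
        if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1:
            return f["onset"]
    return None

# Example: positive picture assumed to occupy the left half of a 1280x1024 screen.
trial = [{"onset": 0.21, "x": 900, "y": 500}, {"onset": 0.55, "x": 300, "y": 480}]
print(time_to_first_fixation(trial, aoi=(0, 0, 640, 1024)))   # -> 0.55
```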

Relevance: 100.00%

Abstract:

Integrated master's dissertation in Telecommunications and Informatics Engineering

Relevance: 100.00%

Abstract:

Integrated master's dissertation in Industrial Electronics and Computers Engineering

Relevance: 40.00%

Abstract:

Novel input modalities such as touch, tangibles, or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface is more or less appropriate for: 1) different age groups; and 2) different basic operations, such as data selection, insertion, or manipulation. This work presents the first step of an exploratory evaluation of whether users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults, and older adults) when dealing with a selection task. Sixty participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task, and the methodology was based on usability testing (speed, accuracy, and user preference). The study suggests a statistically significant difference between the mean selection times of each group of users, and also raises new issues regarding the "old" mouse input versus the "new" input modalities.
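The abstract reports a statistically significant difference between mean selection times but does not name the test used; the sketch below illustrates one common choice, a one-way ANOVA across the three age groups, on invented selection-time samples.

```python
# Hypothetical sketch: compare mean selection times (seconds) across age groups
# with a one-way ANOVA. The data values are invented; the paper's actual test
# and numbers may differ.

import numpy as np
from scipy import stats

children     = np.array([2.1, 2.4, 1.9, 2.6, 2.2])
adults       = np.array([1.4, 1.6, 1.3, 1.5, 1.7])
older_adults = np.array([2.9, 3.2, 2.7, 3.4, 3.0])

f_stat, p_value = stats.f_oneway(children, adults, older_adults)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates a significant difference between group means;
# post-hoc pairwise tests would then locate which groups differ.
```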

Relevance: 40.00%

Abstract:

Integrated master's dissertation in Information Systems Engineering and Management

Relevance: 40.00%

Abstract:

This research aims to advance blink detection in the context of work activity. Rather than requiring patients to attend a clinic, blink videos can be acquired in a work environment and then automatically analyzed. This paper therefore presents a methodology for the automatic detection of eye blinks in consumer videos acquired with low-cost web cameras. The methodology first detects the face and eyes of the recorded person and then analyzes low-level features of the eye region to build a quantitative feature vector. Finally, this vector is classified into one of the two categories considered (open and closed eyes) using machine learning algorithms. The effectiveness of the proposed methodology was demonstrated, as it provides unbiased results with classification errors under 5%.
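The abstract does not specify which detectors, low-level features, or classifier were used; the sketch below follows the same pipeline (face detection, eye-region features, open/closed classification) under the assumption of OpenCV Haar cascades, normalized pixel intensities as features, and a pre-trained scikit-learn model loaded from a hypothetical file.

```python
# Hypothetical sketch of the described pipeline: detect the face in each frame,
# crop an eye-region band, build a low-level feature vector from it, and classify
# the vector as "open" or "closed". Detector, features and classifier are assumptions.

import cv2
import numpy as np
from joblib import load

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
clf = load("blink_classifier.joblib")   # hypothetical pre-trained open/closed model

def eye_region_features(gray_frame):
    """Return a normalized pixel-intensity vector for the eye band of the first face, or None."""
    faces = face_cascade.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    band = gray_frame[y + h // 4 : y + h // 2, x : x + w]   # rough eye band (heuristic)
    patch = cv2.resize(band, (48, 16))
    return patch.astype(np.float32).ravel() / 255.0

cap = cv2.VideoCapture("webcam_recording.avi")   # hypothetical low-cost webcam video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    feats = eye_region_features(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if feats is not None:
        print(clf.predict([feats])[0])           # "open" or "closed" per frame
cap.release()
```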