11 results for User-computer Interface
at Universidade do Minho
Abstract:
Eye tracking as an interface to operate a computer has been under research for a while, and new systems are still being developed that offer some encouragement to those with illnesses that prevent them from using any other form of interaction with a computer. Although they rely on computer vision processing and a camera, these systems are usually based on head-mounted technology and are therefore considered contact-type systems. This paper describes the implementation of a human-computer interface based on a fully non-contact eye-tracking vision system, designed to allow people with tetraplegia to interact with a computer. As an assistive technology, a graphical user interface with special features was developed, including a virtual keyboard for user communication, fast access to pre-stored phrases and multimedia, and even internet browsing. The system was developed with a focus on low cost, user-friendly functionality, and user independence and autonomy.
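The abstract does not detail the gaze-estimation method, but a common building block in non-contact eye-tracking interfaces is a calibration that maps pupil-center coordinates from the camera image to screen coordinates. The sketch below illustrates one such mapping with a second-order polynomial fit; the function names, the polynomial choice, and the calibration procedure are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the paper's method): mapping pupil-center
# image coordinates to screen coordinates with a polynomial calibration,
# a common approach in remote (non-contact) eye-tracking interfaces.
import numpy as np

def fit_calibration(pupil_xy, screen_xy):
    """Least-squares fit of a 2nd-order polynomial gaze mapping.

    pupil_xy, screen_xy: (N, 2) arrays collected while the user
    fixates N known calibration targets on screen.
    """
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    # Design matrix with 2nd-order terms: 1, x, y, xy, x^2, y^2
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def gaze_to_screen(coeffs, pupil_x, pupil_y):
    """Map one pupil-center measurement to (screen_x, screen_y)."""
    a = np.array([1.0, pupil_x, pupil_y, pupil_x * pupil_y,
                  pupil_x**2, pupil_y**2])
    return a @ coeffs
```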
Abstract:
PhD thesis in Electronics and Computer Engineering
Abstract:
This paper reports on an innovative approach to measuring intraluminal pressure in the upper gastrointestinal (GI) tract, particularly for monitoring GI motility and peristaltic movements. The proposed approach relies on thin-film aluminum strain gauges deposited on top of a Kapton membrane, which in turn lies on top of an SU-8 diaphragm-like structure. This structure enables the Kapton membrane to bend when pressure is applied, thereby straining the gauges and effectively changing their electrical resistance. The sensor, with an area of 3.4 mm², is fabricated using photolithography and standard microfabrication techniques (wet etching). It features a linear response (R² = 0.9987) and an overall sensitivity of 2.6 mV mmHg⁻¹. Additionally, its topology allows high integration capability. The strain gauges' response to pressure was studied and the fabrication process optimized to achieve high sensitivity, linearity, and reproducibility. The sequential acquisition of the different signals is carried out by a microcontroller with a 10-bit ADC and a sample rate of 250 Hz. The pressure signals are then presented in a user-friendly interface, developed with the QtCreator IDE, for better visualization by physicians.
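For illustration, the stated figures (10-bit ADC, 250 Hz sampling, 2.6 mV mmHg⁻¹ sensitivity) are enough to sketch how a raw ADC reading could be converted to a pressure value; the ADC reference voltage, the zero-pressure offset, and all names below are assumptions, not the paper's firmware.

```python
# Minimal sketch: converting raw ADC counts from one strain-gauge
# channel into pressure in mmHg. Only the sensitivity, ADC resolution
# and sample rate come from the abstract; the rest is assumed.
V_REF = 3.3            # ADC reference voltage in volts (assumption)
ADC_BITS = 10          # stated in the abstract
SENSITIVITY = 2.6e-3   # V per mmHg, stated in the abstract
V_OFFSET = 0.0         # zero-pressure output voltage (assumption)
SAMPLE_RATE_HZ = 250   # stated in the abstract

def counts_to_pressure(counts: int) -> float:
    """Map a raw 10-bit ADC reading to pressure in mmHg."""
    volts = counts * V_REF / (2**ADC_BITS - 1)
    return (volts - V_OFFSET) / SENSITIVITY

# e.g. one mid-scale reading, before any offset calibration:
print(counts_to_pressure(512))
```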
Abstract:
"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Solar photovoltaic systems are an increasingly popular option for electricity production, since they produce electrical energy from a clean, renewable energy resource, and their efficiency has been improving over the years as a result of ongoing research. The interface between the dc photovoltaic solar array and the ac electrical grid requires an inverter (dc-ac converter), which should be optimized to extract the maximum power from the photovoltaic solar array. This paper presents a solution based on a current-source inverter (CSI) using continuous control set model predictive control (CCS-MPC). All the power circuits and their respective control systems are described in detail throughout the paper and were tested and validated through computer simulations. The paper presents the simulation results and draws several conclusions.
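The abstract does not give the controller equations, but the idea behind CCS-MPC can be sketched with a one-step-ahead current controller: a discrete model predicts the next inductor current, and the continuous control voltage that minimizes the tracking error is computed in closed form and handed to a modulator (unlike finite-control-set MPC, which picks a switching state directly). Everything below, including the model, parameter values, and names, is an illustrative assumption rather than the paper's design.

```python
# Illustrative sketch of one-step-ahead CCS-MPC for a grid-side
# inductor current. Discrete model: i[k+1] = i[k] + Ts/L * (v[k] - vg[k]).
# Minimizing J = (i_ref - i[k+1])^2 over the continuous input v[k]
# gives the closed-form control law below.
L = 5e-3      # filter inductance in henry (assumed)
Ts = 1e-4     # control period in seconds (assumed)

def ccs_mpc_step(i_meas, i_ref, vg):
    """Return the continuous control voltage that drives i[k+1] to i_ref.

    i_meas: measured inductor current at step k
    i_ref:  current reference at step k+1
    vg:     measured grid voltage at step k
    The result would then be synthesized by the CSI via modulation.
    """
    return vg + L / Ts * (i_ref - i_meas)
```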
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, thus facilitating implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system that helps a robotic soccer referee judge a game in real time.
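As a hedged illustration of the static-gesture module's classifier stage, the sketch below trains an SVM with scikit-learn on synthetic feature vectors; the paper's actual features, kernel, and hyperparameters are not given in the abstract, so all values here are assumptions.

```python
# Illustrative SVM training for hand posture recognition; random data
# stands in for real hand-shape feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one feature vector per segmented hand image (e.g. shape
# descriptors); y: posture labels. Dimensions are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
y = rng.integers(0, 10, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```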
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural form of human interaction, it is an area many researchers are working on, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. Thus, the primary goal of gesture recognition research is to create systems that can identify specific human gestures and use them, for example, to convey information. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
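To make the described pipeline concrete, the sketch below shows a minimal real-time loop of the kind such a system might run: capture a frame, segment the hand, extract features, classify, and overlay the recognized letter. The segmentation method, feature choice, and classifier here are hypothetical placeholders, not the paper's implementation.

```python
# Hypothetical real-time recognition loop using OpenCV; all three
# processing stages are simplified stand-ins for the real modules.
import cv2

def segment_hand(frame):
    """Placeholder skin-colour segmentation via an HSV threshold."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))

def extract_features(mask):
    """Placeholder feature vector: Hu moments of the hand mask."""
    return cv2.HuMoments(cv2.moments(mask)).flatten()

def classify(features):
    """Stand-in for the trained vowel classifier (e.g. an SVM)."""
    return "A"

cap = cv2.VideoCapture(0)                     # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    letter = classify(extract_features(segment_hand(frame)))
    cv2.putText(frame, letter, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("PSL vowel recognizer", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```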
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, thus facilitating implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with trained models able to work in real time, allowing its application in a wide range of human-machine applications.
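The "one HMM per gesture" scheme for dynamic gestures can be sketched with the hmmlearn library: each gesture class gets its own Gaussian HMM, and a new trajectory is labelled by the model with the highest log-likelihood. The features, state count, and all values below are illustrative assumptions; the abstract does not specify them.

```python
# Illustrative per-gesture HMM classification with hmmlearn; toy 2-D
# trajectories stand in for real hand-motion feature sequences.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_gesture_model(sequences, n_states=5):
    """Fit one Gaussian HMM on a list of (T_i, D) feature sequences."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

# One model per gesture class (gesture names are invented here).
models = {
    name: train_gesture_model([rng.normal(loc=i, size=(20, 2))
                               for _ in range(10)])
    for i, name in enumerate(["circle", "swipe"])
}

def classify(seq):
    """Label a trajectory by the model with the best log-likelihood."""
    return max(models, key=lambda name: models[name].score(seq))

print(classify(rng.normal(loc=1, size=(20, 2))))  # -> "swipe"
```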
Abstract:
Novel input modalities such as touch, tangibles, or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of different natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface could be more or less appropriate when it comes to: 1) different age groups; and 2) different basic operations, such as data selection, insertion, or manipulation. This work presents the first step of an exploratory evaluation of whether or not users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults, and older adults) when dealing with a selection task. Sixty participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task, and the methodology was based on usability testing (speed, accuracy, and user preference). The study suggests a statistically significant difference between mean selection times for each group of users, and also raises new issues regarding the "old" mouse input versus the "new" input modalities.
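As an illustration of the kind of significance test such a study typically reports for mean selection times, the sketch below runs a one-way ANOVA across four modalities on synthetic data; the numbers are invented and the study's actual statistical procedure is not stated in the abstract.

```python
# Illustrative one-way ANOVA on selection times (seconds); synthetic
# data for four input modalities, 15 users each.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mouse    = rng.normal(1.1, 0.2, 15)
touch    = rng.normal(0.9, 0.2, 15)
tangible = rng.normal(1.3, 0.3, 15)
gesture  = rng.normal(1.6, 0.4, 15)

f_stat, p_value = stats.f_oneway(mouse, touch, tangible, gesture)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```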
Abstract:
The present study proposes a dynamic constitutive material interface model that includes a non-associated flow rule and high strain rate effects, implemented in the finite element code ABAQUS as a user subroutine. First, the model's capability is validated with numerical simulations of unreinforced blockwork masonry walls subjected to low-velocity impact. The results obtained are compared with field test data and good agreement is found. Subsequently, a comprehensive parametric analysis is carried out with different joint tensile strengths, cohesion values, and wall thicknesses to evaluate the effect of the parameter variations on the impact response of masonry walls.
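The abstract does not detail the constitutive law, but the high strain rate effect it mentions is commonly modelled by scaling strength parameters with a dynamic increase factor (DIF). The sketch below applies a generic log-linear DIF to assumed joint tensile strength and cohesion values; it is a loose illustration of that kind of rate dependence, not the paper's model.

```python
# Loose illustration: generic log-linear dynamic increase factor (DIF)
# applied to joint strength parameters. Coefficients, reference rate,
# and material values are all assumptions.
import math

def dif(strain_rate, ref_rate=1e-6, slope=0.05):
    """Generic log-linear dynamic increase factor, clamped to >= 1."""
    return max(1.0, 1.0 + slope * math.log10(strain_rate / ref_rate))

ft_static, c_static = 0.2e6, 0.35e6   # Pa, assumed joint properties
for rate in (1e-6, 1e-2, 1e1):        # quasi-static to impact (1/s)
    f = dif(rate)
    print(f"rate={rate:.0e}/s  ft={ft_static * f / 1e6:.2f} MPa  "
          f"c={c_static * f / 1e6:.2f} MPa")
```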
Abstract:
Integrated master's dissertation in Industrial Electronics and Computers Engineering