14 results for Informatics Engineering - Human Computer Interaction
at Universidade do Minho
Abstract:
Hand gesture recognition, being a natural form of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify the hand features that, used in isolation, perform best across the various situations found in human-computer interaction. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset based on our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1%, respectively, obtained with a Neural Network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
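As a rough illustration of the centroid-distance feature mentioned above, the following Python sketch (the contour array, sample count and function name are assumptions for illustration, not taken from the paper) computes the distances from the hand centroid to points sampled along the contour and normalizes them for scale invariance; a vector like this can then be fed to a classifier.

    import numpy as np

    def centroid_distance_signature(contour_xy, n_samples=128):
        """Centroid-distance shape signature for a closed hand contour.

        contour_xy: (N, 2) array of (x, y) points along the hand contour.
        Returns n_samples distances from the centroid, resampled along the
        contour and normalized so the signature is scale invariant.
        """
        contour_xy = np.asarray(contour_xy, dtype=float)
        centroid = contour_xy.mean(axis=0)                  # hand centroid
        # Resample the contour at n_samples evenly spaced indices.
        idx = np.linspace(0, len(contour_xy) - 1, n_samples).astype(int)
        sampled = contour_xy[idx]
        dists = np.linalg.norm(sampled - centroid, axis=1)  # radial distances
        return dists / (dists.max() + 1e-9)                 # scale normalization

Normalizing by the maximum distance keeps the signature comparable across different hand sizes and camera distances, which is one reason such features stay cheap to compute in real time.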
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
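A minimal sketch of the static-gesture stage, assuming scikit-learn and synthetic placeholder data (the paper's actual features, kernel choice and hyper-parameters are not given here): an SVM is trained on hand-posture feature vectors and evaluated on a held-out split.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: hand-shape feature vectors and posture labels.
    X = np.random.rand(200, 32)
    y = np.random.randint(0, 5, size=200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    model.fit(X_tr, y_tr)
    print("static-gesture accuracy:", model.score(X_te, y_te))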
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
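For the dynamic-gesture stage, the sketch below illustrates the "one HMM per gesture" scheme, assuming the hmmlearn library with Gaussian emissions; the feature sequences, number of states and other hyper-parameters are placeholders rather than the paper's settings. Classification picks the gesture whose model assigns the highest log-likelihood to the observed sequence.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_gesture_models(sequences_by_gesture, n_states=5):
        """Fit one Gaussian-emission HMM per gesture.

        sequences_by_gesture: dict mapping gesture name -> list of (T_i, D)
        arrays of per-frame feature vectors (e.g. hand trajectory features).
        """
        models = {}
        for name, seqs in sequences_by_gesture.items():
            X = np.concatenate(seqs)            # stacked frames
            lengths = [len(s) for s in seqs]    # sequence boundaries
            hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            hmm.fit(X, lengths)
            models[name] = hmm
        return models

    def classify(models, sequence):
        """Return the gesture whose HMM gives the highest log-likelihood."""
        return max(models, key=lambda name: models[name].score(sequence))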
Abstract:
Doctoral thesis in Electronics and Computer Engineering
Abstract:
"Lecture Notes in Computational Vision and Biomechanics" series, ISSN 2212-9391, vol. 19
Abstract:
Graphics-based Augmented and Alternative Communication systems are widely used to promote communication in people with Autism Spectrum Disorders. This study discusses the integration of Augmented Reality into communication interventions by relating elements of Augmented and Alternative Communication with Applied Behaviour Analysis strategies. An architecture for an Augmented Reality based interactive system to assist interventions is proposed. STAR provides an Augmented Reality tool to assist interventions performed by therapists, and support for parents to join in and participate in the child’s intervention. Finally, we report on the use of the Augmented Reality tool in interventions with children with Autism Spectrum Disorders.
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface.
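One way to compare the two feature datasets mentioned above is to cross-validate the same classifier on each; the sketch below assumes scikit-learn and random placeholder arrays standing in for the two descriptor sets and the five vowel labels, so the printed numbers are not the paper's results.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Placeholder feature sets extracted from the same labeled frames.
    X_set1 = np.random.rand(300, 20)
    X_set2 = np.random.rand(300, 40)
    y = np.random.randint(0, 5, size=300)   # labels for the five vowels

    for name, X in [("feature set 1", X_set1), ("feature set 2", X_set2)]:
        scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
        print(name, "mean accuracy:", scores.mean())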
Abstract:
"Lecture Notes in Computer Science" series, ISSN 0302-9743, vol. 9121
Abstract:
Eye tracking as an interface to operate a computer has been under research for some time, and new systems are still being developed that offer encouragement to people whose illnesses prevent them from using any other form of interaction with a computer. Although they use computer vision processing and a camera, these systems are usually based on head-mounted technology and are therefore considered contact-type systems. This paper describes the implementation of a human-computer interface based on a fully non-contact eye tracking vision system that allows people with tetraplegia to interact with a computer. As an assistive technology, a graphical user interface with special features was developed, including a virtual keyboard for user communication, fast access to pre-stored phrases and multimedia, and even internet browsing. The system was developed with a focus on low cost, user-friendly functionality, and user independence and autonomy.
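A common selection mechanism for gaze-driven virtual keyboards is dwell time: a key is activated when the gaze rests on it long enough. The sketch below is a generic illustration of that idea, not the paper's actual selection logic; the dwell threshold and the way gaze is mapped to a key are assumptions.

    import time

    class DwellSelector:
        """Select an on-screen key when the gaze stays on it for dwell_s seconds."""

        def __init__(self, dwell_s=1.0):
            self.dwell_s = dwell_s
            self.current_key = None
            self.enter_time = None

        def update(self, key_under_gaze):
            """Call once per frame with the key currently under the gaze (or None)."""
            now = time.monotonic()
            if key_under_gaze != self.current_key:
                self.current_key = key_under_gaze   # gaze moved to a new key
                self.enter_time = now
                return None
            if key_under_gaze is not None and now - self.enter_time >= self.dwell_s:
                self.enter_time = now               # reset so the key can repeat
                return key_under_gaze               # dwell complete: select this key
            return None

A longer dwell threshold reduces accidental selections at the cost of typing speed, which is the usual trade-off such assistive interfaces have to tune per user.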
Abstract:
Previously, we presented a model for generating human-like arm and hand movements on a unimanual anthropomorphic robot involved in human-robot collaboration tasks. The present paper extends that model to address the generation of human-like bimanual movement sequences in scenarios cluttered with obstacles. Movement planning involves large-scale nonlinear constrained optimization problems, which are solved using the IPOPT solver. Simulation studies show that the model generates feasible and realistic hand trajectories for action sequences involving the two hands. The computational cost of the planning allows for real-time human-robot interaction. A qualitative analysis reveals that the movements of the robot exhibit basic characteristics of human movements.
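As a loose illustration of posing movement planning as nonlinear constrained optimization (not the paper's cost function, constraints or the IPOPT solver it uses), the sketch below plans a short 2-D path that stays clear of a circular obstacle, using SciPy's SLSQP solver as a stand-in; all values are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    # Plan K via-points from start to goal, minimizing segment lengths while
    # keeping a minimum clearance from a circular obstacle.
    start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
    obstacle, radius, clearance = np.array([0.5, 0.5]), 0.15, 0.05
    K = 8

    def unpack(z):
        return np.vstack([start, z.reshape(K, 2), goal])

    def cost(z):
        path = unpack(z)
        return np.sum(np.diff(path, axis=0) ** 2)       # penalize long, uneven segments

    def obstacle_margin(z):
        path = unpack(z)
        d = np.linalg.norm(path - obstacle, axis=1)
        return d - (radius + clearance)                 # must stay >= 0

    z0 = np.linspace(start, goal, K + 2)[1:-1].ravel()  # straight-line initial guess
    res = minimize(cost, z0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": obstacle_margin}])
    print("feasible plan found:", res.success)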
Abstract:
This article presents work performed in the maintenance department of a furniture company in Portugal to develop and implement autonomous maintenance. The main objective of the project was to increase and make effective the autonomous maintenance tasks performed by production operators, thereby avoiding unplanned downtime due to equipment failures. Although some autonomous maintenance tasks were already carried out within the company, a preliminary study revealed weaknesses in the application of this tool. In the initial phase of the pilot project, the main problems found in autonomous maintenance were related to the lack of time to carry out these tasks, showing that the stipulated procedures were far from the real needs of the company. To solve these problems, the pilot project introduced several changes to how autonomous maintenance tasks are performed, making them standard and adapted to the reality of each production line. There was a general improvement in the factory indicators and, essentially, a behavioral change, since the operators felt that their opinions were taken into account and began to understand the importance of the small tasks they perform.
Abstract:
Companies in the motorcycle components sector face a dynamic environment resulting from the introduction of new products and the increase in market demand. This dynamic environment requires frequent changes to production lines and flexibility in the processes, which can reduce quality and productivity. This paper presents a Lean Six Sigma improvement project performed on a production line of the company's machining sector in order to eliminate losses that cause low productivity, affecting fulfillment of the production plan and customer satisfaction. The use of the Lean methodology following the DMAIC stages allowed the factors that influence the line's productivity loss to be analyzed. The major problems and causes of reduced productivity identified in this study are the lack of standardization in the setup activities and excessive stoppages for process adjustment, which caused an increase in defects. Control charts, Pareto analysis and cause-and-effect diagrams were used to analyze the problem. In the improvement stage, the changes were based on the reconfiguration of the line layout as well as the modernization of the process. Overall, the project justified an investment in new equipment, defective product units were reduced by 84%, and a 29% increase in line capacity was achieved.
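As a small illustration of the Pareto step used in the analysis stage, the sketch below ranks defect causes by count and reports their cumulative share, which is how the "vital few" causes are singled out; the categories and counts are invented placeholders, not the company's data.

    # Illustrative Pareto analysis of defect causes; counts are placeholders.
    defects = {"setup misalignment": 120, "tool wear": 75, "process adjustment": 260,
               "material flaw": 30, "operator error": 15}

    total = sum(defects.values())
    cumulative = 0
    for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += count
        print(f"{cause:22s} {count:4d}  cumulative {100 * cumulative / total:5.1f}%")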
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics)
Abstract:
The thymus is the central organ responsible for the generation of T lymphocytes (1). Various diseases cause the thymus to produce insufficient T cells, which can lead to immunosuppression (2). Since T cells are essential for protection against pathogens, it is crucial to promote de novo differentiation of T cells in diseased individuals. The available clinical solutions are: 1) a protocol involving the transplant of thymic stroma from unrelated children, applicable only to athymic children (3); and 2) for patients with severe peripheral T cell depletion and reduced thymic activity, the administration of molecules that stimulate the activity of the endogenous thymus (4). A scaffold (CellFoam) was suggested to support thymus regeneration in vivo (5), although this research was discontinued. Herein, we propose an innovative strategy to generate a bioartificial thymus. We use a polycaprolactone nanofiber mesh (PCL-NFM) seeded and cultured with human thymic epithelial cells (hTECs). The cells were obtained from infant thymus collected during pediatric cardio-thoracic surgeries. We report new data on the isolation and characterization of those cells and their interaction with PCL-NFM, by expanding hTECs to relevant numbers and by optimizing cell seeding methods.