3 results for Three-wave interaction
at Universidade do Minho
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, thus facilitating implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
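The static-gesture module described above can be sketched as a multi-class SVM over hand-shape feature vectors. This is a minimal illustration only: the feature extraction, posture labels, and synthetic data below are hypothetical stand-ins, not the descriptors or dataset used in the work.

```python
# Sketch of an SVM-based hand posture classifier (one of the three modules
# described in the abstract). Random clustered vectors stand in for real
# hand-shape descriptors; the posture names are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 16
postures = ["open_hand", "fist", "point"]  # hypothetical labels

# Synthetic feature vectors: each posture clustered around its own centroid.
X = np.vstack([
    rng.normal(loc=i, scale=0.3, size=(n_per_class, n_features))
    for i in range(len(postures))
])
y = np.repeat(postures, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In practice the input features would come from the pre-processing and hand segmentation module rather than from synthetic data.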
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, thus facilitating implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
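The one-HMM-per-gesture scheme in the abstract can be sketched as scoring an observation sequence under each gesture's model and choosing the most likely one. The two toy models and their parameters below are invented for illustration; the thesis's actual gesture vocabulary, observation symbols, and trained parameters are not reproduced here.

```python
# Sketch of dynamic gesture classification with one HMM per gesture:
# score the observed sequence under each model via the forward algorithm
# and pick the gesture whose model gives the highest likelihood.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete-observation HMM.

    pi: initial state distribution, A: state transitions,
    B: emission probabilities (states x symbols), obs: symbol indices.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(np.log(alpha.sum()))

# Two toy gesture models (2 hidden states, 2 observation symbols);
# parameters are hypothetical, not learned from data.
models = {
    "swipe": dict(pi=np.array([0.9, 0.1]),
                  A=np.array([[0.7, 0.3], [0.3, 0.7]]),
                  B=np.array([[0.9, 0.1], [0.1, 0.9]])),
    "circle": dict(pi=np.array([0.5, 0.5]),
                   A=np.array([[0.5, 0.5], [0.5, 0.5]]),
                   B=np.array([[0.2, 0.8], [0.8, 0.2]])),
}

obs = [0, 0, 1, 1]  # an observed symbol sequence
best = max(models, key=lambda g: forward_log_likelihood(obs, **models[g]))
```

In a trained system the per-gesture parameters would come from Baum-Welch estimation on labeled gesture sequences, and the observations from the hand segmentation module.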
Abstract:
This study aims to (a) identify and profile groups of infants according to their behavioral and physiological characteristics, considering their neurobehavioral organization, social withdrawal behavior, and endocrine reactivity to stress, and (b) analyze group differences in the quality of mother–infant interaction. Ninety-seven 8-week-old infants were examined using the Neonatal Behavioral Assessment Scale and the Alarm Distress Baby Scale. Cortisol levels were measured both before and after routine inoculation between 8 and 12 weeks. At 12 to 16 weeks, mother–infant interaction was assessed using the Global Rating Scales of Mother–Infant Interaction. Three groups of infants were identified: (a) "withdrawn"; (b) "extroverted"; (c) "underaroused". Differences between them were found regarding both infant and mother behaviors in the interaction and the overall quality of mother–infant interaction. The identification of behavioral and physiological profiles in infants is an important step in the study of developmental pathways.