29 results for Hand-held devices
at Universidade do Minho
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, used in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a neural network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
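The centroid distance signature named in this abstract is a standard shape descriptor: the distance from the contour centroid to points sampled along the hand contour. A minimal sketch in Python follows; the function name, sample count and normalization scheme are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def centroid_distance(contour, n_samples=64):
    """Centroid-distance signature: distances from the contour centroid
    to points sampled uniformly along the contour, normalized by the
    maximum distance so the feature is scale-invariant."""
    pts = np.asarray(contour, dtype=float)            # (M, 2) contour points
    cx, cy = pts.mean(axis=0)                         # contour centroid
    idx = np.linspace(0, len(pts) - 1, n_samples).astype(int)
    sampled = pts[idx]                                # uniform resampling
    dist = np.hypot(sampled[:, 0] - cx, sampled[:, 1] - cy)
    return dist / dist.max()                          # scale normalization

# toy example: a square contour, whose four corners are all equidistant
# from the centroid, so the normalized signature is flat
signature = centroid_distance([(0, 0), (0, 4), (4, 4), (4, 0)])
```

A fixed-length vector like this can be fed directly to the kind of classifiers compared in the study, which is one reason it is cheap enough for real-time use.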
Abstract:
"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Doctoral thesis in Electronics and Computer Engineering
Abstract:
Implantable devices must exhibit mechanical properties similar to those of native tissues to promote appropriate cellular behavior and regeneration. Herein, we report a new membrane manufacturing method based on the synthesis of polyelectrolyte complexes (PECs) that exhibit saloplasticity, i.e. variable physical chemistry using salt as a plasticizer. This is a green chemistry approach, as PECs generate structures stabilized solely by reversible electrostatic interactions, completely avoiding the use of harmful crosslinkers. Furthermore, natural polyelectrolytes, chitosan and alginate, were used. Upon mixing them, membranes were obtained by drying the PECs at 37 °C, yielding compact PECs without resorting to organic solvents. The plasticizing effect of salt after synthesis was shown by measuring tensile mechanical properties, which were lower when samples were immersed in high ionic strength solutions. Salt was also used during membrane synthesis in different quantities (0 M, 0.15 M and 0.5 M NaCl), yielding structures with no significant differences in morphology and degradation (around 15% after 3 months in lysozyme). However, swelling was higher (about 10x) when membranes were synthesized in the presence of salt. In vitro cell studies using L929 fibroblasts showed that cells adhered and proliferated preferentially on membranes fabricated in the presence of salt (i.e. the membranes with lower tensile strength). Structures with precisely controlled physical-chemical properties open a path to tissue engineering strategies that depend on fine-tuning mechanical properties and cellular adhesion simply by changing the ionic strength during membrane manufacture.
Abstract:
Several studies have shown that people with disabilities benefit substantially from access to a means of independent mobility and assistive technology. Researchers are using technology originally developed for mobile robots to create easier-to-use wheelchairs, with which people with disabilities can gain a degree of independence in performing daily life activities. In this work a computer vision system is presented that is able to drive a wheelchair with a minimum number of finger commands. The user's hand is detected and segmented with a Kinect camera, and fingertips are extracted from the depth information and used as wheelchair commands.
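One common way to extract fingertip candidates from a depth-segmented hand contour (a standard approach, not necessarily this system's exact method) is to look for local maxima of the centroid-to-contour distance profile. A small self-contained sketch, with the relative-distance threshold chosen arbitrarily for illustration:

```python
import math
import numpy as np

def fingertip_candidates(contour, min_rel_dist=0.7):
    """Crude fingertip detector: fingertips show up as local maxima of
    the distance from the hand centroid to the contour, above a fixed
    fraction of the largest distance."""
    pts = np.asarray(contour, dtype=float)
    c = pts.mean(axis=0)                       # hand centroid
    d = np.hypot(*(pts - c).T)                 # centroid-to-contour distances
    n = len(pts)
    tips = []
    for i in range(n):
        is_peak = d[i] >= d[(i - 1) % n] and d[i] >= d[(i + 1) % n]
        if is_peak and d[i] > min_rel_dist * d.max():
            tips.append(tuple(pts[i]))
    return tips

# toy contour: a 5-pointed star; the five outer vertices play the
# role of fingertips
star = [((3 if k % 2 == 0 else 1) * math.cos(math.radians(36 * k)),
         (3 if k % 2 == 0 else 1) * math.sin(math.radians(36 * k)))
        for k in range(10)]
tips = fingertip_candidates(star)
```

In a depth-camera pipeline the contour would come from thresholding the depth image around the segmented hand before running a detector like this.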
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system recognizes, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system that helps a robotic soccer referee judge a game in real time.
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, many researchers are working in this area with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. To test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system recognizes, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
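As an illustration of the static-gesture stage described in this abstract, the sketch below trains an SVM on fabricated feature vectors standing in for two hand postures. The data, dimensions and class labels are invented for the example and bear no relation to the authors' dataset or reported accuracy:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# fabricated 16-dimensional feature vectors for two postures
# (in the real system these would come from the hand segmentation module)
open_hand   = rng.normal(loc=1.0, scale=0.1, size=(50, 16))
closed_fist = rng.normal(loc=0.2, scale=0.1, size=(50, 16))

X = np.vstack([open_hand, closed_fist])
y = np.array([0] * 50 + [1] * 50)          # 0 = open hand, 1 = closed fist

clf = SVC(kernel="rbf", gamma="scale")     # RBF-kernel Support Vector Machine
clf.fit(X, y)

# classify a new, unseen "open hand" feature vector
sample = rng.normal(loc=1.0, scale=0.1, size=(1, 16))
prediction = int(clf.predict(sample)[0])
```

A per-frame classifier like this covers static postures only; dynamic gestures need a sequence model such as the per-gesture HMMs the abstract mentions.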
Abstract:
Biometric systems are increasingly used as a means of authentication to provide security in modern systems. The performance of a biometric system depends on its accuracy, processing speed, template size, and the time necessary for enrollment. While much research has focused on the first three factors, enrollment time has received less attention. In this work, we present the findings of our research on user behavior when enrolling in a biometric system. Specifically, we collected information about users' availability for enrollment in hand recognition systems (e.g., hand geometry, palm geometry or any other system requiring positioning the hand on an optical scanner). A sample of 19 participants, chosen randomly regardless of age, gender, profession and nationality, served as test subjects in an experiment to study the patience of users enrolling in a biometric hand recognition system.
Abstract:
This study presents the results of preliminary tests on the interaction between the fingertip and the touch screen. The objective is to identify fingertip posture when interacting with touch screen devices. Ten participants, 7 male and 3 female, took part in the study. Participants were asked to touch targets on mobile device screens by tapping them sequentially and connecting them, performing the tasks in a sitting posture. A tablet with a 10-inch screen and a mobile phone with a 4-inch screen were used. The results showed that all participants predominantly used the thumb to interact with the mobile phone in both single-hand and two-hand postures. The common thumb posture adopted by the participants was the combination of a 60° pitch angle and a 0° roll angle. For interaction with the tablet, across the various postures observed, participants commonly used the index finger at the same combination of 60° pitch and 0° roll angles. The study also observed that a participant with long fingernails touched targets on the mobile device screens using her index or middle fingers at a very low pitch angle.
Abstract:
Coagulase-negative staphylococci (CoNS) are common bacterial colonisers of the human skin. They are often involved in nosocomial infections due to biofilm formation on indwelling medical devices. While biofilm formation has been extensively studied in Staphylococcus epidermidis, little is known regarding other CoNS species. Here, biofilms from six different CoNS species were characterised in terms of composition and architecture. Interestingly, the ability to form a thick biofilm was not associated with any particular species, and high variability in biofilm accumulation was found within the same species. Cell viability assays also revealed different proportions of live and dead cells within biofilms formed by different species, although this parameter was notably similar at the intra-species level. On the other hand, biofilm disruption assays demonstrated important inter- and intra-species differences in extracellular matrix composition. Lastly, confocal laser scanning microscopy (CLSM) experiments confirmed this variability, highlighting important differences and common features of CoNS biofilms. We hypothesised that the observed heterogeneity in biofilm formation was associated with biofilm matrix composition rather than with the cells themselves. Additionally, our results indicate that polysaccharides, DNA and proteins are fundamental components of CoNS biofilm formation.
Abstract:
In previous work we presented a model capable of generating human-like movements for a dual arm-hand robot involved in human-robot cooperative tasks. However, the focus was on the generation of reach-to-grasp and reach-to-regrasp bimanual movements, and timing synchrony was not taken into account. In this paper we extend the previous model to accomplish bimanual manipulation tasks by synchronously moving both arms and hands of an anthropomorphic robotic system. Specifically, the new extended model was designed for two different tasks with different degrees of difficulty. Numerical results were obtained using the IPOPT solver embedded in our MATLAB simulator.
Abstract:
Integrated master's dissertation in Industrial Engineering and Management
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics)