51 results for Gesture interfaces
at Universidade do Minho
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond better in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a neural network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
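The centroid-distance feature mentioned above can be sketched in a few lines: the distance from the hand contour's centroid to sampled boundary points, scale-normalised. The contour and sample count below are illustrative only, not the study's actual extraction code.

```python
import math

def centroid_distance_signature(contour, n_samples=8):
    """Distances from the contour centroid to evenly sampled boundary
    points, divided by the maximum distance so scale cancels out."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    step = len(contour) / n_samples
    dists = []
    for i in range(n_samples):
        x, y = contour[int(i * step)]
        dists.append(math.hypot(x - cx, y - cy))
    m = max(dists)
    return [d / m for d in dists]

# A toy square "contour": corners are farther from the centroid than
# edge midpoints, and the signature is invariant to scaling.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
sig = centroid_distance_signature(square, n_samples=8)
```

A classifier (e.g. the neural network named in the abstract) would then be trained on such fixed-length signatures rather than on raw pixels.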
Abstract:
"Lecture Notes in Computational Vision and Biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
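A minimal sketch of the static-gesture module's classification step, assuming scikit-learn's SVC. The feature vectors, posture labels and kernel choice below are made up for illustration; the abstract does not disclose the system's actual features or training data.

```python
from sklearn.svm import SVC

# Toy pre-extracted feature vectors for two hypothetical postures.
X = [[0.9, 0.8], [0.85, 0.9], [0.2, 0.1], [0.15, 0.2]]
y = ["open", "open", "fist", "fist"]

# An RBF-kernel SVM, as one plausible instantiation of the abstract's
# "SVM model" for hand posture recognition.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

# Classify two unseen feature vectors.
pred = clf.predict([[0.88, 0.85], [0.1, 0.15]])
```

In a full pipeline the vectors fed to `predict` would come from the pre-processing and hand segmentation module, frame by frame.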
Abstract:
Doctoral thesis in Electronics and Computer Engineering
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
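The real-time loop implied above (segment the hand, extract features, classify into a letter) can be sketched with stand-in functions; every helper below is a hypothetical placeholder for the paper's actual modules, and the threshold-based "classifier" merely marks where a trained model would sit.

```python
def segment_hand(frame):
    """Stand-in for skin-colour/depth hand segmentation."""
    return frame

def extract_features(hand):
    """Stand-in feature extractor: here just the mean pixel value."""
    return [sum(hand) / len(hand)]

def classify(features):
    """Stand-in for the trained vowel classifier."""
    return "A" if features[0] > 0.5 else "E"

def recognise(frames):
    """Per-frame pipeline: segment -> features -> vowel label."""
    return [classify(extract_features(segment_hand(f))) for f in frames]

# Two fake "frames" standing in for camera input.
letters = recognise([[0.9, 0.8], [0.1, 0.2]])
```

The point of the sketch is the module boundaries, which match the generic architecture described in the other abstracts, not the trivial stand-in logic.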
Abstract:
Tissue-to-tissue interfaces are present in all tissues exhibiting structural, biological and chemical gradients, and serve a wide range of physiological functions. These interfaces mediate load transfer between two adjacent tissues. They are also important structures in sustaining cellular communication to retain the tissue's functional integration and homeostasis. [1] All cells have the capacity to sense and respond to physical and chemical stimuli, and when cultured in three-dimensional (3D) environments they tend to perform their function better than in two-dimensional (2D) environments. Spatial and temporal 3D gradient hydrogels better resemble the natural environment of cells by mimicking their extracellular matrix. [2] In this study we hypothesize that differential functional properties can be engineered by modulation of macromolecule gradients in a cell-seeded three-dimensional hydrogel system. Specifically, differential paracrine secretory profiles can be engineered using human Bone Marrow Stem Cells (hBMSCs). Hence, the specific objectives of this study are to assemble the macromolecular gradient hydrogels, to evaluate their suitability for hBMSC encapsulation in terms of cellular viability, and to assess biofunctionality through the paracrine secretion of hBMSCs over time. The gradient hydrogel solutions were prepared by blending macromolecules such as hyaluronic acid (HA) and collagen (Col) in one solution at different ratios. The gradient hydrogels were fabricated in cylindrical silicone moulds, with the higher-ratio solutions assembled at the bottom of the mould and the two solutions added consecutively on top of each other. Labelling of the macromolecules was performed to confirm the gradient through fluorescence microscopy. Additionally, AFM was conducted to assess gradient hydrogel stiffness. Gradient hydrogel characterization was performed by HA and Col degradation assays, degree of crosslinking and stability.
hBMSCs at P3 were encapsulated into each batch solution at 10⁶ cells/ml, and gradient hydrogels were produced as previously described. The hBMSCs were observed under confocal microscopy to assess viability by Live/Dead® staining. Cellular behaviour concerning proliferation and matrix deposition was also assessed. Secretory cytokine measurement for pro-inflammatory and angiogenic factors was carried out using ELISA. At the genomic level, qPCR was carried out. The 3D gradient hydrogel platform made of different macromolecules proved to be a suitable environment for hBMSCs. The hBMSC gradient hydrogels supported high cell survival and exhibited biofunctionality. Moreover, the 3D gradient hydrogels demonstrated differential secretion of pro-inflammatory and angiogenic factors by the encapsulated hBMSCs. References: 1. Mikos, AG. et al., Engineering complex tissues. Tissue Engineering 12, 3307, 2006. 2. Phillips, JE. et al., Proc Natl Acad Sci USA, 26:12170-5, 2008.
Abstract:
The authors thank the federal agency CAPES and the Foundation for Research Support of the State of São Paulo, Brazil (FAPESP) for providing a PhD scholarship, and the University of Minho, in Portugal, for the international collaboration.
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
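The per-gesture HMM scheme can be illustrated with the forward algorithm: each gesture gets its own model, an observed sequence is scored against all of them, and the highest likelihood wins. The two toy 2-state models and the observation sequence below are invented for the example; they are not the paper's trained parameters.

```python
def forward_likelihood(pi, A, B, obs):
    """P(obs | model) for a discrete HMM via the forward algorithm.
    pi: initial state probs; A: transition matrix; B: emission matrix."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return sum(alpha)

# Toy models over a binary observation alphabet (e.g. quantised motion).
swipe = ([0.9, 0.1],                      # starts in state 0
         [[0.7, 0.3], [0.1, 0.9]],        # drifts to state 1
         [[0.9, 0.1], [0.2, 0.8]])        # emits 0 early, 1 late
circle = ([0.5, 0.5],
          [[0.5, 0.5], [0.5, 0.5]],
          [[0.1, 0.9], [0.9, 0.1]])

obs = [0, 0, 1, 1]  # a quantised feature trajectory
scores = {name: forward_likelihood(*m, obs)
          for name, m in {"swipe": swipe, "circle": circle}.items()}
best = max(scores, key=scores.get)
```

A real system would score log-likelihoods of continuous feature streams, but the decision rule (argmax over per-gesture models) is the same.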
Abstract:
Integrated master's dissertation in Psychology
Abstract:
The bottom of the Red Sea harbors over 25 deep hypersaline anoxic basins that are geochemically distinct and characterized by vertical gradients of extreme physicochemical conditions. Because of strong changes in density, particulate and microbial debris get entrapped in the brine-seawater interface (BSI), resulting in increased dissolved organic carbon, reduced dissolved oxygen toward the brines and enhanced microbial activities in the BSI. These features, coupled with the deep-sea prevalence of ammonia-oxidizing archaea (AOA) in the global ocean, make the BSI a suitable environment for studying the osmotic adaptations and ecology of these important players in the marine nitrogen cycle. Using phylogenomic-based approaches, we show that the local archaeal community of five different BSI habitats (with up to 18.2% salinity) is composed mostly of a single, highly abundant Nitrosopumilus-like phylotype that is phylogenetically distinct from the bathypelagic thaumarchaea; ammonia-oxidizing bacteria were absent. The composite genome of this novel Nitrosopumilus-like subpopulation (RSA3), co-assembled from multiple single-cell amplified genomes (SAGs) from one such BSI habitat, further revealed that it shares ~54% of its predicted genomic inventory with sequenced Nitrosopumilus species. RSA3 also carries several, albeit variable, gene sets that further illuminate the phylogenetic diversity and metabolic plasticity of this genus. Specifically, it encodes a putative proline-glutamate 'switch' with a potential role in osmotolerance and indirect impact on carbon and energy flows. Metagenomic fragment recruitment analyses against the composite RSA3 genome, Nitrosopumilus maritimus, and SAGs of mesopelagic thaumarchaea also reiterate the divergence of the BSI genotypes from other AOA.
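Fragment recruitment, as used above, can be reduced to a toy sketch: align each metagenomic read against a reference genome and keep those above a percent-identity threshold. The ungapped sliding-window "alignment", the sequences and the 95% cutoff below are all illustrative simplifications of real recruitment pipelines.

```python
def identity(read, ref_window):
    """Fraction of matching positions in an ungapped comparison."""
    matches = sum(a == b for a, b in zip(read, ref_window))
    return matches / len(read)

def recruit(reads, genome, min_identity=0.95):
    """Keep reads whose best ungapped hit on the genome passes the
    identity threshold (a crude stand-in for a real aligner)."""
    hits = []
    for read in reads:
        best = max(identity(read, genome[i:i + len(read)])
                   for i in range(len(genome) - len(read) + 1))
        if best >= min_identity:
            hits.append(read)
    return hits

genome = "ACGTACGTAA"
hits = recruit(["ACGT", "TTTT"], genome)
```

Plotting recruited-read identity against genome position is what reveals whether an environmental population (here, the BSI genotypes) diverges from a reference such as Nitrosopumilus maritimus.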
Abstract:
The MAP-i doctoral program of the Universities of Minho, Aveiro and Porto
Abstract:
Novel input modalities such as touch, tangibles or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface could be more or less appropriate for: 1) different age groups; and 2) different basic operations, such as data selection, insertion or manipulation. This work presents the first step of an exploratory evaluation of whether users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults and older adults) when dealing with a selection task. Sixty participants took part in this study to assess how different interfaces may influence the interaction of specific age groups. Four input modalities were used to perform a selection task, and the methodology was based on usability testing (speed, accuracy and user preference). The study suggests a statistically significant difference between mean selection times for each group of users, and also raises new issues regarding the "old" mouse input versus the "new" input modalities.
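A difference between mean selection times across groups is typically tested with a one-way ANOVA; the abstract does not state which test was used, so the F-statistic computation below is a generic sketch with invented timing data, not the study's analysis.

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over lists of measurements."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: group sizes times squared deviation
    # of each group mean from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    # Within-group sum of squares: deviations from each group's mean.
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented selection times (seconds) for two groups.
F = one_way_anova_F([[1, 2, 3], [2, 3, 4]])
```

The F value would then be compared against the F distribution with the corresponding degrees of freedom to obtain a p-value.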
Abstract:
This work was supported by FCT (Fundação para a Ciência e Tecnologia) within Project Scope (UID/CEC/00319/2013), by LIP (Laboratório de Instrumentação e Física Experimental de Partículas) and by Project Search-ON2 (NORTE-07-0162-FEDER-000086), co-funded by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework, through the European Regional Development Fund.
Abstract:
Nowadays, the popularization of information and communication technologies has reached a level where most people spend a lot of time using software for everyday tasks, ranging from games and ordinary time and weather utilities to more sophisticated applications, such as retail or banking. This new way of life is supported by the Internet and by specific applications that have changed the image people had of information and communication technologies. All over the world, these technologies have also been introduced in the first cycle of education, on the grounds that they encourage children's development. Taking this into consideration, we designed and developed a visual explorer system for relational databases that can be used by everyone, from "7 to 77", in an intuitive and easy way, with immediate results – a new database querying experience. In this paper we expose the main characteristics and features of this visual database explorer, showing how it works and how it can be used to execute the most common data manipulation operations over a database.
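At its core, such a visual explorer turns point-and-click selections into SQL. The tiny query builder below is a hypothetical sketch of that translation step; the table and column names are made up, and the paper's actual tool is not shown.

```python
def build_select(table, columns, filters=None):
    """Compose a parameterized SELECT from visual choices: a table,
    a set of columns, and optional filter columns (values bound later)."""
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if filters:
        conds = " AND ".join(f"{c} = ?" for c in filters)
        sql += f" WHERE {conds}"
    return sql

# A user clicks the "customers" table, picks two columns and one filter.
q = build_select("customers", ["name", "city"], filters=["country"])
```

Using `?` placeholders keeps the generated SQL safe to execute with user-supplied filter values through a parameterized API such as Python's sqlite3.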