988 results for recognition interaction


Relevance: 60.00%

Abstract:

The preserved activity of immobilized biomolecules in layer-by-layer (LbL) films can be exploited in various applications, including biosensing. In this study, cholesterol oxidase (COX) layers were alternated with layers of poly(allylamine hydrochloride) (PAH) in LbL films whose morphology was investigated with atomic force microscopy (AFM). The adsorption kinetics of COX layers comprised two regimes: a fast, first-order process followed by a slow process fitted with a Johnson-Mehl-Avrami (JMA) function, with an exponent close to 2, characteristic of aggregates growing as disks. The concept of using sensor arrays to increase sensitivity, widely employed in electronic tongues, was extended to biosensing with impedance spectroscopy measurements. Using three sensing units, made of LbL films of PAH/COX and PAH/PVS (polyvinyl sulfonic acid) and a bare gold interdigitated electrode, we were able to detect cholesterol in aqueous solutions down to the 10^-6 M level. This high sensitivity is attributed to the molecular-recognition interaction between COX and cholesterol, and opens the way for clinical tests with low-cost, fast experimental procedures. (C) 2008 Published by Elsevier B.V.
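The two-regime kinetics described above can be sketched as a weighted sum of a first-order term and a JMA term. A minimal sketch; the weights and rate constants below are illustrative, not the paper's fitted values:

```python
import math

def first_order(t, k):
    """Fast regime: first-order kinetics, adsorbed fraction vs. time."""
    return 1.0 - math.exp(-k * t)

def jma(t, k, n=2.0):
    """Slow regime: Johnson-Mehl-Avrami transformed fraction.
    An exponent n close to 2 is characteristic of aggregates growing as disks."""
    return 1.0 - math.exp(-((k * t) ** n))

def coverage(t, a=0.6, k1=0.5, k2=0.05, n=2.0):
    """Total surface coverage as a weighted sum of the two regimes
    (weight a and rates k1, k2 are illustrative placeholders)."""
    return a * first_order(t, k1) + (1.0 - a) * jma(t, k2, n)
```

Fitting such a model to quartz crystal microbalance or AFM-derived coverage data would recover the rate constants and the JMA exponent.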

Relevance: 60.00%

Abstract:

New low-cost sensors and open, free libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, human detection, and gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGBD sensor. It works in real time and does not require visual markers, camera calibration, or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or the ambient light changes. Furthermore, the method was designed as a human interface for controlling domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand. First, the human hand was recognized and the fingers were detected. Second, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.
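The fingertip-detection idea can be sketched in a simplified form: fingertips show up as local maxima of the distance from the hand centroid. The sketch below works on a 2-D contour for illustration only; the paper operates on full 3-D point clouds from an RGBD sensor:

```python
import math

def centroid(points):
    """Mean position of a set of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def fingertip_candidates(contour, k=2):
    """Return contour points whose distance to the centroid is a strict
    local maximum over a window of +/- k neighbours. This is a crude
    fingertip test, not the paper's point-cloud method."""
    cx, cy = centroid(contour)
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    n = len(contour)
    tips = []
    for i in range(n):
        window = [d[(i + j) % n] for j in range(-k, k + 1) if j != 0]
        if d[i] > max(window):
            tips.append(contour[i])
    return tips
```

On a star-shaped contour, the points on the outer spikes are returned as candidates, which is the intuition behind treating extended fingers as radial extrema of the hand shape.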

Relevance: 40.00%

Abstract:

The interaction between natural and sexual selection is central to many theories of how mate choice and reproductive isolation evolve, but their joint effect on the evolution of mate recognition has not, to my knowledge, been investigated in an evolutionary experiment. Natural and sexual selection were manipulated in interspecific hybrid populations of Drosophila to determine their effects on the evolution of a mate recognition system composed of cuticular hydrocarbons (CHCs). The effect of natural selection in isolation indicated that CHCs were costly for males and females to produce. The effect of sexual selection in isolation indicated that females preferred males with a particular CHC composition. However, the interaction between natural and sexual selection had a greater effect on the evolution of the mate recognition system than either process in isolation. When natural and sexual selection were permitted to operate in combination, male CHCs became exaggerated to a greater extent than in the presence of sexual selection alone, and female CHCs evolved against the direction of natural selection. This experiment demonstrated that the interaction between natural and sexual selection is critical in determining the direction and magnitude of the evolutionary response of the mate recognition system.

Relevance: 40.00%

Abstract:

Being a natural form of human-machine interaction, gesture recognition involves a strong research component in areas such as computer vision and machine learning. Gesture recognition is an area with very diverse applications, giving users a more natural and simpler way to communicate with computer-based systems, without the need for extra devices. Thus, the main goal of research on gesture recognition applied to human-machine interaction is to create systems that can identify specific gestures and use them to convey information or to control devices. For that, vision-based interfaces for gesture recognition need to detect the hand quickly and robustly and to perform gesture recognition in real time. Today, vision-based gesture recognition systems work as specific solutions, built to solve a particular problem and configured to operate in a particular way. This research project studied and implemented sufficiently generic solutions, using machine learning algorithms, allowing their application to a broad set of human-machine interface systems for real-time gesture recognition. The proposed solution, the Gesture Learning Module Architecture (GeLMA), makes it simple to define a set of commands that can be based on static and dynamic gestures and can be easily integrated and configured for use in a range of applications. It is a low-cost system, easy to train and use, built solely with code libraries.
The experiments carried out showed that the system achieved an accuracy of 99.2% for static gesture recognition and an average accuracy of 93.7% for dynamic gesture recognition. To validate the proposed solution, two complete systems were implemented. The first is a real-time system able to help a referee officiate a robotic soccer game. The proposed solution combines a vision-based gesture recognition system with the definition of a formal language, CommLang Referee; the resulting system was named the Referee Command Language Interface System (ReCLIS). The system identifies commands based on a set of static and dynamic gestures performed by the referee, which are then sent to a computer interface that transmits the corresponding information to the robots. The second is a real-time system able to interpret a subset of Portuguese Sign Language. The experiments showed that the system was able to reliably recognize the vowels in real time. Although the implemented solution was only trained to recognize the five vowels, the system is easily extensible to recognize the rest of the alphabet. The experiments also showed that the core of vision-based interaction systems can be the same for all applications, thereby facilitating their implementation. The proposed solution also has the advantage of being generic enough to serve as a solid basis for the development of gesture-recognition systems that can be easily integrated with any human-machine interface application. The formal interface-definition language can be redefined, and the system can be easily configured and trained with a different set of gestures to be integrated into the final solution.

Relevance: 40.00%

Abstract:

Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. Thus, the primary goal of gesture recognition research is to create systems that can identify specific human gestures and use them to convey information or control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and real-time gesture recognition. In this study we try to identify hand features that, in isolation, perform better in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that, when used separately, obtain the best results, with accuracies of 91% and 90.1%, respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being simple in terms of computational complexity, which makes them good candidates for real-time hand gesture recognition.
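The centroid-distance feature named above can be sketched as follows. The even sampling and max-normalisation shown here are a common formulation of this descriptor and may differ from the study's exact implementation:

```python
import math

def centroid_distance_signature(contour, n_samples=8):
    """Centroid-distance feature: distances from evenly sampled contour
    points to the shape centroid, normalised by the maximum distance so
    the descriptor is scale-invariant."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    step = len(contour) / n_samples
    sig = [math.hypot(contour[int(i * step)][0] - cx,
                      contour[int(i * step)][1] - cy)
           for i in range(n_samples)]
    m = max(sig)
    return [d / m for d in sig] if m else sig
```

A fixed-length vector like this can be fed directly to a neural network or any other classifier, which is what makes the feature cheap enough for real-time use.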

Relevance: 40.00%

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems could be the same for all applications and thus facilitate implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic enough, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer game referee judge a game in real time.
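The per-gesture HMM classification can be sketched with the standard forward algorithm: one model per gesture, and the model assigning the highest likelihood to the observed sequence wins. The toy parameters below are illustrative; the work trains its models from recorded gesture data.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (no scaling, fine for short
    sequences)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Pick the gesture whose HMM scores the sequence highest.
    models maps gesture name -> (start, trans, emit)."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))
```

In practice each dynamic gesture is quantised into a symbol sequence (for example, discretised motion directions) before being scored this way.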

Relevance: 40.00%

Abstract:

PhD thesis in Electronic and Computer Engineering

Relevance: 40.00%

Abstract:

University of Magdeburg, Faculty of Electrical Engineering and Information Technology, dissertation, 2013

Relevance: 40.00%

Abstract:

The visualization of tools and manipulable objects activates motor-related areas in the cortex, facilitating possible actions toward them. This pattern of activity may underlie the phenomenon of object affordance. Some cortical motor neurons are also covertly activated during the recognition of body parts such as hands. One hypothesis is that different subpopulations of motor neurons in the frontal cortex are activated in each motor program; for example, canonical neurons in the premotor cortex are responsible for the affordance of visual objects, while mirror neurons support motor imagery triggered during handedness recognition. However, the question remains whether these subpopulations work independently. This hypothesis can be tested with a manual reaction time (MRT) task with a priming paradigm to evaluate whether the view of a manipulable object interferes with the motor imagery of the subject's hand. The MRT provides a measure of the course of information processing in the brain and allows indirect evaluation of cognitive processes. Our results suggest that canonical and mirror neurons work together to create a motor plan involving hand movements to facilitate successful object manipulation.

Relevance: 40.00%

Abstract:

A more natural, intuitive, user-friendly, and less intrusive human-computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine classifier with Local Binary Patterns as feature vectors. These detections are employed as input to a tracker to generate a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP is a novel video descriptor that constitutes one of the most important contributions of the paper; it provides much richer spatio-temporal information than other existing state-of-the-art approaches, with a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
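The Local Binary Patterns used as feature vectors can be illustrated with the basic 3x3 operator; the paper's VS-LBP extends this into a spatio-temporal video descriptor, which is not reproduced here:

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours against
    the centre pixel and pack the resulting bits, starting clockwise from
    the top-left neighbour, into an 8-bit code."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= c:
            code |= 1 << bit
    return code
```

Histograms of these codes over image regions form the texture feature vectors that the SVM detector consumes.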

Relevance: 40.00%

Abstract:

The aim of this Master's thesis is the analysis, design, and development of a robust and reliable human-computer interaction interface based on visual hand-gesture recognition. The implementation of the required functions is oriented to the simulation of a classical hardware interaction device, the mouse, by recognizing a specific hand-gesture vocabulary in color video sequences. For this purpose, a prototype hand-gesture recognition system has been designed and implemented, composed of three stages: detection, tracking, and recognition. The system is based on machine learning methods and pattern recognition techniques, which have been integrated with other image processing approaches to achieve high recognition accuracy at a low computational cost. Regarding pattern recognition techniques, several algorithms and strategies applicable to color images and video sequences have been designed and implemented. These algorithms extract spatial and spatio-temporal features from static and dynamic hand gestures in order to identify them in a robust and reliable way. Finally, a visual database containing the necessary vocabulary of gestures for interacting with the computer has been created.

Relevance: 40.00%

Abstract:

Most secretory and membrane proteins are sorted by signal sequences to the endoplasmic reticulum (ER) membrane early during their synthesis. Targeting of the ribosome-nascent chain complex (RNC) involves the binding of the signal sequence to the signal recognition particle (SRP), followed by an interaction of ribosome-bound SRP with the SRP receptor. However, ribosomes can also independently bind to the ER translocation channel formed by the Sec61p complex. To explain the specificity of membrane targeting, it has therefore been proposed that nascent polypeptide-associated complex functions as a cytosolic inhibitor of signal sequence- and SRP-independent ribosome binding to the ER membrane. We report here that SRP-independent binding of RNCs to the ER membrane can occur in the presence of all cytosolic factors, including nascent polypeptide-associated complex. Nontranslating ribosomes competitively inhibit SRP-independent membrane binding of RNCs but have no effect when SRP is bound to the RNCs. The protective effect of SRP against ribosome competition depends on a functional signal sequence in the nascent chain and is also observed with reconstituted proteoliposomes containing only the Sec61p complex and the SRP receptor. We conclude that cytosolic factors do not prevent the membrane binding of ribosomes. Instead, specific ribosome targeting to the Sec61p complex is provided by the binding of SRP to RNCs, followed by an interaction with the SRP receptor, which gives RNC–SRP complexes a selective advantage in membrane targeting over nontranslating ribosomes.

Relevance: 40.00%

Abstract:

The Drosophila CF2II protein, which contains zinc fingers of the Cys2His2 type and recognizes an A+T-rich sequence, behaves in cell culture as an activator of a reporter chloramphenicol acetyltransferase gene. This activity depends on C-terminal but not N-terminal zinc fingers, as does in vitro DNA binding. By site-specific mutagenesis and binding site selection, we define the critical amino acid-base interactions. Mutations of single amino acid residues at the leading edge of the recognition helix are rarely neutral: many result in a slight change in affinity for the ideal DNA target site; some cause major loss of affinity; and others change specificity for as many as two bases in the target site. Compared to zinc fingers that recognize G+C-rich DNA, CF2II fingers appear to bind to A+T-rich DNA in a generally similar manner, but with additional flexibility and amino acid-base interactions. The results illustrate how zinc fingers may be evolving to recognize an unusually diverse set of DNA sequences.

Relevance: 40.00%

Abstract:

The need to digitise music scores has led to the development of Optical Music Recognition (OMR) tools. Unfortunately, the performance of these systems is still far from acceptable. This situation forces the user to be involved in the process, correcting the mistakes made during recognition. However, this correction is performed over the output of the system, so these interventions are not exploited to improve the recognition performance. This work sets out a scenario in which human and machine interact to accurately complete the OMR task with the least possible effort for the user.