882 results for Hand gesture recognition


Relevance: 90.00%

Abstract:

A sensing device for a touchless, hand gesture, user interface based on an inexpensive passive infrared pyroelectric detector array is presented. The 2 x 2 element sensor responds to changing infrared radiation generated by hand movement over the array. The sensing range is from a few millimetres to tens of centimetres. The low power consumption (< 50 μW) enables the sensor’s use in mobile devices and in low energy applications. Detection rates of 77% have been demonstrated using a prototype system that differentiates the four main hand motion trajectories – up, down, left and right. This device allows greater non-contact control capability without an increase in size, cost or power consumption over existing on/off devices.
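The abstract does not detail how the four trajectories are separated, but a plausible minimal sketch is to compare first-activation times across the 2 x 2 array: the row or column that triggers first indicates where the hand entered. The layout and timing model below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: inferring swipe direction from a 2x2 PIR array.
# Assumes each element reports the time its signal first crossed a
# detection threshold; layout and timing model are invented.

def classify_swipe(t_tl, t_tr, t_bl, t_br):
    """Classify a swipe as 'up', 'down', 'left' or 'right' from the
    first-activation times of the top-left, top-right, bottom-left and
    bottom-right elements."""
    top, bottom = (t_tl + t_tr) / 2, (t_bl + t_br) / 2
    left, right = (t_tl + t_bl) / 2, (t_tr + t_br) / 2
    row_lag = bottom - top   # > 0: top row fired first -> hand moved down
    col_lag = right - left   # > 0: left column fired first -> hand moved right
    if abs(row_lag) >= abs(col_lag):
        return "down" if row_lag > 0 else "up"
    return "right" if col_lag > 0 else "left"

print(classify_swipe(0.00, 0.00, 0.08, 0.08))  # top row first -> "down"
print(classify_swipe(0.05, 0.00, 0.05, 0.00))  # right column first -> "left"
```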

Relevance: 80.00%

Abstract:

Master's final project submitted for the degree of Master in Informatics and Computer Engineering

Relevance: 80.00%

Abstract:

Human-computer interaction has come to play a fundamental role in today's world. This form of communication continues to evolve, introducing new styles of interaction such as natural interaction. This interaction style first appeared in gaming, but is now being explored in other areas. This dissertation investigates the usefulness of the natural interfaces found in game consoles when combined with education, namely the teaching and learning of the fundamentals of Mathematics. The project was grounded in a study of the Mathematics curriculum for the first cycle of basic education, of several existing applications related to the topic, and of a number of natural-interaction devices. To evaluate the proposed idea, a prototype named Matemática Interativa was developed to enrich the user's learning and foster interest in the subject. The capabilities of the chosen device, the Kinect, are described in depth so as to exploit its potential and to develop and evaluate a gesture recognition engine. Finally, the results of a usability evaluation aimed at validating the Matemática Interativa application are discussed; they suggest the application was successful, while also revealing room for improvement.

Relevance: 80.00%

Abstract:

Dissertation submitted for the degree of Master in Informatics Engineering

Relevance: 80.00%

Abstract:

This master's thesis presents a new system for recognizing human actions in a video sequence. The system takes as input a video sequence captured by a static camera. A learning algorithm first performs a binary segmentation of the video sequence in order to detect and extract the people from the background. To recognize an action, the system then exploits a set of prototypes generated by an MDS-based dimensionality reduction technique applied from two different viewpoints on the video sequence. This technique models each human action in the training base with a set of prototypes (assumed similar within each class) represented in a low-dimensional non-linear space. The prototypes extracted from the two viewpoints are fed to a K-NN classifier, which identifies the human action taking place in the video sequence. Experiments conducted on the Weizmann human action dataset yield interesting results compared with other, often more complicated, state-of-the-art methods. They show the sensitivity of the model to each viewpoint and its effectiveness at recognizing the different actions, with a variable but satisfactory recognition rate; fusing the two viewpoints achieves a high recognition rate.
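The final classification step named above, K-NN over low-dimensional prototypes, can be sketched as follows. The 2-D prototype vectors and labels are invented for illustration; in the thesis they would come from the MDS embedding, which is not reproduced here.

```python
# Illustrative K-NN matching over low-dimensional action prototypes.
from collections import Counter
import math

def knn_classify(prototypes, labels, query, k=3):
    """Return the majority label among the k prototypes nearest to `query`."""
    dists = sorted(
        (math.dist(p, query), lab) for p, lab in zip(prototypes, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Made-up prototypes for two action classes in a 2-D embedded space.
protos = [(0.1, 0.2), (0.0, 0.3), (0.9, 0.8), (1.0, 0.9), (0.2, 0.1)]
labels = ["walk", "walk", "jump", "jump", "walk"]
print(knn_classify(protos, labels, (0.05, 0.25)))  # -> walk
```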

Relevance: 80.00%

Abstract:

This research analyzes the construction of the southern boundary of Bogotá's heritage zone in the 1980s through two simultaneous processes: the demolition of a sector of the Santa Bárbara neighborhood and the heritage designation of La Candelaria. Although both sectors belonged to the old city and shared a common origin, only La Candelaria came to be included in the area considered of historical value. This degree project is therefore chiefly concerned with understanding why the Santa Bárbara neighborhood was excluded from the city's historical narrative, by studying the debate over its historical value, which gave way to a public use of the past and, in Elizabeth Jelin's words, a "struggle for memory".

Relevance: 80.00%

Abstract:

This dissertation presents the implementation of virtual-environment navigation, gesture recognition and interface control, carried out through the Kinect device, in the ITV System: a training system for operators and maintainers of hydroelectric plants and electrical substations. Certain recent improvements are also shown, such as video conversion, audible and visual alarm screens, three-dimensional ambient sound and process narration. Besides presenting the ITV System, the dissertation describes the Kinect device and the algorithm used to compare movement patterns, DTW (Dynamic Time Warping). The design and implementation of the navigation, gesture recognition and interface control are then covered in detail. As a case study, a Virtual Technical Instruction (ITV) prepared specifically to test and evaluate the proposed interface is shown. The results, considered satisfactory, obtained from qualitative questionnaires administered to students of the Universidade Federal do Pará, are then presented. Finally, the work is summed up and ideas for future work are laid out.
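DTW, the pattern-comparison algorithm named in the abstract, can be sketched in a few lines. Real Kinect data would be sequences of joint positions; the 1-D sequences below are illustrative only.

```python
# Minimal dynamic time warping (DTW): distance between two sequences
# that may differ in length and speed.

def dtw_distance(a, b):
    """DTW distance between two sequences of numbers."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

template = [0, 1, 2, 3, 2, 1, 0]          # stored gesture template
performed = [0, 0, 1, 2, 3, 3, 2, 1, 0]   # same gesture, different timing
print(dtw_distance(template, performed))  # -> 0.0 despite the length mismatch
```

Because DTW warps the time axis, a gesture performed faster or slower than the stored template still matches, which is why it suits movement-pattern comparison.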

Relevance: 80.00%

Abstract:

The aim of the Kinship Verification in the Wild Evaluation (held in conjunction with the 2015 IEEE International Conference on Automatic Face and Gesture Recognition, Ljubljana, Slovenia) was to evaluate different kinship verification algorithms. For this task, two datasets were made available and three possible experimental protocols (unsupervised, image-restricted, and image-unrestricted) were designed. Five institutions submitted their results to the evaluation: (i) Politecnico di Torino, Italy; (ii) LIRIS-University of Lyon, France; (iii) Universidad de Las Palmas de Gran Canaria, Spain; (iv) Nanjing University of Aeronautics and Astronautics, China; and (v) Bar Ilan University, Israel. Most of the participants tackled the image-restricted challenge, and experimental results demonstrated better kinship verification performance than the baseline methods provided by the organizers.

Relevance: 80.00%

Abstract:

Ambient Intelligence (AmI) envisions a world where smart electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even when unaware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Its dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of many sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into account when designing and working with WSNs. To handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed; their aim is to combine data to achieve better accuracy and inferences than could be achieved with a single sensor alone. In this dissertation we present our results in building several AmI applications suited to a WSN implementation. The work divides into two main areas: multimodal surveillance and activity recognition. Novel techniques for handling data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented; they allow detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to improve its people-tracking performance.
Furthermore, we embed a PIR sensor in the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental building block of natural interfaces. A challenging objective is to design an activity recognition system able to exploit a redundant but unreliable WSN. We present our work on a novel activity recognition architecture for such a dynamic system. The architecture is hierarchical: simple nodes perform gesture classification, and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of this architecture in terms of increased recognition performance and robustness to faults and noise, and we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work on extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through a gesture recognition algorithm suited to this device of limited computational power. Finally, the development of activity recognition techniques can benefit greatly from shared datasets. We report our experience in building a dataset for activity recognition, freely available to the scientific community for research purposes as a testbench for developing, testing and comparing activity recognition techniques.
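The fusion idea, a meta-classifier that combines a changing number of per-node gesture classifications, can be sketched as confidence-weighted voting that simply tolerates missing nodes. The node outputs and confidence values below are invented; the thesis does not specify this exact fusion rule.

```python
# Hedged sketch: fuse a *variable* number of node classifications.
# Nodes that failed or dropped off the network contribute nothing.
from collections import defaultdict

def fuse(node_outputs):
    """node_outputs: list of (label, confidence) pairs from the nodes
    that responded this round. Returns the winning label, or None if
    no node responded."""
    if not node_outputs:
        return None
    score = defaultdict(float)
    for label, conf in node_outputs:
        score[label] += conf          # confidence-weighted voting
    return max(score, key=score.get)

# Three nodes answered this round; a fourth is down and simply absent.
print(fuse([("swipe", 0.9), ("swipe", 0.6), ("circle", 0.8)]))  # -> swipe
```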

Relevance: 80.00%

Abstract:

Recognizing a gesture, tracking it and identifying it is a complex, multi-step operation. In recent years, with the massive advent of ever more sophisticated interactive interfaces, the approaches to human-machine interaction have broadened. The common goal is "transparent" communication between the user and the computer, which must interpret human gestures through mathematical algorithms. Gesture recognition is a way for the machine to begin to understand the language of the human body. This discipline studies new modes of interaction between these two parties and comprises two macro objectives: (a) tracking the movements of a particular limb; (b) recognizing that trajectory as an identifiable gesture. Each of these points encompasses many research areas, because many approaches have been proposed over the years. It is not a matter of simply capturing an image: a support layer, at times quite elaborate, must be built in which the raw data coming from the camera undergo advanced filtering and algorithmic processing, so that raw information is turned into usable and reliable data. Gesture recognition technology is as significant as the introduction of touch interfaces on smartphones. Industry has begun producing devices able to offer users a new, as-natural-as-possible experience. From video games, to television experiences controlled with small gestures, to the biomedical field, a new generation of devices is being introduced whose uses are countless; for each application area its peculiarities must be studied carefully in order to produce something new and effective. This thesis aims to contribute to this discipline.
To date, a great many applications and associated devices aim to capture broad movements: the gesture is performed with most of the body and occupies a considerable region of space. This thesis instead proposes an approach in which the movements to be tracked and recognized are small-scale. It deals with gestures classified as fine, where the hand movements are performed in front of the body, for instance in the chest area. The possible application areas are many; in this work the craftsmanship domain was chosen and adopted.

Relevance: 80.00%

Abstract:

Recent years have seen a radical revolution in human-machine interaction devices. From traditional devices such as the mouse or keyboard, development has moved to new systems able to recognize the movements the user performs (interfaces based on vision or on accelerometers) or to detect contact (touch interfaces). These systems arose with the aim of making human-machine communication more natural. The new interfaces are far more expressive than traditional ones, since they exploit users' natural communication abilities, above all gestural language. Being able to recognize human beings, in terms of the actions they are performing or the postures they are assuming, opens the door to a vast range of interesting applications. Today, systems that recognize parts of the human body and gestures are widely used in several fields: sign language interpretation, socially assistive robotics, indicating directions by pointing, facial gesture recognition [1], natural computer interfaces (a valid alternative to mouse and keyboard), enriching and making unique the video game experience (for example Microsoft Kinect© and Nintendo Wii©), and affective computing. Public exhibitions and museums are no exception, taking a central role in pairing a technology chiefly aimed at entertainment with culture (and education). In this scenario an HCI system must try to engage a very heterogeneous audience, including people who do not deal every day with interfaces of this kind (or simply with a computer) but who are curious and eager to benefit from the system. Moreover, a museum environment has requirements and distinctive characteristics that cannot be ignored.
Technology immersed in such a context must respect certain constraints: it must not be invasive; it must be engaging, without overshadowing the artifacts; it must be flexible; and it must require minimal (or, better, no) hardware devices. In this thesis, given the premises above, a system is presented that can be used effectively in a museum context, or in any environment requiring non-invasive solutions. Using only a webcam and no other custom or specific device, the proposed method implements the services of (a) visitor detection, (b) visitor monitoring and (c) action recognition.

Relevance: 80.00%

Abstract:

In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for the delivery of services and data. The spread of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily lives and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments, to remote healthcare assistance and the birth of the smart environment. This work presents some evolutions in the hardware and software development of embedded systems and sensor networks. Different hardware solutions are introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, with a focus on system-level design. They are accompanied by a study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation and data reconstruction techniques for sensor networks are introduced and implemented, with the goal of maximizing the trade-off between performance and energy efficiency. Experimental results evaluate the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
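One common low-cost approach to the orientation estimation mentioned above is a complementary filter fusing gyroscope rates with accelerometer-derived tilt. The abstract does not specify which algorithm the thesis uses, so the filter below, its gain and its sample data are purely an illustrative assumption.

```python
# Hedged sketch: 1-D complementary filter for an inertial sensor node.
# Gain (alpha) and sample data are invented for illustration.

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro angular rates (rad/s) with accelerometer-derived tilt
    angles (rad) into a drift-corrected angle estimate."""
    angle = accel_angles[0]
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro short-term, the accelerometer long-term.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
    return angle

# Stationary node: the gyro reports a small constant bias (0.05 rad/s),
# while the accelerometer correctly says the tilt is 0 rad.
est = complementary_filter([0.05] * 500, [0.0] * 500)
print(round(est, 3))  # stays bounded near zero; raw integration would drift to 0.25
```

The trade-off is typical of such nodes: the filter costs a few multiplications per sample, making it practical on a low-power microcontroller.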

Relevance: 80.00%

Abstract:

The field of human-machine interfaces is growing fast owing to its importance in all technological systems. The basic idea behind designing human-machine interfaces is to enrich communication with technology in a natural and easy way. Gesture interfaces are a good example of transparent interfaces. Such interfaces must properly identify the action the user wants to perform, so accurate gesture recognition is of the highest importance. However, most systems based on gesture recognition use complex methods that require high-resource devices. In this work, we propose to model gestures by capturing their temporal properties, which significantly reduces storage requirements, and to use clustering techniques, namely self-organizing maps and an unsupervised genetic algorithm, for their classification. We further propose to train a number of algorithms with different parameters and combine their decisions using majority voting in order to decrease the false positive rate. The main advantage of the approach is its simplicity, which enables implementation on devices with limited resources, and therefore at low cost. The testing results demonstrate its high potential.
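The majority-voting step named above can be sketched with a rejection rule: a gesture is accepted only when a strict majority of the ensemble agrees, which is what lowers the false positive rate. The classifier outputs below are invented placeholders for the SOM/GA classifiers of the paper.

```python
# Ensemble decision by strict-majority voting with rejection.
from collections import Counter

def majority_vote(predictions, reject_label="none"):
    """Accept the most common prediction only if it wins a strict
    majority of the ensemble; otherwise reject the gesture."""
    label, count = Counter(predictions).most_common(1)[0]
    return label if count > len(predictions) / 2 else reject_label

# Five classifiers trained with different parameters vote on one gesture:
print(majority_vote(["circle", "circle", "swipe", "circle", "circle"]))  # -> circle
print(majority_vote(["circle", "swipe", "wave", "swipe", "circle"]))     # -> none
```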

Relevance: 80.00%

Abstract:

When we interact with the environment in our daily life (using a toothbrush, opening doors, using cell phones, etc.) or in professional situations (medical interventions, manufacturing processes, etc.), we typically perform dexterous manipulations that involve multiple fingers and the palm of both hands. Multi-finger haptic methods can therefore provide a realistic and natural human-machine interface that enhances immersion when interacting with simulated or remote environments. Most commercial devices allow haptic interaction with only one contact point, which may suffice for exploration or palpation tasks but is not enough to perform advanced object manipulations such as grasping.
In this thesis, I investigate the mechanical design, control and applications of a modular haptic device that provides force feedback to the index, thumb and middle fingers of the user. The mechanical design is optimized with a multi-objective function to achieve low inertia, a large workspace, high manipulability and force feedback of up to 3 N within the workspace; the bandwidth and rigidity of the device are assessed through simulation and real experimentation. One of the most important areas when designing haptic devices is the end-effector, since it is the part in contact with the user. The thesis describes the design and evaluation of a thimble-like, lightweight, user-adaptable and cost-effective end-effector incorporating four contact force sensors, which allows estimation of the forces a user applies while manipulating virtual and real objects. A real-time, modular control architecture for multi-finger haptic interaction is then designed, starting from the requirements of such devices: a large number of control and instrumentation signals must be acquired, processed and exchanged over the network, and mathematical computations such as direct and inverse kinematics, the Jacobian and grasp-detection algorithms must run in real time at a guaranteed minimum rate of 1 kHz. The hardware architecture combines an FPGA for the low-level controller with a real-time controller for the complex calculations, providing a compact and scalable solution. Set-ups for dexterous virtual and remote manipulation are described, together with a new algorithm, the "iterative kinematic decoupling" method, for solving the inverse kinematics of a robotic manipulator, and its comparison with current methods.
To understand the importance of multimodal interaction including haptics, a subject study was carried out, in collaboration with neuroscientists from the Technion Israel Institute of Technology, to identify which sensory stimuli correlate with faster response times and greater accuracy. Comparing grasping response times for unimodal events (auditory, visual and haptic) with those for bimodal and trimodal combinations shows that, in grasping tasks, the synchronized motion of the fingers that generates the grasping response relies mainly on haptic cues. This processing-speed advantage of haptic cues suggests that virtual environments with a haptic component are superior at generating motor contingencies and enhance the plausibility of events; systems with haptic perception give users more time in the cognitive stages to fill in missing information creatively and form a richer experience.
A major application of haptic devices is the design of new simulators to train manual skills in the medical sector. In collaboration with physical therapists from Griffith University in Australia, a simulator was developed for hand rehabilitation exercises: the non-linear stiffness properties of the metacarpophalangeal joint of the index finger were estimated using the designed end-effector, and these parameters were implemented in a scenario that simulates the behavior of the human hand and allows haptic interaction through the designed device. The potential applications of this simulator concern the training and education of physical therapy students.
Finally, new methods are developed for simultaneously controlling the position and orientation of a robotic manipulator and the grasp of a robotic hand when interacting with large real environments. The reachable workspace of the haptic device is extended by automatically switching between rate and position control modes, and the user's hand gesture is recognized from the relative movements of the index, thumb and middle fingers during the early approach-to-object phase and mapped to the robotic hand's actuators. Experiments on dexterous manipulation of objects with a manipulator and different robotic hands, the result of a research collaboration with the Harvard BioRobotics Laboratory, show that overall task time is reduced and that the methods allow full dexterity and correct completion of the manipulations.
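The workspace-extension idea, automatic switching between position and rate control, can be sketched as follows: inside the device's physical workspace the remote target mirrors the device position, and near its edge the target drifts at a speed set by how far the device is pushed past a threshold. The thresholds, gains and 1-D mapping are invented for illustration; the thesis does not give this exact formulation.

```python
# Hypothetical sketch of automatic position/rate control-mode switching.
# Thresholds and gains are illustrative, not taken from the thesis.

def command(device_pos, remote_pos, limit=0.8, gain=0.5, dt=0.01):
    """Map a haptic-device position in [-1, 1] to a remote target.
    Inside |pos| <= limit: position mode (direct mapping).
    Beyond it: rate mode (the remote target keeps drifting, so a small
    device workspace can command a large remote one)."""
    if abs(device_pos) <= limit:
        return device_pos                       # position mode
    edge = limit if device_pos > 0 else -limit
    drift = gain * (device_pos - edge)          # speed past the threshold
    return remote_pos + drift * dt              # rate mode

print(command(0.5, 0.5))   # within the workspace -> position mode
print(command(1.0, 5.0))   # at the edge -> rate mode, target keeps advancing
```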

Relevance: 80.00%

Abstract:

The fast development of technologies that enhance interaction with computers has changed society's way of thinking. One of the latest alternatives focuses on face and gesture recognition, which has been very successful, bringing computers and humans closer together. The purpose of this project was to develop a gesture recognition prototype with the Kinect device, which uses one of the technologies mentioned above. Building the prototype first required studying how Kinect works. The next step was to search for libraries that facilitate gesture recognition, compare them and choose one. Because Kinect is a recent technology there are not many applications that use it, so the library search was an essential part of the project. After selecting the library, the characteristics the prototype should have were analyzed in order to design it. The design was produced iteratively, with numerous changes, to give the prototype as much abstraction and object orientation as possible, making it easier to accommodate the changes to be expected of a new, fast-moving technology. Finally, the prototype was implemented under the proposed design, yielding a functional prototype able to recognize, store, display and define different gestures. The results offered by Kinect are very satisfactory, with high accuracy and efficiency, demonstrating the potential of the new contactless interaction technologies, whose only shortcoming is being a still-immature, evolving technology.