8 results for grasping
at Universidad Politécnica de Madrid
Abstract:
This paper describes the design of a modular multi-finger haptic device for virtual object manipulation. The mechanical structure is based on one module per finger and can be scaled up to three fingers. The two- and three-finger configurations use one and two redundant axes, respectively. As demonstrated, the redundant axes significantly increase the workspace and prevent link collisions, which is their main advantage over other multi-finger haptic devices. The locations of the redundant axes and the link dimensions have been optimized to guarantee adequate workspace, manipulability, force capability, and inertia for the device. A thimble adaptable to different finger sizes has also been developed for virtual object manipulation.
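The abstract does not give the optimization objective itself, but a rough, hypothetical sketch of one kinematic term commonly used in such link-dimension optimizations, Yoshikawa's manipulability index averaged over sampled postures of a three-link planar finger module, might look like this (the link lengths and joint ranges below are made up):

```python
import numpy as np

def planar_jacobian(q, lengths):
    """Jacobian of a 3-link planar chain (end-point x, y velocities)."""
    q1, q2, q3 = np.cumsum(q)  # absolute link angles
    l1, l2, l3 = lengths
    # Partial derivatives of the end-point position w.r.t. each joint.
    dx = [-l1*np.sin(q1) - l2*np.sin(q2) - l3*np.sin(q3),
          -l2*np.sin(q2) - l3*np.sin(q3),
          -l3*np.sin(q3)]
    dy = [ l1*np.cos(q1) + l2*np.cos(q2) + l3*np.cos(q3),
           l2*np.cos(q2) + l3*np.cos(q3),
           l3*np.cos(q3)]
    return np.array([dx, dy])

def manipulability(q, lengths):
    """Yoshikawa's index w = sqrt(det(J J^T)); larger is better conditioned."""
    J = planar_jacobian(q, lengths)
    return np.sqrt(np.linalg.det(J @ J.T))

# Average the index over sampled postures to score one candidate link design.
rng = np.random.default_rng(0)
postures = rng.uniform(0.1, np.pi / 2, size=(500, 3))
score = np.mean([manipulability(q, (0.05, 0.04, 0.03)) for q in postures])
print(f"mean manipulability: {score:.4f}")
```

A multi-objective design routine would combine a term like this with workspace, force, and inertia terms and compare candidate link dimensions by their aggregate score.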
Abstract:
The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to that of AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization based on an Isomap dimension reduction of spatial derivatives of depth (i.e., distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. Further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
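As a loose illustration of the parameterization described, one could apply scikit-learn's Isomap to vectors of depth-map spatial derivatives; the random features below are stand-ins for the paper's actual gradient and curvature inputs:

```python
import numpy as np
from sklearn.manifold import Isomap

# Stand-in data: each row is a flattened set of spatial derivatives of a
# depth map for one object shape (the paper derives these from depth images).
rng = np.random.default_rng(0)
depth_features = rng.normal(size=(200, 400))  # 200 shapes x 400 features

# Reduce to a low-dimensional shape parameterization; the paper matches this
# dimension to the number of superquadric parameters, then tries larger ones.
iso = Isomap(n_neighbors=10, n_components=8)
shape_params = iso.fit_transform(depth_features)
print(shape_params.shape)  # (200, 8): one 8-D shape code per object
```

Because Isomap coordinates vary smoothly along the data manifold, nearby shapes get nearby codes, which is exactly the continuity property the paper contrasts with superquadric parameters.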
Abstract:
Purpose: Surgical simulators are currently essential within any laparoscopic training program because they provide a low-stakes, reproducible and reliable environment in which to acquire basic skills. The purpose of this study is to determine the learning curves of different metrics corresponding to five tasks included in the SINERGIA laparoscopic virtual reality simulator. Methods: Thirty medical students without surgical experience participated in the study. Five SINERGIA tasks were included: Coordination, Navigation, Navigation and touch, Accurate grasping and Coordinated pulling. Each participant was trained in SINERGIA. This training consisted of eight sessions (R1–R8) of the five tasks, carried out over two consecutive days with four sessions per day. A statistical analysis was performed, and the results of R1, R4 and R8 were pair-wise compared with the Wilcoxon signed-rank test. Significance was set at P < 0.005. Results: In total, 84.38% of the metrics provided by SINERGIA and included in this study show significant differences between R1 and R8. Metrics improved mostly in the first half of training (75.00% of metrics improved between R1 and R4 vs. 37.50% between R4 and R8). In the tasks Coordination and Navigation and touch, all metrics improved. In contrast, Navigation improved in only 60% of the analyzed metrics. Most learning curves show an improvement, with better results in the fulfillment of the different tasks. Conclusions: Learning curves of metrics that assess the basic psychomotor laparoscopic skills acquired in the SINERGIA virtual reality simulator show a faster learning rate during the first part of training. Nevertheless, eight repetitions of the tasks are not enough to acquire all the psychomotor skills that can be trained in SINERGIA. Therefore, and based on these results together with previous work, SINERGIA could be used as a training tool with a properly designed training program.
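A minimal sketch of the statistical comparison described, using SciPy's Wilcoxon signed-rank test on synthetic stand-in scores for one metric (the real study applies this pair-wise to every metric across R1, R4 and R8):

```python
import numpy as np
from scipy.stats import wilcoxon

# Stand-in data: one simulator metric (e.g., task time) for 30 students,
# measured at repetitions R1 and R8 of a SINERGIA task.
rng = np.random.default_rng(1)
r1 = rng.normal(120, 15, size=30)        # scores at the first repetition
r8 = r1 - rng.normal(20, 5, size=30)     # improved scores after training

# Paired, non-parametric comparison, as in the study, with the paper's
# significance threshold of P < 0.005.
stat, p = wilcoxon(r1, r8)
print(f"W={stat:.1f}, p={p:.4f}, significant={p < 0.005}")
```

The signed-rank test is the natural choice here because the same students are measured at both repetitions and the metric distributions need not be normal.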
Abstract:
To perform advanced manipulation of remote environments, such as grasping, more than one finger is required, which implies more demanding requirements for the control architecture. This paper presents the design and control of a modular 3-finger haptic device that can be used to interact with virtual scenarios or to teleoperate dexterous remote hands. In a modular haptic device, each module allows interaction with a scenario using a single finger; hence, multi-finger interaction can be achieved by adding more modules. Control requirements for a multi-finger haptic device are analyzed, and a new hardware/software architecture for this kind of device is proposed. The software architecture described in this paper is distributed, and the different modules communicate to enable remote manipulation. Moreover, an application in which this haptic device is used to interact with a virtual scenario is presented.
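The paper's communication protocol is not specified here, but a toy sketch of the modular idea, each finger module publishing its fingertip position over UDP so that adding a module adds a finger, could look like the following (the packet layout, port and rate are assumptions, not the paper's design):

```python
import socket
import struct
import time

# Hypothetical wire format: one packet per finger module carrying a module id
# and a fingertip position; the paper's actual message format is not given.
PACKET = struct.Struct("<Ifff")  # module_id, x, y, z

def publish_fingertip(sock, addr, module_id, read_pos, rate_hz=1000.0):
    """Send one module's fingertip position at a fixed control rate."""
    period = 1.0 / rate_hz
    while True:
        sock.sendto(PACKET.pack(module_id, *read_pos()), addr)
        time.sleep(period)  # crude pacing; a real-time loop would do better

# Each finger module runs its own publisher; extending the device to another
# finger means starting another module without changing the receiving side.
if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    publish_fingertip(sock, ("127.0.0.1", 9000), module_id=0,
                      read_pos=lambda: (0.01, 0.02, 0.03))
```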
Abstract:
When we interact with the environment in our daily life (using a toothbrush, opening doors, using cell phones, etc.) or in professional situations (medical interventions, manufacturing processes, etc.), we typically perform dexterous manipulations that involve multiple fingers and the palms of both hands. Multi-finger haptic methods can therefore provide a more realistic and natural human-machine interface, enhancing immersion when interacting with simulated or remote environments. However, most commercial haptic devices allow interaction with only one contact point, which may be sufficient for exploration or palpation tasks but is not enough to perform advanced object manipulations such as grasping. This thesis investigates the mechanical design, control and applications of a modular haptic device that provides force feedback to the index, middle finger and thumb of the user. The mechanical design is optimized with a multi-objective function to achieve low inertia, a large workspace, high manipulability and force feedback of up to 3 N within the workspace; the bandwidth and rigidity of the device are assessed through simulation and real experimentation. One of the most important parts of a haptic device is the end-effector, since it is the part in contact with the user. This thesis describes the design and evaluation of a thimble-like, lightweight, user-adaptable and cost-effective end-effector that incorporates four contact force sensors, allowing estimation of the normal and tangential forces applied by the user during the manipulation of virtual and real objects.
The design of a real-time, modular control architecture for multi-finger haptic interaction is also described. The control requirements of such devices include the acquisition, processing and exchange over the network of a large number of control and instrumentation signals, as well as mathematical computations such as the direct and inverse kinematics of the device, the Jacobian, grasp-detection algorithms, etc. All of these must be computed in real time, at a guaranteed minimum rate of 1 kHz, to assure high-fidelity haptic interaction. The hardware architecture consists of an FPGA for the low-level controller and a real-time controller that manages the complex calculations (Jacobian, kinematics, etc.); this provides a compact and scalable solution with the required computational capability. A setup for dexterous virtual and real manipulation is described. Moreover, a new algorithm, the iterative kinematic decoupling method, was designed to solve the inverse kinematics of a robotic manipulator and compared with other current methods.
To understand the importance of multimodal interaction including haptics, a subject study was carried out, in collaboration with neuroscientists from the Technion Israel Institute of Technology, to identify which sensory stimuli correlate with faster response times and enhanced accuracy. Comparing grasping response times for unimodal events (auditory, visual and haptic) with those for bimodal and trimodal combinations shows that, in grasping tasks, the synchronized motion of the fingers that generates the grasping response relies mainly on haptic cues. This processing-speed advantage of haptic cues suggests that virtual environments including a haptic component are superior in generating motor contingencies and enhance the plausibility of events: systems that include haptic perception give users more time at the cognitive stages to fill in missing information creatively and form a richer experience. A major application of haptic devices is the design of new simulators to train manual skills in the medical sector. In collaboration with physical therapists from Griffith University in Australia, a simulator for hand rehabilitation exercises was developed. The non-linear stiffness properties of the metacarpophalangeal joint of the index finger were estimated using the designed end-effector, and these parameters were implemented in a scenario that simulates the behavior of the human hand and allows haptic interaction through the designed device. The potential applications of this simulator are related to the training and education of physical therapy students. Finally, new methods to simultaneously control the position and orientation of a robotic manipulator and the grasp of a robotic hand when interacting with real environments are presented. The reachable workspace of the haptic device is extended by automatically switching between rate and position control modes. Moreover, the user's hand gesture is recognized from the relative movements of the index, middle finger and thumb during the early stages of the approach to the object, and mapped to the robotic hand actuators. Experiments on dexterous manipulation of objects with a robotic manipulator and different robotic hands show that the overall task time is reduced and that the methods allow accurate completion of dexterous manipulations. This work is the result of a collaboration with researchers from the Harvard BioRobotics Laboratory.
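The iterative kinematic decoupling method is only named in this abstract; the skeleton below is a hypothetical reconstruction of the general position/orientation alternation that decoupling schemes use, with the robot-specific subproblem solvers left as assumed callbacks:

```python
import numpy as np

def iterative_kinematic_decoupling(target_pose, solve_position,
                                   solve_orientation, wrist_offset,
                                   max_iter=50, tol=1e-6):
    """Structural sketch of an iterative position/orientation decoupling IK
    scheme (the thesis' exact update rules are not reproduced here).

    solve_position(p_wrist)    -> first-three joint angles reaching p_wrist
    solve_orientation(q123, R) -> last-three joint angles realizing R
    wrist_offset(q456)         -> tool-to-wrist offset for current wrist joints
    """
    p_target, R_target = target_pose
    q456 = np.zeros(3)                          # initial guess, wrist joints
    for _ in range(max_iter):
        # 1) Place the wrist: remove the tool offset implied by current q456.
        p_wrist = p_target - wrist_offset(q456)
        q123 = solve_position(p_wrist)
        # 2) Orient the tool given the arm posture found in step 1.
        q456_new = solve_orientation(q123, R_target)
        if np.linalg.norm(q456_new - q456) < tol:   # converged
            return np.concatenate([q123, q456_new])
        q456 = q456_new
    return np.concatenate([q123, q456])

# Toy usage with placeholder solvers (a real robot supplies analytic ones):
q = iterative_kinematic_decoupling(
    (np.array([0.4, 0.0, 0.3]), np.eye(3)),
    solve_position=lambda p: p * 0.1,                    # placeholder
    solve_orientation=lambda q123, R: np.full(3, 0.2),   # placeholder
    wrist_offset=lambda q456: np.array([0.0, 0.0, 0.1]),
)
print(q)
```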
Abstract:
Regardless of the availability of highly sophisticated techniques and ever-increasing computing capabilities, the problems associated with robots interacting with unstructured environments remain an open challenge. Despite great advances in autonomous robotics, there are situations in which a human in the loop is still required, such as nuclear, space, subsea and robotic surgery operations, because current technologies cannot reliably perform every kind of task autonomously. This thesis presents methods for robot teleoperation at different levels of abstraction, ranging from supervisory control, where the operator gives high-level task actions, to bilateral teleoperation, where the commands take the form of low-level control inputs. These strategies improve current human-robot interfaces, especially for slave robots deployed in large workspaces. First, an approach for the supervisory teleoperation of humanoid robots is presented. The goal is to control ground robots capable of executing complex tasks in disaster-relief environments under constrained communication links. The proposal incorporates autonomous behaviors that the operator can use to perform navigation and manipulation tasks while covering large, human-engineered areas of the remote environment. Experimental results demonstrate the efficiency of the proposed methods. Second, the use of cost-effective devices for guided telemanipulation is investigated through a case study involving a bimanual humanoid robot and an Inertial Measurement Unit (IMU)-based Motion Capture (MoCap) suit. It is shown how the adaptation capabilities introduced by the human-in-the-loop factor can compensate for the lack of high-precision robotic systems. This work is the result of a collaboration between researchers from the Harvard Biorobotics Laboratory and the Centre for Automation and Robotics UPM-CSIC. Third, a new haptic rate-position controller is presented. This hybrid bilateral controller copes with the problems of teleoperating a slave robot with a large workspace using a small haptic device as the master: large work areas can be covered by automatically switching between rate and position control modes. The controller is well suited to kinematically dissimilar master-slave systems in which commands are transmitted in the task space of the remote environment, and it is validated in dexterous telemanipulation of objects with an industrial robotic manipulator. Finally, two contributions to robotic manipulation are introduced. The first is a new inverse kinematics algorithm, the iterative kinematic decoupling method, a numerical method developed to solve the inverse kinematics (IK) problem of a class of six-DoF robotic arms for which a closed-form solution is not available; its effectiveness is compared against conventional numerical methods. The second is a robust grasp mapping that allows a wide range of robotic hands to be controlled using a gesture-based correspondence between the human hand workspace and the robotic hand workspace. The human hand gesture is identified by reading the relative movements of the index, thumb and middle fingers of the user during the early stages of grasping.
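As a hedged illustration of the hybrid rate-position idea (the thresholds, gains and the thesis' drift handling are assumptions), a toy controller that tracks position inside a central sphere of the master workspace and integrates a velocity command outside it:

```python
import numpy as np

class HybridRatePosition:
    """Toy rate-position switching controller, a sketch of the concept only.

    Inside a central sphere of the master workspace, the slave tracks the
    master's position plus an accumulated offset; outside it, the excursion
    beyond the sphere commands a slave velocity that shifts the offset, so a
    small master device can sweep a much larger slave workspace.
    """
    def __init__(self, inner_radius=0.04, rate_gain=0.5, dt=0.001):
        self.inner_radius, self.rate_gain, self.dt = inner_radius, rate_gain, dt
        self.offset = np.zeros(3)   # grows while in rate mode

    def step(self, master_pos):
        r = np.linalg.norm(master_pos)
        if r > self.inner_radius:                        # rate mode
            direction = master_pos / r
            self.offset += (self.rate_gain * (r - self.inner_radius)
                            * direction * self.dt)
        return self.offset + master_pos                  # slave target position

# Called once per control tick (e.g., at 1 kHz) with the current master pose.
ctrl = HybridRatePosition()
print(ctrl.step(np.array([0.06, 0.0, 0.0])))
```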
Abstract:
The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. In robotics a similar role has been played by modules that fit point cloud data to the superquadric family of shapes and its various extensions. We developed a model of shape tuning in AIP based on cosine tuning to superquadric parameters. However, the model did not fit the data well, and we also found that it was difficult to accurately reproduce these parameters using neural networks with the appropriate inputs (modelled on the caudal intraparietal area, CIP). The latter difficulty was related to the fact that there are large discontinuities in the superquadric parameters between very similar shapes. To address these limitations we adopted an alternative shape parameterization based on an Isomap nonlinear dimension reduction. The Isomap was built using gradients and curvatures of object surface depth. This alternative parameterization was low-dimensional (like superquadrics), but data-driven (similar to an alternative clustering approach that is also sometimes used in robotics) and lacked large discontinuities. Isomaps with 16 or more dimensions reproduced the AIP data fairly well. Moreover, we found that the Isomap parameters could be approximated from CIP-like input much more accurately than the superquadric parameters. We conclude that Isomaps, or perhaps alternative dimension reductions of CIP signals, provide a promising model of AIP tuning. We have now started to integrate our model with a robot hand, to explore the efficacy of Isomap shape reductions in grasp planning. Future work will consider dynamics of spike responses and integration with related visual and motor area models.
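A sketch of the two-stage pipeline suggested by the abstract, an Isomap shape code approximated by a feedforward network from CIP-like inputs, using scikit-learn stand-ins (the data, network size and training regime here are all placeholders):

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neural_network import MLPRegressor

# Stand-in data: depth-derivative features per object shape, plus a toy
# CIP-like code derived from them (the paper models CIP gradient/curvature
# responses; random projections are used here purely for illustration).
rng = np.random.default_rng(0)
depth_features = rng.normal(size=(300, 400))
cip_inputs = depth_features @ rng.normal(size=(400, 60))

# Target: low-dimensional Isomap shape coordinates, as in the paper, which
# reports that 16 or more dimensions reproduce the AIP data fairly well.
targets = Isomap(n_neighbors=10, n_components=16).fit_transform(depth_features)

# Feedforward approximation of the Isomap parameters from CIP-like input.
net = MLPRegressor(hidden_layer_sizes=(100,), max_iter=2000, random_state=0)
net.fit(cip_inputs, targets)
print("training R^2:", net.score(cip_inputs, targets))
```

The key comparison in the paper is that a network like this approximates Isomap coordinates far better than it approximates superquadric parameters, whose discontinuities between similar shapes make them hard to learn.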
Abstract:
The use of human motion monitoring techniques lets researchers analyse kinematics, and especially motor strategies, in goal-oriented activities of daily living such as the preparation of drinks and food, and even grooming tasks. In the field of cognitive rehabilitation, the evaluation of human movement and behaviour is essential to understand the difficulties some people encounter in everyday activities after a stroke. These difficulties are mainly associated with performing sequential steps and with recognizing the use of tools and objects. The interpretation of behavioural data from such patients, used to recognize and determine the level of success in the execution of actions and to broaden knowledge of brain diseases, their consequences and their severity, depends entirely on the devices used to capture that data and on its quality. Moreover, there is a real need to improve current cognitive rehabilitation techniques by contributing to the design of automatic systems that act as a kind of virtual therapist, supporting a more independent life for these patients and reducing the workload of the occupational therapists in charge of them. For this purpose, the use of sensors and devices that provide real-time data on the execution and state of the rehabilitation task is essential; such data can also contribute to the design and training of future algorithms able to recognize errors automatically and inform the patient about them through different types of cues, such as still images, auditory messages or even videos. The technology and solutions currently adopted in this field do not offer a fully robust and effective way to obtain real-time data: on the one hand, marker-based platforms that need sensors attached to the skin may influence the patient's own movement; on the other hand, the complexity or high cost of implementation makes it difficult to install such a system in the hospital or in the patient's home.
This thesis presents research in the field of patient movement monitoring that provides a step forward in the detection, tracking and recognition of hand movements, gestures and faces in a non-invasive way, improving current cognitive rehabilitation techniques through real-time acquisition of data on the patient's behaviour and execution of the task. To frame the scope of the thesis, a summary of the main cognitive diseases that call for rehabilitation and an introduction to their consequences on the execution of daily tasks are first presented, together with a review of current cognitive rehabilitation methodologies. Since the hands are the main body parts involved in performing manual daily tasks, existing technologies for capturing hand movements are also surveyed. One of the main contributions of this thesis is the design and evaluation of a non-invasive approach to detect and track the user's hands during the execution of manual activities of daily living that involve the manipulation of objects. This approach needs no additional markers, is based only on a low-cost depth camera, and is robust, accurate and easy to install. Another contribution focuses on hand-gesture recognition for detecting object grasping, based on a state-of-the-art infrared sensor complemented with a depth camera. This new, also non-invasive, solution synchronizes both sensors to track specific tools and to recognize specific events related to grooming tasks. Moreover, a preliminary assessment of facial expression recognition is carried out to analyse whether it is adequate for recognizing the patient's mood during the task. All the hardware and software components developed are integrated into a simple prototype intended as a platform for monitoring the execution of the rehabilitation task. A technical evaluation of the performance of each device is carried out to analyse its suitability for acquiring real-time data during the execution of real daily tasks. Finally, a usability evaluation with real users and stroke patients is presented; this feedback is essential for considering home-based cognitive rehabilitation as well as a possible installation of the prototype at the hospital.
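A simplified sketch of the kind of marker-less, depth-based hand detection described (the thesis' actual pipeline is more elaborate; the depth band, morphology and area threshold below are assumptions):

```python
import cv2
import numpy as np

def segment_hands(depth_mm, near=400, far=900, min_area=2000):
    """Marker-less hand candidates from a single depth frame.

    Keeps pixels inside a working depth band in front of the camera and
    returns the contours of sufficiently large connected blobs.
    """
    band = cv2.inRange(depth_mm, near, far)            # depth threshold (mm)
    band = cv2.morphologyEx(band, cv2.MORPH_OPEN,      # remove speckle noise
                            np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(band, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]

# Usage with a synthetic frame standing in for a low-cost depth camera image.
frame = np.full((480, 640), 1500, np.uint16)           # background at 1.5 m
frame[200:300, 250:350] = 600                          # a "hand" blob at 0.6 m
print(len(segment_hands(frame)), "hand candidate(s) found")
```

Per-frame detections like these would then feed a tracker that follows each hand across frames while the patient manipulates objects.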