133 results for Kinect


Relevance: 10.00%

Publisher:

Abstract:

Several studies have shown that people with disabilities benefit substantially from access to a means of independent mobility and assistive technology. Researchers are using technology originally developed for mobile robots to create easier-to-use wheelchairs, with which people with disabilities can gain a degree of independence in performing daily-life activities. In this work a computer vision system is presented that is able to drive a wheelchair with a minimum number of finger commands. The user's hand is detected and segmented using a Kinect camera, and fingertips are extracted from the depth information and used as wheelchair commands.
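The fingertip-from-depth idea can be illustrated with a minimal sketch (not the paper's actual pipeline): segment the hand with a depth window, then take the masked pixel nearest the sensor as a fingertip candidate when the hand points at the camera. The thresholds and the toy depth map below are invented values.

```python
import numpy as np

def segment_hand(depth, near=400, far=600):
    """Keep only pixels whose depth (mm) falls inside the expected hand range."""
    return (depth > near) & (depth < far)

def closest_point(depth, mask):
    """(row, col) of the masked pixel nearest the camera; with the hand
    pointing at the sensor this approximates a fingertip."""
    masked = np.where(mask, depth, np.inf)
    r, c = np.unravel_index(np.argmin(masked), depth.shape)
    return int(r), int(c)

# Toy 5x5 depth map (mm): a 500 mm hand blob with one 450 mm 'fingertip' pixel.
depth = np.full((5, 5), 1000.0)
depth[1:4, 1:4] = 500.0
depth[2, 2] = 450.0
print(closest_point(depth, segment_hand(depth)))  # (2, 2)
```

A real system would also track the fingertip over time and debounce the resulting commands before driving the wheelchair.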

Relevance: 10.00%

Publisher:

Abstract:

This paper describes a simple low-cost approach to adding an element of haptic interaction within a virtual environment. Using off-the-shelf hardware and software, we describe a simple setup that can be used to physically explore virtual objects in space. The setup comprises a prototype glove with a number of vibrating actuators to provide haptic feedback, a Kinect camera for tracking the user's hand, and a virtual reality development environment. As a proof of concept, and to test the efficiency of the system and its potential applications, we developed a simple application that places four different shapes within a virtual environment for users to explore, and asks them to guess each shape through touch alone.
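One plausible way to drive glove actuators from the tracked hand is a simple distance-to-vibration mapping; the function name, the 10 mm falloff, and the 0-255 intensity range below are assumptions for illustration, not the paper's actual parameters.

```python
def vibration_level(distance_mm, contact_mm=10.0, max_level=255):
    """Map hand-to-surface distance to an actuator intensity level:
    full vibration at contact, fading linearly to zero at contact_mm away."""
    if distance_mm <= 0:
        return max_level              # hand touching or inside the object
    if distance_mm >= contact_mm:
        return 0                      # too far from the surface: no feedback
    return int(max_level * (1 - distance_mm / contact_mm))

print(vibration_level(0), vibration_level(5.0), vibration_level(20.0))
# 255 127 0
```

Per-finger levels of this kind would then be streamed to the glove's individual actuators each frame.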

Relevance: 10.00%

Publisher:

Abstract:

Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications for achieving proper Ambient Assisted Living, and key challenges remain to be dealt with before robust methods are realized. One of the major limitations of today's Ambient Intelligence systems is the lack of semantic models of the activities in the environment, which would let the system recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer, and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem clearly carry a high degree of uncertainty, so the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users performing them. The handling of uncertain, incomplete, and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities, and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) and takes its input data directly from a depth sensor (Kinect).
The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a higher level of abstraction, acquires its input from the first module's output, and executes ontological inference to endow users, activities, and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows common-sense knowledge to be modelled as a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The advantages of the framework have been evaluated on a challenging new public dataset, CAD-120, achieving accuracies of 90.1% and 91.1% for low- and high-level activities respectively. This is an improvement over both entirely data-driven approaches and purely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, thus easing the transfer of research to industry, a development framework was built, composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, giving the framework more usability in the final application. As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
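Fuzzy linguistic labels of the kind mentioned above are commonly built from simple membership functions. A minimal sketch, with entirely hypothetical labels and breakpoints (the thesis's actual ontology is far richer):

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set (foot a, peak b, foot c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def classify(x, labels):
    """Pick the linguistic label with the highest membership degree for x."""
    degrees = {name: triangular(x, *tri) for name, tri in labels.items()}
    return max(degrees, key=degrees.get), degrees

# Hypothetical labels for 'duration of an object interaction', in seconds.
labels = {"brief": (0, 2, 5), "normal": (3, 6, 9), "long": (8, 12, 20)}
print(classify(6.0, labels)[0])  # normal
```

Overlapping sets like these let a rule such as "IF interaction with cup is brief THEN activity is drinking" fire with a graded degree rather than a crisp yes/no.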

Relevance: 10.00%

Publisher:

Abstract:

The emergence of depth sensors has made it possible to track not only monocular cues but also the actual depth values of the environment. This is especially useful in augmented reality solutions, where the position and orientation (pose) of the observer need to be accurately determined. It allows virtual objects to be placed in the user's view through, for example, the screen of a tablet or augmented reality glasses (e.g. Google Glass). Although early 3D sensors were physically quite large, their size is decreasing, and eventually a 3D sensor could be embedded, for example, in augmented reality glasses. The wider subject area considered in this review is 3D SLAM (Simultaneous Localization and Mapping) methods, which take advantage of the 3D information made available by modern RGB-D sensors such as the Microsoft Kinect, so a review of SLAM and 3D tracking in augmented reality is a timely subject. We also try to identify the limitations and possibilities of the different tracking methods, and how they should be improved to allow their efficient integration into the augmented reality solutions of the future.

Relevance: 10.00%

Publisher:

Abstract:

Falls among the elderly are a major public health problem. Studies show that about 30% of people aged 65 and over fall each year in Canada, with harmful consequences at the individual, family, and social levels. In this context, video monitoring is an effective solution for ensuring the safety of these people. Many personal assistance systems exist today; they allow an elderly person to live at home while ensuring their safety through a wearable sensor. However, wearing a sensor permanently is uncomfortable and constraining, which is why research has recently turned to cameras instead of wearable sensors. The goal of this project is to demonstrate that a video monitoring device can help reduce this problem. In this document we present an automatic fall detection approach based on 3D tracking of the subject using a depth camera (Microsoft Kinect) positioned vertically above the floor. Tracking is performed on the silhouette extracted in real time with a robust 3D background subtraction approach based on the depth variation of the pixels in the scene, initialized with a capture of the scene without any subject. Once the silhouette is extracted, the 10% of the silhouette corresponding to its highest region (the one closest to the Kinect lens) is analysed in real time according to the velocity and position of its centre of gravity. After analysis, these criteria allow a fall to be detected and a signal (e-mail or text message) to be sent to the individual or to the person responsible for the elderly person. The method was validated with several videos of falls simulated by a stuntman.
The camera position and its depth information considerably reduce the risk of false fall alarms. Positioned vertically above the floor, the camera makes it possible to analyse the scene and, above all, to track the silhouette without major occlusions, which in some cases lead to false alerts. Moreover, the various fall detection criteria are reliable features for distinguishing a person's fall from crouching or sitting down. Nevertheless, the camera's field of view remains a problem, since it is not large enough to cover a substantial area. One solution to this dilemma would be to mount a lens on the Kinect to widen the monitored area.
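The detection criteria described above, the height and velocity of the tracked head region seen from a ceiling-mounted depth sensor, can be sketched roughly as follows. All thresholds, the frame rate, and the toy depth values are assumptions for illustration, not the study's calibrated parameters.

```python
import numpy as np

def head_region_depth(depth, person_mask, frac=0.10):
    """Mean depth of the fraction of silhouette pixels closest to the
    ceiling-mounted camera (the 'highest' 10% of the body)."""
    vals = np.sort(depth[person_mask])
    k = max(1, int(frac * vals.size))
    return float(vals[:k].mean())

def detect_fall(head_depths, floor_mm=2600, speed_mm_s=1500, fps=30):
    """Flag a fall when the head region nears floor depth at high speed."""
    for prev, cur in zip(head_depths, head_depths[1:]):
        speed = (cur - prev) * fps        # mm/s moving away from the camera
        if cur > floor_mm and speed > speed_mm_s:
            return True
    return False

standing = [1200.0] * 10                        # head stays ~1.2 m from sensor
falling = [1200, 1300, 1600, 2100, 2700, 2750]  # head drops toward the floor
print(detect_fall(standing), detect_fall(falling))  # False True
```

Requiring both conditions at once (near the floor AND moving fast) is what separates a fall from slowly sitting or crouching, as the abstract argues.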

Relevance: 10.00%

Publisher:

Abstract:

Gait analysis has recently emerged as one of the most important medical fields. Marker-based systems are the methods most favoured for human movement assessment and gait analysis; however, these systems require specific equipment and expertise and are cumbersome, costly, and difficult to use. Many recent computer vision approaches have been developed to reduce the cost of motion capture systems while ensuring highly accurate results. In this thesis we present our new low-cost gait analysis system, composed of two monocular video cameras placed on the left and right sides of a treadmill. A 2D model of each half of the human skeleton is reconstructed from each view based on dynamic colour segmentation, and gait analysis is then performed on these two models. Validation against a state-of-the-art vision-based motion capture system (using the Microsoft Kinect) and ground truth (with markers) was carried out to demonstrate the robustness and effectiveness of our system. The mean error of the human skeleton model estimate with respect to ground truth, our method versus the Kinect, is very promising: the joint angles of the thighs (6.29° vs. 9.68°), lower legs (7.68° vs. 11.47°), and feet (6.14° vs. 13.63°), and the stride length (6.14 cm vs. 13.63 cm), are better and more stable than those of the Kinect, while the system maintains an accuracy fairly close to the Kinect for the upper arms (7.29° vs. 6.12°), forearms (8.33° vs. 8.04°), and torso (8.69° vs. 6.47°).
Based on the skeleton model obtained by each method, we conducted a symmetry study on different joints (elbow, knee, and ankle) with each method on three different subjects, to see which method most effectively distinguishes the symmetry/asymmetry characteristics of gait. In our test, our system measured a maximum knee angle of 8.97° and 13.86° for normal and asymmetric walks respectively, while the Kinect gave 10.58° and 11.94°. Compared with the ground truth, 7.64° and 14.34°, our system showed greater precision and discriminating power between the two cases.
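One common way to quantify left/right gait asymmetry of the kind studied here is a symmetry index over peak joint angles; this is a generic sketch with invented angle series, not the thesis's exact computation.

```python
def symmetry_index(left, right):
    """Symmetry index (%) over peak joint angles: 0 is perfect symmetry,
    larger values mean stronger left/right asymmetry."""
    l, r = max(left), max(right)
    return abs(l - r) / (0.5 * (l + r)) * 100.0

left_knee = [5, 20, 60, 30, 10]   # degrees over one gait cycle (invented)
right_knee = [5, 18, 54, 28, 10]
print(round(symmetry_index(left_knee, right_knee), 1))  # 10.5
```

A method whose skeleton estimates are noisier will inflate (or mask) such an index, which is why the thesis compares the discriminating power of the two capture systems against marker-based ground truth.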

Relevance: 10.00%

Publisher:

Abstract:

Walking is an essential process of human activity and also the result of many collaborative interactions between the neurological, articular, and musculoskeletal systems working together efficiently. This explains why gait analysis is now increasingly used for the diagnosis (and also the prevention) of different types of disease (neurological, muscular, orthopaedic, etc.). This report presents a new method for quickly visualising the different parts of the human body associated with a possible asymmetry (invariant under time translation) in a patient's gait, for possible everyday clinical use. The goal is to provide a method that is both easy and inexpensive, allowing the measurement and visual display, in an intuitive and perceptive manner, of the different asymmetric parts of a gait. The proposed method relies on an inexpensive depth sensor (the Kinect), which is well suited to quick diagnosis in small medical rooms since it is easy to install and requires no markers. The algorithm we present is based on the fact that healthy gait has symmetry properties (relative to a time invariance) in the coronal plane.

Relevance: 10.00%

Publisher:

Abstract:

Walking plays an important role in daily life. The process appears easy and natural for healthy people; however, various kinds of disease (neurological, muscular, or orthopaedic disorders, etc.) can disturb the gait cycle to the point that walking becomes laborious or even impossible. This project uses the Poincaré map to evaluate the asymmetry of a patient's gait from a depth map acquired with a Kinect sensor. To validate the approach, 17 healthy subjects walked on a treadmill under different conditions: normal walking, and with a 5 cm thick sole placed under one foot. Poincaré descriptors are applied to evaluate the variability between one step and the full gait cycle. The results show that the variability obtained in this way can significantly discriminate normal walking from walking with a sole. This method, both simple to implement and accurate enough to detect gait asymmetry, appears promising as an aid to clinical diagnosis.
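The Poincaré descriptors mentioned above (usually called SD1 and SD2) measure the dispersion of each value in a series plotted against its successor. A minimal sketch with invented step-duration series:

```python
import math

def poincare_sd1_sd2(series):
    """SD1/SD2 of the Poincare plot of x[n] vs x[n+1]: SD1 captures
    short-term (step-to-step) variability, SD2 long-term variability."""
    x, y = series[:-1], series[1:]
    d1 = [(yi - xi) / math.sqrt(2) for xi, yi in zip(x, y)]  # minor axis
    d2 = [(yi + xi) / math.sqrt(2) for xi, yi in zip(x, y)]  # major axis

    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((vi - m) ** 2 for vi in v) / len(v))

    return sd(d1), sd(d2)

# Invented step-duration series (s): alternation mimics an asymmetric gait.
symmetric = [0.50] * 6
asymmetric = [0.45, 0.55, 0.45, 0.55, 0.45, 0.55]
sd1_sym, _ = poincare_sd1_sd2(symmetric)
sd1_asym, _ = poincare_sd1_sd2(asymmetric)
print(sd1_sym < sd1_asym)  # True
```

An alternating long/short step pattern, as induced by the thickened sole, shows up as inflated step-to-step (SD1) variability, which is the discrimination the study reports.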

Relevance: 10.00%

Publisher:

Abstract:

Analysis of human behaviour through visual information has been a highly active research topic in the computer vision community. This was previously achieved via images from a conventional camera, but recently depth sensors have made a new type of data available. This survey starts by explaining the advantages of depth imagery, then describes the new sensors that are available to obtain it. In particular, the Microsoft Kinect has made high-resolution real-time depth cheaply available. The main published research on the use of depth imagery for analysing human activity is reviewed. Much of the existing work focuses on body part detection and pose estimation. A growing research area addresses the recognition of human actions. The publicly available datasets that include depth imagery are listed, as are the software libraries that can acquire it from a sensor. This survey concludes by summarising the current state of work on this topic, and pointing out promising future research directions.

Relevance: 10.00%

Publisher:

Abstract:

For general home monitoring, a system should automatically interpret people’s actions. The system should be non-intrusive, and able to deal with a cluttered background, and loose clothes. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not been previously implemented in the literature for human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when adding depth to the conventional intensity data, and has advantages when dealing with long actions.
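The idea of restoring temporal structure to a bag-of-words can be illustrated with plain temporal binning: one histogram per time segment, concatenated. (The paper additionally uses a dynamic time alignment technique, which is not reproduced here; the visual-word ids and frame sequences below are invented.)

```python
from collections import Counter

def bow_temporal(words_per_frame, vocab_size, n_bins=2):
    """Concatenate one bag-of-words histogram per temporal bin, so the
    order of sub-actions is partly preserved, unlike a single global BoW."""
    n = len(words_per_frame)
    hist = []
    for b in range(n_bins):
        frames = words_per_frame[b * n // n_bins:(b + 1) * n // n_bins]
        counts = Counter(w for frame in frames for w in frame)
        hist.extend(counts.get(w, 0) for w in range(vocab_size))
    return hist

# Frames emit visual-word ids (vocabulary of 3). Same words, different order:
sit_then_stand = [[0], [0], [1], [2]]
stand_then_sit = [[1], [2], [0], [0]]
print(bow_temporal(sit_then_stand, 3))  # [2, 0, 0, 0, 1, 1]
print(bow_temporal(stand_then_sit, 3))  # [0, 1, 1, 2, 0, 0]
```

A single global histogram would make these two sequences identical; per-bin histograms keep them apart, which matters most for the long actions where the paper reports its advantage.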

Relevance: 10.00%

Publisher:

Abstract:

Self-report underpins our understanding of falls among people with Parkinson's (PwP), as falls largely happen unwitnessed at home. In this qualitative study, we used an ethnographic approach to investigate which in-home sensors, in which locations, could gather useful data about fall risk. Over six weeks, we observed five independently mobile PwP at high risk of falling, at home. We made field notes about falls (prior events and concerns) and recorded movement with video, Kinect, and wearable sensors. The three women and two men (aged 71 to 79 years), who had moderate or severe Parkinson's, were dependent on others and highly sedentary. We most commonly noted balance protection, loss, and restoration during chair transfers, walks across open spaces and through gaps, turns, steps up and down, and standing tasks (all evident, for example, when walking between chair and stairs). Our unobtrusive sensors were acceptable to participants: they could detect instability during everyday activity at home and potentially guide intervention. Monitoring the route between chair and stairs is likely to give information without invading the privacy of people at high risk of falling, with very limited mobility, who spend most of the day in their sitting rooms.

Relevance: 10.00%

Publisher:

Abstract:

Designing and evaluating interactive digital experiences for museums are complex processes that demand great attention, especially regarding the aspects that make such an experience a rich and engaging learning experience. At the same time, evaluating interactive installations in museums is a very demanding task due to the intersection of multiple research fields, such as human-computer interaction, design and multimedia, museology, audience studies, and others. In this project a framework with ten specific parameters was created that makes it possible to evaluate and guide the design of interactive installations conceived specifically for museums. The design of the framework was based on current usability evaluation methods and principles, as well as on the museology literature. It was then iteratively refined and finally validated through a long-term study of interactivity across all the interactive museums of Madeira Island. In addition, and in parallel with the creation of this evaluation framework, an interactive installation was conceived, designed, and developed whose main objective was to offer a museum visit experience completely different from what visitors in Madeira are used to. It was installed at the Casa-Museu Frederico de Freitas and received very positive feedback from museum visitors, who said they had enjoyed the experience and acquired some knowledge about the history and culture of Madeira. Creating this installation made it possible to apply the evaluation framework iteratively throughout the entire design and development process, guiding and supporting all phases and decisions. The results obtained emphasise the importance of adopting the proposed framework in future research on evaluating interactive museums.
They also demonstrate the usefulness of this framework in the design and development of interactive installations, which in turn proved to have a positive impact on their integration into the region's museums.

Relevance: 10.00%

Publisher:

Abstract:

Cerebrovascular accident (CVA, stroke) is the leading cause of motor disability in adults and the elderly, and it therefore still requires effective interventions that contribute to motor recovery. Objective: This study aimed to evaluate the performance of stroke patients in the chronic stage using a virtual reality game. Method: 20 patients (10 with injury to the left side and 10 to the right), right-handed, average age 50.6 ± 9.2 years, and 20 healthy subjects, also right-handed, with an average age of 50.9 ± 8.8 years, participated. Patients underwent motor (Fugl-Meyer) and muscle tone (Ashworth) assessments. All participants underwent a kinematic evaluation of the drinking-water activity and then trained with the table tennis game on the XBOX 360 Kinect®: 2 sets of 10 attempts of 45 seconds each, with a 15-minute rest between sets, giving a 30-minute session in total. After training the subjects underwent another kinematic evaluation. Patients trained with the right or left hemiparetic upper limb, and the healthy subjects with the right and left upper limb. Data were analyzed by ANOVA, Student's t test, and Pearson correlation. Results: There was a significant difference in the number of hits between the patient and healthy groups, with patients performing worse in all attempts (p = 0.008); this performance was related to a higher level of spasticity (r = -0.44, p = 0.04) and greater motor impairment (r = 0.59, p = 0.001). After training, patients with left hemiparesis had improved shoulder and elbow angles during the drinking-water activity, approaching the movement pattern of the left arm of healthy subjects (p < 0.05), especially when returning the glass to the table, while patients with right hemiparesis did not obtain an improved movement pattern (p > 0.05).
Conclusion: The stroke patients improved their performance over the game attempts; however, only patients with left hemiparesis were able to increase the shoulder and elbow angles during the functional activity, responding better to the virtual reality game, which should be taken into consideration in motor rehabilitation.

Relevance: 10.00%

Publisher:

Abstract:

Control and automation of residential environments (domotics) is an emerging area of computing application. Developing computational systems for domotics is complex, owing to the diversity of potential users and because such systems are immersed in a context of emotional and family relationships. Currently, the development of this kind of system focuses mainly on physical and technological aspects. For this reason, the present research investigates gestural interaction from the viewpoint of Human-Computer Interaction (HCI). First, we approach the subject by building a conceptual framework for discussing the challenges of the area, integrated with three dimensions: people, interaction mode, and domotics. A further analysis of the domain is accomplished using the theoretical-methodological framework of Organizational Semiotics. We then define recommendations for diversity that underpin and inspire inclusive design, guided by physical, perceptual, and cognitive abilities, aiming to better represent the diversity concerned. Although developers can rely on gesture recognition technologies that speed up development, they face another difficulty when the application's gestural commands are not restricted to the standard gestures provided by development frameworks. Therefore, an abstraction of gestural interaction was devised through a formalization, described syntactically by building blocks that give rise to a grammar of gestural interaction and, semantically, approached from the viewpoint of the residential system. We then define a set of metrics grounded in the recommendations and described with information from the pre-established grammar, and, on the foundation of this grammar, we design and implement in Java a residential system based on gestural interaction for use with the Microsoft Kinect.
Lastly, we carry out an experiment with potential end users of the system, in order to better analyse the research results.
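The grammar of building blocks described above can be caricatured as a tiny validator that accepts a gesture sentence and emits a home-automation action. All postures, movements, and targets here are hypothetical examples, not the thesis's actual vocabulary, and the sketch is in Python rather than the thesis's Java.

```python
# Hypothetical building blocks: a gesture command = posture + movement + target.
POSTURES = {"open_hand", "closed_fist", "point"}
MOVEMENTS = {"swipe_left", "swipe_right", "push", "hold"}
TARGETS = {"lamp", "tv", "blinds"}

def parse_command(posture, movement, target):
    """Validate a gesture sentence against the grammar and emit an action."""
    if posture not in POSTURES or movement not in MOVEMENTS or target not in TARGETS:
        raise ValueError("gesture not in grammar")
    return f"{target}:{movement}"

print(parse_command("open_hand", "push", "lamp"))  # lamp:push
```

Keeping the vocabulary in data rather than code is what lets commands be extended beyond a framework's built-in standard gestures, the difficulty the abstract highlights.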

Relevance: 10.00%

Publisher:

Abstract:

This paper presents the development of an application created to assist the teaching of dental structures, generating rich content information and different modes of interaction. An ontology was created to provide semantic information for the virtual models. We also used two gesture-based interaction devices: the Kinect and the Wii Remote. A system was developed that uses intuitive interaction and is able to generate three-dimensional images, making the teaching/learning experience motivating. The projection environment used by the system was called the Mini CAVE. © 2012 IEEE.