4 results for "capteur de profondeur Kinect" (Kinect depth sensor)

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

Brain injuries, including stroke, can be debilitating incidents with the potential for severe long-term effects; many people stop making significant progress once they leave in-patient medical care and are unable to fully restore their quality of life when returning home. The aim of this collaborative project, between the Royal Berkshire NHS Foundation Trust and the University of Reading, is to provide a low-cost, portable system that supports a patient's condition and their recovery in hospital or at home. This is done by providing engaging applications with targeted gameplay that is individually tailored to the rehabilitation of the patient's symptoms. The applications are capable of real-time data capture and analysis in order to provide information to therapists on patient progress and to further improve the personalized care that an individual can receive.

Relevance:

10.00%

Publisher:

Abstract:

Analysis of human behaviour through visual information has been a highly active research topic in the computer vision community. This was previously achieved via images from a conventional camera, but recently depth sensors have made a new type of data available. This survey starts by explaining the advantages of depth imagery and then describes the new sensors that are available to obtain it. In particular, the Microsoft Kinect has made high-resolution, real-time depth imagery cheaply available. The main published research on the use of depth imagery for analysing human activity is reviewed. Much of the existing work focuses on body part detection and pose estimation. A growing research area addresses the recognition of human actions. The publicly available datasets that include depth imagery are listed, as are the software libraries that can acquire it from a sensor. This survey concludes by summarising the current state of work on this topic and pointing out promising future research directions.
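As a hedged illustration of the kind of acquisition software the survey refers to, the sketch below grabs one depth frame (and the registered colour frame) from a Kinect-class sensor through OpenCV's OpenNI2 backend. It assumes an OpenCV build compiled with OpenNI2 support and a connected sensor; it is not the survey's own code, and other libraries (e.g. libfreenect or the official Kinect SDK) expose similar functionality.

    import cv2

    # Open the first OpenNI2-compatible depth sensor (e.g. a Kinect).
    cap = cv2.VideoCapture(cv2.CAP_OPENNI2)
    if not cap.isOpened():
        raise RuntimeError("No OpenNI2-compatible depth sensor found")

    if cap.grab():
        # Depth map: 16-bit image with one distance value (in mm) per pixel.
        ok_depth, depth_mm = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)
        # Colour image from the sensor, useful for combined intensity+depth work.
        ok_bgr, bgr = cap.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)
        if ok_depth:
            print("Depth frame:", depth_mm.shape, depth_mm.dtype)

    cap.release()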

Relevance:

10.00%

Publisher:

Abstract:

For general home monitoring, a system should automatically interpret people's actions. The system should be non-intrusive and able to cope with cluttered backgrounds and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been implemented in the literature for human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and that it has advantages when dealing with long actions.
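To make the pipeline concrete, the sketch below is a hedged, simplified reconstruction of a Bag-of-Words model with temporal binning and dynamic time alignment, assuming generic local descriptors already extracted per frame; the paper's actual spatio-temporal features, parameter choices, and classifier are not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_codebook(all_descriptors, k=100, seed=0):
        """Quantise training descriptors into k visual words with k-means."""
        return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_descriptors)

    def binned_histograms(descriptors, frame_ids, n_frames, codebook, n_bins=8):
        """One BoW histogram per temporal bin, preserving coarse temporal order.

        descriptors: (N, d) array of local features from one video.
        frame_ids:   (N,) array of 0-based frame indices, one per descriptor.
        """
        k = codebook.n_clusters
        words = codebook.predict(descriptors)
        bins = np.minimum((frame_ids * n_bins) // max(n_frames, 1), n_bins - 1)
        hists = np.zeros((n_bins, k))
        for b, w in zip(bins, words):
            hists[b, w] += 1
        # L1-normalise each non-empty bin.
        sums = hists.sum(axis=1, keepdims=True)
        return np.divide(hists, sums, out=np.zeros_like(hists), where=sums > 0)

    def dtw_distance(seq_a, seq_b):
        """Dynamic time alignment between two sequences of bin histograms."""
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local distance
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

Under these assumptions, classification could proceed, for example, with a nearest-neighbour rule over dtw_distance between a test video's binned histograms and those of labelled training videos.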

Relevance:

10.00%

Publisher:

Abstract:

Self-report underpins our understanding of falls among people with Parkinson's (PwP), as they largely happen unwitnessed at home. In this qualitative study, we used an ethnographic approach to investigate which in-home sensors, in which locations, could gather useful data about fall risk. Over six weeks, we observed five independently mobile PwP at high risk of falling, at home. We made field notes about falls (prior events and concerns) and recorded movement with video, Kinect, and wearable sensors. The three women and two men (aged 71 to 79 years), who had moderate or severe Parkinson's, were dependent on others and highly sedentary. We most commonly noted balance protection, loss, and restoration during chair transfers, walks across open spaces and through gaps, turns, steps up and down, and tasks performed while standing (all evident, for example, when walking between chair and stairs). Our unobtrusive sensors were acceptable to participants: they could detect instability during everyday activity at home and could potentially guide intervention. Monitoring the route between chair and stairs is likely to give useful information without invading the privacy of people at high risk of falling, with very limited mobility, who spend most of the day in their sitting rooms.